Recent comments in /f/deeplearning
IshanDandekar t1_ixlugz1 wrote
Reply to Keras metrics and losses by Sadness24_7
Hi, if you really want to use RMSE as a metric, here's the link: RMSE
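In case the link dies, a minimal sketch (assuming TF 2.x; `RootMeanSquaredError` is the built-in Keras metric):

```python
import tensorflow as tf

# toy regression model, just to show wiring RMSE in as a metric
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(
    optimizer="adam",
    loss="mse",
    metrics=[tf.keras.metrics.RootMeanSquaredError()],  # reported each epoch
)
```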
First_Bullfrog_4861 t1_ixjazoo wrote
Reply to comment by JH4mmer in How to efficiently re-train a classification model with an addition of a new class? by kingfung1120
ok, got it. however, in my experience the number of labels is far less obvious in real-world datasets than one might expect. consider an example with images of bottles, cups, and glasses, so three labels.
a model trained on these three labels will need revision if, further down the road after deployment, 'bottles' needs to be split into 'plastic bottles' and 'glass bottles'. both label sets are perfectly valid, due to the hierarchical nature of things.
anyway, my point is actually a different one: afaik this will require relabeling the dataset and fully iterating the training process on the newly labeled data.
or is there a faster way to make the model aware of the more fine-grained bottle labels?
i mean, without access to the cup and glass data: basically inform it of the more fine-grained bottle types while letting it keep its knowledge of cups and glasses.
LinuxSpinach t1_ixiy1zy wrote
Reply to comment by JH4mmer in How to efficiently re-train a classification model with an addition of a new class? by kingfung1120
>main problem with that approach is that a catch-all class like that has infinite variance
Sometimes it doesn't, and I've seen an 'other' class work well in those cases. If the data being fed to the model already constrains the variance, an 'other' class won't have infinite variance. E.g., you know that all of the data will be pictures of fruit, but you only want to label apples, bananas, and oranges. In that case, there is a finite number of fruits to take pictures of.
If you are going to use an 'other' label, I think it should be OK in cases where you could label the data, but the labels that the 'other' class comprises are unimportant to your application.
saw79 t1_ixiusbb wrote
Reply to How to efficiently re-train a classification model with an addition of a new class? by kingfung1120
Your model should output 3 logits: one for `class_a`, one for `class_b`, and one for `class_c`.
When you use data from the 1st dataset,
- penalize `class_a` outputs for samples with `class_b` and `anything_but_a_b` labels
- penalize `class_b` outputs for samples with `class_a` and `anything_but_a_b` labels
- penalize `class_c` outputs for samples with `class_a` and `class_b` labels
When you use data from the 2nd dataset,
- penalize `class_a` outputs for samples with `class_c` labels
- penalize `class_b` outputs for samples with `class_c` labels
- penalize `class_c` outputs for samples with `not_class_c` labels
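In code, the masking might look something like this (a PyTorch sketch; the `masked_bce` helper and the mask layout are just illustrative, not OP's code):

```python
import torch
import torch.nn.functional as F

def masked_bce(logits, targets, mask):
    # per-logit BCE, counted only where mask == 1 (where the label is trustworthy)
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1)

# 1st-dataset sample labeled anything_but_a_b:
# a and b must be 0, but c is unknown (the sample might contain class_c), so mask it out
targets = torch.tensor([[0., 0., 0.]])
mask = torch.tensor([[1., 1., 0.]])

# 2nd-dataset sample labeled class_c: all three outputs are supervised
# targets = torch.tensor([[0., 0., 1.]]); mask = torch.ones(1, 3)

logits = torch.randn(1, 3, requires_grad=True)  # stand-in for the model's 3 logits
masked_bce(logits, targets, mask).backward()
```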
JH4mmer t1_ixhvl5k wrote
Reply to comment by First_Bullfrog_4861 in How to efficiently re-train a classification model with an addition of a new class? by kingfung1120
You may be making a different set of assumptions about the training data than I am, so let me clarify a bit. :-)
If you start with images that truly do contain just one class, the addition of a new class label wouldn't change anything. Your label vector for the existing images would migrate from [1, 0] to [1, 0, 0], something that can be done automatically without additional human intervention (see the sketch below). Your new images (used for training the new class) would have a label of [0, 0, 1].
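Concretely, the migration is just padding the old labels with a zero column (a NumPy sketch, assuming labels are stored as arrays):

```python
import numpy as np

old_labels = np.array([[1, 0], [0, 1]])   # existing two-class one-hot labels
pad = np.zeros((len(old_labels), 1))      # zeros for the newly added class
migrated = np.hstack([old_labels, pad])   # [1, 0] -> [1, 0, 0], [0, 1] -> [0, 1, 0]
```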
If, however, your images do already contain more than one possible class (which is far and away more common in real-world data), the original labels would already be invalid, since the original labeling assumed there was only one correct answer. Those images that do contain multiple classes would have to be relabeled, yes.
The process I'm describing is a mechanical one that doesn't involve a separate knowledge distillation step. It's a technique my team has used successfully in industrial retail applications, where the number of classes is truly an unknown, and we have to add or remove classes from our trained models frequently.
First_Bullfrog_4861 t1_ixhi8ki wrote
Reply to comment by JH4mmer in How to efficiently re-train a classification model with an addition of a new class? by kingfung1120
but fine-tuning requires relabeling the original dataset to include both the old and the new labels, which op specifically does not want to do.
i don't think what op wants is doable. or is there some approach i'm missing? i think what op basically wants is to retrain with only data from the new class, while still avoiding catastrophic forgetting of the other labels.
is there a way to do this?
JH4mmer t1_ixhfmew wrote
Reply to How to efficiently re-train a classification model with an addition of a new class? by kingfung1120
My recommendation would be to drop the "other" class entirely. That's a classic mistake I've seen juniors make many times, and it doesn't really work out like you'd expect in the real world. The main problem with that approach is that a catch-all class like that has infinite variance (theoretically requiring infinite training data). Plus, your labels often become massively unbalanced relative to the positive classes.
Instead, think of your model as having multiple tails, one for each class you actually care about (e.g., what is the probability that a dog is in this image? What is the probability that a cat is in this image? Etc.). Each output has its own logistic (sigmoid) activation that's independent of the other classes. Where before you might have had a softmax layer that returned [0.2, 0.3, 0.5] for (dog, cat, other), you might now have [0.8, 0.7] for (dog, cat). The outputs will not sum to 1 because they are independent of one another.
Note that this is the approach you would take for multi-label classification as well, so you might want to read up on that pattern for more information.
Lastly, if you have a trained model in this format, adding a new class is very easy. The first N layers of the network are shared across all classes and so are already pretrained for you. You would add a new tail to the model using whichever weight-initialization strategy you prefer, add some samples of the new class, and then do some fine-tuning on the new tail layer(s) to make sure that your network can effectively detect the new class.
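To sketch that in code (PyTorch here; the class name and dimensions are made up for illustration, not any specific library API):

```python
import torch
import torch.nn as nn

class MultiTailClassifier(nn.Module):
    # shared trunk plus one independent sigmoid "tail" per class
    def __init__(self, in_dim=784, feat_dim=128, n_classes=2):
        super().__init__()
        self.feat_dim = feat_dim
        self.trunk = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.tails = nn.ModuleList(nn.Linear(feat_dim, 1) for _ in range(n_classes))

    def forward(self, x):
        h = self.trunk(x)
        # independent per-class probabilities; no softmax, so they needn't sum to 1
        return torch.cat([torch.sigmoid(t(h)) for t in self.tails], dim=1)

    def add_class(self):
        # new tail gets fresh init; trunk and old tails keep their trained weights
        self.tails.append(nn.Linear(self.feat_dim, 1))

model = MultiTailClassifier()          # pretend this is trained on (dog, cat)
model.add_class()                      # new tail for the new class
for p in model.trunk.parameters():     # optionally freeze the shared layers
    p.requires_grad = False            # ...and fine-tune only the new tail
```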
Of course, there are many variations on this training approach. You may choose to also fine-tune the entire network with a dataset that includes samples of the new class, but hopefully you get the idea.
I hope this points you in the right direction! Cheers.
emad_eldeen t1_ixh95fx wrote
Reply to How to efficiently re-train a classification model with an addition of a new class? by kingfung1120
I think this falls under incremental learning (specifically, class-incremental learning), where you seek to learn from the new dataset without forgetting the old classes.
suflaj t1_ixgrm1f wrote
Reply to How to efficiently re-train a classification model with an addition of a new class? by kingfung1120
You can add a new class to the classifier and then do model surgery to transfer old model weights onto part of your new model.
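For a plain linear classification head, the surgery is just copying rows (a sketch; the dimensions are illustrative):

```python
import torch
import torch.nn as nn

old_head = nn.Linear(512, 3)   # stand-in for the trained 3-class head
new_head = nn.Linear(512, 4)   # 3 old classes + 1 new one

with torch.no_grad():
    new_head.weight[:3].copy_(old_head.weight)  # transplant old class weights
    new_head.bias[:3].copy_(old_head.bias)
# row 3 keeps its fresh initialization; fine-tune from there
```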
C0demunkee t1_ixd9fdq wrote
Reply to comment by Star-Bandit in GPU QUESTION by Nerveregenerator
yeah, you can easily use it all up through both image scale and batch size. Also, some models are a bit heavy and don't leave any VRAM for the actual generation.
Try "pruned" models; they are smaller.
Since the training sets are all 512x512 images, it makes the most sense to generate at that resolution and then upscale.
treksis t1_ixcd92w wrote
Start with Andrew Ng's DL course. It's good for engineering folks. He doesn't dive super deep into the fundamentals, but he helps you build the minimum intuition to get an idea of what's going on behind DL. Reading heavy DL books at the beginning will make you feel exhausted.
You can also check out https://d2l.ai/ and its video lectures for reference.
Also, if you really feel uncomfortable with all the DL library things (`__init__`, `super()`, and OOP stuff combined with tensors, TensorFlow, PyTorch, huggingface, etc.), check out Andrej Karpathy's YouTube channel.
wisescience t1_ixbuzi7 wrote
If you’re a book learner, you might appreciate “Inside Deep Learning” by Edward Raff. Raff’s text uses PyTorch but really works at it from the ground up with math + code. Sebastian Raschka has some free online content as well, and his recent book covers DL from ch. 11+ — e.g., you’ll build a neural network from scratch and move on from there (also PyTorch-focused).
Glad to hear others’ comments as well as their reactions to these specific suggestions.
drsimonz t1_ixbsnwm wrote
First you might want to start with Shallow Learning :D
paarulakan t1_ixbqny1 wrote
's book. Such a fantastic read for a beginner.
IshanDandekar t1_ixba9ql wrote
Also, see the Keras examples. They have great tutorials on how to solve various deep learning problems, which can give you a basic understanding of how to approach projects.
Nerveregenerator OP t1_ixb60cu wrote
Reply to comment by incrediblediy in GPU QUESTION by Nerveregenerator
oh, i was just thinking it could be useful (possibly making one myself), as i feel like this is a common issue for people.
incrediblediy t1_ixayv0c wrote
Reply to comment by Nerveregenerator in GPU QUESTION by Nerveregenerator
Sorry, I don't know any package for benchmarking. If you find one, I can run it and tell you the results if needed. Note that I only use Win 10 Pro for training, if that matters.
q-rka t1_ixav4ld wrote
I have written some machine learning algorithms from scratch, including neural networks and CNNs, and some of them might be in the top 10 of a Google search. If I had to start again, I would do as follows:
- Learn Python and OOP.
- Master NumPy and a little Pandas.
- Understand the equations and how to code them: matrix multiplications, dot products, etc.
- Understand forward and backward propagation, work through a simple example in a notebook, and try to write code for it.
- Try to solve the XOR problem (see the sketch below). It's fun!
- Learn about activation functions and how error propagates through different activation and error functions.
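For the XOR bullet, a from-scratch sketch along those lines (plain NumPy; the hyperparameters are arbitrary and it may need a different seed or more iterations to converge):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # 2 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # 4 hidden -> 1 output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                     # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # backprop through MSE + sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out                            # gradient step, learning rate 1.0
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

print(out.round().ravel())                       # hopefully [0. 1. 1. 0.]
```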
Nerveregenerator OP t1_ixadbam wrote
Reply to comment by incrediblediy in GPU QUESTION by Nerveregenerator
thanks for the thoughtful feedback! Also lmk if you have any feedback on the pip package idea above that I added to the post!
SayOnlyWhatYouMeme t1_ixad67p wrote
I am still an amateur, but I started with that book and I think it's a great place to start. Also, the TensorFlow website has some good tutorials. Finally, I would create a project for yourself and start building your own networks. That's what I did!
incrediblediy t1_ix9xbce wrote
Reply to comment by Nerveregenerator in GPU QUESTION by Nerveregenerator
If your CPU/motherboard supports a PCIe 4.0 x16 slot, that is all you need for an RTX 3090. I have a 5600X with a cheap B550M-DS3H motherboard running an RTX 3090 + RTX 3060. I also got a used RTX 3090 from eBay after the decline of mining. Just make sure your PSU can support it; it draws 370 W at max.
Star-Bandit t1_ix9toom wrote
Reply to comment by C0demunkee in GPU QUESTION by Nerveregenerator
Interesting, I'll have to look into the specs of the M40. Have you had any issues with running out of VRAM? All my models seem to gobble it up, though I've done almost no optimization since I've only recently gotten into ML stuff.
suflaj t1_ixlvb01 wrote
Reply to I'd like to build a deep learning home server - any resources? by The_Poor_Jew
You'll probably need 2 computers or a server rack then (which also means the drastically more expensive Threadripper platform).
There is no commercial consumer PSU like that, but you can always buy 2 PSUs.