Recent comments in /f/deeplearning
Lee8846 t1_it5hqba wrote
Reply to comment by suflaj in EMA / SWA / SAM by Ttttrrrroooowwww
I wouldn't say so. One can't judge the value of a method by whether it's old or new. For example, in self-supervised learning, as in MoCo, people still use a moving average; it's a nice technique for maintaining the consistency of the query encoder. By the way, EMA also helps to smooth weight fluctuations in some cases, which may be caused by patterns in the data. In such cases, an ensemble of models might not help.
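For reference, a minimal sketch of the kind of momentum/EMA update MoCo-style methods use, assuming PyTorch; the encoders and variable names here are illustrative, not MoCo's actual code:

```python
import copy
import torch

def ema_update(online, target, m=0.999):
    """EMA / momentum update: the target encoder's weights drift slowly
    toward the online encoder's, which keeps the target's outputs
    consistent across training steps."""
    with torch.no_grad():
        for p_online, p_target in zip(online.parameters(), target.parameters()):
            p_target.mul_(m).add_(p_online, alpha=1.0 - m)

# illustrative setup: the target starts as a frozen copy of the online encoder
online = torch.nn.Linear(128, 64)
target = copy.deepcopy(online)
for p in target.parameters():
    p.requires_grad = False

ema_update(online, target)  # call once after each optimizer step
```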
suflaj t1_it4h2vs wrote
Reply to EMA / SWA / SAM by Ttttrrrroooowwww
I wouldn't use any of them because they don't seem to be worth it, and they're generally unproven on modern, relevant models. If I wanted to minimize variance, I'd just build an ensemble of models.
The best advice I can give you is to disregard older papers; model averaging is roughly a four-year-old idea and doesn't seem to be used much in practice.
LuckyLuke87b t1_it19qkj wrote
Reply to comment by grid_world in Variational Autoencoder automatic latent dimension selection by grid_world
Have you tried to generate samples by sampling from your latent space prior and feeding them to the decoder? In my experience it is often necessary to tune the weight of the KL loss such that the decoder becomes a proper generator. Once this is done, some of the latent representations from the encoder get very close to the prior distribution, while others represent the relevant information. The next step is to check whether these relevant latent dimensions are the same across various encoded samples. Finally, prune all dimensions which essentially never differ from the prior, up to some tolerance.
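For the last two steps, a minimal sketch of measuring per-dimension KL divergence against the prior, assuming a PyTorch encoder that outputs mu and logvar; the tolerance and the fake batch are arbitrary illustrative choices:

```python
import torch

def active_latent_dims(mu, logvar, tol=0.01):
    """Per-dimension KL divergence between the encoder posterior
    N(mu, sigma^2) and the standard normal prior, averaged over a
    batch of encoded samples. Dimensions whose mean KL stays below
    `tol` essentially reproduce the prior and are pruning candidates."""
    kl_per_dim = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0)  # (batch, latent_dim)
    mean_kl = kl_per_dim.mean(dim=0)                              # (latent_dim,)
    return mean_kl > tol  # boolean mask of informative dimensions

# illustrative usage: 8 informative dimensions, 24 collapsed to the prior
mu = torch.randn(256, 32) * torch.tensor([1.0] * 8 + [0.01] * 24)
logvar = torch.zeros(256, 32)
mask = active_latent_dims(mu, logvar)
print(f"{mask.sum().item()} of {mask.numel()} dimensions carry information")
```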
grid_world OP t1_isx22tm wrote
Reply to comment by LuckyLuke87b in Variational Autoencoder automatic latent dimension selection by grid_world
I have been running some experiments on toy datasets (MNIST, CIFAR-10), and for now it seems that the latent variables z, as measured by the mu and logvar vectors, are almost never 0. Mathematically this makes sense, since every latent variable will learn at least some information rather than collapsing to the standard Gaussian prior. So deciding on the optimal latent space dimensionality still eludes me.
LuckyLuke87b t1_iswyslx wrote
I fully agree with your idea and have observed similar behavior. I'm not aware of literature regarding VAEs, but I believe there was quite some fundamental work before deep learning on pruning Bayesian neural network weights based on the posterior entropy or "information length". Similarly, I would consider this latent dimension selection a form of pruning, based on how much information is represented.
Stor_bjorn t1_isum684 wrote
Reply to Minimizing the number of inputs by Sadness24_7
Maybe you could try some feature selection, for example tree-based feature selection?
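A hedged sketch of that with scikit-learn, assuming a regression task (swap in RandomForestClassifier for classification); the 38/7 feature counts mirror the numbers mentioned elsewhere in the thread:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel

# illustrative data: 38 features, of which only a handful are informative
X, y = make_regression(n_samples=1000, n_features=38, n_informative=7, random_state=0)

# fit a forest, then keep the 7 features with the highest
# impurity-based importances
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
selector = SelectFromModel(forest, max_features=7, threshold=-float("inf"), prefit=True)
X_reduced = selector.transform(X)
print(X_reduced.shape)  # (1000, 7)
```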
Prestigious_Boat_386 t1_istdg9h wrote
Reply to comment by Sadness24_7 in Minimizing the number of inputs by Sadness24_7
Yeah sure, I'm just grumpy rn
Sadness24_7 OP t1_ist98vt wrote
Reply to comment by thePedrix in Minimizing the number of inputs by Sadness24_7
Oh, this looks promising, I'll give it a try and see what comes up.
Sadness24_7 OP t1_ist8qzo wrote
Reply to comment by Prestigious_Boat_386 in Minimizing the number of inputs by Sadness24_7
I see, I wasn't sure about that since you wrote it twice... and also phrased the reply in two paragraphs.
Prestigious_Boat_386 t1_ist5imi wrote
Reply to comment by Sadness24_7 in Minimizing the number of inputs by Sadness24_7
Pda is clearly a typo
syntheticdataguy t1_ist56d8 wrote
Reply to comment by Remet0n in Diffusion model for synthetc data generation by Remet0n
Interesting, please keep us posted.
thePedrix t1_ist0fv6 wrote
Reply to comment by Sadness24_7 in Minimizing the number of inputs by Sadness24_7
I can’t be sure that it would work, but I would try this:
-PCA for N components
-Plot a graph of the first 2 or 3 principal components (depending on the cumulative explained variance; if 2 is enough, a 2D plot)
-Plot the loading magnitudes of the original variables and see which are the most impactful. Pick the X features you want.
-Train the network with those X features.
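Roughly, as a sketch of those steps with made-up data (the component counts and cut-offs are arbitrary):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# illustrative data: 38 raw features
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 38))

X_std = StandardScaler().fit_transform(X)       # PCA is scale-sensitive
pca = PCA(n_components=10).fit(X_std)
print(pca.explained_variance_ratio_.cumsum())   # decide how many PCs matter

# loadings: rows = original features, columns = principal components
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

# rank features by their largest loading magnitude on the retained PCs
n_keep_pcs = 3
scores = np.abs(loadings[:, :n_keep_pcs]).max(axis=1)
top_features = np.argsort(scores)[::-1][:7]
print("candidate features:", top_features)
```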
Sadness24_7 OP t1_isszoev wrote
Reply to comment by thePedrix in Minimizing the number of inputs by Sadness24_7
But what am I looking for, though? I've been looking at the loadings matrix for a couple of minutes but can't really figure out the connections. Let's say I want to select 7 features out of 38, so I perform PCA for 7 components and look at the loadings matrix (correlations between the 38 features and the 7 principal components). Do I just look at the component that correlates best with the input features, and then pick the 7 features with the highest correlation to that component?
Sadness24_7 OP t1_isstumy wrote
Reply to comment by Prestigious_Boat_386 in Minimizing the number of inputs by Sadness24_7
What's PDA?
thePedrix t1_isstazu wrote
Reply to comment by Sadness24_7 in Minimizing the number of inputs by Sadness24_7
Maybe you can do the PCA and then check the loadings?
Sadness24_7 OP t1_issrioz wrote
Reply to comment by _triszt in Minimizing the number of inputs by Sadness24_7
I don't think PCA will help me. I need to reduce the number of features in order to simplify the system I'm working with. The removed features will no longer be acquired, so I can't retrain the model in the future. I need to somehow pick 2-10 features out of 38, fine-tune the model on them, and deploy it. Only those selected features will be logged going forward.
Prestigious_Boat_386 t1_issl12g wrote
Reply to Minimizing the number of inputs by Sadness24_7
PCA for moderate dimension reduction. For very high dimension counts, straight up throw away half of the highly correlated dimensions.
You'd reject the worst dimensions until the size is low enough to use pda, then use pda to reduce to a size your network can handle.
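For the "throw away highly correlated dimensions" step, a common recipe with pandas, assuming your features are in a DataFrame; the 0.95 threshold and fake data are arbitrary:

```python
import numpy as np
import pandas as pd

# illustrative feature DataFrame with one near-duplicate column
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(500, 40)))
df[1] = df[0] + 0.01 * rng.normal(size=500)

# for each highly correlated pair, drop the later column
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
df_reduced = df.drop(columns=to_drop)
print(f"dropped {len(to_drop)} of {df.shape[1]} columns")
```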
_triszt t1_issbxym wrote
Reply to Minimizing the number of inputs by Sadness24_7
pca
Remet0n OP t1_isryy89 wrote
Reply to comment by syntheticdataguy in Diffusion model for synthetc data generation by Remet0n
About the label, I was thinking of modifying the net to generate RGBL images instead of RGB, where L stands for label.
I guess the spatial info should thus be well "linked" and the net should be able to generate labels.
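A minimal sketch of the data side of that idea in PyTorch; the denoiser itself is out of scope here, and the 0.5 threshold for recovering a binary mask is illustrative:

```python
import torch

# illustrative tensors: a batch of RGB images and their segmentation masks
images = torch.rand(8, 3, 64, 64)             # RGB in [0, 1]
labels = torch.randint(0, 2, (8, 1, 64, 64))  # binary mask, one channel

# stack the label as a fourth channel so the generative model learns
# the joint distribution of image and mask; whatever UNet is used as
# the denoiser just needs in_channels = out_channels = 4
rgbl = torch.cat([images, labels.float()], dim=1)
print(rgbl.shape)  # torch.Size([8, 4, 64, 64])

# after sampling, split the generated tensor back apart and threshold
generated = torch.rand(8, 4, 64, 64)  # stand-in for a model sample
gen_images, gen_masks = generated[:, :3], (generated[:, 3:] > 0.5).float()
```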
Remet0n OP t1_isrywag wrote
Reply to comment by vraGG_ in Diffusion model for synthetc data generation by Remet0n
Sorry for the initially confusing post; 5K is the number of training samples. I edited the post.
Remet0n OP t1_isq4lei wrote
Reply to comment by SnowFP in Diffusion model for synthetc data generation by Remet0n
Thanks, I indeed have 5000 images. About the label, I was thinking of modifying the net to generate RGBL images instead of RGB, where L stands for label. I guess the spatial info should thus be well "linked" and the net should be able to generate labels.
No. 3 is a good point, I'll try to dig into it. Thanks for your point of view :)
SnowFP t1_ispmm89 wrote
Reply to Diffusion model for synthetc data generation by Remet0n
So, a couple of points. I'm sure there are several computer vision experts on this sub, but here are some of my opinions. Anyone please feel free to correct me.
- If you mean you have 5000 images for segmentation, then I think this would be sufficient data to train, for example, a U-Net (a bare-bones sketch follows at the end of this comment). If you are not getting the accuracy you want, perhaps look at how other people have been segmenting images in your domain for ideas.
- If you mean you have images at 5K resolution, how many images do you have? You would likely run into the small-data problem when training generative models as well. I assume you are already using domain-specific image augmentation techniques.
- When training a generative model (such as a diffusion model) you are inherently learning the distribution of data. If you are able to generate high integrity images using this method, is there a way you could directly use this model to perform the segmentation task? (I am not familiar with the literature of diffusion models but I know other generative models, such as GANs have been used to perform image segmentation).
- I'm not sure how you could also generate labels with a generative model (perhaps there are smart ways of modifying the architecture to facilitate this) in addition to the images. Perhaps other people can chime in here.
These points are about performing this specific segmentation task to high accuracy, not about developing a novel segmentation algorithm. If the latter is what you are looking for, then these points will not be very useful. Good luck!
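The promised U-Net sketch: a bare-bones PyTorch model showing the encoder/decoder-with-skip shape, not a production architecture; depth and channel widths are arbitrary:

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """One-level U-Net: encoder, bottleneck, decoder with skip connection."""
    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        self.enc = block(in_ch, 32)
        self.down = nn.MaxPool2d(2)
        self.mid = block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)  # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)                                  # full-resolution features
        m = self.mid(self.down(e))                       # bottleneck at half resolution
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # upsample + skip connection
        return self.head(d)                              # per-pixel class logits

model = TinyUNet()
logits = model(torch.rand(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 2, 64, 64])
```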
doodlesandyac t1_ispizld wrote
Reply to comment by syntheticdataguy in Diffusion model for synthetc data generation by Remet0n
I think you would have it generate within the distribution of a given label
Ttttrrrroooowwww OP t1_it5qizp wrote
Reply to comment by suflaj in EMA / SWA / SAM by Ttttrrrroooowwww
Currently my research focuses mostly on the semi-supervised space, where EMA in particular is still relevant. Apparently it's good for reducing confirmation bias caused by the inherent noisiness of pseudo labels.
While that agrees with your statement and answers my question (that I should use EMA because it's relevant), I also found some projects whose publications don't mention all of these methods even though they exist in the codebase.