Recent comments in /f/deeplearning
Niu_Davinci OP t1_iuvlfhz wrote
Reply to comment by nutpeabutter in Can someone help me to create a STYLEGAN (1/2 or 3) with a dataset of my psychedelic handrawn/ A.I. Colored artworks? (280 in dataset, I have more iterations, maybe 600 total) by Niu_Davinci
If I can give you enough iterations, do you know how to do it for me? Would somebody do it?
nutpeabutter t1_iuvl7on wrote
Reply to comment by Niu_Davinci in Can someone help me to create a STYLEGAN (1/2 or 3) with a dataset of my psychedelic handrawn/ A.I. Colored artworks? (280 in dataset, I have more iterations, maybe 600 total) by Niu_Davinci
I really can't give a solid answer, but 600 is definitely too few (a few thousand is already on the low end).
Niu_Davinci OP t1_iuvkw6k wrote
Reply to comment by nutpeabutter in Can someone help me to create a STYLEGAN (1/2 or 3) with a dataset of my psychedelic handrawn/ A.I. Colored artworks? (280 in dataset, I have more iterations, maybe 600 total) by Niu_Davinci
How many are good for a StyleGAN?
nutpeabutter t1_iuvkakt wrote
Reply to Can someone help me to create a STYLEGAN (1/2 or 3) with a dataset of my psychedelic handrawn/ A.I. Colored artworks? (280 in dataset, I have more iterations, maybe 600 total) by Niu_Davinci
Try Dreambooth. 600 is far too few for training a StyleGAN from scratch (heavy augmentation could help, but I doubt it).
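For reference, "heavy augmentation" on a small art dataset usually starts with simple geometric transforms. A minimal numpy sketch of the 8 dihedral variants (4 rotations, each optionally mirrored) per image; the shapes here are illustrative, and this is only a sketch of the idea, not a full augmentation pipeline:

```python
import numpy as np

def dihedral_augment(img):
    """Return the 8 dihedral variants (4 rotations x optional flip) of an image."""
    variants = []
    for k in range(4):                       # 0/90/180/270 degree rotations
        rotated = np.rot90(img, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # mirrored copy of each rotation
    return variants

# A 600-image dataset becomes 600 * 8 = 4800 samples: still short of
# "a few thousand per mode", but much closer than the raw count.
img = np.arange(12, dtype=np.float32).reshape(3, 4, 1)  # toy H x W x C image
augmented = dihedral_augment(img)
print(len(augmented))  # 8 views per source image
```

Note that StyleGAN2-ADA applies adaptive augmentation of this flavor during training automatically, which is part of why it is the usual recommendation for small datasets.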
mr_birrd t1_iuved4p wrote
Reply to comment by [deleted] in How to solve CUDA Out of Memory error by Nike_Zoldyck
Yeah, that's not a bug, it's simply an error.
Nike_Zoldyck OP t1_iuvdz7i wrote
Reply to comment by zalperst in How to solve CUDA Out of Memory error by Nike_Zoldyck
>This was mostly an attempt to collect more info from people who might see their usual trick not mentioned there.
Yes, trust me, I learnt that the hard way. I tried to include multiple scenarios and will keep updating it.
Nike_Zoldyck OP t1_iuvdvsa wrote
Reply to comment by Ttttrrrroooowwww in How to solve CUDA Out of Memory error by Nike_Zoldyck
Thanks for your insight. Were you even able to access the link? It turns out it was behind a membership wall; I updated the URL, so it should be free now. I couldn't find any helpful solutions to my problem and had to try everything; the last paragraph is what finally solved it, and I had to figure that out through trial and error. So instead of someone new opening 35 tabs next time, I figured I'd consolidate everything I attempted into a post that I can keep editing if I come across anything more, or if someone shares their experience with this issue along with what sort of models they were running.
This was mostly an attempt to collect more info from people who might see their usual trick not mentioned there. I'm glad I could cover everything you already know.
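One trick that usually makes these lists: shrink the batch and accumulate gradients, which trades memory for steps without changing the math. A hedged numpy sketch using plain linear-regression gradients (names and sizes are illustrative) to show that accumulated micro-batch gradients equal the full-batch gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))          # full batch of 32 samples
y = rng.normal(size=(32,))
w = np.zeros(4)

def grad(Xb, yb, w):
    """Mean-squared-error gradient of a linear model on one (micro-)batch."""
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

# Full-batch gradient: needs all 32 samples in memory at once.
g_full = grad(X, y, w)

# Accumulated gradient: 4 micro-batches of 8, only one in memory at a time.
accum = np.zeros_like(w)
for i in range(0, 32, 8):
    accum += grad(X[i:i + 8], y[i:i + 8], w) * (8 / 32)  # weight by batch share

print(np.allclose(g_full, accum))  # identical update, a quarter of the activations
```

The same weighting logic is what `loss / accum_steps` followed by delayed `optimizer.step()` implements in a typical PyTorch training loop.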
Ttttrrrroooowwww t1_iuvceuv wrote
Reply to How to solve CUDA Out of Memory error by Nike_Zoldyck
An article that scratches the surface of a topic discussed thousands of times, without adding any value. Another piece of clutter on the internet.
[deleted] t1_iuvarxr wrote
Reply to How to solve CUDA Out of Memory error by Nike_Zoldyck
[deleted]
jegerarthur t1_iuv8j5n wrote
What about performance? Can it handle big datasets, like ImageNet-21k?
Good_Helicopter4073 t1_iuv6fiy wrote
Reply to How to solve CUDA Out of Memory error by Nike_Zoldyck
An A100 will fix it.
Leopiney OP t1_iuup3jm wrote
Reply to comment by SameerMohair in Vectory: a tool for tracking and comparing embedding spaces by Leopiney
Hi there! Great question.
It's exactly the same in these examples: the embeddings have large dimensionalities. What we show is a 2D projection of the embeddings so that we can plot them, regardless of their dimension.
By default, you can choose between UMAP and PCA projections.
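For anyone curious what the PCA path boils down to, here is a minimal numpy sketch (UMAP would need the `umap-learn` package; the function name and shapes are illustrative, not Vectory's actual API):

```python
import numpy as np

def pca_2d(embeddings):
    """Project high-dimensional embeddings onto their top-2 principal components."""
    centered = embeddings - embeddings.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T            # (n_samples, 2) coordinates for plotting

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 768))         # e.g. 100 sentence embeddings, 768-dim
coords = pca_2d(emb)
print(coords.shape)                        # (100, 2)
```

Whatever the embedding dimension, the plot only ever sees the two projected coordinates.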
zalperst t1_iuu828x wrote
Reply to How to solve CUDA Out of Memory error by Nike_Zoldyck
Lol, the solution to this will be different for everyone.
SameerMohair t1_iut070w wrote
How does this work when you have embeddings in a vector space of many, many dimensions, like Google's name2vec?
randomforest___ t1_iurq1g1 wrote
Reply to comment by omg_bread in Vectory: a tool for tracking and comparing embedding spaces by Leopiney
Might be this https://github.com/pentoai/vectory
[deleted] t1_iurpycu wrote
[deleted]
omg_bread t1_iurkaa1 wrote
Is there a link to the project page?
suflaj t1_iuk9d9w wrote
Reply to comment by Rare_Lingonberry289 in Does the length/size of a dimension affect accuracy? (CNN) by Rare_Lingonberry289
As long as you keep the jumps the same it should be fine.
Rare_Lingonberry289 OP t1_iuk3uxk wrote
Reply to comment by suflaj in Does the length/size of a dimension affect accuracy? (CNN) by Rare_Lingonberry289
Ok, that makes sense. One more thing though. According to my research, temporal points during the spring and autumn are more helpful for what I'm trying to do. However, I'm afraid that large jumps like this will confuse my model. Like it will have a hard time detecting features when time jumps like this happen. Is this a real concern?
suflaj t1_iuk2gfj wrote
Reply to comment by Rare_Lingonberry289 in Does the length/size of a dimension affect accuracy? (CNN) by Rare_Lingonberry289
Yeah, just experiment with it. Like I said, I would start with 4, then go higher or lower depending on your needs. I have personally not seen a temporally sensitive neural network go beyond 6 or 8 time points. As with anything, there are tradeoffs.
Although if you have x, y and c, you will be doing 3D convolutions, not 4D. A 4D convolution on 4D data is essentially a linear layer.
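To make the 3D-convolution point concrete, a hedged PyTorch sketch (the band count, image size, and channel count are made up): time becomes the depth axis of a 5D tensor, and the bands are the input channels, just as RGB channels are in a 2D CNN.

```python
import torch
import torch.nn as nn

# 10 temporal points of 64x64 imagery with 6 spectral bands:
# (batch, channels=bands, time, height, width)
x = torch.randn(1, 6, 10, 64, 64)

# A 3D convolution slides jointly over (time, height, width);
# the band dimension is consumed as input channels.
conv = nn.Conv3d(in_channels=6, out_channels=16, kernel_size=3, padding=1)
out = conv(x)
print(out.shape)  # torch.Size([1, 16, 10, 64, 64])
```

With `padding=1` and a kernel of 3, the temporal and spatial extents are preserved; only the channel count changes.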
Rare_Lingonberry289 OP t1_iujpxs1 wrote
Reply to comment by suflaj in Does the length/size of a dimension affect accuracy? (CNN) by Rare_Lingonberry289
Ok, so what if I have something like 10 temporal points, assuming I have the necessary computing power? The other dimensions are pixels (x and y) and channels (bands).
x11ry0 t1_iujku64 wrote
Reply to Is Colab still the place to go? by CodingButStillAlive
Colab is still the best option for short notebooks online. You may saturate the RAM of the free version if you are working on very big models, but in other cases it works pretty well.
suflaj t1_iuji61k wrote
Reply to comment by Rare_Lingonberry289 in Does the length/size of a dimension affect accuracy? (CNN) by Rare_Lingonberry289
Probably not.
- I am almost certain you don't have data that would take advantage of this dimensionality or the resources to process it
- you can't accumulate so many features and remember all of them in recurrent models
- I am almost certain you don't have the hardware to house such a large transformer model that could process it
- I am almost certain you will not get a 365-day history of a sample during inference; 4 days seems more reasonable
suflaj t1_iujhefz wrote
Reply to comment by GPUaccelerated in Do companies actually care about their model's training/inference speed? by GPUaccelerated
I asked for the specific law so I could show you that it cannot apply to end-to-end encrypted systems, which either have partly destroyed information, or the information that leaves the premises is not comprehensible to anything but the model and there is formal proof that it is infeasible to crack it.
These are all long-solved problems; the only hard part is doing the hashing without losing too much information, or making the encryption compact enough to both fit into the model and be comprehensible to it.
TheDigitalRhino t1_iuw2bum wrote
Reply to Can someone help me to create a STYLEGAN (1/2 or 3) with a dataset of my psychedelic handrawn/ A.I. Colored artworks? (280 in dataset, I have more iterations, maybe 600 total) by Niu_Davinci
It may be worth using Stable Diffusion. Since the dataset is smaller, it may not be realistic to expect you to create thousands more.