Recent comments in /f/deeplearning
gizcard t1_iv12w8x wrote
Reply to Lawsuit challenging GitHub Copilot by px05j
I hope this fails spectacularly
sweeetscience t1_iv0q0xr wrote
Reply to Lawsuit challenging GitHub Copilot by px05j
This should fail, since the original work is not being redistributed. To wholly recreate a repo that Codex was trained on, you'd have to literally start typing the original code, and even then the contextual suggestions would likely yield a different result from the original anyway. I could be mistaken, but I remember reading about some litigation in this space concerning a model trained on copyrighted data; the court ruled in favor of the defendant because the resulting model couldn't possibly reproduce the original work. It's trickier here because technically you could recreate the original work, but you'd have to know the original work very well to begin with to actually recreate it, and if that's the case, what's the point of using Copilot in the first place? I could be (and probably am) wrong.
Imagine trying to recreate PyTorch from scratch using Codex or Copilot. IF, and that's a big if, one did so, the author of the recreation would still have to attribute it.
Not legal advice
px05j OP t1_iv0hl5e wrote
Reply to Lawsuit challenging GitHub Copilot by px05j
I believe other models could fall into this category too, especially image generation models.
This particular lawsuit is interesting because it claims Copilot violates GitHub's own terms.
tttsang t1_iv08iru wrote
Reply to BlogNLP: AI Writing Tool by britdev
Is sentence rewrite available?
thePsychonautDad t1_iuyz5ob wrote
Reply to Can someone help me to create a STYLEGAN (1/2 or 3) with a dataset of my psychedelic handrawn/ A.I. Colored artworks? (280 in dataset, I have more iterations, maybe 600 total) by Niu_Davinci
As others have said, you don't have enough examples to train a StyleGAN, but you can fine-tune Stable Diffusion on that few examples. That would also let you render anything you can imagine in your style.
britdev OP t1_iuywxqr wrote
Reply to comment by Suolucidir in BlogNLP: AI Writing Tool by britdev
I am financing it. Free for everyone else.
Suolucidir t1_iuyvr6h wrote
Reply to BlogNLP: AI Writing Tool by britdev
How is it free? I thought they charged per token...
arhetorical t1_iuyrp4f wrote
Reply to comment by Niu_Davinci in Can someone help me to create a STYLEGAN (1/2 or 3) with a dataset of my psychedelic handrawn/ A.I. Colored artworks? (280 in dataset, I have more iterations, maybe 600 total) by Niu_Davinci
Are you going to pay for it?
suflaj t1_iuyg2am wrote
Reply to comment by [deleted] in What to tell about a model you make deeper and deeper, doesn't make better results but doesn't overfit as well? by [deleted]
Well, I couldn't understand what your task was, since you didn't say what it was until now.
Other than that, skimming through the paper, it says quite clearly:
> Our present results do not indicate our procedure can generalize to motifs that are not present in the training set
Because what they're doing doesn't generalize, I think the starting assumption (that a larger model would bring improvements) is wrong, and so the question is moot... The issue is with the method or the data; they don't elaborate beyond that.
tivotox t1_iuyfoup wrote
Reply to comment by cma_4204 in What to tell about a model you make deeper and deeper, doesn't make better results but doesn't overfit as well? by [deleted]
I mean, the dataset is extremely diverse: millions of clusters, and every entry is noised when it's loaded onto the GPUs.
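Noising each entry at load time is effectively on-the-fly data augmentation. A minimal sketch of the idea (function names and additive Gaussian noise are my assumptions, not from the comment):

```python
import numpy as np

def load_batch(batch, rng, sigma=0.1):
    """Add fresh Gaussian noise on every load, so the model
    never sees the exact same sample twice."""
    return batch + sigma * rng.standard_normal(batch.shape)

rng = np.random.default_rng(42)
batch = np.ones((2, 3))
a = load_batch(batch, rng)
b = load_batch(batch, rng)  # a different noise draw each time
```

Because the noise is redrawn per load, two passes over the same batch yield different inputs, which is one reason such a setup can resist overfitting.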
[deleted] OP t1_iuyf897 wrote
Reply to comment by suflaj in What to tell about a model you make deeper and deeper, doesn't make better results but doesn't overfit as well? by [deleted]
[deleted]
suflaj t1_iuybshu wrote
Reply to comment by tivotox in What to tell about a model you make deeper and deeper, doesn't make better results but doesn't overfit as well? by [deleted]
That seems very bad. You want your train/dev/test splits to be different samples of the same distribution, i.e. not very different sets.
Furthermore, if you're using the test set for model validation, you will have no dataset left to do a final evaluation on. Reconsider your process.
Finally, again, I urge you to evaluate your model with an established evaluation metric for the task, not the loss you use to train the model. What is the exact task?
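For illustration, a split where train/dev/test really are samples of the same distribution can be made by shuffling before slicing. This is a generic sketch (names and fractions are my own, not from the thread):

```python
import numpy as np

def split_same_distribution(data, seed=0, frac_dev=0.1, frac_test=0.1):
    """Shuffle first, then slice, so all three splits are
    random draws from the same underlying distribution."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    n_dev = int(len(data) * frac_dev)
    n_test = int(len(data) * frac_test)
    dev = data[idx[:n_dev]]
    test = data[idx[n_dev:n_dev + n_test]]
    train = data[idx[n_dev + n_test:]]
    return train, dev, test

data = np.arange(1000)
train, dev, test = split_same_distribution(data)
```

Dev is used for validation during development; test is touched only once, for the final evaluation.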
cma_4204 t1_iuybjkm wrote
Reply to comment by tivotox in What to tell about a model you make deeper and deeper, doesn't make better results but doesn't overfit as well? by [deleted]
My best guess is a coding mistake on your part. Good luck, tivo.
tivotox t1_iuybact wrote
Reply to comment by cma_4204 in What to tell about a model you make deeper and deeper, doesn't make better results but doesn't overfit as well? by [deleted]
But dropout prevents overfitting, and I don't have any overfitting, so it's not the relevant tool.
tivotox t1_iuyb2oh wrote
Reply to comment by suflaj in What to tell about a model you make deeper and deeper, doesn't make better results but doesn't overfit as well? by [deleted]
The split was done such that the train and test sets are highly different. The losses are almost equal on both datasets.
cma_4204 t1_iuyb08l wrote
Reply to comment by tivotox in What to tell about a model you make deeper and deeper, doesn't make better results but doesn't overfit as well? by [deleted]
Well, clearly it’s not getting any better with what you’re trying. Maybe it’s time to rethink.
tivotox t1_iuyavzv wrote
Reply to comment by cma_4204 in What to tell about a model you make deeper and deeper, doesn't make better results but doesn't overfit as well? by [deleted]
The model is equivariant, so no data augmentation, and no dropout either. The model doesn't overfit, as I said.
cma_4204 t1_iuyamtp wrote
Reply to What to tell about a model you make deeper and deeper, doesn't make better results but doesn't overfit as well? by [deleted]
Data augmentation, dropout?
suflaj t1_iuya2f9 wrote
Reply to comment by tivotox in What to tell about a model you make deeper and deeper, doesn't make better results but doesn't overfit as well? by [deleted]
It can be seen as an approximation of the variance between the noise and the noise predicted conditioned on some data.
If it's measured on the training set, it's not even usable as a metric, and if it's not directly related to task performance, it's not a good metric. You want to see how the model acts on unseen data.
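A minimal sketch of such a denoising objective (shapes, names, and the zero-predicting toy model are my assumptions; NumPy stands in for a DL framework): the loss is the mean squared error between the injected noise and the model's noise prediction, which for a trivially bad model reduces to roughly the variance of the noise itself.

```python
import numpy as np

def denoising_loss(predict_noise, x, rng):
    """MSE between the true injected noise and the predicted noise."""
    eps = rng.standard_normal(x.shape)   # true noise
    x_noisy = x + eps                    # corrupted input
    eps_hat = predict_noise(x_noisy)     # model's noise estimate
    return np.mean((eps - eps_hat) ** 2)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
# A trivial "model" predicting zero noise: loss ~ Var(eps), i.e. about 1
loss = denoising_loss(lambda x_noisy: np.zeros_like(x_noisy), x, rng)
```

This is why the loss alone says little: a model can drive this number down on training data while telling you nothing about behavior on unseen data.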
tivotox t1_iuy798z wrote
Reply to comment by suflaj in What to tell about a model you make deeper and deeper, doesn't make better results but doesn't overfit as well? by [deleted]
The loss here is for a denoiser; it can be seen as the variance between the true noise and the predicted noise. So in this case it is a good metric.
suflaj t1_iuy2y71 wrote
Reply to What to tell about a model you make deeper and deeper, doesn't make better results but doesn't overfit as well? by [deleted]
Loss doesn't matter; what are the validation metrics?
Leopiney OP t1_iuxyxm4 wrote
Hey everyone! We released this open-source tool for tracking and comparing embedding experiments, which we do regularly.
Feel free to give it a try! Getting started is easy: https://github.com/pentoai/vectory
[deleted] t1_iuxgmwb wrote
[deleted]
Current-Basket3920 t1_iuxezmx wrote
Reply to Can someone help me to create a STYLEGAN (1/2 or 3) with a dataset of my psychedelic handrawn/ A.I. Colored artworks? (280 in dataset, I have more iterations, maybe 600 total) by Niu_Davinci
The code is freely available, and on YouTube you can find videos on "How to use StyleGAN".
But:
- As already mentioned, you'll need more data. 5-10k images is already the low end; I think you need more like 50-100k.
- You need some serious hardware to train something like this. For StyleGAN they used 8 high-end GPUs for a week, and I'd guess even more for StyleGAN2/3. You might get by with less than NVIDIA used, but it's no quick thing to do.
- And even then there's no guarantee it will turn out well. It will likely need some fine-tuning; it's not a simple algorithm.
britdev OP t1_iv15dw2 wrote
Reply to comment by tttsang in BlogNLP: AI Writing Tool by britdev
I can add it!