Recent comments in /f/deeplearning
arhetorical t1_iv8fays wrote
Reply to bought legion 7i: Intel i9 12th gen, rtx 3080 ti 16 gb vram, 32 GB ddr5. need some confirmation bias (or opposite) to understand if I made the right decision by macORnvidia
You already got the advice not to buy a laptop for deep learning. But if you're determined and understand that it's not a great idea to begin with, then any laptop with a compatible GPU is fine. You're prototyping, not actually training on it. If you like the one you got then just stick with it.
Hamster729 t1_iv7swx8 wrote
Reply to Are AMD GPUs an option? by xyrlor
Absolutely. In fact, you typically get more DL performance per dollar with AMD GPUs than with NVIDIA.
However, there are caveats:
- The primary target scenario for ROCm is Linux + Docker container + gfx9 server SKUs (Radeon Instinct MIxxx). The further you move from this optimal target, the more uncertain things become. You can install the whole stack directly into your Ubuntu system, or, if you really want to waste lots of time, compile everything from source, but it is best to install just the kernel-mode driver and then do "docker run --privileged" to pull a complete container image with every package already in place. I am not sure what the situation is with Windows support. Support for consumer-grade GPUs usually comes with some delay: e.g. Navi 21 support was only "officially" added last winter, and the new chips announced last week may not be officially supported for months after they hit the shelves.
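For reference, the container route described above usually boils down to something like this (the image name and device flags follow AMD's commonly documented setup; adjust the tag to your ROCm version):

```shell
# Pull AMD's prebuilt PyTorch-on-ROCm image
docker pull rocm/pytorch:latest

# Run it with the GPU exposed to the container; the --device flags are
# a narrower alternative to the blunter --privileged
docker run -it \
    --device=/dev/kfd --device=/dev/dri \
    --security-opt seccomp=unconfined \
    --group-add video \
    rocm/pytorch:latest
```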
- You occasionally run into third-party packages that expect CUDA and only CUDA. I just had to go through the process of hacking pytorch3d (the visualization package from FB) because it had issues running on ROCm.
kmanchel t1_iv7dix3 wrote
Reply to Are AMD GPUs an option? by xyrlor
ROCm is a much less mature deep learning stack than what NVIDIA has (by at least 5 years). However, your choice depends on your scope of usage and whether you're willing to trade off usability for cost (I'm assuming AMD hardware is significantly cheaper).
GoodNeighborhood1017 t1_iv60m0z wrote
Reply to Are AMD GPUs an option? by xyrlor
Check out TensorFlow DirectML and PyTorch DirectML. They work pretty well on AMD GPUs.
xyrlor OP t1_iv5axmk wrote
Reply to comment by fjodpod in Are AMD GPUs an option? by xyrlor
Thanks! I’m currently running a 3070, but I'm having some errors unrelated to deep learning, so I’m looking around for options while I send my card in for repairs. Since new GPUs have just been announced by both Nvidia and AMD, I was curious about the perspective on both gaming and deep learning for side projects.
Hiant t1_iv5adoh wrote
Reply to Are AMD GPUs an option? by xyrlor
not for mortals
fjodpod t1_iv52eke wrote
Reply to Are AMD GPUs an option? by xyrlor
Is it possible?
Yes, but you probably need newer Linux distros and some basic Linux knowledge.
Do I do it myself?
Yes, in PyTorch with a 6600, but it was a bit annoying to set up, with some errors; now it just works, though (I haven't benchmarked it yet).
Do I recommend it for the average user?
No, you should only do it if you suddenly want to do machine learning but you're stuck with an AMD card.
If you haven't bought a GPU yet and you're considering machine learning, avoid the setup hassle and just pay a bit more for an Nvidia GPU. The 3060 12GB is a good-value graphics card for machine learning.
nutpeabutter t1_iv50p8e wrote
Reply to Are AMD GPUs an option? by xyrlor
If you are a masochist then yes
MaximumOrdinary t1_iv4vdhe wrote
Reply to Lawsuit challenging GitHub Copilot by px05j
The open-source complaint about attribution could be fixed with a huge meta file full of attributions to everything Copilot has read.
puppet_pals t1_iv4jtpc wrote
Reply to comment by sgjoesg in U-Net architecture by Competitive-Good4690
>So model.fit is an inbuilt function for training the model which I don’t want to use. I want to define the model on my own
just take a UNet that runs with model.fit() and implement your own training loop, following this guide: https://lukewood.xyz/blog/keras-model-walkthrough
sgjoesg t1_iv4hum4 wrote
Reply to comment by Competitive-Good4690 in U-Net architecture by Competitive-Good4690
Glad I could help (in some way lol) 💯
Competitive-Good4690 OP t1_iv4fdo1 wrote
Reply to comment by sgjoesg in U-Net architecture by Competitive-Good4690
Yes, thank you! I’m referring to Connor Shorten’s video on U-Net (Keras); he’s saying exactly what you just said. Thank you for the response, really appreciated. Abhishek Thakur is using PyTorch, but he did explain the concept well.
sgjoesg t1_iv4ezuu wrote
Reply to comment by Competitive-Good4690 in U-Net architecture by Competitive-Good4690
As far as I know, if you create your own class inheriting from tf.keras.Model, then that class can use the .fit function, e.g. class UNet(tf.keras.Model) with the model definition as per your architecture, then model = UNet() and model.fit(data).
So you have the control to use your own model, and use keras's easy training loop as well.
[deleted] t1_iv4ey7n wrote
Reply to U-Net architecture by Competitive-Good4690
[deleted]
Competitive-Good4690 OP t1_iv4e6ll wrote
Reply to comment by sgjoesg in U-Net architecture by Competitive-Good4690
Thank you so much
Competitive-Good4690 OP t1_iv4e66u wrote
Reply to comment by sgjoesg in U-Net architecture by Competitive-Good4690
So model.fit is an inbuilt function for training the model which I don’t want to use. I want to define the model on my own
sgjoesg t1_iv42snj wrote
Reply to U-Net architecture by Competitive-Good4690
Also, what do you mean by non model.fit? And why?
sgjoesg t1_iv42h3k wrote
Reply to U-Net architecture by Competitive-Good4690
You can see Abhishek Thakur's video on the implementation of the U-Net architecture. He explains it very well, step by step. Edit: I didn't see you wanted TF 2.0. If I find some resources, I will forward them to you.
Competitive-Good4690 OP t1_iv3s97k wrote
Reply to U-Net architecture by Competitive-Good4690
Any guidance will be much appreciated.
tttsang t1_iv1ojfq wrote
Reply to comment by britdev in BlogNLP: AI Writing Tool by britdev
Wow cool, I'll try it
britdev OP t1_iv1o8bx wrote
Reply to comment by tttsang in BlogNLP: AI Writing Tool by britdev
Added.
suflaj t1_iv1ftad wrote
Reply to Lawsuit challenging GitHub Copilot by px05j
Well, based on the complaint, they probably have a case. However, the solution to the problem may not really be feasible, since it would imply that Copilot also generates a disclaimer based on all the licenses used; then, if a user deletes that, they're breaking the license.
Now, given that this may affect something like 100k repositories, the disclaimer file would have to be megabytes in size.
obsoletelearner t1_iv17agp wrote
Reply to Lawsuit challenging GitHub Copilot by px05j
I for one really want copilot. Hope this fails.
macORnvidia OP t1_iv8l6za wrote
Reply to comment by arhetorical in bought legion 7i: Intel i9 12th gen, rtx 3080 ti 16 gb vram, 32 GB ddr5. need some confirmation bias (or opposite) to understand if I made the right decision by macORnvidia
Still under that 15-day return period. I wanted to get my hands on a machine instead of constantly wondering. What should I look out for over the next week to basically validate or discredit my decision?
Say it comes to returning it: I'd be down to buy a 32 GB laptop without a GPU, plus a desktop GPU that I can plug and play and use accordingly.