Recent comments in /f/deeplearning

macORnvidia OP t1_iv8l6za wrote

Still under that 15-day return period. I wanted to get my hands on a machine instead of constantly wondering. What should I look out for over the next week to validate or invalidate my decision?

If it does come to returning it, I'd be open to buying a 32 GB laptop without a GPU, plus a desktop GPU that I can plug in and use as needed.
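
One practical way to validate the machine during the return window is to time a few representative training steps on the GPU. A minimal PyTorch sketch; the model, batch size, and step counts here are arbitrary placeholders, not a real benchmark:

```python
import time
import torch

# Assumes a GPU-enabled PyTorch build; falls back to CPU otherwise.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on: {device}")

# Toy model and batch; swap in something closer to your real workload.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(), torch.nn.Linear(1024, 10)
).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(256, 1024, device=device)
y = torch.randint(0, 10, (256,), device=device)

def step():
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()

# Warm up first so one-time initialization doesn't skew the timing.
for _ in range(5):
    step()
if device == "cuda":
    torch.cuda.synchronize()

start = time.time()
for _ in range(50):
    step()
if device == "cuda":
    torch.cuda.synchronize()
print(f"{(time.time() - start) / 50 * 1000:.1f} ms per step")
```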

1

arhetorical t1_iv8fays wrote

You already got the advice not to buy a laptop for deep learning. But if you're determined and understand that it's not a great idea to begin with, then any laptop with a compatible GPU is fine. You're prototyping on it, not actually training. If you like the one you got, just stick with it.

3

Hamster729 t1_iv7swx8 wrote

Absolutely. In fact, you typically get more DL performance per dollar with AMD GPUs than with NVIDIA.

However, there are caveats:

  1. The primary target scenario for ROCm is Linux + docker container + gfx9 server SKUs (Radeon Instinct MIxxx). The further you move from that optimal target, the more uncertain things become. You can install the whole stack directly into your Ubuntu system, or, if you really want to waste a lot of time, compile everything from source, but it is best to install just the kernel-mode driver and then do "docker run --privileged" to pull a complete container image with every package already in place (see the sketch after this list for a quick way to verify the result). I am not sure what the situation is with Windows support. Support for consumer-grade GPUs usually comes with some delay; e.g., Navi 21 support was only "officially" added last winter, and the new chips announced last week may not be officially supported for months after they hit the shelves.
  2. You occasionally run into third-party packages that expect CUDA and only CUDA. I just had to go through the process of hacking pytorch3d (the visualization package from FB) because it had issues with ROCm.
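
Once the kernel driver and container are in place, here is a quick check that PyTorch actually sees the ROCm backend. ROCm builds of PyTorch still expose devices under the "cuda" name; `torch.version.hip` is a version string on ROCm builds and `None` on CUDA builds:

```python
import torch

# On ROCm builds of PyTorch, torch.version.hip is a version string;
# on CUDA builds it is None. ROCm still exposes devices as "cuda".
print("HIP version:", torch.version.hip)
print("GPU visible:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # Tiny matmul to confirm kernels actually launch on the GPU.
    a = torch.randn(512, 512, device="cuda")
    b = torch.randn(512, 512, device="cuda")
    print("Matmul OK:", (a @ b).shape)
```
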
1

kmanchel t1_iv7dix3 wrote

ROCm is a much less mature deep learning stack than what Nvidia has (by at least 5 years). However, your choice depends on your scope of usage and on whether you're willing to trade off usability for cost (I'm assuming AMD hardware is significantly cheaper).

3

xyrlor OP t1_iv5axmk wrote

Reply to comment by fjodpod in Are AMD GPUs an option? by xyrlor

Thanks! I’m currently running a 3070, but I'm having some errors unrelated to deep learning, so I'm looking around for options while I send my card in for repairs. Since new GPUs have just been announced by both Nvidia and AMD, I was curious about perspectives on both gaming and deep learning for side projects.

3

fjodpod t1_iv52eke wrote

Is it possible?

Yes, but you probably need newer Linux distros and some basic Linux knowledge.

Do I do it myself?

Yes, in PyTorch with a 6600, but it was a bit annoying to set up, with some errors. Now it just works (I haven't benchmarked it yet).

Do I recommend it for the average user?

No. You should only do it if you suddenly want to do machine learning but you're stuck with an AMD card.

If you haven't bought a GPU yet and you're considering machine learning, avoid the setup hassle and just pay a bit more for an Nvidia GPU. The 3060 12GB is a good-value graphics card for machine learning.
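
For what it's worth, the setup hassle on consumer RDNA2 cards like the 6600 usually comes down to ROCm not officially listing the chip. A widely reported workaround (an assumption on my part, not something the commenter confirmed) is overriding the reported architecture via `HSA_OVERRIDE_GFX_VERSION` before PyTorch loads:

```python
import os

# Widely-reported workaround for RDNA2 consumer cards (e.g. RX 6600):
# make ROCm treat the chip as gfx1030. Must be set before torch
# initializes the HIP runtime, hence before the import below.
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

import torch

print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    # A trivial op to confirm kernels actually run on the card.
    x = torch.randn(4, 4, device="cuda")
    print("Simple op OK:", (x + x).sum().item())
```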

14

Competitive-Good4690 OP t1_iv4fdo1 wrote

Reply to comment by sgjoesg in U-Net architecture by Competitive-Good4690

Yes, thank you. I'm referring to Connor Shorten's video on U-Net (Keras); he's saying exactly what you just said. Thank you for the response, really appreciated. Abhishek Thakur is using PyTorch, but he did explain the concept well.

1

sgjoesg t1_iv4ezuu wrote

As far as I know, if you create your own class inheriting from tf.keras.Model, then that class can use the .fit function: e.g. define class UNet(tf.keras.Model) with the model definition as per your architecture, then model = UNet() and model.fit(data). (See the sketch below.)

So you have control over your own model and can use Keras's easy training loop as well.
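
A minimal runnable sketch of that pattern, assuming TF 2.x; the layers and dummy data are placeholders, not a real U-Net:

```python
import tensorflow as tf

# Subclassing tf.keras.Model keeps full control over the forward pass
# while still allowing compile()/fit().
class UNet(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # Placeholder layers; a real U-Net would have an encoder-decoder
        # with skip connections here.
        self.down = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")
        self.up = tf.keras.layers.Conv2DTranspose(1, 3, padding="same")

    def call(self, inputs):
        return self.up(self.down(inputs))

model = UNet()
model.compile(optimizer="adam", loss="mse")

# Dummy data just to show that the built-in training loop runs.
x = tf.random.normal((8, 64, 64, 1))
model.fit(x, x, epochs=1)
```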

2

sgjoesg t1_iv42h3k wrote

You can watch Abhishek Thakur's video on the implementation of the U-Net architecture; he explains it very well, step by step. Edit: I didn't see that you wanted TF 2.0. If I find some resources, I'll forward them to you.

2

suflaj t1_iv1ftad wrote

Well, based on the complaint, they probably have a case. However, the solution to the problem may not really be feasible, since it would imply that Copilot also generates a disclaimer based on all the licenses used; if a user then deletes that disclaimer, they would be breaking the license.

Now, given that this may affect something like 100k repositories, that disclaimer file would have to be megabytes in size.

1