Recent comments in /f/deeplearning

computing_professor t1_iynllw4 wrote

I'm far from an expert, but remember that the 4090s are powerful but won't pool memory. I'm actually looking into a lighter setup than yours, with either an A6000 or, more likely, 2x 3090s with NVLink so I can get access to 48GB of VRAM. While the 4090 is much faster, you won't have access to as much VRAM. But if you can make do with 24GB and/or can parallelize your model, 2x 4090s would be awesome.

edit: Just re-read your post and I see I missed that you mentioned parallelizing already. Still, if you can manage it, 2x 4090 seems incredibly fast. I would do that if it were me, but I don't care much about computer vision.

4

trajo123 t1_iymuivu wrote

To answer your question concretely: in classification you want your model output to reflect a probability distribution over the classes. If you have only 2 classes, this can be achieved with 1 output unit producing values ranging from 0 to 1. If you have more than 2 classes, then you need 1 unit per class, so that each one produces a value in the (0, 1) range and the sum over all units adds up to 1, as required for a probability distribution. In the case of 1 output unit, the sigmoid function ensures that the output lies in (0, 1); in the case of multiple output units, softmax ensures the conditions mentioned above. Now, in practice, classification models don't use an explicit activation function after the last layer; instead, the loss incorporates the appropriate activation, for efficiency and numerical stability reasons. So in the case of binary classification you have two equivalent options:

  • use 1 output unit with torch.nn.BCEWithLogitsLoss

>This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.

  • use 2 output units with torch.nn.CrossEntropyLoss

>This criterion computes the cross entropy loss between input logits and target

Both of these approaches are mathematically equivalent and should produce the same results up to numerical considerations. If you get wildly different predictions, it means you did something wrong.
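You can verify the equivalence numerically; here's a minimal sketch, where the logits and targets are made up for illustration. A 1-unit logit z corresponds to the 2-unit logits [0, z], since softmax([0, z])[1] == sigmoid(z):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical 1-unit logits for a batch of 4 samples, plus binary targets.
logits_1unit = torch.randn(4)
targets = torch.tensor([0.0, 1.0, 1.0, 0.0])

# Option 1: 1 output unit + BCEWithLogitsLoss (functional form).
bce = F.binary_cross_entropy_with_logits(logits_1unit, targets)

# Option 2: 2 output units + CrossEntropyLoss, using logits [0, z] per sample.
logits_2unit = torch.stack([torch.zeros_like(logits_1unit), logits_1unit], dim=1)
ce = F.cross_entropy(logits_2unit, targets.long())

print(bce.item(), ce.item())  # equal up to floating-point error
```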

On another note, using accuracy when looking at credit card fraud detection is not a good idea, because the dataset is most likely highly unbalanced. Probably more than 99% of the data samples are labelled as "not fraud". In this case, a stupid model that always outputs "not fraud" regardless of input will already give you 99% accuracy. You may want to look into metrics for unbalanced datasets, e.g. F1 score, false positive rate, false negative rate, etc.
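For instance, with scikit-learn (hypothetical numbers, just to show the calls):

```python
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

# Hypothetical imbalanced data: 1 = fraud, and a model that always says "not fraud".
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

print(accuracy_score(y_true, y_pred))             # 0.95 -- looks great, means nothing
print(f1_score(y_true, y_pred, zero_division=0))  # 0.0  -- exposes the useless model

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(fp / (fp + tn))  # false positive rate: 0.0
print(fn / (fn + tp))  # false negative rate: 1.0 (every fraud is missed)
```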

Have fun on your (deep) learning journey!

2

normie1990 t1_iyma2hh wrote

>Be that as it may, using PyTorch itself, NVLink gets you less than 5% gains. Obviously not worth it compared to the 30-90% gains from a 4090.

Thanks, I think I have my answer.

Obviously I'm new to ML and didn't understand everything you tried to explain (which I appreciate). I do know this much: I will be freezing layers when fine-tuning, so from your earlier comment I guess I won't need more than 24GB.

1

suflaj t1_iym94jr wrote

> I probably should have specified that I'll do fine tuning, not training from scratch, if that makes any difference.

Unless you're freezing layers, it doesn't.
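For reference, freezing just means turning off gradients for the pretrained part. A minimal sketch with a torchvision ResNet, where the model and head size are arbitrary choices:

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V2")  # any pretrained backbone works

# Freeze the whole backbone...
for param in model.parameters():
    param.requires_grad = False

# ...then replace the head; new parameters require gradients by default.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only pass the trainable parameters to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```

With the backbone frozen, no gradients or optimizer state are kept for it, which is why the memory requirement drops so much.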

> I know it's a software feature, AFAIK pytorch supports it, right?

No. PyTorch supports data parallelism out of the box. To get pooling in its full meaning, you need model parallelism, for which you'd have to write your own multi-GPU layers and a load-balancing heuristic.
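To illustrate the distinction, data parallelism in PyTorch is essentially a one-liner; a minimal sketch, noting that each GPU still holds a full model copy, so nothing is pooled:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Data parallelism: replicate the whole model on every GPU and split the batch.
# Memory is NOT pooled -- each GPU must fit the entire model.
# (nn.DataParallel is the simple single-process variant; DistributedDataParallel
# is what you'd use for serious training.)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to("cuda")

x = torch.randn(64, 512, device="cuda")
out = model(x)  # the 64-sample batch is sharded across the available GPUs
```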

Be that as it may, using PyTorch itself, NVLink gets you less than 5% gains. Obviously not worth it compared to the 30-90% gains from a 4090. You need stuff like Apex to see visible improvements, but those do not compare to generational leaps, nor do they parallelize the model (you still have to do that yourself). Apex's data parallelism is similar to PyTorch's anyway.

Once you parallelize your model, however, you're bound to be bottlenecked by bandwidth. This is the reason it's not done more often: it makes sense only if the model itself is very large, yet its gradients fit in pooled memory. NVLink provides at most 300 GB/s of bandwidth, amounting to roughly 30% performance gains in bandwidth-bottlenecked tasks in the best case.

1

suflaj t1_iym8sa5 wrote

NVLink itself does not pool memory. It just increases bandwidth. Memory pools are a software feature, partially made easier by NVLink.

> Could you elaborate?

Those models are trained with batch sizes that are too large to fit on any commercial GPU, meaning you will have to accumulate gradients either way.
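Gradient accumulation itself is simple; a minimal sketch with stand-in data, where accum_steps, the model, and the shapes are arbitrary:

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(128, 2).to(device)  # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

accum_steps = 8  # effective batch size = micro-batch size * accum_steps
optimizer.zero_grad()
for step in range(32):
    # A micro-batch small enough to fit in VRAM (random stand-in data).
    x = torch.randn(16, 128, device=device)
    y = torch.randint(0, 2, (16,), device=device)

    loss = loss_fn(model(x), y) / accum_steps  # scale so gradients average correctly
    loss.backward()                            # grads accumulate in param.grad

    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```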

1