Recent comments in /f/deeplearning

Final-Rush759 t1_izx45t2 wrote

A lot of Stanford classes are free on YouTube, and they are probably among the best. Andrew Ng's Coursera classes are adapted from his Stanford class. Most of the math is not difficult: linear algebra, calculus, and some statistics, like maximum likelihood estimation. The math can get more difficult if you want to study certain branches of deep learning. The goal is to approximate functions, which deep learning does by stacking simple units into the multiple layers of a deep network.
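
For illustration, here's a minimal PyTorch sketch of that idea (the layer sizes are arbitrary): each simple unit is an affine map followed by a nonlinearity, and stacking them gives a deep network that approximates a function.

```python
import torch
import torch.nn as nn

# Each "simple unit" is an affine map plus a nonlinearity;
# stacking several of them yields a deep network.
net = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),   # layer 1
    nn.Linear(64, 64), nn.ReLU(),  # layer 2
    nn.Linear(64, 1),              # output layer
)

x = torch.linspace(-3, 3, 100).unsqueeze(1)  # 100 sample inputs
y_hat = net(x)  # the network's current approximation of the target function
```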

1

MightyDuck35 OP t1_izwz8tj wrote

I've heard good things about Andrew Ng's course. I'll definitely check it out!

I started with FastAI because the instructor said they'd go over the maths in the course, and like I said, that was something I was scared of. There's also Khan Academy, with free courses for maths, which is pretty cool :D

I want to get good at it, not just copy-paste things and hope it works.

0

UndecidedBoy t1_izww10i wrote

The University of Tübingen has really great courses for DL and math for DL (the math essentials were posted recently) on YouTube. They cover the theory in much more depth than other courses I've seen online.

1

91o291o t1_izwdjgr wrote

There's no way you can understand DL unless you're proficient with some basic linear algebra (matrix multiplication, rank of a matrix, norms, etc.). You don't need to be good at math, but you really do need to understand some concepts.
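
For example, the kind of basics I mean, sketched in NumPy (the values are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, -1.0])

print(A @ x)                     # matrix-vector product: what a linear layer computes
print(np.linalg.matrix_rank(A))  # rank: number of independent directions A preserves
print(np.linalg.norm(x))         # Euclidean norm: shows up in regularization and loss terms
```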

If you don't understand the math, you won't improve; you'll just be "imitating" people who do understand those concepts. At best, you'll only be delaying your complete failure.

2

trajo123 t1_izwd9xi wrote

The Coursera Deep Learning specialization is great. It starts with the basics, including a gentle introduction to the intuition behind the maths, then goes on to cover many important application areas. If you like a more structured approach (e.g. assignments, quizzes), then this is for you. It's quite a lot of work, but it will get you from completely clueless to comfortable with most of the concepts and ready to explore the field on your own.

I found the FastAI course too light on details, and the Jupyter Notebook-based deep learning framework they built abstracts too many details away... and it's yet another framework to learn, one that isn't widely used in practice.

3

SimplePotentials t1_izw9ht8 wrote

It depends on how in-depth you'd like to go. The intro to PyTorch course on Udacity is free and a great way to learn how to start coding deep learning projects.

For a deeper understanding, it's probably best to start with the basics of linear algebra (I'd recommend 3Blue1Brown on YouTube) and build an intuition for what happens to vectors visually when a transformation is applied.
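
As a tiny illustration of that idea, here's a NumPy sketch of a transformation (an arbitrary 45-degree rotation) acting on a vector:

```python
import numpy as np

theta = np.pi / 4  # rotate by 45 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])
print(R @ v)  # the rotated vector: approximately [0.7071, 0.7071]
```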

1

IshanDandekar t1_izvpp8c wrote

I say start with the FastAI course; it's great. Start somewhere: there isn't a defined roadmap for deep learning, and everybody's learning journey is different. Also, start exploring the domains of deep learning problems, like computer vision and natural language processing. Find out what interests you the most and learn more about it.

2

Blasket_Basket t1_izv8icg wrote

I see a lot of people mentioning needing a GPU for DL, but it seems no one has clarified yet that you only need one for training.

If you're looking for the standard use case of training a model, saving it, and then productionizing that model by exposing an API for model inference only, then you only need a GPU for the training phase. For inference, you do not need a GPU. AWS rents specialized EC2 instances with fast CPUs optimized specifically for model inference.
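
As a rough PyTorch sketch of that workflow (the file name and architecture here are placeholders), a model trained on a GPU can be loaded and served on a CPU-only host like this:

```python
import torch
import torch.nn as nn

# Hypothetical architecture; stands in for whatever was trained on the GPU.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# map_location="cpu" loads GPU-trained weights onto a CPU-only inference host.
state_dict = torch.load("model.pt", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()  # disable dropout/batch-norm training behavior

with torch.no_grad():  # no gradient bookkeeping needed at inference time
    prediction = model(torch.randn(1, 16))
```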

Another major difference is that business requirements may preclude the use of deep learning in the solution. For instance, business areas like credit risk are regulated and require a level of model explainability that we can't provide with neural networks.

Others have already made great comments regarding tabular vs. unstructured data; nothing to add there.

One final area is the sheer volume of data needed for a DL solution vs. a "shallow" ML solution. You need orders of magnitude more data to successfully train most DL models than you do to get good performance with most other ML algorithms.

3

chengstark t1_iztu0do wrote

Sorry for being blunt, but wtf is "productization" in this context? What does the word include? This is way too broad a question; there are many nuances in ML/DL development, and too many variables change based on the specific use case.

Simple models can be served with just the trained model and some API calls; this is the same for DL and ML. Non-compute-intensive tasks don't even need GPUs/TPUs; most can even run on embedded hardware. However, they differ in the amount of data required for training. Data formats/types also matter: typical ML algorithms work better with tabular data, but you wouldn't use them for images. I mean, what kind of garbage question is this lol. You can write a whole book on this.
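
For what "just the trained model and some API calls" can look like, here's a minimal Flask sketch (the model path and payload format are made up):

```python
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical path to any trained sklearn-style model

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]       # e.g. a list of numbers
    prediction = model.predict([features]).tolist()  # runs on CPU; no GPU/TPU involved
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run()
```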

If I got asked this question, I'd ask back for a more concrete example; throwing out such a generalized question only indicates that the interviewer doesn't have the know-how in ML/DL operations.

2

suflaj t1_iztjolh wrote

Then that's strange. Unless you're using a similarly sized student model, there's no reason a no_grad teacher plus a student should be as resource-intensive as a teacher with backprop.

As a rule of thumb, you should be using several times less memory. How much less are you using for the same batch size in your case?
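
For reference, the pattern I have in mind looks roughly like this minimal PyTorch sketch (the model sizes and temperature are made up):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical models: the teacher is the larger one.
teacher = nn.Sequential(nn.Linear(32, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

batch = torch.randn(8, 32)

teacher.eval()
with torch.no_grad():               # teacher stores no activations for backprop
    teacher_logits = teacher(batch)

student_logits = student(batch)     # only the student builds a computation graph
T = 2.0                             # softening temperature
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=1),
    F.softmax(teacher_logits / T, dim=1),
    reduction="batchmean",
) * (T * T)
loss.backward()                     # gradients flow through the student only
```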

1