Recent comments in /f/deeplearning

errorr_unknown OP t1_j77mfxr wrote

We tried to convert the COCO format to YOLO, and also tried YOLACT. We came across some really helpful tools, but almost everything needed payment. We separated the fish species we want into the CSV file.
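
For reference, the conversion step itself can be scripted in a few lines. Here's a minimal sketch, assuming the standard COCO instances JSON layout; the file paths are placeholders and the class mapping should come from your own species list:

```python
import json
from pathlib import Path

# Minimal COCO -> YOLO bounding-box conversion (paths are placeholders).
# COCO boxes are [x_min, y_min, width, height] in pixels;
# YOLO labels are "class x_center y_center width height", normalized to [0, 1].
coco = json.loads(Path("annotations/instances_train.json").read_text())

images = {img["id"]: img for img in coco["images"]}
# Map COCO category ids to contiguous YOLO class indices.
class_index = {c["id"]: i for i, c in enumerate(sorted(coco["categories"], key=lambda c: c["id"]))}

labels_dir = Path("labels")
labels_dir.mkdir(exist_ok=True)

for ann in coco["annotations"]:
    img = images[ann["image_id"]]
    w, h = img["width"], img["height"]
    x, y, bw, bh = ann["bbox"]
    row = (
        f"{class_index[ann['category_id']]} "
        f"{(x + bw / 2) / w:.6f} {(y + bh / 2) / h:.6f} {bw / w:.6f} {bh / h:.6f}\n"
    )
    # One .txt label file per image, with the same stem as the image file.
    with (labels_dir / (Path(img["file_name"]).stem + ".txt")).open("a") as f:
        f.write(row)
```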

−2

errorr_unknown OP t1_j77kvsd wrote

I guarantee you there is nothing fishy about this. It's just a robotics contest with a bunch of tasks and points for each one. We happened to finish almost every task but this one because, as I mentioned, we have no prior experience in machine learning or dealing with models/datasets. We are in the juniors team, which explains the lack of experience. Not to mention that we are in a third world country.

−8

sloikalamos t1_j77jfl8 wrote

This sounds fishy tbh. First of all, I'm not sure why you were assigned a task you have no experience with. If this is an exam/school task, there's no way you haven't had any lesson related to it. Besides, multiple students are working on it? At least one of you should have some idea of how to approach it. If this is a work test, you either overcommitted or claimed you could do something that you couldn't.

4

errorr_unknown OP t1_j77h8bh wrote

I have dealt with computer vision tasks before and used OpenCV!! But this is different: it's a pre-annotated dataset for training a model to estimate length. It's not our specialty and the time is short; we just need to get to the core of it. We've already done the searching, so could you give more clarification?

−7

jnfinity t1_j77g6g6 wrote

I think this isn't too complex to tackle. I suggest, though, that you try it yourself; trust me, it will be fun.

Be analytical. Try dissecting the problem into smaller tasks. And I'll give you one hint: since you're evaluating images, I suggest you read up on computer vision. Maybe PyTorch Vision or OpenCV might be your friends here ;)
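
To make that hint concrete, here is a minimal sketch of the kind of starting point I mean: OpenCV to load an image and an off-the-shelf torchvision detector to get boxes out. The image path is a placeholder, and the COCO-pretrained classes won't know specific fish species; it only shows the plumbing:

```python
import cv2
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

# Load an image with OpenCV (BGR) and convert to the RGB float tensor torchvision expects.
# "fish.jpg" is a placeholder path.
img_bgr = cv2.imread("fish.jpg")
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
tensor = torch.from_numpy(img_rgb).permute(2, 0, 1).float() / 255.0

# An off-the-shelf COCO-pretrained detector, just to get boxes/scores out quickly.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

with torch.no_grad():
    prediction = model([tensor])[0]

print(prediction["boxes"], prediction["labels"], prediction["scores"])
```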

2

errorr_unknown OP t1_j77fph2 wrote

I wouldn't have asked here if I hadn't tried myself. ML is not our specialty. Also no, it's not considered cheating, as I'm willing to build the system myself; asking on Reddit is no different than searching on Google.

−9

Appropriate_Ant_4629 t1_j75sc61 wrote

Note that some models are extremely RAM intensive, while others aren't.

A common issue you may run into is an error like `RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 6.13 GiB already allocated; 0 bytes free; 6.73 GiB reserved in total by PyTorch)`, and it can be pretty tricky to refactor models to work with less RAM than they expect (see examples in that link).
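
For what it's worth, the usual memory-saving levers in PyTorch look roughly like this. This is only an illustrative sketch with a stand-in model and random data (not the refactoring from that link): smaller batches, gradient accumulation, and mixed precision.

```python
import torch

# Common ways to trade compute/precision for GPU memory (stand-in model and data).
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(1024, 10).to(device)
optimizer = torch.optim.Adam(model.parameters())
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(256, 1024), torch.randint(0, 10, (256,))),
    batch_size=8,  # 1) smaller batches mean less activation memory
)

accumulation_steps = 4  # 2) gradient accumulation keeps the effective batch size larger
for step, (x, y) in enumerate(loader):
    x, y = x.to(device), y.to(device)
    # 3) mixed precision roughly halves activation memory on CUDA
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = torch.nn.functional.cross_entropy(model(x), y) / accumulation_steps
    scaler.scale(loss).backward()
    if (step + 1) % accumulation_steps == 0:
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad(set_to_none=True)  # 4) free gradient memory between updates
```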

1

Open-Dragonfly6825 OP t1_j74oes8 wrote

I guess the suitability of the acceleration devices changes depending on your specific context of development and/or application. Deep learning is such a broad field with so many applications that it seems reasonable that different applications benefit more from different accelerators.

Thank you for your comment.

2

Open-Dragonfly6825 OP t1_j74ntpw wrote

Hey, maybe it's true that I know a fair amount about acceleration devices. But, until you mentioned it, I had actually forgotten about backpropagation, which is something basic for deep learning. (Or, rather than forgotten, I just hadn't thought about it.)

Now that you mention it, it makes so much sense why FPGAs might be better suited, but only for inference.

1

alex_bababu t1_j73ofte wrote

You probably know much more than me. My thought was that for inference you don't need the compute power for backpropagation. The model is fixed and you can find an efficient way to program an FPGA to run it.

Basically like an ASIC. And also more energy efficient.

You could map the model onto the FPGA in such a way that you would not need to store intermediate results in memory.
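
In software terms it's the same distinction PyTorch makes between training and inference; here's a tiny sketch of the idea (dummy model and input), where inference mode skips exactly the bookkeeping a fixed feed-forward pipeline on an FPGA could drop:

```python
import torch
import torchvision

# A fixed model run purely for inference: no autograd graph is built,
# so no intermediate activations are kept around for a backward pass.
model = torchvision.models.resnet18(weights=None).eval()  # dummy, untrained weights
x = torch.randn(1, 3, 224, 224)                           # dummy input image

with torch.inference_mode():
    y = model(x)

print(y.shape)  # torch.Size([1, 1000])
```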

2

BrotherAmazing t1_j73k2x8 wrote

It's very specific to what you are doing. GPUs are hands-down superior for the kind of R&D and early offline prototyping I do, once you consider all the practical business aspects: efficiency, cost, flexibility, and practicality, given our business's and staff's pedigree and history.

2

AzureNostalgia t1_j732f33 wrote

The claim that FPGAs have better power efficiency than GPUs is a relic of the past. In the real world and in industry (and not in scientific papers written by PhDs), GPUs achieve way higher performance. The simple reason is that FPGAs as devices are way behind in architecture, compute capacity, and capabilities.

A very simple way to see my point is this. Check one of the largest FPGAs from Xilinx, the Alveo U280 (https://www.xilinx.com/products/boards-and-kits/alveo/u280.html#specifications). It can theoretically achieve up to 24.5 INT8 TOPS of AI performance, and it's a 225W card. Now check an embedded GPU of a similar architecture (in nm), the AGX Xavier (https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/). Check the specs at the bottom: up to 22 TOPS in a 30W device. That's why FPGAs are obsolete. I have countless examples like that, but you get the idea.
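
Putting those two spec sheets side by side as performance per watt makes the gap explicit (simple arithmetic on the numbers quoted above):

```python
# Performance per watt from the quoted spec-sheet numbers.
alveo_u280 = 24.5 / 225  # FPGA card: INT8 TOPS per watt
agx_xavier = 22.0 / 30   # embedded GPU: INT8 TOPS per watt

print(f"Alveo U280: {alveo_u280:.2f} TOPS/W")             # ~0.11 TOPS/W
print(f"AGX Xavier: {agx_xavier:.2f} TOPS/W")             # ~0.73 TOPS/W
print(f"GPU advantage: ~{agx_xavier / alveo_u280:.1f}x")  # roughly 6.7x per watt
```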

2