Recent comments in /f/deeplearning
suflaj t1_jadb19p wrote
Reply to comment by catesnake in Elon plans to build an OpenAI competitor by pyactee
That's simply not true. Musk is not an open-source or public-domain advocate. He is a profit-driven entrepreneur.
His current ambition to create an OpenAI alternative is not about open source or the public domain; it's about countering the leftist agenda (coincidentally also racist and sexist) that OpenAI has been pushing with its latest products. He has proclaimed multiple times in the past that he is politically neutral and antipolitical/anti-establishment, and he has expressed views that could be understood as conservative.
Obviously, he counts on it being profitable, as OpenAI has demonstrated. The only question is: how does he do this without entering a conflict of interest with Tesla, which is mostly an AI company itself?
It's not much different from the reason he gave for buying Twitter (other than being forced to), which was to counter Twitter's control of the narrative, which he saw as stifling conservative views and promoting liberal ones.
catesnake t1_jad9mxm wrote
Reply to comment by suflaj in Elon plans to build an OpenAI competitor by pyactee
I think he left when it became clear that other members of OpenAI weren't interested in the "Open" part of it.
bigfoot1144 t1_jad8fw8 wrote
OS doesn't matter for the most part. I would say Windows has slightly more going for it, specifically because you can access Windows-only applications on Windows, and you can access all of Linux on Windows via Docker containers and WSL. You can't go wrong with either; however, setting all that stuff up on Windows is a task and a half if you don't know what you're doing.
incrediblediy t1_jacum67 wrote
I'm in a similar position to you, having just passed the first year of my PhD. I am using Win10 at home (RTX3090 + RTX3060) and Linux GPU servers at uni (command line only). At the end of the day, it really doesn't matter, as I am using Python and other libraries which are cross-platform. I am keeping the conda environments on both systems similar, though.
Apprehensive_Air8919 OP t1_jacst55 wrote
Reply to comment by trajo123 in Why does my validation loss suddenly fall dramatically while my training loss does not? by Apprehensive_Air8919
omg... I think I found the bug. I had used the depth estimation image as input for the model in the validation loop...
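For anyone finding this thread later, here's a hypothetical minimal sketch of that bug and the fix (the loader, model, and loss below are invented toy stand-ins, not the OP's actual code):

```python
# Hypothetical minimal validation loop. Each batch is (image, depth_target);
# the model must see `image`, never `depth_target`.
def validate(model, loader, loss_fn):
    total = 0.0
    for image, depth_target in loader:
        pred = model(image)            # correct: the input image only
        # pred = model(depth_target)   # the bug: target fed as model input
        total += loss_fn(pred, depth_target)
    return total / len(loader)

# Toy stand-ins so the sketch runs: identity "model", absolute-error "loss".
loader = [(1.0, 2.0), (3.0, 3.0)]
avg = validate(lambda x: x, loader, lambda p, t: abs(p - t))  # → 0.5
```

Feeding the target to the model makes validation loss look miraculously low, which matches the sudden drop in the plot.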
jnfinity OP t1_jacno4o wrote
Reply to comment by CKtalon in Dual RTX3090 vs single 4090 for deep learning by jnfinity
I do run into VRAM constraints from time to time though... but I was thinking an A6000 Ada was a bit overkill.
Apprehensive_Air8919 OP t1_jackmpu wrote
Reply to comment by trajo123 in Why does my validation loss suddenly fall dramatically while my training loss does not? by Apprehensive_Air8919
I just did a run with test_size being 0.5. The same thing happened. Wtf is going on :/
ZaZaMood t1_jacingn wrote
Where are you going to find two 3090 cards brand new?
Here is my post with some answers:
https://www.reddit.com/r/MachineLearning/comments/xiwc12/buy_rtx_3090x2_or_single_4090_d/
I just couldn't find two 3090s brand new for the price of one 4090.
CKtalon t1_jacill4 wrote
Since you are working on smaller experiments, single 4090. NVLink is overhyped
sEi_ t1_jacieip wrote
Reply to Elon plans to build an OpenAI competitor by pyactee
Learn some journalistic ethics before relying on 'clickbait' titles, regurgitating stolen text, and letting Chad hallucinate on top.
101: references for ALL facts are a no-brainer.
Do you want to be trustworthy, or just another low-effort AI blog for the sake of having an AI blog?
Now that we've got some nice tools, use them right. ffs
[deleted] t1_jaceg0l wrote
Reply to comment by immo_92_ in Linux vs Windows for Computer Vision by No_Difference9752
[removed]
immo_92_ t1_jacef3v wrote
I'd suggest you start with Linux (it's totally up to you whether you install it dual-boot or standalone). Most computer vision libraries are based on Linux, and you can find more support for it.
DJStillAlive t1_jac8pvj wrote
Reply to Elon plans to build an OpenAI competitor by pyactee
Instead of repeated spamming of your "blog" with its low effort posts, perhaps you could simply point to the Reuters article that you plagiarized (regurgitated), or even better, the original article that is referenced in that one.
suflaj t1_jac0rfd wrote
Reply to comment by MyHomeworkAteMyDog in Elon plans to build an OpenAI competitor by pyactee
Yes, but he left because it was a conflict of interest for Tesla (the article also says so). I would assume he intends to create a Tesla subsidiary now, or get rid of his Tesla stock altogether (which would likely kill the company at this point, so it's unlikely).
BabbleGlibGlob t1_jabytek wrote
Reply to Elon plans to build an OpenAI competitor by pyactee
Well, that guy is always talking about how much he wants to do stuff, who tf cares? With his money I'd also like to get into anything people talk about, I guess.
MyHomeworkAteMyDog t1_jabv5i0 wrote
Reply to Elon plans to build an OpenAI competitor by pyactee
Wasn’t he part of starting openAI?
nativedutch t1_jabq0m7 wrote
Reply to Elon plans to build an OpenAI competitor by pyactee
Oh fuck, a fascist AI. That's fearsome.
EthanSayfo t1_jablw8v wrote
Reply to Elon plans to build an OpenAI competitor by pyactee
Like we need our AIs to be any more douchey and racist, FFS.
He's probably going to train it exclusively on Dilbert comics.
Electronic-Clerk868 OP t1_ja9uexo wrote
Reply to comment by trajo123 in CNN in R code for Parkinson Disease with MRI by Electronic-Clerk868
classification, between subjects with Parkinson Disease/Control
trajo123 t1_ja9aghn wrote
Reply to comment by Apprehensive_Air8919 in Why does my validation loss suddenly fall dramatically while my training loss does not? by Apprehensive_Air8919
Very strange.
Are you sure your dataset is shuffled before the split? Have you tried different random seeds, different split ratios?
Or maybe there's a bug in how you calculate the loss, but that should affect the training set as well...
So my best guess is you either don't have your data shuffled and the validation samples are "easier", or maybe it's something more trivial, like a bug in the plotting code. Or maybe that's the point where your model becomes self-aware :)
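To illustrate the shuffling point: if the split happens before shuffling, the "easy" samples can all end up in the validation set. A minimal stdlib sketch of a shuffled split (roughly what sklearn's train_test_split does with shuffle=True, which is its default):

```python
import random

def shuffled_split(samples, test_size=0.2, seed=42):
    """Shuffle before splitting, so 'easy' samples can't cluster in one split.
    Roughly what sklearn's train_test_split(shuffle=True) does."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)  # seeded, so the split is reproducible
    n_test = int(len(samples) * test_size)
    return samples[n_test:], samples[:n_test]  # (train, validation)

train, val = shuffled_split(range(100), test_size=0.2, seed=0)
```

Trying a few different seeds is a quick sanity check: if the anomaly moves or disappears with the seed, the split is the culprit.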
Apprehensive_Air8919 OP t1_ja96vdu wrote
Reply to comment by trajo123 in Why does my validation loss suddenly fall dramatically while my training loss does not? by Apprehensive_Air8919
nn.MSELoss(). I used sklearn's train_test_split() with test_size = 0.2. The behavior is consistent across every split I've tried. The weird thing is that it only happens when I run with a very low lr.
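For reference, MSE is just the mean of the squared residuals; a stdlib sketch of what nn.MSELoss() computes with its default reduction='mean' (a toy illustration, not PyTorch's implementation):

```python
def mse_loss(preds, targets):
    """Mean squared error, elementwise, matching nn.MSELoss(reduction='mean')."""
    assert len(preds) == len(targets)
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

loss = mse_loss([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # (0 + 0 + 4) / 3
```

Because the residuals are squared, a handful of well-predicted validation samples can drag the average down sharply, which is worth keeping in mind when a validation curve drops suddenly.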
Apprehensive_Air8919 OP t1_ja94rat wrote
Reply to comment by alam-ai in Why does my validation loss suddenly fall dramatically while my training loss does not? by Apprehensive_Air8919
Good analogy! Yes, I use model.eval(), so dropout is disabled when doing the forward pass on the validation set.
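For anyone hitting this thread later: in eval mode dropout becomes a no-op, which is why forgetting model.eval() inflates validation loss. A toy stdlib sketch of the idea (not PyTorch's actual Dropout implementation):

```python
import random

class Dropout:
    """Toy dropout layer illustrating why model.eval() matters:
    training mode randomly zeroes units (rescaling the survivors),
    eval mode passes the input through unchanged."""
    def __init__(self, p=0.1):
        self.p = p
        self.training = True

    def eval(self):
        self.training = False

    def __call__(self, xs):
        if not self.training:
            return list(xs)  # identity at inference time
        keep = 1.0 - self.p
        # zero each unit with probability p, scale the rest by 1/keep
        return [x / keep if random.random() >= self.p else 0.0 for x in xs]

drop = Dropout(p=0.1)
drop.eval()
assert drop([1.0, 2.0, 3.0]) == [1.0, 2.0, 3.0]  # deterministic in eval mode
```

In eval mode the output is deterministic and unscaled, so the validation loss reflects the full network rather than a randomly thinned one.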
Apprehensive_Air8919 OP t1_ja8mxcj wrote
Reply to comment by Oceanboi in Why does my validation loss suddenly fall dramatically while my training loss does not? by Apprehensive_Air8919
The size of the testing data is 20%, and dropout is 10%. I am not sure how I could be leaking information
usesbinkvideo t1_ja8j299 wrote
Reply to comment by JJ_00ne in How would you approach this task? by JJ_00ne
I think you're right, not super helpful, sorry :(
dipd123 t1_jadt0ax wrote
Reply to Linux vs Windows for Computer Vision by No_Difference9752
My suggestion is to use Linux. We are all already comfortable with Windows anyway.