Recent comments in /f/MachineLearning
Necessary-Meringue-1 t1_je2gvh9 wrote
Reply to comment by moleeech in [D] Prediction time! Let's update those Bayesian priors! How long until human-level AGI? by LanchestersLaw
GPT-4 outperforms my aunt Carol on the bar exam, so AGI is here!
trajo123 t1_je2gie9 wrote
Reply to [N] OpenAI may have benchmarked GPT-4’s coding ability on its own training data by Balance-
How much of the code that devs write on a typical day is truly novel and not just a rehash / combination / adaptation of existing stuff?
He who has not copied code from Stack Overflow, let him cast the first insult at ChatGPT.
[deleted] OP t1_je2g6rr wrote
Reply to comment by Necessary-Meringue-1 in [D] I've got a Job offer but I'm scared by [deleted]
[deleted]
Necessary-Meringue-1 t1_je2g0qi wrote
Reply to comment by [deleted] in [D] I've got a Job offer but I'm scared by [deleted]
Data engineering is already a lot of applied ML. Unless this is a research role, you don't necessarily need a whole lot of in-depth ML background knowledge.
They know you don't have an ML background, so they already factored that in.
You don't necessarily need to understand the maths behind things to apply them. Go play around with scikit-learn and numpy/pandas; they're pretty user-friendly and give you a good baseline. TensorFlow is a bit rougher: it requires some understanding of how the model works internally. But it's all things you can learn on the job.
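For example, a first experiment can be as small as this sketch, using scikit-learn's bundled iris toy dataset (the model choice here is arbitrary):

```python
# Minimal scikit-learn workflow: load a toy dataset, split, fit, score.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")
```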
It sounds like this could be a good opportunity for you to get into the field and see if it suits you or not.
jrkirby t1_je2f63r wrote
Reply to comment by Thorusss in [N] OpenAI may have benchmarked GPT-4’s coding ability on its own training data by Balance-
Whether it's $2 million or $20 million, it's a lot more than $20 thousand. And it makes the main thesis more salient: the more money you've spent on training, the less willing you'll be to retrain the entire model from scratch just to run some benchmarks the "proper" way.
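A rough back-of-the-envelope shows why (every number here is an illustrative assumption, not a reported figure):

```python
# Back-of-the-envelope training-cost estimate.
# All inputs are illustrative assumptions, not reported figures.
gpus = 10_000           # assumed cluster size
hours = 24 * 90         # assumed ~3 months of wall-clock training
usd_per_gpu_hour = 2.0  # assumed cloud GPU price

training_cost = gpus * hours * usd_per_gpu_hour
print(f"~${training_cost:,.0f}")  # ~$43,200,000
```

At anything like that scale, rerunning training with the benchmark problems held out stops being a realistic ask.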
[deleted] OP t1_je2eram wrote
Reply to comment by Necessary-Meringue-1 in [D] I've got a Job offer but I'm scared by [deleted]
[deleted]
captglasspac t1_je2ekgh wrote
Reply to [D] I've got a Job offer but I'm scared by [deleted]
Are they offering you money? Do you need money to buy stuff? If you answered yes to both of those, then that's all the math you need.
korec1234 OP t1_je2dtze wrote
Reply to comment by Readorn in [P] nanoT5 - Inspired by Jonas Geiping's Cramming and Andrej Karpathy's nanoGPT, we fill the gap of a repository for pre-training T5-style "LLMs" under a limited budget in PyTorch by korec1234
Exactly, works great :)
AmbitiousTour t1_je2dmgx wrote
Reply to [D] I've got a Job offer but I'm scared by [deleted]
I would take the job and keep learning on your own: specifically TensorFlow, more Python, and the basics of linear algebra. Later, if you want to job-hop, learn PyTorch.
moleeech t1_je2ddzy wrote
Reply to [D] Prediction time! Let's update those Bayesian priors! How long until human-level AGI? by LanchestersLaw
Which human?
psythurism t1_je2ddja wrote
Long-time developer and AI hobbyist here. I don't think developers have to worry for at least another decade.
We survived the inventions of high-level languages, offshoring, no-code tools, and probably dozens of other "developer killers" I forgot to list. We've been trying to automate ourselves out of jobs for at least half a century; each time, the media takes note and reports the imminent death of the software industry, but so far, no luck.
LLMs right now might improve efficiency and reduce the need for some developers, just like the other inventions I've listed. Maybe an unforeseen invention will finally make developers obsolete, but that possibility has always existed. I can tell you with high confidence that LLMs are not that invention.
SlowThePath t1_je2d9oi wrote
Reply to comment by visarga in [D] FOMO on the rapid pace of LLMs by 00001746
Yeah, I don't see any startup being able to acquire the resources and time to catch up, let alone compete or surpass. Unless they come up with some very novel secret sauce, which seems extremely unlikely.
Necessary-Meringue-1 t1_je2cy5j wrote
Reply to [D] I've got a Job offer but I'm scared by [deleted]
If you've passed the tests and the interviews, you're qualified. If you passed all the interviews and are somehow not qualified, that's on them and not on you.
If you want this job, take it.
SlowThePath t1_je2cknq wrote
Reply to comment by deepneuralnetwork in [D] FOMO on the rapid pace of LLMs by 00001746
The thing about magic is that it is only magic in the beginning. Eventually it becomes commonplace and isn't "magic" anymore. Right now, though, it feels like magic to me too.
bjj_starter t1_je2ckb0 wrote
Reply to comment by muskoxnotverydirty in [N] OpenAI may have benchmarked GPT-4’s coding ability on its own training data by Balance-
>If nothing else, it would be nice for those who publish test results to show how much they knew whether test data was in the training data.
Yes, we need this and much more information about how it was actually built, what the architecture is, what the training data was, etc. They're not telling us because trade secrets, which sucks. "Open" AI.
Necessary-Meringue-1 t1_je2chw6 wrote
Reply to [D] Prediction time! Let's update those Bayesian priors! How long until human-level AGI? by LanchestersLaw
>Leave a comment on your pet definition for “human-level AGI” which is
>
>- testable
>- falsifiable
>- robust
I can't even give you a definition like that for "general human intelligence".
Obviously your timeline will also vary depending on your definition, so this needs to be two different discussions.
LLMs are at least relatively "general", as opposed to earlier approaches that were restricted to a specific task. So within the domain of language, we made some insane progress in the past 7 years. Whether that constitutes "intelligence" really depends on what you think that is, which nobody agrees on.
Unless someone can define "human general intelligence" and "artificial general intelligence" for me, the discussion of timelines just detracts from the actual progress and near-term implications of recent developments. That's my 2 cents.
AquaBadger t1_je2c68z wrote
Reply to comment by WarmSignificance1 in [N] OpenAI may have benchmarked GPT-4’s coding ability on its own training data by Balance-
To be fair, Google has gotten slower at surfacing useful information because of the mass of ads and bought results clogging up searches now. But yes, Google is still faster than ChatGPT, and if cleaned up it would be even better.
SlowThePath t1_je2buak wrote
Reply to comment by antonivs in [D] FOMO on the rapid pace of LLMs by 00001746
No models are trained on internet-sized corpora. That would take an infinite amount of time, I would think.
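Not literally infinite, but a rough order-of-magnitude sketch shows why even a single pass is impractical (both numbers below are loose assumptions):

```python
# Order-of-magnitude check: one epoch over a "whole internet" corpus.
# Both inputs are loose assumptions for illustration only.
corpus_tokens = 1e15   # assumed web-scale corpus size in tokens
tokens_per_sec = 1e7   # assumed aggregate training throughput

years = corpus_tokens / tokens_per_sec / (86_400 * 365)
print(f"~{years:.1f} years for a single epoch")  # ~3.2 years
```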
i_am__not_a_robot t1_je2biis wrote
I'm afraid this question is too non-specific for a serious answer.
What exactly do you mean by "software developer", "devs" and "stressing out"? Computer scientists generally welcome this development, as far as I can see in my own professional circles.
pale2hall t1_je2begu wrote
Reply to [N] OpenAI may have benchmarked GPT-4’s coding ability on its own training data by Balance-
ChatGPT-4 can't remember it's writing a Firefox add-on, not a Chrome extension.
It's like the most amazing coder ever, but always half-drunk, completely confident, and always apologizing. Here's how almost every single response started after the first:
- Apologies for the incomplete response.
- Apologies for the confusion. The Express server I provided earlier ...
- I apologize for the inconvenience. After reviewing the code, I've noticed some inconsistencies in the code
- I apologize for the confusion. It appears that the context menu was removed due to a typo in the content.js file.
- I apologize for the confusion. To make the changes you requested, follow the steps below:
- Apologies for the confusion, and thank you for providing the additional information. Here's an updated implementation that should resolve the issues:
- I apologize for the confusion. Here's an updated solution that should display the response in the popup window and clear the input field on submit. Additionally, I added an indicator that shows the addon is thinking.
- Apologies for the confusion, and thank you for the clarification. Based on your requirement, you can make the following changes:
- Apologies for the confusion. You are correct that you cannot trigger the reviseMyComment() function in the content script without sending a message from the background script.
- My apologies for the confusion. The error you are encountering is because the sendToOpenAI() function is not available in the content script content.js
- Apologies for the confusion. I made an error in my previous response.
AuspiciousApple t1_je2aij3 wrote
Reply to comment by Thorusss in [N] OpenAI may have benchmarked GPT-4’s coding ability on its own training data by Balance-
Idk, thousands of GPUs going brrrr for months, how much can it cost?
$10?
lambertb t1_je29sc5 wrote
Reply to comment by WokeAssBaller in [D] GPT4 and coding problems by enryu42
Isn’t it possible that your experience is not representative? Are you using ChatGPT or GitHub Copilot?
reditum t1_je29q9y wrote
Reply to comment by shitasspetfuckers in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
From a comment on Hacker News: they made a Chrome extension, gathering all the training data from it, and it runs super slowly as well.
boonhet t1_je28xyl wrote
Reply to comment by braindead_in in [D] FOMO on the rapid pace of LLMs by 00001746
>UBI is coming anyways.
I do hope you have good weaponry, because UBI will have to be fought for. Trillionaires aren't going to be giving up their assets for the lulz.
liqui_date_me t1_je2h8vu wrote
Reply to [D] With ML tools progressing so fast, what are some ways you've taken advantage of them personally? by RedditLovingSun
I’ve used ChatGPT-4 for very specific relationship and career advice; it’s surprisingly good at understanding corporate jargon.