Recent comments in /f/MachineLearning
petrastales OP t1_je4flu9 wrote
Reply to comment by Exodia141 in [D] Can DeepL learn from edits to the translations it produces immediately? by petrastales
Yeah
i_am__not_a_robot t1_je4er0i wrote
Reply to [D] Elon Musk, Yoshua Bengio, Steve Wozniak, Andrew Yang, And 1100+ More Signed An Open Letter Asking All AI Labs To Immediately Stop For At Least 6 Months by [deleted]
The demand for corporate self-restraint in the face of enormous profit opportunities is naïve and doomed to fail. And this is without even addressing the merits of the open letter's demands, which I - and many other computer scientists - do not support, by the way.
PastAbies5664 t1_je4ekk5 wrote
Reply to [D] Elon Musk, Yoshua Bengio, Steve Wozniak, Andrew Yang, And 1100+ More Signed An Open Letter Asking All AI Labs To Immediately Stop For At Least 6 Months by [deleted]
Is humanity racing to the top or bottom?
Calamero t1_je4doo0 wrote
Reply to comment by joeiyoma in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
It will enable creative people to bring their ideas to reality. It won't make people less creative. AI technology democratizes the execution part, making it easier for people from all walks of life to turn their visions into reality, augmenting human creativity rather than stifling it.
Exodia141 t1_je4d7s8 wrote
Reply to [D] Elon Musk, Yoshua Bengio, Steve Wozniak, Andrew Yang, And 1100+ More Signed An Open Letter Asking All AI Labs To Immediately Stop For At Least 6 Months by [deleted]
Only makes sense if OpenAI becomes more transparent with their models. Unless they do, they are just signing off on a wish list.
ScientiaEtVeritas t1_je4cwfx wrote
Reply to comment by DragonForg in [D] Elon Musk, Yoshua Bengio, Steve Wozniak, Andrew Yang, And 1100+ More Signed An Open Letter Asking All AI Labs To Immediately Stop For At Least 6 Months by [deleted]
It's legit. Some of the signatories already tweeted publicly about it, and journalists are able to confirm the story (https://twitter.com/readkrystalhu/status/1640918369548345345).
Exodia141 t1_je4bxw0 wrote
Reply to comment by petrastales in [D] Can DeepL learn from edits to the translations it produces immediately? by petrastales
If they consider it at all, that is.
NoLifeGamer2 t1_je4bxa1 wrote
I love how there are so many GPT models now that we have taken to calling them GPT-n lol
sdmat t1_je4bgwh wrote
Reply to comment by Haycart in [D] Very good article about the current limitations of GPT-n models by fripperML
Exactly, it's bizarre to point to revealing failure cases for a universal approximator then claim that fixing those failure cases in later versions would be irrelevant.
It's entirely possible that GPT3 only does interpolation and fails horribly out of domain, and that GPT5 will infer the laws of nature, language, psychology, logic, etc. and be able to apply them to novel material.
It certainly looks like GPT4 is somewhere in between.
suflaj t1_je4ba8s wrote
No.
DragonForg t1_je4ascu wrote
Reply to [D] Elon Musk, Yoshua Bengio, Steve Wozniak, Andrew Yang, And 1100+ More Signed An Open Letter Asking All AI Labs To Immediately Stop For At Least 6 Months by [deleted]
This is a scam, or something else, I really don't know. I don't see how all these famous people could get together in like one day and state that we need to slow the progress of the next technological craze. Even if it leads to our doom, I doubt this many tech people would even realize it.
Narabedla t1_je4abcc wrote
Reply to [D] Elon Musk, Yoshua Bengio, Steve Wozniak, Andrew Yang, And 1100+ More Signed An Open Letter Asking All AI Labs To Immediately Stop For At Least 6 Months by [deleted]
Quite frankly, yes, all jobs that can be reliably automated should be.
But with that should come a universal basic income funded by the increase in productivity. The gain from automation shouldn't be capital gains for the few at the top, but a freedom gain for the general population.
Haycart t1_je4923c wrote
>Yes, ChatGPT is doing much more than querying text! It is not just a query engine on a giant corpus of text. … Duh! I do not think you should only think of ChatGPT as a query engine on a giant corpus of text. There can be a lot of value in reasoning about ChatGPT anthropomorphically or in other ways. RLHF also complicates the story, as over time it weighs responses away from the initial training data. But “query engine on a giant corpus of text” should be a non-zero part of your mental model because, without it, you cannot explain many of the things ChatGPT does.
The author seems to present a bizarre dichotomy: either you think of ChatGPT as a query engine, or you think of it in magical/mystical/anthropomorphic terms.
(They also touch on viewing ChatGPT as a function on the space of "billion dimensional" embeddings. This is closer to the mark but seems to conflate the model's parameter count with the dimensionality of its latent space, which doesn't exactly inspire confidence in the author's level of understanding.)
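For a sense of the scale gap, a back-of-the-envelope sketch, assuming GPT-3 175B's publicly reported hyperparameters (an assumption; ChatGPT's exact architecture is not public):

```python
# Back-of-the-envelope, using GPT-3 175B's reported hyperparameters
# (an assumption; ChatGPT's exact architecture is not public).
d_model = 12288   # hidden/embedding dimension per token
n_layers = 96     # transformer blocks

# Per block: ~4*d^2 params in attention (Q, K, V, out) + ~8*d^2 in the MLP.
params = n_layers * 12 * d_model**2

print(f"latent dimensionality: {d_model:,}")  # ~1.2e4
print(f"approx. parameters:    {params:,}")   # ~1.7e11, seven orders larger
```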
Why not just think of ChatGPT as what it is--a very large transformer?
The fact that a model like ChatGPT is able to do what it does is not at all surprising, IMO, when you consider the following facts:
- Transformers (and neural networks in general) are universal approximators. A sufficiently large neural network can approximate any function to arbitrary precision (with a few minor caveats).
- Neural networks trained with stochastic gradient descent benefit from implicit regularization -- SGD naturally tends to seek out simple solutions that generalize well. Furthermore, larger neural networks appear to generalize better than smaller ones. (A toy sketch of these first two points follows this list.)
- The recent GPTs have been trained on a non-trivial fraction of the entire internet's text content.
- Text on the internet (and language data in general) arises from human beings interacting with the world--reasoning, thinking, and emoting about those interactions--and attempting to communicate the outcome of this process to one another.
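To make the first two bullets concrete, here is a minimal toy sketch (a tiny MLP and plain SGD, purely for illustration; it says nothing about ChatGPT's actual training): a generic network, given only samples of a function, recovers a good approximation of it.

```python
# Toy illustration of universal approximation + SGD, not of GPT itself:
# a small MLP learns a target function from samples alone.
import torch
import torch.nn as nn

torch.manual_seed(0)

def target(x):  # the "unknown" data-generating function
    return torch.sin(3 * x) + 0.5 * x

x = torch.linspace(-2, 2, 256).unsqueeze(1)
y = target(x)

model = nn.Sequential(
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=0.05)

for _ in range(5000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.5f}")  # small error => close approximation
```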
Is it really crazy to imagine that the simplest possible function capable of fitting a dataset as vast as ChatGPT's might resemble the function that produced it? A function that subsumes, among other things, human creativity and reasoning?
In another world, GPT 3 or 4 might have turned out to be incapable of approximating that function to any notable degree of fidelity. But even then, it wouldn't be outlandish to imagine that one of the later members of the GPT family could eventually succeed.
petrastales OP t1_je48d9s wrote
Reply to comment by Exodia141 in [D] Can DeepL learn from edits to the translations it produces immediately? by petrastales
Understood, but I guess that's a decision for them to make as to whether they accept it or not, and I imagine there is a huge backlog of edits to approve.
[deleted] OP t1_je47fe2 wrote
Reply to comment by AmbitiousTour in [D] I've got a Job offer but I'm scared by [deleted]
[deleted]
challengethegods t1_je47eze wrote
Reply to comment by Necessary-Meringue-1 in [D] Prediction time! Lets update those Bayesian priors! How long until human-level AGI? by LanchestersLaw
it also outperforms her on like 50000 other topics, in 50 different languages, while simultaneously talking to a million other people about a million different things
oh, but someone asked it a trick question and it reflexively gave the wrong answer, nevermind
challengethegods t1_je474co wrote
Reply to [D] Prediction time! Lets update those Bayesian priors! How long until human-level AGI? by LanchestersLaw
GPT4 is already smarter than the people that said 2100+
ghostfaceschiller t1_je44ke8 wrote
Reply to comment by joeiyoma in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
What?
fripperML OP t1_je43ftw wrote
Reply to comment by sdmat in [D] Very good article about the current limitations of GPT-n models by fripperML
Yes, I don't know what to think, honestly. I've read this paper with amusement (well, some of the examples, not all, because I didn't have time to finish it):
https://arxiv.org/abs/2303.12712
It's very optimistic, and aligned with what you say (not an incremental improvement over previous models).
But then, besides the article I shared, I've read this thread:
So I don't know... We'll probably see soon, when access to GPT-4 is more widespread.
Thanks for commenting :)
joeiyoma t1_je42w1a wrote
Reply to comment by ghostfaceschiller in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
So you can imagine, when you are using it and have no clue!
Exodia141 t1_je42qma wrote
Reply to comment by petrastales in [D] Can DeepL learn from edits to the translations it produces immediately? by petrastales
For the model to remember the changes you make, they would first have to be approved by the team and then fed into the latest edition of the training data; only then would subsequent answers carry your changes.
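Roughly like this hypothetical pipeline (all names are illustrative; this is not DeepL's actual API or process):

```python
# Hypothetical human-in-the-loop pipeline; names are illustrative,
# NOT DeepL's actual API or process.
from dataclasses import dataclass

@dataclass
class Edit:
    source: str        # original source-language text
    model_output: str  # translation the model produced
    user_edit: str     # user's corrected translation
    approved: bool = False

def review(edit: Edit) -> bool:
    # Stand-in for the team's approval step described above.
    return edit.user_edit.strip() != edit.model_output.strip()

def finetune_pairs(edits: list[Edit]) -> list[tuple[str, str]]:
    # Only approved edits become (source, target) pairs for the next
    # training run; the deployed model is unchanged until then.
    return [(e.source, e.user_edit) for e in edits if e.approved]

edits = [Edit("Guten Tag", "Good day", "Good afternoon")]
for e in edits:
    e.approved = review(e)
print(finetune_pairs(edits))  # [('Guten Tag', 'Good afternoon')]
```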
joeiyoma t1_je42q2t wrote
Reply to [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
ChatGPT always has the potential for error; version 4 has a reduced potential for error. My biggest worry is what it will do to our creativity. Autopilot all the time!
sdmat t1_je42icd wrote
> So long as it’s a transformer model, GPT-4 will also be a query engine on a giant corpus of text, just with more of the holes patched up, so it’d be harder to see the demonstrative examples of it being that.
This claim has a strong scent of sophistry about it - any and all signs of intelligence can be handwaved away as interpolating to plausible text.
The explanations of failures are convincing, but the theory needs to go further and explain why larger models like GPT4 (and in some cases 3.5) are so much more effective at answering out-of-domain queries with explicit reasoning proceeding from information they do have. E.g. GPT4 correctly answers the weights question and gives a clear explanation of its reasoning. And that isn't an isolated example.
It's not just an incremental improvement, there is a clear difference in kind.
i_am__not_a_robot t1_je4fv3w wrote
Reply to comment by i_am__not_a_robot in [D] Elon Musk, Yoshua Bengio, Steve Wozniak, Andrew Yang, And 1100+ More Signed An Open Letter Asking All AI Labs To Immediately Stop For At Least 6 Months by [deleted]
Also, the demand that "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal" carries the defamatory implication that researchers do not currently have these goals.