Recent comments in /f/MachineLearning

WokeAssBaller t1_jdvmmfp wrote

Reply to comment by lambertb in [D] GPT4 and coding problems by enryu42

I don't buy that 40% number yet; I would love to see how they calculated it.

I've tried GPT-4 on a lot of problems and it fails 9 times out of 10; I would be faster just googling it.

This stuff will be amazing; it's just not quite there yet.

1

LifeScientist123 t1_jdvmkkx wrote

>Part of intelligence is the ability to learn in an efficient manner.

Agree to disagree here.

A young deer (fawn) learns to walk 15 minutes after birth. Human babies on average take 8-12 months. Are humans dumber than deer? Or maybe human babies are dumber than fawns?

Intelligence is extremely poorly defined. If you look at the scientific literature, it's a hot mess. I would argue that intelligence isn't as much about efficiency as it is about two things:

  1. Absolute performance on complex tasks

AND

  2. Generalizability to novel situations

If you look at LLMs, they perform pretty well on both these axes.

  1. GPT-4 has human-level performance in 20+ coding languages AND 20+ human languages, on top of being human-level/superhuman on some legal exams, medical exams, AP chemistry, biology, physics, etc. I don't know many humans who can do all of this.

  2. GPT-4 is also a one-shot/few-shot learner on many tasks.

1

Avastor_Neretal t1_jdvmgow wrote

Your mental problems and inability to understand other cultures' memes and jargon aren't my problem.

You've proven that you're not only unable to follow wiki links, but also can't use the translator that's integrated into a damn browser.

And all of that for the purpose of... of what? To prove to me that Reddit is full of vocal minorities who feel "oppressed" even by their own reflection? Lol, as if that's not a well-known fact.

−3

jabowery OP t1_jdvkjt0 wrote

Imputation can make interpolation appear to be extrapolation.

So, to fake AGI's capacity for accurate extrapolation (data efficiency), one may take a big pile of money and throw it at expanding the training set toward infinity and expanding the matrix-multiplication hardware toward infinity. This yields more datapoints, letting one interpolate over a larger knowledge space.
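
A toy sketch of that interpolation/extrapolation gap (my own illustration, just numpy, nothing to do with any actual LLM): a fit can look great inside the support of its training data and fall apart the moment you step outside it, no matter how densely you sample the inside.

```python
import numpy as np

# Fit a polynomial to densely sampled points of an underlying function,
# then compare error inside the training range (interpolation)
# versus outside it (extrapolation).
rng = np.random.default_rng(0)
f = np.sin

x_train = rng.uniform(-3, 3, 500)                    # dense coverage of [-3, 3]
y_train = f(x_train) + rng.normal(0, 0.05, x_train.size)

model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

x_interp = np.linspace(-3, 3, 100)                   # inside training support
x_extrap = np.linspace(4, 6, 100)                    # outside training support

print("interpolation RMSE:", np.sqrt(np.mean((model(x_interp) - f(x_interp))**2)))
print("extrapolation RMSE:", np.sqrt(np.mean((model(x_extrap) - f(x_extrap))**2)))
# The first number is tiny, the second explodes. Adding more in-range
# data and more compute improves only the first.
```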

But it is fake.

If, on the other hand, you actually understand the content of Wikipedia (the Hutter Prize's very limited, high-quality corpus), you may deduce (extrapolate) the larger knowledge space through the best current mathematical definition of AGI: AIXI, where the utility function of the sequential decision-theoretic engine is to minimize the algorithmic description of the training data (Solomonoff induction), which serves as the prediction oracle in the AGI.
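
To make that concrete: true Solomonoff induction is incomputable, but here is a crude toy of mine that uses an off-the-shelf compressor as a loose stand-in for algorithmic description length, predicting whichever continuation adds the least to the compressed description of the training data:

```python
import zlib

def description_length(data: bytes) -> int:
    # Compressed size as a (very rough) upper bound on the
    # algorithmic description length of the data.
    return len(zlib.compress(data, level=9))

def predict(corpus: str, candidates: list[str]) -> str:
    # MDL-style prediction: pick the continuation that minimizes
    # the description length of corpus + continuation.
    base = corpus.encode()
    return min(candidates,
               key=lambda c: description_length(base + c.encode()))

corpus = "the cat sat on the mat. the cat sat on the " * 20
print(predict(corpus, ["mat.", "moon.", "xylophone."]))
# -> "mat.": the regular continuation compresses best.
```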

1

SkinnyJoshPeck t1_jdvk16j wrote

This is an important thing I've been telling everyone I can: people talk about how GPT kills education because someone can just ask for a paper and never do the work themselves to learn.

This is a language model, not an encyclopedia or a quantitative reasoning machine or anything else. It fakes sources; it has no concept of right/wrong or truth vs. untruth. It doesn't reason across sources.

The beauty of it is, frankly, its ability to mimic (at this point) a pseudo-intellectual, haha. Kids are going to turn in papers sourced like they talked to their conspiracy-theory uncle, and that will be the "watermark" of AI-written papers. It can't reason, it can't form opinions, thus it can't write a paper. We're a long way from that (if we ever get there anyway).

49

jabowery OP t1_jdvjoi8 wrote

Information quality may be measured in terms of its signal-to-noise ratio. Now, agreed, too dense a signal may appear to be noise to some audiences, and navigating that is part of the art of writing. However, an advantage of interactive media over, say, a book is that the audience is present -- hence [D] is possible. What I've presented to you, while not understandable as signal to a general audience, is nevertheless profoundly true. It may therefore be a good starting point for [D].

0

Specific-Arrival-127 t1_jdvip42 wrote

When a new approach is proposed, it's common to evaluate its performance against already established algorithms, like simple linear or lasso regression in this case. I'm sorry, but I don't see how your algorithm fares against those baselines; I don't even see any established real-world regression datasets. I would advise you to look into that and to test your model in a more controlled manner.
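
For illustration, a minimal sketch of the kind of comparison I mean, assuming scikit-learn and its built-in diabetes benchmark (the `YourModel` line is a placeholder for the proposed approach):

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.model_selection import cross_val_score

# An established real-world regression benchmark.
X, y = load_diabetes(return_X_y=True)

models = {
    "linear": LinearRegression(),
    "lasso": Lasso(alpha=0.1),
    # "proposed": YourModel(),  # hypothetical: drop the new approach in here
}

for name, model in models.items():
    # 5-fold cross-validated R^2, so the comparison is controlled.
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: R^2 = {scores.mean():.3f} +/- {scores.std():.3f}")
```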

2

killerfridge t1_jdvid7z wrote

Yeah, I tried the "France" prompt in both ChatGPT-4 and Bard, and both failed in the same way (ferret). Bard also failed to adjust on review, but in a different way: it claimed that, whilst it was wrong about the letter, there were no animals that began with the letter 'P', which I did not expect!

4

sEi_ t1_jdvi7f9 wrote

Try a search in any search engine for this: "Погромщик" or "ThePogromist" (OP's username).

You will not get any other definition unless you specifically ask for one by adding "jargon" or the like.

Why evade the fact that you and your username are, beyond reasonable doubt, glorifying pogroms?

With a 5-day-old Reddit account, maybe ditch it and pick a more suitable name next time.

I mean no harm, but your nick (still) worries me.

Btw. I can do GPT-4 too:

>The Russian word "погромщик" (pogromshchik) can be translated to English as "pogromist" or "rioter." It refers to someone who takes part in a pogrom, which is a violent attack or riot directed against a specific ethnic or religious group, often resulting in the destruction of property and the persecution or killing of individuals. Pogroms have historically been associated with the persecution of Jews in Eastern Europe, particularly in Russia and Ukraine.

3