Recent comments in /f/MachineLearning

Nhabls t1_je9anrq wrote

Which team is that? The one at Microsoft that made up the human performance figures in a completely ridiculous way? Basically "We didn't like that pass rates were too high for humans on the hard problems that the model fails completely, so we just divided the accepted count by the entire user base." Oh yeah, brilliant.

The "human" pass rates are also composed of people learning to code, trying to see if their solution works. It's a completely idiotic metric. Why not go test randos on the street and declare that represents human coding performance while we're at it?
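To make the complaint concrete, here is a toy calculation (all numbers purely hypothetical, not from any paper) showing how dividing by the entire user base instead of by actual attempters crushes the reported human pass rate:

```python
# Hypothetical numbers illustrating how the denominator choice
# changes a "human pass rate" by orders of magnitude.
accepted = 50         # submissions that passed
attempted = 200       # users who actually attempted the problem
all_users = 10_000    # entire registered user base

rate_vs_attempters = accepted / attempted  # pass rate among people who tried
rate_vs_everyone = accepted / all_users    # pass rate diluted by non-attempters

print(rate_vs_attempters)  # 0.25
print(rate_vs_everyone)    # 0.005
```

The same 50 accepted solutions read as either 25% or 0.5% "human performance" depending on the denominator.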

1

siherbie t1_je9ahak wrote

Beyond the healthy hype and discussion of ChatGPT's tech (not surprisingly, even yannis pointed out that GPT-4's tech demo and paper didn't mention anything about parameters or actual technical specifications), there's increasing misinformation about ChatGPT reaching regular people. This is already troubling, since visual AI algorithms are under fire for copying styles, and let's face it, even ChatGPT mimics literary styles. So whenever I hear a random "expert" explain how ChatGPT works just like the language center in our brains, it makes me roll my eyes really hard. Having said that, GPT-4's currently experimental visual AI feature sounds interesting, but only time will tell once it's available.

2

LetGoAndBeReal t1_je9a3hb wrote

Of course, that’s what allows RAG to work in the first place. I didn’t say you couldn’t provide new knowledge through the prompt. I only said you cannot provide new knowledge through the fine-tuning data. These are two completely separate things. This distinction is the reason RAG works for this use case and fine-tuning does not.
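The distinction can be sketched in a few lines of Python: the retrieved text rides along in the prompt at inference time, while the model's weights stay fixed. This is a minimal illustration, not a real system — `call_llm` would be whatever chat-completion API you use, and the keyword-overlap retrieval is a stand-in for embedding search:

```python
# Minimal sketch of why RAG injects new knowledge at inference time:
# relevant text is retrieved and placed into the prompt, so the model
# reads it as context rather than having to store it in its weights.

def retrieve(query, documents, top_k=1):
    """Naive keyword-overlap ranking; real systems use embeddings."""
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=score, reverse=True)[:top_k]

def build_prompt(query, documents):
    """Prepend retrieved context to the user question."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The Q3 report was published on 2023-03-01.",  # "new" knowledge, not in weights
    "Unrelated note about office plants.",
]
prompt = build_prompt("When was the Q3 report published?", docs)
print(prompt)
# prompt = call_llm(prompt)  # hypothetical: send to any chat-completion API
```

Fine-tuning, by contrast, only nudges the weights; there is no step here where the report date could reliably enter the model that way.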

1

pengo t1_je99h3k wrote

There are two meanings of understanding:

  1. My conscious sense of understanding, which I can experience but have no ability to measure in anyone else, unless someone solves the hard problem.
  2. Demonstrations of competence, which we say "show understanding" and which can be measured, such as exam results. Test results might be a proxy for measuring conscious understanding in humans, but they do not directly test it, and they have no connection to it whatsoever in machines.

That's it. They're two different things. Two meanings of understanding. The subjective experience and the measurement of understanding.

Machines almost certainly have no consciousness, but they can demonstrate understanding. There's no contradiction in that, because showing understanding does not imply having (conscious) understanding. If a tree falls and no one experiences the sensation of hearing it, that doesn't mean it didn't fall. And if you hear a recording of a tree falling, then no physical tree fell. They're simply separate things: a physical event and a mental state. Just like conscious understanding and demonstrations of understanding.

Why pretend these are the same thing and quiz people about it? Maybe the authors can write their next paper on the "debate" over whether "season" means a time of year or something you do with paprika.

Really sick of this fake "debate" popping up over and over.

6

darkbluetwilight OP t1_je99g95 wrote

Correct, it's for personal use only. I did look into a few different options (Hugging Face, Alpaca, BERT, Chinchilla, Cerebras) but they all appear to have charges too, with the exception of Alpaca, which was taken down. I already had OpenAI nicely implemented in my GUI, so I wasn't really drawn to any of them.
Can you suggest a model that is free or cheaper than OpenAI that I could integrate into my Python GUI?
On the database side I tried MongoDB and Atlas but found these very difficult to use. Since I only need to generate the database once, LlamaIndex was fine to use.
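One free route is a local model through the Hugging Face `transformers` library; a sketch of what swapping out the OpenAI call might look like (the model name `distilgpt2` is only an example, and output quality will be far below OpenAI's models):

```python
# Sketch: replace a paid OpenAI completion call with a free local model
# via `transformers` (pip install transformers). Illustrative only.

def strip_prompt(prompt, generated_text):
    """HF text-generation pipelines echo the prompt; keep only the completion."""
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):]
    return generated_text

def make_local_generator(model_name="distilgpt2"):
    from transformers import pipeline  # deferred import so the GUI starts fast
    gen = pipeline("text-generation", model=model_name)

    def complete(prompt, max_new_tokens=100):
        out = gen(prompt, max_new_tokens=max_new_tokens, do_sample=False)
        return strip_prompt(prompt, out[0]["generated_text"])

    return complete

# Usage in the GUI (downloads the model weights on first run):
# complete = make_local_generator()
# answer = complete("Summarize this document: ...")
```

Inference runs on your own machine, so there are no per-token charges, but small local models are noticeably weaker at summarization than the hosted APIs.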

1

cc-test t1_je983fw wrote

I wouldn't use ChatGPT as a teacher given its issues with accuracy and hallucinations. Without a good understanding of C++, how do you know that what it's giving you is correct and makes sense as part of a larger codebase?

Even Copilot, which has access to the entire repo for context, still chucks out nonsense on a regular basis that looks like the right solution but is far from it.

−3

sEi_ t1_je93rsp wrote

Nobody knows what is going on, not even the 'creators', so my input is as valid as everybody else's.

And yes, I see big paradigm shifts ahead; some have already happened, and more are happening now, concerning people, work, etc.

There is much debate and stuff to do.

But the focus should, imho, be moved away from the technical side of AI. Everybody now knows that current large-model AIs have some kind of power we can use or misuse. What power, and how to use it, we still have to find out.

So we need to put our 'thinking hats' on and look more at what kind of new society we want, as there is a possibility that AI can help us change society for the better, or even the worse. Especially the worse if big tech is allowed a monopoly on development and deployment. Anyway, this monopoly is close to being nullified, and 'they' are scared.

I am sure technological advances will soon make it possible to build, train, and run big models like ChatGPT on a distributed network, and when that is possible the monopoly is broken. It will happen; it is only a matter of (short) time, and again, they know that.

As of now, the only 'power' big tech has is the infrastructure to create and run big models. The technology and code are out in the open, so the only thing they really hold is 'a big file', and now they can earn billions on the 'sheep'.

The big money is earned by holding the monopoly and leasing access to other companies, which in turn have to earn what little money can be made by offering paid inference through some in-house implementation.

If 'big tech' is the only one jumping on the wagon, we are sure to keep the status quo, where you work for some boss so he can get car number three or a villa somewhere.

I know most of you love money, depend on it and have "earning money" as a goal. But there is a possible life where money is obsolete and as useless as the paper or numbers that we adore as a deity.

Ye ye, you need to open your mind, but many do not, and instead laugh at statements like the above because they (not blaming them) are so stuck in the dogma that they do not even dare to think of alternatives.

AI is not a goal, it's a tool.

Now is a good occasion to revise some old and outdated ways of doing things, while society stands in front of big changes. There is no doubt about the changes, but what kind of changes, only time can tell.

Emancipate yourself. (nothing new there btw.)

−1

sdmat t1_je93qz6 wrote

The arithmetic thing is interesting. GPT-4 usually fails at multiplying a pair of 3-digit numbers as pure token prediction:

> Multiply 438 * 939
>
> The product of 438 and 939 is 411,182.

But it does actually know how to perform long-form multiplication, and with the right prompt it can get the correct answer this way:

> Multiply 438 by 939 using long form multiplication. Check each step for correctness as you perform it and revise if there is an error. Add trailing zeros to the parts. Perform the final addition in steps by successively adding each part.
>
> Alright, let's multiply 438 by 939 using long-form multiplication. I'll write out the steps and verify their correctness as I go along.
>
>         438
>       x 939
>       _________
>
> Step 1: Multiply 438 by 9 (the digit in the ones place of 939)
>
>         438
>       x   9
>       _________
>        3942    (9 * 438 = 3942, Correct)
>
> Step 2: Multiply 438 by 30 (3 in the tens place of 939, so we multiply by 3 and add a trailing zero)
>
>         438
>       x   3
>       _________
>        1314    (3 * 438 = 1314, Correct)
>
> Now, add a trailing zero to this product: 13140
>
> Step 3: Multiply 438 by 900 (9 in the hundreds place of 939, so we multiply by 9 and add two trailing zeros)
>
>         438
>       x   9
>       _________
>        3942    (9 * 438 = 3942, Correct)
>
> Now, add two trailing zeros to this product: 394200
>
> Step 4: Add the three products from steps 1, 2, and 3
>
>        3942    (Step 1)
>      +13140    (Step 2)
>     +394200    (Step 3)
>     __________
>
> Step 5: Add the products in steps
>
> First, add Step 1 and Step 2:
>
>        3942
>      +13140
>      _________
>       17082
>
> Now, add this sum to the product from Step 3:
>
>       17082
>     +394200
>     _________
>      411282
>
> So, the product of 438 and 939 is 411,282.

It still often makes mistakes doing this, but it clearly has a good grasp of the concept.

It's able to correctly perform the high-level long multiplication procedure for large numbers (I haven't had the patience to see just how large) but is let down by the reliability of the arithmetic in the constituent steps.

A lot of humans have the same problem.
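The procedure the prompt walks GPT-4 through maps directly onto a few lines of Python: multiply by each digit, append trailing zeros for its place value, then add the parts successively (a sketch of the same algorithm, not anything from the transcript):

```python
# Long-form multiplication as described in the prompt: one part per digit
# of the multiplier, scaled by its place value, then summed step by step.
def long_multiply(a, b):
    parts = []
    for i, digit in enumerate(reversed(str(b))):
        parts.append(a * int(digit) * 10**i)  # 438*9, 438*3 (+1 zero), 438*9 (+2 zeros)
    total = 0
    for part in parts:  # successive additions, as the prompt asks for
        total += part
    return parts, total

parts, total = long_multiply(438, 939)
print(parts)  # [3942, 13140, 394200]
print(total)  # 411282
```

Each intermediate part here matches the transcript's steps exactly; the model's weak point is executing these small multiplications and additions reliably, not the decomposition itself.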

5