Recent comments in /f/MachineLearning

PM_ME_ENFP_MEMES t1_jcjubn0 wrote

I read something about LLMs and why they’re so bad at math: during tokenisation, numbers don’t automatically get tokenised as the actual number. 67 may be tokenised as a single token representing ‘67’, and all would be well.

However, it’s also likely that 67 gets split into two tokens, ‘6’ and ’7’, which may confuse the bot if it’s asked to compute 67^2.
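
You can check this yourself with a tokenizer library. A minimal sketch using OpenAI's tiktoken package (my choice of the cl100k_base encoding is an assumption; exact splits vary by tokenizer):

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by recent OpenAI models

for text in ["67", "67^2", "1234567"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{text!r} -> {len(ids)} token(s): {pieces}")

# Depending on the tokenizer, "67" may come back as a single token while longer
# numbers get chopped into multi-digit chunks, so the model never "sees" the
# number as one arithmetic object.
```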

0

Available_Lion_652 t1_jcjtc6h wrote

I don't understand why people downvoted. I saw a claim that GPT-4 was trained on 25k Nvidia A100s for several months, and that it used roughly 100x more compute than GPT-3, based on that post. The 20B LLaMA model was trained on 1.4 trillion tokens. So yeah, my post is based on those claims.

0

pobtastic t1_jcjtadp wrote

I did try a few follow-up prompts, but nothing changed the structure at all. I mean, it wasn’t for any purpose other than testing it, but I definitely would have found it unsatisfactory if I’d really needed it for something work-related.

2

Available_Lion_652 t1_jcjt3yi wrote

Facebook's LLaMA tokenizer splits numbers into individual digits so that the model is better at arithmetic. The question I asked the model goes beyond adding or subtracting numbers: the model must understand what a perfect cube is, which it does, but it also must not hallucinate while reasoning, which it fails at.
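
For what it's worth, the digit splitting is easy to check directly. A minimal sketch, assuming you have the transformers library installed and access to some LLaMA-family checkpoint on the Hugging Face Hub (the model name below is only illustrative):

```python
from transformers import AutoTokenizer

# Illustrative checkpoint name; the original LLaMA weights are gated, so
# substitute whatever LLaMA-family checkpoint you actually have access to.
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# LLaMA's SentencePiece vocabulary has no multi-digit tokens, so each digit
# of 4489 should come back as its own piece.
print(tok.tokenize("Is 4489 a perfect cube?"))
```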

0

NotARedditUser3 t1_jcjsxlo wrote

Reply to comment by pobtastic in [D] GPT-4 is really dumb by [deleted]

I think there you just have to be more creative in your prompt, e.g.: "I want you to restructure this code so that entirely different methods are called and the comments are different, but the result/output is still effectively the same."

4

Single_Blueberry t1_jcjsxa1 wrote

>the fact that GPT-4 may be two orders of magnitude bigger than GPT-3

I'm not aware of any reliable sources that claim that.

Intuitively I don't see why it would stop hallucinating. I imagine the corpus, as big as it may be, doesn't contain a lot of examples of the concept of "not knowing the answer".

That's something people use a lot in private conversation, but not in written language on the public internet or in books, which AFAIK is where most of the data comes from.

4

NotARedditUser3 t1_jcjsqta wrote

All language models are currently trash at math. It's not an issue of training material, it's a core flaw in how they function.

People have found some success in getting reasonable outputs from language models by using input-output chains, breaking the task up into smaller increments. It's still possible to hallucinate, though. I saw one really good article explaining that even with tool-assisted chains (where the model prints a token in one output to call a function in a PowerShell or Python script, and the result appears in the next input so the correct answer can be generated later on), when the 'trusted' tool returns a funny, unexpected number, the model sometimes still disregards it if it's drastically far off from what the model's own training would lead it to expect the answer to look like.
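
The basic loop behind such a chain is simple. A minimal sketch (the CALC: convention and the fake_model() stub are made up for illustration; real tool-calling frameworks differ in the details):

```python
import re

def fake_model(prompt: str) -> str:
    """Stand-in for an LLM call: asks for the tool on the first pass and
    repeats the tool's answer on the second."""
    if "TOOL RESULT" not in prompt:
        return "I need to compute this. CALC: 123456789 * 3"
    return "The answer is " + prompt.split("TOOL RESULT:")[-1].strip()

def run_chain(question: str) -> str:
    prompt = question
    for _ in range(5):  # cap the number of tool round-trips
        output = fake_model(prompt)
        match = re.search(r"CALC:\s*([0-9+\-*/() ]+)", output)
        if not match:
            return output  # no tool call requested -> treat as the final answer
        expr = match.group(1)
        result = eval(expr)  # the "trusted" tool; never eval untrusted input in real code
        # Feed the tool result back into the model's next input.
        prompt = f"{question}\nTOOL RESULT: {result}"
    return output

print(run_chain("What is 123456789 * 3?"))
```

The article's point was that even with the result injected like this, the model can still ignore the tool when the number looks "wrong" relative to what its training predicts.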

Which also makes sense: the way a language model works, as we all know, is by calculating which words (or tokens, to be exact) look appropriate next to each other. The model very likely doesn't distinguish much of a difference between 123,456,789 and 123,684,849; both probably score about the same when it's looking for answers to a math question, in that both score higher than some wildly different answer such as... 4.
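
If you want to poke at that intuition, here's a rough sketch scoring candidate answers with a small open model (GPT-2 here purely because it downloads easily; the comparison is crude since the candidates tokenize to different lengths, so treat it as a toy, not a benchmark):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sequence_logprob(text: str) -> float:
    """Total log-probability the model assigns to the token sequence."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token
    return -out.loss.item() * (ids.shape[1] - 1)

prompt = "The answer to the math question is "
for answer in ["123456789", "123684849", "4"]:
    print(answer, round(sequence_logprob(prompt + answer), 2))
```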

9

pobtastic t1_jcjs0yn wrote

I asked it to rewrite a simple bash script “so it doesn’t look like I stole it” (just for kicks) and all it did was rename the functions… literally everything else, even the comments, was exactly identical… Not very impressive.

0