Recent comments in /f/MachineLearning

tonicinhibition t1_jdn4v86 wrote

There's a YouTuber named Letitia, with a little Miss Coffee Bean character, who covers new models at a decent level.

CodeEmporium does a great job at introducing aspects of the GPT/ChatGPT architecture with increasing depth. Some of the videos have code.

Andrej Karpathy walks you through building GPT in code.

As for the lesser known models, I just read the abstracts and skim the papers. It's a lot of the same stuff with slight variations.

6

gamerx88 t1_jdn1dd3 wrote

> In the long run I expect this will flip; computers will get very fast and data will be the limiting factor.

I agree, but I think data is already a limiting factor today, with the largest publicly known models at 175B parameters. The data used to train these models supposedly already covers a majority of the open internet.

1

yaru22 t1_jdn17j5 wrote

Hello,

GPT-4 has a context length of 32K tokens, while some others have 2-4K tokens. What decides the limit on these context lengths? Is it simply that the bigger the model, the larger the context length? Or is it possible to have a large context length even on a smaller model like LLaMA 7/13/30B?

Thank you!

1

currentscurrents t1_jdn0opn wrote

The Nvidia H100 marketing material does advertise a configuration for linking 256 of them to train trillion-parameter language models:

>With NVIDIA NVLink® Switch System, up to 256 H100 GPUs can be connected to accelerate exascale workloads. The GPU also includes a dedicated Transformer Engine to solve trillion-parameter language models.

Doesn't necessarily mean GPT-4 is that big, but it's possible. Microsoft and Nvidia were working closely to build the new Azure GPU cloud.

7

currentscurrents t1_jdmzphs wrote

That's true, but only for the given compute budget used in training.

Right now we're really limited by compute power, while training data is cheap. Chinchilla and LLaMA intentionally trade model size for more data: at a given compute budget, they train smaller models on more tokens. Larger models still perform better than smaller ones given the same amount of data.

In the long run I expect this will flip; computers will get very fast and data will be the limiting factor.

3

currentscurrents t1_jdmyjrb wrote

Bigger models are more sample efficient, so they should need less data.

But didn't the Chinchilla paper say bigger models need more data? Yes, but that's only true because compute is currently the limiting factor: they're intentionally trading model size for more data.

As computers get faster and models bigger, data will increasingly become the limiting factor, and people will trade off in the opposite direction instead.
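
To put rough numbers on that trade-off, here's a back-of-the-envelope sketch using the common C ≈ 6·N·D approximation for training FLOPs and the Chinchilla ~20 tokens-per-parameter heuristic (the model sizes are just examples):

```python
# Rough scaling arithmetic (illustrative, not exact figures from the paper):
#   training compute  C ≈ 6 * N * D   (N = parameters, D = training tokens)
#   Chinchilla compute-optimal data:  D* ≈ 20 * N

def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute in FLOPs."""
    return 6 * n_params * n_tokens

def chinchilla_optimal_tokens(n_params: float) -> float:
    """Rough compute-optimal token count (~20 tokens per parameter)."""
    return 20 * n_params

big = 70e9    # a 70B-parameter model
small = 13e9  # a 13B-parameter model

# Compute-optimally, the bigger model "needs" more data (~1.4T tokens)...
budget = train_flops(big, chinchilla_optimal_tokens(big))
print(f"70B, compute-optimal: {chinchilla_optimal_tokens(big):.1e} tokens")

# ...but for the SAME compute budget, a smaller model can see far more
# tokens (~7.5T), which is the direction LLaMA-style training leans while
# data is cheap and compute is the bottleneck.
print(f"13B, same budget:     {budget / (6 * small):.1e} tokens")
```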

7

DigThatData t1_jdmvjyb wrote

dolly is important precisely because the foundation model is old. they were able to get chatgpt level performance out of it and they only trained it for three hours. just because the base model is old doesn't mean this isn't recent research. it demonstrates:

  • the efficacy of instruct finetuning
  • that instruct finetuning doesn't require the world's biggest, most modern model, or even all that much data (rough sketch below)
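
To make that concrete, here's a minimal sketch of instruct finetuning an older causal LM with Hugging Face Transformers. The base model, dataset, and hyperparameters are illustrative assumptions, not the actual Dolly training recipe:

```python
# Minimal instruct-finetuning sketch (assumptions: GPT-J-6B as the "old"
# base model and the Alpaca instruction dataset; not the real Dolly recipe).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)  # in practice: sharding or LoRA

# A small instruction dataset with "instruction" and "output" fields.
data = load_dataset("tatsu-lab/alpaca", split="train")

def to_features(example):
    # Simple prompt template; padding tokens are not masked out of the loss here.
    prompt = (f"### Instruction:\n{example['instruction']}\n\n"
              f"### Response:\n{example['output']}")
    enc = tokenizer(prompt, truncation=True, max_length=512, padding="max_length")
    enc["labels"] = enc["input_ids"].copy()
    return enc

train_set = data.map(to_features, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="instruct-finetune",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           fp16=True),
    train_dataset=train_set,
)
trainer.train()
```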

dolly isn't research from a year ago; it was first described only a few days ago.

EDIT: ok I just noticed you have an ERNIE model up there so this "no old foundation models" thing is just inconsistent.

5

Fit-Recognition9795 t1_jdmsxa9 wrote

As AI language models, both GPT-4 and its predecessors, like me, ChatGPT, are designed to process and generate text, not to analyze images or visual data. Giving an ASCII representation of a screenshot to GPT-4 or any text-based language model would likely result in a poor understanding of the actual image, since the model can't process images the way a human or a dedicated image recognition AI can.

However, if the ASCII representation is clear enough and contains easily recognizable elements that are unique to a particular video game, there is a chance that GPT-4 might be able to make an educated guess about the game in question, but the accuracy would be significantly lower compared to proper image recognition AI.

Regarding the prediction of the next likely action or input, GPT-4 might be able to provide some generic suggestions based on the text description, but again, its ability to understand the actual visual information would be limited.

For analyzing images and making predictions about visual content, you would be better off using a dedicated image recognition model, such as OpenAI's CLIP, or a model specifically trained for video game analysis.

8