turnip_burrito t1_j8lzblu wrote

AI art and language are so complex but competent now. They are such flexible and powerful models of art and language. The art models in particular figure out reflection, shading, refraction, and all these other crazy things not explicitly labeled.

Five years ago I'd have told you this was like a decade or two away. These developments have really blown my socks off.

New AIs also have fantastic problem solving and task completion skills in novel environments. It's slightly under the radar but equally impressive.

I think with richly multimodal datasets we will have AGI, but one frozen in its ability to upgrade itself.

If we add real time updating, and allow it to take summaries of its own internal state as data, we will have AGI that is every bit as flexible as a human being.

(It will also immediately be ASI because it has so much knowledge and media generation built in)

I think the next decade we'll see at least 4 or 5 more top tier Wow! moments on par with Dalle-2/SD AI art and GPT 3, and then we'll be basically at AGI.

7

turnip_burrito t1_j8byoth wrote

Surpasses humans in this science test, across the board (natural science, social science, language science, etc).

Wow.

And it outperforms GPT-3.5 with about 0.4% of the parameter count.

Wonder how it does on other tests?

Would this run on consumer graphics cards, then? Seems like it's in the ballpark to run on a single 3090, but without knowing the total requirements, I can't say.

Edit: "Our experiments are run on 4 NVIDIA Tesla V100 32G GPU" - paper
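A rough sanity check on the single-3090 question. This is a back-of-envelope sketch, assuming the commonly cited ~175B parameters for GPT-3.5 (so 0.4% is roughly 700M) and fp32 weights; activations and batch size would add on top of this.

```python
# Back-of-envelope VRAM estimate for inference (weights only).
# Assumptions: GPT-3.5 ~175B params (commonly cited figure),
# 0.4% of that ~= 700M params, fp32 = 4 bytes per parameter.
gpt35_params = 175e9
model_params = gpt35_params * 0.004        # ~7e8 parameters
bytes_per_param = 4                        # fp32
vram_gb = model_params * bytes_per_param / 1e9
print(f"~{model_params / 1e6:.0f}M params -> ~{vram_gb:.1f} GB for weights alone")
# -> ~700M params -> ~2.8 GB for weights alone
```

At ~2.8 GB for weights, a 24 GB 3090 would have plenty of headroom for inference, which fits the "in the ballpark" guess above, though training (as in the paper's 4x V100 setup) needs far more memory.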

Paper link: https://arxiv.org/pdf/2302.00923.pdf#page=7

85

turnip_burrito t1_j7r7uql wrote

Have you noticed this sub getting more ridiculous as time goes on?

It feels as though with more members, we get more of the crowd of ideological people who just want "freedom everywhere for everyone all the time, no regulations, no restrictions, no going slow, just GO, maximum power for everyone and everything".

0

turnip_burrito t1_j7okin2 wrote

Probably a box.

Maybe painted black.

And able to understand enough concepts to write improved versions of some of its own code if we asked it to.

Maybe can write some new math proofs in a short and human readable way.

Maybe multimodal.

Large short term memory context window.

Able to update its model in real time for incoming new information.

Maybe running on more specialized hardware, or neuromorphic chips.

19

turnip_burrito t1_j6uqmqt wrote

>You act like ChatGPT just threw that out there instead of was prompted "write a poem about an ai taking over the world"

Sorry, but you're just flat out wrong. The poster knew basically everyone here would understand the AI was prompted. The point was to express the idea poetically, because it is a nice poem.

2

turnip_burrito t1_j6ovrva wrote

Is it? There is a new Google robot (last couple months) that uses LLMs to help build its instructions for how to complete tasks. The sequence generated by the LLM becomes the actions it should take. The language sequence generation determines behavior.

There was also someone on Twitter (last week) who linked chatGPT to external tools and the Internet. This allowed it to solve a problem interactively, using the LLM as the central planner and decision maker. Again here, the language sequence generation determines behavior.
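The pattern in both examples can be sketched as a simple loop. This is a minimal illustration, not either system's actual code; `llm()` and the `tools` dict are hypothetical stand-ins (here `llm` is a stub that immediately answers, just to make the loop runnable).

```python
# Minimal sketch of "LLM as central planner": the generated text
# sequence is parsed into actions, and those actions drive behavior.
def llm(history: str) -> str:
    # Placeholder: a real system would query a language model here.
    return "DONE: 4"

def run_agent(goal: str, tools: dict, max_steps: int = 5) -> str:
    history = f"Goal: {goal}"
    for _ in range(max_steps):
        action = llm(history)                 # generated sequence *is* the plan
        if action.startswith("DONE:"):
            return action.removeprefix("DONE:").strip()
        name, _, arg = action.partition(" ")
        result = tools[name](arg)             # execute the chosen tool
        history += f"\n{action} -> {result}"  # feed the result back in
    return "gave up"

print(run_agent("add 2 and 2", {"calc": lambda expr: eval(expr)}))
# prints "4"
```

The key point is that nothing outside the loop decides what happens next: whatever sequence the model emits determines the behavior.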

Aside from these, alignment is the problem of controlling behavior, and behavior is a sequence. The rules and tricks discovered for controlling language sequences may help us understand how to control the larger behavior sequence.

Mostly just thinking aloud. Maybe I'm just dumb, since everyone here in the comments seems to have the opposite opinion of mine, but what do we make of the two above LLM use cases where LLMs determine the behavior?

1