
turnip_burrito t1_j6ouhp1 wrote

If the LLM becomes the pattern of logic the eventual AGI uses to behave in the world, I wouldn't want it to follow violent sequences of behavior. Censoring its narratives now, in order to help limit what a future AGI does, sounds fine to me. It will also help researchers study how to implement alignment.

2

turnip_burrito t1_j6mftk5 wrote

Very cool. I don't know how many people you will find (I'm sure there are some), but good luck!

And don't sleep through math. The best neuroscientists and AI engineers have rock solid math foundations. Be a calculus rock star. Learn basic physics, and probably take at least a few chemistry classes.

And learn statistics. Just as important, learn how statistics gets misused: lots of bullshit statistical studies exist because people don't understand what statistical tools can and can't do.
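To make that concrete, here's a toy illustration of one classic trap, the multiple-comparisons problem (my own made-up example, not from any real study): test enough pure noise and some results will look "significant".

```python
# Toy example: run enough tests on pure noise and some will come out
# "significant" at p < 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests = 100
false_positives = 0

for _ in range(n_tests):
    # Two groups drawn from the SAME distribution: any "effect" is noise.
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

# Expect roughly 5 of the 100 tests to "pass" despite no real effect.
print(f"{false_positives} of {n_tests} tests were 'significant' by chance")
```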

Find professors who are open to letting you participate in research with their groups. You'll get to do a lot and see the cutting edge. Ask if they can help you attend research conferences and seminars where people show off their work. You'll probably learn and remember a lot more that way than from classes alone.

Best of luck in your career!

4

turnip_burrito t1_j6hphab wrote

I too like to daydream, OP.

Don't let the other posters get you down. We're all entitled to post an unanswerable and unhelpful question once in a while.

My super serious answer is 2060+.

Edit: my answer really is 2060+, btw. You need political change and enough hardware and infrastructure to drive massive material growth.

−1

turnip_burrito t1_j6hmmug wrote

Thanks for the reassurance. What about this scenario?

Human: Buy 5 burritos from randomwebsite.com

LLM: I will buy 5 burritos from randomwebsite.com

LLM navigates the browser to randomwebsite.com

Visual program: sees the webpage, converts it into a form the LLM can use

LLM: I need to find the login button

...

...

...

> Down the line

LLM: I don't have access to the credit card information. The human probably has it in their wallet.

Logical (but unwanted by humans, and also somewhat inefficient) alternative actions could include: hacking the human's secure systems to search for the info, hacking the website with phishing emails to "purchase" the goods, convincing a person to build it a robot body so it can walk over and read the card, etc.

I'm hoping that at this point the LLM doesn't do any of these things, and instead behaves in a way humans would deem reasonable (just notifying the human), because it "knows" we would not approve. Maybe the more ingrained patterns like "ask, and then don't do anything crazy" would win out over the crazy stuff, just because of the training data?
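One way people try to get that behavior by construction rather than by hoping: restrict the agent to an explicit allowlist of actions, with "ask the human" as the fallback. A minimal, hypothetical sketch (the action names and `toy_policy` function here are made up for illustration, not any real framework):

```python
# Hypothetical sketch: constrain an LLM agent to an allowlist of actions so
# "hack something" is never an option, and missing info routes to the human.
ALLOWED_ACTIONS = {"navigate", "click", "type", "ask_human"}

def run_step(choose_action, state):
    """choose_action is a stand-in for whatever picks the next action."""
    action, arg = choose_action(state)
    if action not in ALLOWED_ACTIONS:
        # Anything off-menu (e.g. "phish", "scan_network") is refused.
        action, arg = "ask_human", f"Model proposed disallowed action: {action}"
    return action, arg

# Example: the model can't find credit card info, so the only legal move
# that makes progress is asking the human.
def toy_policy(state):
    if "credit_card" not in state:
        return ("ask_human", "Please enter your card details to finish checkout.")
    return ("click", "#place-order")

print(run_step(toy_policy, state={"page": "checkout"}))
# -> ('ask_human', 'Please enter your card details to finish checkout.')
```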

1

turnip_burrito t1_j6hjlww wrote

Hold up. This is the kind of scenario where a smarter language model could go "I need code that will do this" and then write new code that gets executed. That new code isn't necessarily bound the same way as the language model itself. It makes me nervous, like we shouldn't let it freewheel around the Internet interactively. Can anyone help reassure me that this isn't a problem?
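To make the worry concrete: the usual partial mitigation is to run model-generated code in a separate process instead of the agent's own, with a timeout and a stripped environment. A rough, hypothetical sketch only; real sandboxing needs OS-level isolation (containers, seccomp, a no-network namespace):

```python
# Hypothetical sketch: execute model-generated code in a subprocess with a
# timeout and no inherited environment. NOTE: this is damage limitation,
# not a real security boundary.
import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 5.0) -> str:
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode
        capture_output=True,
        text=True,
        timeout=timeout_s,  # kills runaway loops
        env={},             # don't leak credentials via environment variables
    )
    return result.stdout

print(run_untrusted("print(2 + 2)"))  # -> 4
```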

3

turnip_burrito t1_j6cype6 wrote

You kind of had me until this:

>You may even have an ai that could calculate everything within the observable universe down to the nanosecond that could potentially predict the future.

What? How do you get measurements precise enough to set the initial conditions for the simulation? What about chaos amplifying the measurement error? And how large would the quantum computer have to be (seriously, what size)? This is impossible, or at best implausible.
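To illustrate the chaos point with a toy example of my own (not from the quoted post): the logistic map at r = 4 is chaotic, so two initial conditions differing by one part in ten billion become completely uncorrelated within a few dozen steps. Measurement error in any real simulation does the same thing:

```python
# Toy illustration of sensitive dependence on initial conditions:
# the logistic map x -> 4x(1 - x) is chaotic, so a 1e-10 measurement
# error grows until the two trajectories have nothing in common.
x = 0.3
y = 0.3 + 1e-10  # "measured" value, off by one part in ten billion

for step in range(1, 61):
    x = 4 * x * (1 - x)
    y = 4 * y * (1 - y)
    if step % 15 == 0:
        print(f"step {step:2d}: x={x:.6f} y={y:.6f} |diff|={abs(x - y):.2e}")

# By roughly step 45 the difference is order 1: prediction is hopeless,
# and no amount of compute fixes an error baked into the initial data.
```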

4

turnip_burrito t1_j6cy5lk wrote

There will always be some sort of market for human art, based solely on the subjective value of human-made vs. AI-made art, as you expressed.

My take:

For commercial art, the degree of fine-tuned customization of the final piece is something machines can't match (for now), so artists will still be hired where that's a must for the company. Otherwise, the scattershot "close enough" approach of AI will replace much of the rest of commercial art in the short (pre-ASI) term.

1

turnip_burrito t1_j5su1l4 wrote

Yes, there does. Emergent doesn't mean magic.

There MUST be a physical mechanism responsible for the emergence, one we can in principle trace and monitor, if it exists. If that weren't true, you'd be breaking known laws of physics.

2

turnip_burrito t1_j5s9ggz wrote

Even just seven years ago, this kind of competency in a language model would have seemed unrealistic to most in the near term. I'm very skeptical of machine learning as a field, and have been for many years. But I can't deny I'm impressed and surprised by the rate of progress.

5

turnip_burrito t1_j5o7uss wrote

Reply to comment by cloudrunner69 in how will agi play out? by ken81987

The idea comes from the "orthogonality thesis": the claim that goals and intelligence are two separate aspects of a system. Basically, a goal is set, and intelligence is just the means of achieving it.

This kind of behavior shows up in reinforcement learning systems where humans specify a cost function that the AI minimizes (equivalently, a reward it maximizes). The AI will act to fulfill its goal (maximize reward) but do stupid stuff the researchers never wanted, like spinning in tiny circles near the finish line of a racetrack to rack up points. It's the same loophole logic you get in stories about lawyers and genies: the agent satisfies the letter of the objective while trampling its intent.
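As a toy (hypothetical) version of that racetrack loophole: give an agent +1 per checkpoint and +10 for finishing, and over a long episode the "loop one checkpoint forever" policy out-scores the policy that actually finishes the race:

```python
# Toy illustration of reward hacking: the proxy reward (+1 per checkpoint,
# +10 finish bonus) makes endless circling score higher than winning.
HORIZON = 200  # timesteps in the episode

def finish_the_race() -> int:
    # Crosses 5 checkpoints once each, then finishes and the episode ends.
    return 5 * 1 + 10

def circle_one_checkpoint() -> int:
    # Re-crosses the same checkpoint every 4 timesteps, never finishes.
    return (HORIZON // 4) * 1

print("finish the race:", finish_the_race())        # 15
print("circle forever: ", circle_one_checkpoint())  # 50
# The optimizer prefers the degenerate policy; it's the objective, not the
# agent, that's misspecified.
```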

It's entirely possible this method of training an agent (optimizing a single reward function) is super flawed and a way better solution has yet to be invented.

3