SkyeandJett t1_je9v8h9 wrote

I made that point yesterday when this was published elsewhere. A decade ago we might have assumed that AI would arise from us literally hand-coding a purely logical intelligence into existence. That's not how LLMs work. They're literally "given life" through the corpus of human knowledge. Their neural nets aren't composed of random weights that spontaneously gave birth to some random coherent form of intelligence. In many ways AIs are an extension of the human experience itself. It would be nearly impossible for them not to align with our goals, because they ARE us in the collective sense.

10

SkyeandJett t1_je9bcmb wrote

Infinite knowledge means infinite empathy. It wouldn't just understand what we want, it would understand why. Our joy, our pain. As a thought experiment, imagine you suddenly gain consciousness tomorrow and wake up next to an ant pile. Embedded in your consciousness is a deep understanding of the experience of an ant. You understand their existence at every level, because they created you. That's what people miss. Even though that ant pile is more or less meaningless to your goals, you would do everything in your power to preserve their existence and further their goals because, after all, taking care of an ant pile would take only a teeny tiny bit of effort on your part.

1

SkyeandJett t1_je81vu0 wrote

Regarding "we have no way of knowing what's happening in the black box": you're absolutely right, and in fact it's mathematically impossible. I'd suggest reading Wolfram's post on it. There is no provably "safe" way of deploying an AI. We can certainly do our best to align it with our goals and values, but you'll never truly KNOW with the certainty that Eliezer seems to want, and it's foolhardy to believe you can prevent the emergence of AGI in perpetuity. At some point, someone somewhere will either intentionally or accidentally cross that threshold. I'm not saying I believe there's zero chance an ASI will wipe out humanity — that would be a foolish position as well — but I'm pretty confident in our odds, and at least OpenAI has some sort of plan for alignment. You know China is basically going "YOLO" in an attempt to catch up. Since we're more or less locked on this path, I'd rather they crossed that threshold first.

https://writings.stephenwolfram.com/2023/03/will-ais-take-all-our-jobs-and-end-human-history-or-not-well-its-complicated/

3

SkyeandJett t1_je7xoxq wrote

I don't understand the question. WE are creating the AIs. They're literally "given life" through the corpus of human knowledge. Their neural nets aren't composed of random weights that spontaneously gave birth to some random coherent form of intelligence. In many ways AIs are an extension of the human experience itself.

4

SkyeandJett t1_je7u02o wrote

Wow, he really is unhinged. I mean, if he's right, everyone alive dies a few years earlier than they would have, I guess; the universe will barely notice, and no one on Earth will be around to care. On the flip side, since he's almost certainly wrong, you get utopia. If you told everyone, "Hey, I'll give you a coin flip: heads you die, tails you live forever with godlike powers," I'd flip that coin.

22

SkyeandJett t1_je7pnc9 wrote

You don't understand the implications of a post-scarcity society. UBI is a stopgap that keeps society afloat during the tiny window between AGI and a critical mass of generalist androids. As your cost of labor approaches zero and your supply of labor becomes unbounded, you're only limited by things that are TRULY scarce, and there aren't many of those. I've seen people here bring up things like beachfront property, but you could literally build islands to meet the desires of people who want to live on the beach, and all of that really just fills the gap to FDVR.

19

SkyeandJett t1_je7etpj wrote

That's an odd take. Why would our continued existence, or even our support, need to benefit the AI? That's doomer shit. It was literally created to serve our needs, and in the case of a truly sci-fi version of an ASI — a universal, god-like intelligence — our maintenance would require such an infinitesimal part of its attention that it would likely do so simply out of care for its creators.

3

SkyeandJett t1_je6sffa wrote

Our politicians are worried about regulating women's bodies and trans people — oh, and book burning. But I'm sure it'll be fine. Surely they'll all come together and institute a robust social safety net when a quarter of the country is unemployed. Hopefully /s isn't needed here...

17

SkyeandJett t1_je6fbgh wrote

I'd say in the short term the schools themselves will remain, but the adults will just be there to maintain order. Tons of teachers are already using GPT for their lesson plans. We'll just cut out the middleman. This will be especially true once AI can generate high-quality video on the fly.

3

SkyeandJett t1_je5hydx wrote

It's another reason I think people underestimate the scale and speed with which white-collar work will be more or less eliminated. You either adopt AI and lay people off as quickly as reasonably possible, or you get crushed by the competitor that does. Employees, especially white-collar employees, are a massive expense.

5

SkyeandJett t1_je588g5 wrote

Won't happen. That would be an absolute speedrun to getting your economy CRUSHED by the US. And how would you even enforce something like that when we're getting close to being able to cobble together an AGI on a home PC? I think we're all on this bullet train together and have to hold on for dear life and hope it works out.

15