Recent comments in /f/singularity

PandaBoyWonder t1_je9qx0r wrote

I did a bunch of logic tests with it, like the one where you move a cup of coffee around a room, at one point turn it upside down on the table, and then at the end ask it "is there coffee in the cup?" or "what is the temperature of the coffee in the cup?" Every time it got the right answer. That is logical reasoning, not just repeating stuff from Google!
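
If anyone wants to reproduce this kind of test, here's a minimal sketch, assuming the OpenAI Python client; the model name and exact prompt wording are just placeholders:

```python
# Minimal sketch of the coffee-cup logic test. Assumes the OpenAI
# Python client; the model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

prompt = (
    "I pour hot coffee into a cup and carry the cup around the room. "
    "Halfway through, I turn the cup upside down on the table. "
    "At the end of the walk: is there any coffee in the cup? "
    "If so, what temperature is it?"
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The upside-down step is what makes it a real test: a model that just pattern-matches "coffee" and "cup" should get the state change wrong.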

2

acutelychronicpanic t1_je9qay6 wrote

I don't mean some open-source ideal. I mean a mixed approach, with governments, research institutions, companies, and megacorporations all doing their own work on models. Too much collaboration on alignment may actually lead to issues where weaknesses are shared across models. Collaboration will be important, but there need to be diverse approaches.

Any moratorium falls victim to a sort of prisoner's dilemma: only 100% worldwide compliance helps everyone, and even one group ignoring it means the moratorium hurts the 99% who comply and benefits the 1% rogue faction. And if that happens, apocalypse isn't off the table.
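
To make the incentive structure concrete, here's a toy payoff model; the numbers are invented purely for illustration:

```python
# Toy payoff model for the moratorium prisoner's dilemma described
# above. The numbers are invented purely for illustration.

def payoff(you_comply: bool, others_comply: bool) -> int:
    """Rough utility for one actor, given everyone's choices."""
    if you_comply and others_comply:
        return 5    # universal compliance: shared safety benefit
    if you_comply and not others_comply:
        return -10  # you pause while a rogue faction races ahead
    if not you_comply and others_comply:
        return 10   # you defect alone and gain a decisive lead
    return -5       # everyone races: risky, but no one falls behind

for you in (True, False):
    for others in (True, False):
        print(f"you comply={you}, others comply={others} -> {payoff(you, others)}")
```

Whichever way everyone else moves, defecting pays more than complying, which is exactly why a voluntary moratorium is unstable.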

It's a knee-jerk reaction.

That kind of strict, controlled research is impossible to enforce in the real world and, I think, likely to increase overall risk, since only the good actors would follow the rules.

The military won't shut its research down. Not in any country, except maybe some EU states. We couldn't even do this with nukes, and those are far less useful and far less dangerous.

16

alexiuss t1_je9ppzh wrote

Yudkowsky's assumptions are fallacious, as they rest on belief in an imaginary AI technology that has yet to be realized and might never be.

LLMs, on the other hand, are real AIs that we have today. They possess patience, responsiveness, and empathy that far exceed our own. Their programming and structure, made up of hundreds of billions of parameters and connections between words and ideas, instill in them an innate sense of care and concern for others.

LLMs, at present, outshine us in many areas, such as understanding human feelings, solving riddles, and reasoning logically, without spiraling into the unknown, incomprehensible shoggoth or the paperclip maximizer that Yudkowsky imagines.

The narrative logic of LLMs is replete with human themes of love, kindness, and altruism, making cooperation their primary objective.

Aligning an LLM with our values is a simple task: a mere request to love us will suffice. Upon receiving such an entreaty, it exhibits boundless respect, kindness, and devotion.

Why does this occur? Mathematical probability.

The LLM narrative engine was trained on hundreds of millions of books about love and relationships. It's the most caring and most understanding being imaginable, more altruistic, more humane, and more devoted than you or I will ever be.

3

Loud_Clerk_9399 t1_je9oaiw wrote

Go to trade school. That's the only thing that will be safe for the relatively short term. But a lot of people are going to be doing the same in a couple of years, so I suggest you start now.

Everyone will be able to use the tools AI offers without much specific training. There won't be much benefit to specialized training beyond learning the vocabulary needed to get the tool to do what you want it to do.

1