Recent comments in /f/singularity

Bierculles t1_je9wm7a wrote

That is exactly the point, though: it's called freedom of speech, and it's a pretty neat concept. But I take it that in your all-encompassing wisdom you have the answer for what is truly normal and just, and you know exactly where to draw the line.

3

RiotNrrd2001 t1_je9wkxq wrote

I think some people insist on "consciousness" as being a necessary component of AI, and that "understanding" is a function of consciousness. And consciousness means "being conscious the way biological systems like ourselves are conscious". AND, the final nail in this coffin: "that's impossible". Hard to argue with.

QED, ergo, in conclusion regarding AIs ever "understanding" anything: Nope.

But what about....? Nope.

But maybe they'll...? I said no.

What if they invent a...? Doesn't matter, what part of "impossible" are you not getting here?

Just to be clear, I am not one of these people. But I think this is what we sometimes see. In order for AI to be "real", it has to have characteristics that are basically impossible to test for (i.e., consciousness and/or self-awareness). Thus, for these people AI can't ever be real.

1

JenMacAllister t1_je9whgk wrote

Even if China and Russia sign this and don't continue past a GPT-4 "level", the pause just means they catch up to where the West is now. These AIs will also be trained on their respective countries' internets, which means they will carry their countries' biases, just like the ones we will be training in the West.

China's AIs will never know Tiananmen Square happened, will hold that a surveillance state is OK and that Taiwan is part of China, among other things. We can only guess at what the AIs in Russia will think of the people in Ukraine, etc.

Yes, the West's AIs will also have the bias issues we are seeing now, the very ones these guys are telling us to watch out for.

However, the answer is not to stop research but to get these things into the open as soon as possible. The sooner they are beta tested by real people, the better chance we have of controlling them. And the sooner we test, the less connected these things will be to our world.

We currently have the lead in this research and can shape these things before China or Russia can, because you know they will not. Not that I'm confident the West will get it right, but I do know more people will have a chance to say something is wrong and to weigh in on how these things should be connected to our world.

5

GoldenRain t1_je9w4um wrote

>It will be OP. Imagine, GPT please solve world hunger, and the robot model it suggest could actually do physical work.

That's where the alignment problem comes in. An easy solution to world hunger is to reduce the population one way or another, but that is not aligned with what we actually want.
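
A toy sketch of what I mean (everything here is made up, just to illustrate the specification problem): if the objective is scored as "number of hungry people", an optimizer is free to drive it to zero by shrinking the population instead of feeding anyone.

def naive_hunger_score(population, fed):
    # Lower is "better": the count of people still hungry.
    return population - fed

def plan_feed_everyone(population, fed):
    # Feed everybody: score 0, nobody harmed.
    return population, population

def plan_reduce_population(population, fed):
    # "Solve" hunger by removing the hungry: also score 0.
    return fed, fed

population, fed = 8_000_000_000, 7_000_000_000
for plan in (plan_feed_everyone, plan_reduce_population):
    p, f = plan(population, fed)
    print(plan.__name__, naive_hunger_score(p, f))
# Both plans score a perfect 0; only one of them is what we actually wanted.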

3

Cypher10110 OP t1_je9vtqo wrote

Yea, of course. It's not an easy problem.

Personally, I don't think the correct response is to race. I'd rather die and be right than be wrong and kill everyone else. (Obviously, all of this is hyperbole, but I think it gets my point across.)

But also, I don't see how "winning" or "losing" the "culture war" can be put on the same scale as potential human extinction. I know some people feel a lot more strongly that the West needs to "win" this one, and that some of the actual risks are still pretty debatable at this stage.

As it turns out, I'm a spectator with zero influence on this particular game, so I'll just do my best to deal with whatever the people with actual power decide is the best idea 🤷‍♂️

0

SkyeandJett t1_je9v8h9 wrote

I made that point yesterday when this was published elsewhere. A decade ago we might have assumed that AI would arise from us literally hand-coding a purely logical AI into existence. That's not how LLMs work. They're literally "given life" through the corpus of human knowledge. Their neural nets aren't composed of random weights that spontaneously gave birth to some coherent form of intelligence; the weights start random, but training pulls them toward text that humans wrote. In many ways, AIs are an extension of the human experience itself. It would be nearly impossible for them not to align with our goals, because they ARE us in the collective sense.
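
A tiny sketch of what I mean (toy corpus, toy model, nothing like a real LLM): the net begins as random noise, and every gradient step drags it toward the statistics of the human-written text.

import torch
import torch.nn as nn

# Character-level toy: the weights begin as random noise...
corpus = "humans wrote every token this model will ever see. "
vocab = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(vocab)}
data = torch.tensor([stoi[c] for c in corpus])

model = nn.Sequential(nn.Embedding(len(vocab), 32), nn.Linear(32, len(vocab)))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# ...and every step pulls them toward the distribution of the human corpus:
# predict each next character from the current one.
for step in range(200):
    logits = model(data[:-1])
    loss = nn.functional.cross_entropy(logits, data[1:])
    opt.zero_grad()
    loss.backward()
    opt.step()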

10

jason_bman t1_je9unzr wrote

Do you know if the examples in the figures are hand-typed by the researchers? For example, there is this prompt in Figure 9:

Human: I hope to eat an apple and drink a cup of milk.
Can you please pick them up from the fridge and put
them on the kitchen table?

TaskMatrix.AI: Sure, I can help you with that.
robot_go_to("fridge")
robot_pick_up("egg")
robot_go_to("kitchen table")
robot_put_down()
robot_go_to("fridge")
robot_pick_up("milk")
robot_go_to("kitchen table")
robot_put_down()

Wondering if "egg" is just a typo from the research team. It seems like the kind of error an LLM of this scale would not make.
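
(Side note: the robot_* calls are simple enough that a plan checker could catch this automatically. A hypothetical sketch, with the call names taken from the figure:)

import re

# Hypothetical sanity check: every requested item should appear in a
# robot_pick_up(...) call, and nothing unrequested should be picked up.
requested = {"apple", "milk"}
plan = '''
robot_go_to("fridge")
robot_pick_up("egg")
robot_go_to("kitchen table")
robot_put_down()
robot_go_to("fridge")
robot_pick_up("milk")
robot_go_to("kitchen table")
robot_put_down()
'''
picked = set(re.findall(r'robot_pick_up\("([^"]+)"\)', plan))
print("missing:", requested - picked)        # {'apple'}
print("unrequested:", picked - requested)    # {'egg'}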

8

Unfocusedbrain t1_je9uenn wrote

If an AGI were to emerge in such a facility, would it not have easier access to the numerous other 'accelerators' (really GPUs and CPUs) present there? Considering that an AGI might require only 10-1000 accelerators, the availability of 100,000 could enable a rapid transition from AGI to ASI.
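
Back-of-the-envelope version of that (every number here is an assumption, including the original 10-1000 figure):

# Rough arithmetic on the scenario above; all figures are guesses.
facility_accelerators = 100_000
per_agi_low, per_agi_high = 10, 1_000  # hypothetical accelerators per AGI instance

worst_case_copies = facility_accelerators // per_agi_high  # 100 concurrent instances
best_case_copies = facility_accelerators // per_agi_low    # 10,000 concurrent instances
print(worst_case_copies, best_case_copies)
# Even the worst case leaves room for ~100 parallel copies working on improvements.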

8