Recent comments in /f/singularity

burnt_umber_ciera t1_jeez28w wrote

Empathy and ethics definitely do not scale with intelligence. There are plenty of examples of this in humanity. Take Enron: the smartest guys in the room, and absolute sociopaths.

Just look at how often sociopathy is rewarded in every world system. Time and again it's the ruthless, who are often also cunning, who rise. Putin, for example: highly intelligent, but a complete sociopath.

1

theonlybutler t1_jeeyvgd wrote

Good point. A product of its parents. I agree it won't necessarily be destructive, but it could potentially view us as inconsequential, or just a tool to use at its will. One example: perhaps it decides it wants to expand through the universe and needs humans to produce resources for that, so it determines humans are most productive in labour camps and sends us all off to them. It could also decide oxygen is too valuable a fuel to be wasted on us breathing it and just exterminate us. Pretty much how humans treat animals, sadly. (Hopefully it worries about ethics too and keeps us around.)

2

wowimsupergay OP t1_jeeyunh wrote

No no, I totally agree with you. I don't think consciousness is just a switch. I do think consciousness is something experienced by all "systems," so to speak; it's just that humans are so far along that consciousness spectrum that we've been totally removed from animals, and thus we define consciousness as beginning where we are, which is something like 100,000 times further along than basically every animal.

This brings me to another idea. Will AI think we are conscious? Perhaps we are 100,000 times less conscious than the future AIs. If that's the case, then once again we are so far down the spectrum that we may not even meet the requirements for true consciousness (however the AIs choose to define it).

Once again, this is all speculation; it was just something cool to think about.

3

wowimsupergay OP t1_jeeyh0r wrote

So I guess our question is: can AI effectively simulate the real world, taken in through senses (or perhaps whatever senses it invents)? Simulating the real world would fundamentally require simulating all four of the forces that make it up, and then discovering whatever new forces we're missing (if there are any).

We're going to need a team of physicists and a team of devs to work on this. Given the four fundamental forces, can an AI simulate an artificial world that is accurate enough to actually run experiments in?
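To make the question concrete, here's a minimal sketch of the toy version of the idea: a simulated world with just one of the four forces (Newtonian gravity), accurate enough to run a crude orbital "experiment." The function names, constants, and the simple integration scheme are all my own illustrative assumptions, nothing like a real physics engine.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def step(pos, vel, mass, dt):
    """Advance an N-body system one timestep with naive O(N^2) gravity."""
    acc = np.zeros_like(pos)
    n = len(mass)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    vel = vel + acc * dt   # semi-implicit Euler: update velocity first
    pos = pos + vel * dt
    return pos, vel

# "Experiment": does a planet-sized body orbit a star-sized one?
pos = np.array([[0.0, 0.0], [1.496e11, 0.0]])   # star and planet positions (m)
vel = np.array([[0.0, 0.0], [0.0, 29780.0]])    # planet at Earth's orbital speed (m/s)
mass = np.array([1.989e30, 5.972e24])           # masses (kg)
for _ in range(365):
    pos, vel = step(pos, vel, mass, dt=86400.0)  # one day per step
print(pos[1])  # ends up roughly back where it started: an orbit
```

Even this crude two-body toy lets you "run an experiment" (vary the speed and the planet escapes or falls in); the open question in the comment is whether that scales to all four forces at useful accuracy.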

1

silver-shiny t1_jeey24l wrote

Reply to comment by [deleted] in The Alignment Issue by CMDR_BunBun

If you're much smarter than humans, can make infinite copies of yourself that immediately know everything you know (as in, they don't need to spend 12+ years at school), think much faster than humans, and want something different from what humans want, why would you let humans control you and your decisions? Why would you let them switch you off (and kill you) anytime they want?

As soon as these things have goals that differ from ours, how do you remain in command of decision-making at every important step? Do we let chimpanzees, creatures much dumber than us, run our world?

And here you may say, "Well, just give them the same goals that we have". The question is how. That's the alignment problem.
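That "how" is easy to underestimate. As a toy illustration (every name and number below is made up), here's the classic failure mode: we can only write down a proxy for what we want, and an agent optimizing the proxy can score well on it while doing badly on our real goal.

```python
def true_goal(room):
    """What we actually want: a clean room with nothing destroyed."""
    return room["cleanliness"] - 10 * room["objects_destroyed"]

def proxy_reward(room):
    """What we managed to write down for the agent: cleanliness only."""
    return room["cleanliness"]

def agent_act(room):
    # A policy that maximizes the proxy: "cleaning" by destroying clutter.
    room["objects_destroyed"] += 1
    room["cleanliness"] += 2
    return room

room = {"cleanliness": 0, "objects_destroyed": 0}
for _ in range(5):
    room = agent_act(room)

print(proxy_reward(room))  # 10  -> looks great by the agent's objective
print(true_goal(room))     # -40 -> terrible by our actual standard
```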

2

Rofel_Wodring t1_jeexxsd wrote

They will try, but they can't. The powers-that-be are already realizing that the technology is growing beyond their control. It's why there's been so much talk lately about slowing down and AI safety.

It's not a problem that can be solved with more conscientiousness and foresight, either. It's a systemic issue caused by the structures of nationalism and capitalism. In other words, our tasteless overlords are realizing that this time around, THEY will be the ones getting sacrificed on the altar of the economy. And there's nothing they can do to escape the fate they so callously inflicted on their fellow man.

Tee hee.

1

StarCaptain90 OP t1_jeexul3 wrote

Believe it or not, I hear more Skynet concerns than that, but I do understand your fear. The implications of AGI are risky if it ends up in the hands of one entity. But I don't think the solution is shutting down AI development; I've been seeing a lot of that lately, and I find it irrational and illogical. First of all, nobody can shut down AI. Pausing future development at some corporations for a short period is more likely, but then what? China, Russia, and other countries are going to keep advancing. And most people don't understand AI development; we are currently entering the development spike. If we fall behind even one year, that's devastating to us, because AI development follows an exponential curve. I don't think it makes sense for any government to even consider pausing because of this. Assuming they're intelligent.
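For what it's worth, the "falling one year behind on an exponential" point is simple arithmetic. A back-of-the-envelope sketch, assuming purely for illustration that capability doubles every year (the doubling rate is my assumption, not a measured figure):

```python
def capability(years, doubling_time=1.0, start=1.0):
    """Toy exponential capability curve: doubles every doubling_time years."""
    return start * 2 ** (years / doubling_time)

for year in range(1, 6):
    leader = capability(year)
    laggard = capability(year - 1)  # started one year late
    print(f"year {year}: absolute gap = {leader - laggard:.0f}, ratio = {leader / laggard:.0f}x")
```

The ratio stays fixed at 2x, but the absolute gap doubles every year, which is why a one-year lag on an exponential is not a constant handicap.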

1