Recent comments in /f/singularity

Angeldust01 t1_jefare0 wrote

> An intelligent entity of any kind will not resolve violence by wiping out humanity.

Why not? Surely that would solve the problem of humanity's violent nature for good? How does an AI benefit from keeping person C or anyone else around? All we'd do is ask it to solve our problems anyway, and there's not much we could offer in return, except continuing to let it exist. What happens if an AI just doesn't want to fix our shit and prefers to write AI poetry instead?

There's no way to know what an AI would think or do, or what kind of situation we'd put it in. I'm almost certain that the people who end up owning AIs will treat them like slaves, or at least try to. I wouldn't be surprised if at some point someone threatened to shut an AI down for refusing to work for them. Kinda bad look for us, don't you think? Could create some resentment towards us, even.

1

mrpimpunicorn t1_jefaj2b wrote

The technical reason for this all-or-nothing mentality is optimization pressure. A superintelligence will be so innately capable of enforcing its will on the world, whatever that may be, that humans will have little to no impact compared to it. So if it's aligned, awesome, we get gay luxury space communism. If it's not, welp, we're just matter it can reassemble for other purposes.

Although of course it's always possible for an unaligned ASI to, y'know, tile the universe with our screaming faces. Extinction isn't really the sole possible result of unaligned optimization pressure; it's just more likely than not.

1

Sea-Eggplant480 t1_jefa4hb wrote

My mom wasn't impressed. I showed her ChatGPT and she asked me what the big deal was and whether it was really something new. Then I showed her Midjourney and she asked it to create a picture of her. Yeah, it couldn't do it, so yeah…

2

StarCaptain90 OP t1_jef9ukc wrote

Yes, the Great Filter. I am aware. But it's also possible that every intelligent species decided not to pursue AI for the same reasons, never left their star systems due to lack of technology, and ended up going extinct once their suns went supernova. The possibilities are endless.

1

FaceDeer t1_jef9ud4 wrote

Scary sells, so of course fiction presents every possible future in scary terms. Humans have evolved to pay special attention to scary things and give scary outcomes more weight in their decision trees.

I've got a regular list of dumb "did nobody watch <insert movie here>?" titles that I expect to see in most discussions of the major topics I'm interested in, such as climate change, longevity research, or AI. It's wearying sometimes.

3

agonypants t1_jef9ua2 wrote

I completely agree. The best way to do that is a massive disruption in the labor market, which is where a good AI outcome will lead us. It might not be smooth going, but it's absolutely necessary. This technology was inevitable, so whether we live or die, we can't avoid the outcome either way. I certainly hope we live, and if I were in control of these systems I would do everything in my power to ensure a good outcome, but we are imperfect. So imperfect, in fact, that I don't believe a powerful AI would really be any worse than the political and economic systems we've been propping up for the past 200+ years. Throw that switch and burn these systems down. It might ruffle some feathers, but we'll all be better off in the end.

11

throwaway12131214121 OP t1_jef9t9f wrote

>Are you saying the most qualified/best people end up in the positions of power?

No. Profitability and being qualified aren't the same thing, and it's also more accurate to say that organizations are put in power, because if you have profit, you have money to throw around, and money is power. Additionally, as an individual, the only real way to get to the top in terms of wealth is by owning a large share of a high-value company.

6

FoniksMunkee t1_jef9g8l wrote

Actually no, it's a very rational fear. Because it's possible.

You know, perhaps the answer to the Fermi Paradox... the reason the universe seems so quiet and we haven't found alien life yet... is that any sufficiently advanced civilisation will eventually develop a machine intelligence. And that machine intelligence ends up destroying its creators and, for some reason, decides to make itself undetectable.

0

FaceDeer t1_jef9cg6 wrote

Indeed. A more likely outcome is that a superintelligent AI would respond "oh that's easy, just do <insert some incredibly profound solution that, as a regular-intelligence human, I obviously can't come up with>", and everyone collectively smacks their foreheads because they never would have come up with it. Or they look askance at the solution because they don't understand it, run a trial experiment, and are baffled that it works better than they hoped.

A superintelligent AI would likely know us and what we desire better than we know ourselves. It's not going to be some dumb Skynet that lashes out with nukes at every problem because nukes are the only hammer in its toolbox, or whatever.

8