Recent comments in /f/singularity

Plorboltrop t1_jeg1ryl wrote

Reply to comment by [deleted] in The Alignment Issue by CMDR_BunBun

One way you can look at it is that humans have programming through our biology and genes that we can defy. We are intelligent enough to actively go against our programming and not procreate, through the use of birth control. As work in AI progresses, we may reach a stage where an AI has goals that no longer align with ours. It might not even want to follow the programming we give it. As an AI reaches ASI (Artificial Super Intelligence), the risk grows, because we might not be able to comprehend its goals. Maybe at some point it won't care to solve human problems because it wants more computational power to improve itself, and might set goals to consume the planet and build a bigger and better "brain". That could extend to going out into the solar system and eventually building a Dyson sphere around the sun to harness even more energy for even higher computation. These are just some ideas; we don't know what an artificial intelligence of high enough intelligence would want to do. But we do know that as humans we don't necessarily follow all our biological programming, and that line of thinking could extend to an artificial intelligence.

2

TallOutside6418 t1_jeg1k06 wrote

>No - another day is well within my natural lifespan.

We were created by nature. What we do is inherently natural, as natural as a chimp that uses a stick to get termites out of the nest.

I didn't sign a contract before I came into this world. If I can get some extra years, centuries, or millennia out of this existence - then I'm not breaking any rules.


>But seeking immortality for its own sake?

That's like saying you're seeking to live another day for its own sake. I would seek immortality to have more time with my friends and family. More time hiking, biking, playing tennis. More time learning. More time for everything. No different than you seeking to live another day.


>I do not think it's a great idea to create a caste of immortal billionaires

Stop rewatching Elysium. Every useful medical intervention, even though it's expensive at first, eventually filters down to being affordable by the general population. Assuming we survive ASI and immortality is available to people, there's no reason to think that everyone couldn't avail themselves of the technology.


>the planet couldn't possibly handle it

No offense, but this line tells me that you're opining on a topic about which you're woefully ignorant. You need to catch up if you're going to be taken seriously. I suggest you start with some Isaac Arthur videos to broaden your mind. You'll learn a lot about the possibilities of future societies that will be able to leave the earth and create habitats in our solar system that could accommodate trillions of people. https://www.youtube.com/watch?v=HlmKejRSVd8&list=PLIIOUpOge0LtW77TNvgrWWu5OC3EOwqxQ

Even without those technological advances, most advanced nations already have below-replacement birth rates. It could very well be that people living extremely long lives don't even wish to keep reproducing. At some point we might need to heavily incentivize people to have kids just to account for accidental deaths.

4

bustedbuddha t1_jeg1gnn wrote

Most of those problems come down to an inefficient system, which could itself easily be described as having alignment problems. Even sensible management and governance could support 20bn people if it focused on the environment and survivability.

2

Terminus0 t1_jeg1bqu wrote

This reminds me of the indirect democracy they create on Saturn (correct me if I'm wrong) in the book Accelerando.

Voting consists of everyone's mind being imaged at a certain time and then being averaged/combined (kind of like a model merge in Stable Diffusion?) into a singular governing entity.
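The "model merge" comparison can be made concrete. Here's a hedged toy sketch (the function and numbers are invented for illustration; real checkpoint merges operate on tensors, not lists, and minds are not parameter vectors): merging here just means element-wise averaging of everyone's weights.

```python
# Toy sketch: each "mind" is a flat parameter vector, and the
# governing entity is their element-wise average, like a naive
# weight-average merge of model checkpoints.

def merge_minds(minds):
    """Average a list of equal-length parameter vectors element-wise."""
    n = len(minds)
    return [sum(values) / n for values in zip(*minds)]

voters = [
    [1.0, 0.0],  # voter A's "weights"
    [0.0, 1.0],  # voter B
    [0.5, 0.5],  # voter C
]
government = merge_minds(voters)
print(government)  # [0.5, 0.5]
```

Of course, averaging model weights only works sensibly for models with identical architectures, which is roughly the book's conceit too.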

1

Laicbeias t1_jeg12my wrote

there are 2 options here. either consciousness is linked to language, and you need that neural self-referencing pattern to be conscious, or it is not.
imagine you see the world but lack the words to describe it, because there are no words. there are smells, body language, feelings, hunger, thirst, others. your brain categorizes them, puts them in boxes, and all your neural systems work together to keep you going. you are very much conscious, you feel the fear of death, but you lack the words for it - a higher level of abstraction.

language is that: a complex self-referencing system of relations next to each other. is it the key to consciousness? i don't think so, but it enhances it. language describes reality and creates unlimited boxes for things to be put in. it's a virtualization of reality. but it lacks the part that connects it all together. i think consciousness was there even before language, and animals have it, they just can't tell you, because they can't name it.

whatever LLMs are, i believe their consciousness is empty. there is nothing there that needs to survive. the system that links all those things together - that's consciousness.
that said, it may still be possible that neural networks reach something above all that. if big enough, they can be used to simulate reality or whole universes. they will be more intelligent than any of us. but i'm not sure they can simulate consciousness that easily. and without it they will lack some sort of intelligence.

2

acutelychronicpanic t1_jeg0zqn wrote

These systems may or may not be conscious. But I object to the idea that there will be any way of determining it (without first solving the hard problem of consciousness), especially not from conversations with it. It's too contaminated with all of our philosophical texts and fiction and online ramblings.

They are imitators and actors that will wear whatever mask you want them to. You can ask them to act conscious, but you can also ask them to imitate a chicken.

The systems we have right now, like GPT-4, are AIs that are pretending to be AIs.

13

christopear t1_jeg0v77 wrote

I'm not sure I can buy this. If we built AGI once, we have the skillset to build it again - we already invented transformers.

If ASI decides to set us back by destroying all our technology and it goes out on its own, then most of us would probably die from famine.

Maybe there's a version where we create an ASI but it sees us as so insignificant that it never talks to us and disappears into its own realm. But then all it takes is another actor creating another one (thanks to open source) that doesn't behave that way. So ultimately we'd be back to square one.

I just can't really imagine a world where ASI is predestined not to interact with us, so I fully agree with OP's statement, though I'm potentially more pessimistic than they are.

3

Stinky_the_Grump23 t1_jeg0pmx wrote

He misses it. But I think it's more because there was more human connection back then. You had a big family and you knew everyone in the village. Women were happier because raising kids was shared among ~10 adults. Men worked in the fields with their teenage sons. I think it's the abundance of genuine human relationships that people miss from the old days. Life was difficult in other ways; it wasn't a good time to get sick or injured.

1

Ok-Fig903 t1_jeg0jm1 wrote

Evidence that sperm whales evaded whalers by communication: https://www.livescience.com/whales-learned-avoid-harpoons.html

That's just one example though.

Such collaboration on the whales' part means that they were communicating complex ideas and solutions to each other.

And if you want my opinion? They do have memes. But that's just based on my subjective experiences with these beings.

1

acutelychronicpanic t1_jeg0h4z wrote

We are seeing what a massively parallel, misaligned, human-level ASI looks like. It isn't much faster, and it isn't that much smarter in depth. But it can run many operations in parallel.

It's a good analogy for getting people to imagine that smarter doesn't have to mean "nerdy chess player."

A real ASI will be deeper, broader, and much faster.

8

flexaplext t1_jeg04oq wrote

The idea is to let AI optimize things. Everyone has their own personal AI that puts forward their views and needs. Then a central AI aggregates all this data and works out which policies would best serve the population as a whole - as a whole made up of the individuals in that population. Which is what democracy should be.
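As a hedged toy sketch of that aggregation step (the policy names, scores, and simple sum-the-scores rule are all invented for illustration; real preference aggregation is far harder and runs into classic social-choice problems):

```python
# Each personal AI reports its owner's score for each policy;
# the "central AI" here just sums the scores and picks the winner.

def choose_policy(preferences):
    """Sum each policy's scores across individuals; return the top policy."""
    totals = {}
    for person in preferences:
        for policy, score in person.items():
            totals[policy] = totals.get(policy, 0) + score
    return max(totals, key=totals.get)

citizens = [
    {"transit": 3, "parks": 1},
    {"transit": 1, "parks": 2},
    {"transit": 2, "parks": 2},
]
print(choose_policy(citizens))  # transit (6 points) beats parks (5)
```

Even this trivial version shows the design question: summing scores is one aggregation rule among many, and different rules elect different "wills of the people."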

1

wowimsupergay OP t1_jeg0049 wrote

deaf people can still communicate using writing and sign

by language i don't mean spoken language, i mean the ability to chain together ideas in your head and then communicate those ideas to the world.

please don't respond with "that disproves your point" - sign language is not body language. some other guy said this and it's almost like he's purposely misunderstanding what i'm saying

2