turnip_burrito

turnip_burrito t1_j3itd02 wrote

I'm not a doomer, m8. I'm pretty optimistic about AI as long as it's not done stupidly. AGI given to individuals empowers the individual to an absurd degree never seen before in history, except perhaps with nukes. And now everyone can have one.

The Cuban Missile Crisis had a limited number of actors with real power. What would have happened if the entire population had nukes?

1

turnip_burrito t1_j3irplu wrote

Reply to comment by heyimpro in Organic AI by Dramatic-Economy3399

In short, yes, I think one central AI is the safest and most beneficial option. I think it hinges on hardware security and on how rigid the AIs are in their moral structure.

In order for an autonomous robot to be safe upon release, it has to always be limited in some way: either proven unable to improve beyond a threshold, or limited by a separate external supervising entity. Most AI tools today are the first kind, unable to improve beyond a threshold because of their architecture: they cannot learn in real time, they have access to only one or two modes of data (audio, text, or image), they have no spatial awareness, and so on. We humans are similarly limited in that we cannot augment our processing power and knowledge above a certain threshold: limited attention, limited lifespan, no way to modify the brain or copy it.

Now let's consider a morally blank-slate AI. A single AI's limitations look more like those of a population of humans than those of a single human. The human species as a whole doesn't really share many of an individual's limitations in a meaningful way: it can copy knowledge through education, increase its total attention by adding more humans, and avoid death by adding more humans. A single general AI at human level would be an individual entity with the same learning rate as a human, but it would be basically immortal and able to acquire perfect knowledge of how its own brain works (it's all in scientific papers on the Internet). If bad actors give their personal AI access to this knowledge, which eventually one or many of them would, it can plan how to make many clones of itself. If making a new mind is as easy as running code on a computer, clones can be made instantly. If it requires specialized hardware, cloning the AI is harder but still doable if you are willing to take that hardware from other people. From there, these people's ability to write malicious code that compromises other systems, autonomously manage and manipulate markets, socially engineer others with intelligent targeted robot messages, perform their own scientific research, and so on just snowballs.

If morals that limit their actions can be built into the AIs ahead of time and not allowed to change, then they can be considered safe. To address your point about everyone having the same AI: in a sense, yes, morally the same AI, but its knowledge of you could be tailored. The AIs would still need strong bodies, a robust self-destruct, or encryption to protect themselves from bad actors who want to seize and hack their hardware and software. An AI built into a chip on a pair of glasses would be vulnerable to this.

A central AI with built-in morals can refuse requests for information, yet still serve you over a connection much like a local AI would. Because it is physically removed, it is in little to no danger of being hardware-hacked, and while people use it, it still perceives the world the way a local AI would.

I'm sure a person, group, or AGI that has thought about this longer than I have can refine and improve these ideas.

0

turnip_burrito t1_j3igzgt wrote

Reply to comment by heyimpro in Organic AI by Dramatic-Economy3399

Releasing many copies of autonomous, AGI-capable software that has yet to learn to the general public would be mayhem. It would be guaranteed to end in dictatorship by a psychopath, or in extinction. The honest and less power-hungry people would have to compete for resources with people who are actively seeking power and unafraid to cross ethical lines.

0

turnip_burrito t1_j3g8hd0 wrote

Yeah, it might be able to prove things about reality we don't yet know from basic logical principles. If it could do something like start from nothing but basic logic and derive all of observable physics, and then new physics, and then stuff we can't test like panpsychism, without adding any new assumptions, then I wouldn't say I'm 100% convinced, but would be 99.9% convinced.

1

turnip_burrito t1_j3bb4p2 wrote

Well, not graduate level. Junior level in college is sufficient for the most basic models. For feedforward neural networks, for example, you just need to know the chain rule from calculus and some summation notation.
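For concreteness, here is a minimal sketch of the kind of chain-rule calculation involved, for a one-hidden-layer network with a squared-error loss (my own notation, not taken from any particular course):

```latex
% One hidden layer, scalar output, squared-error loss:
%   h_j = \sigma\big(\textstyle\sum_i v_{ji} x_i\big), \quad
%   \hat{y} = \textstyle\sum_j w_j h_j, \quad
%   L = \tfrac{1}{2}(\hat{y} - y)^2
% Gradient with respect to a first-layer weight, by the chain rule:
\frac{\partial L}{\partial v_{ji}}
  = \frac{\partial L}{\partial \hat{y}}
    \cdot \frac{\partial \hat{y}}{\partial h_j}
    \cdot \frac{\partial h_j}{\partial v_{ji}}
  = (\hat{y} - y)\, w_j\, \sigma'\!\big(\textstyle\sum_i v_{ji} x_i\big)\, x_i
```

Everything in there is just the chain rule plus sums, which is why a junior-level calculus background covers it.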

Beyond that, there's Bayesian probability, geometric series, convergence rules, and constructing mathematical proofs. That material is more advanced, but it shouldn't take too long to pick up if taught correctly; doing meaningful work with it, though, takes much longer (basically graduate level).

1

turnip_burrito t1_j350gah wrote

It's actually a pretty interesting question that we might be able to test and answer. Not solipsism, but the question of whether our machines have qualia, assuming we believe other humans have them.

There is some specific subset of our brains whose neural activity aligns with our conscious experience. If we try adding things to it, or temporarily removing connectivity to it, we can determine which physical systems have qualia, and what separates "qualia-producing" systems from unconscious ones while still allowing information to flow back and forth between them.

We have to tackle stuff like:

  • Why are we aware of some parts of our brains and not others? The parts we are unaware of can do heavy computational work too. Are they also producing qualia, or not? And if so, why?

  • What stops our conscious mind from becoming aware of background noises, our heartbeat, automatic thermal regulation, and so on?

Then we can apply this knowledge to robots to better judge how conscious they are. Maybe it turns out that as long as we follow certain information-theoretic rules, or build with certain molecules, we avoid making conscious robots. For people who want to digitize their minds, this would also help ensure the digital copy is producing qualia and is not a philosophical zombie.

1

turnip_burrito t1_j34xhjf wrote

If I have to use the word, I use it to mean just "qualia", but I usually don't use the word "consciousness" because other words are more productive imo.

I usually don't care what it means in a conversation, as long as it is clearly defined. The biggest problem with the word, imo, is that different people have different definitions for it (which is fine, as long as they're clarified), and that causes long, drawn-out pseudo-deep conversations where two people are using the same word differently.

So your definition is fine (and interesting), thanks for sharing!

1

turnip_burrito t1_j33unnx wrote

If consciousness isn't the subjective experience, and it isn't the externally observable physical system, then there is nothing left for it to be, except perhaps a third thing like a "soul". It is logically impossible for a word to sensibly mean anything except one or a subset of those three things. Consciousness, to mean anything, would be one of those or some aspect of those. The cause or process of it is not determinable now, but the thing itself can be put to words easily.

If something is impossible to define, then the word should not be used in a serious discussion.

2

turnip_burrito t1_j32ob9d wrote

It is simple. What the definition of "consciousness" should be to begin with is subjective. But once you define something, you should not contradict it unless you explicitly change which real world things it is describing.

How silly would it be for me to define dogs as "canines kept as pets" and then later say "well, dogs don't have to be canines"? That's what has happened with ChatGPT.

−11

turnip_burrito t1_j2qwrtz wrote

Every generation has had to make peace with its existence. For anyone who isn't on an eternal morphine drip, lobotomized, or otherwise artificially altered in the brain, the answer to this is a mental evolution; it isn't solvable just with material possessions or environment.

Social validation and hobbies will likely still exist, and people will continue to derive meaning from them. There will be suffering over problems and scenarios we consider petty or alien today, and the people of that time will have to overcome it somehow if they want to be happy.

And then you'll have the other people/entities: those who are constantly high, or who have changed their brains to always be satisfied. They won't worry about these things.

1

turnip_burrito t1_j27a6y8 wrote

Reply to comment by rixtil41 in A future without jobs by cummypussycat

There's still scarcity of authentic human goods and services, even in a world where AI can provide any physical good or service. For example, if you wanted to buy a new statue or art piece a famous artist made, or spend time with them, you could pay them money for it. This is a way of keeping tabs on how much people in this optional economy of goods owe each other. That economy still has scarcity (there is only ONE authentic item or experience), and many people value that authenticity over a copy. It's the same reason a copy of the Mona Lisa is worth much less than the real one.

2

turnip_burrito t1_j2785nr wrote

Reply to comment by rixtil41 in A future without jobs by cummypussycat

There will likely still be jobs people create for themselves in order to exchange goods and services of value (time with others, personal products), but they won't be necessary to receive basic life necessities plus a basic luxury allowance.

3