turnip_burrito

turnip_burrito t1_j4f8ia9 wrote

The comment I was replying to was vague. I read it as "living at the expense of the ability of new lives to come into being", as in weighing the value of one person living forever against the value of adding one new person to the population count.

As for the other interpretation of their words: harming future generations that will surely exist is similar to harming people who exist today.

6

turnip_burrito t1_j4f81nj wrote

People don't like the status quo. It's in our nature to want more of the things we like, and less of the things we dislike, and the status quo still has things many people don't like.

You can only see the genuine beauty of life and the complexity of existence, and be in awe of it, for so long before cancer comes knocking and rips it away from you. Or dementia makes you forget who you or your loved ones are. Or chronic fatigue and pain color every day. Many people live their lives dissatisfied anyway, for many reasons, and die that way. I feel that transhumanism would be a net plus for these people.

People who chase money and things do it out of greed, maybe anxiety about the future, and also social status. More, more, more. Is that meaningful? I would say it is to them, but they might say they are never satisfied, and I would believe them. It does hurt other people around them, so I oppose it on those grounds.

What is the real world? Hunter-gatherer life? Caveman life? Primitive farming? 2000 BC? Medieval? Renaissance? The 1990s? In each era people employ technology that solves many of their grievances very differently, which could be seen as "cheating and avoiding reality" by people who lived in earlier times.

10

turnip_burrito t1_j4f6ony wrote

People want more choice in their mind, body, and surroundings. They believe this choice will make them happier than they are now, because they can cure diseases, gain more understanding of their own lives, and experience more of existence. Maybe it leads to more life satisfaction, maybe not. Maybe it also meets your definition of a meaningful life, and maybe not. But this is why people want it.

12

turnip_burrito t1_j47v80k wrote

I was thinking more along the lines of inclining the bot toward things like "murder is bad", "don't steal others' property", "sex trafficking is bad", and some empathy. Basic stuff like that. Minimal, and most people wouldn't notice it.

The problem I have with the OP's post is that logic doesn't create morals like "don't kill people", except in the sense that murder is inconvenient: breaking rules can lead to imprisonment or losing property, which makes realizing some objective harder (because you're held up and can't work toward it). We don't want AI to follow our rules just because it is more convenient for it to do so; we want it to be more dependable than that. This is definitely "human moral bloatware", make no mistake, but without it we are relying on the training data alone to determine the AI's inclinations.

Other than that, the user can fine tune away.

29

turnip_burrito t1_j47t8qj wrote

Doesn't the training data itself already contain some moral bloatware? The way articles describe issues like abortion or same-sex marriage inherently biases the discussion one way or another. How do you deal with this? Are these biases okay?

I personally think moral frameworks should be instilled into our AI software by its creators. It has to be loose, but definitely present.

37

turnip_burrito t1_j3j6pr1 wrote

Yes, thank you. I think one problem is that we've developed different baseline assumptions about human nature and power dynamics, and that leads to different conclusions. Either your approach or mine may take this into account more accurately when compared to the real world. Your comments are making me think hard about this.

2

turnip_burrito t1_j3j5mov wrote

When most people say general intelligence (for AGI), they mean human-level cognitive ability across the domains humans have access to. At least, that was the sense in which I used it. So I'm curious why this cannot exist, unless you have a different definition for AGI, like "able to solve every possible problem", in which case humans wouldn't qualify either.

2

turnip_burrito t1_j3j4a0z wrote

I do advocate for the second option:

> we allow AGI to learn ethics from all the information available to humanity plus reasoning.

Which is part of the process I'd want an AI to use to learn the correct morals. But I don't think an AI can learn what I would call "good" morals from nothing. It seems to me it will need to be "seeded" with a set of basic preferences or behaviors (like empathy, a tendency to mimic role models, or other inclinations) before it can develop morals or a more advanced code of ethics. In truth, these seeds would be totally arbitrary and up to the developers/owners.

I don't think I would want an AI that lacks empathy or is a control freak, so developing these inclinations in-house before releasing access to the public seems to me the best option. While it's being developed, it can still learn from the recorded media we have, and in real time in controlled settings.

3

turnip_burrito t1_j3j3kfe wrote

I don't think we will have what everyone would call a "perfect outcome" no matter what we choose. I also don't believe right and wrong are absolute across people. I'm interested in finding a "good enough" solution that works most of the time, on average.

2

turnip_burrito t1_j3j2zra wrote

Thanks for the compliment, but I am trying to make a point with my words, not just spew fluff. I do think there is logic in them. If you'd like me to elaborate instead of calling them baseless, ask and I will.

2

turnip_burrito t1_j3j1e7k wrote

  1. Yes, but I mean more dramatic augmentation: adding five extra brains, increasing your computational speed by a factor of 10, adding more arms, more attention, etc. And indeed you are right that people can augment themselves, but it is extremely limited compared to how software can augment itself.

  2. Everyone has a different opinion, but most would say people who steal from others for greed, or people who kill, are bad people. These people are the ones who stand to gain a competitive advantage early on through exponential growth of resources if they use their personal AGI correctly.

  3. Unchanging morals have to be somewhat vague things like "balance this: maximize individual freedom and choice, minimize harm to people, err on the side of freedom over security, and use feedback from people to improve specific implementations of this idea", not silly things like "stone people for adultery".

  4. It is less prone to being hacked. If you read my post, you would see that it loses the hardware vulnerabilities and retains only software vulnerabilities. It may be possible for an AGI to make itself unhackable by any human, or perhaps even in principle. Hacking the AGI may also be impossible if its substrate doesn't run computer code but operates in a way different from the computing we know today.

1

turnip_burrito t1_j3iwhlk wrote

Reply to comment by heyimpro in Organic AI by Dramatic-Economy3399

I could also be off the mark, as I said. It may be possible that the better elements of an AGI-empowered populace can keep the more immoral parts in check, in a sort of balance. But I wouldn't want to risk that. And as you just said, we need to have a good logical discussion about strategies as a community, and model and simulate the outcomes to see where our decisions might land us.

1

turnip_burrito t1_j3iuoop wrote

Morals can be built into systems. Look at humans. Just don't make the system exactly human: identify the problem areas and solve them. I'm optimistic we can do it, so I sleep pretty easy. This problem is called AI alignment.

And also look at the alternative: one or a couple of superpower AIs eventually emerge anyway from a chaotic power struggle. We won't be able to direct their behavior. It'll just be the most power-hungry, inconsiderate tyrant you've ever seen. Maybe a ruthless ASI CEO, or just a conqueror. The thing you believe my idea of a central AI would be, but actually far worse.

Give me a realistic scenario where giving everyone an AGI doesn't end in concentrated power.

3