turnip_burrito t1_j4f97wg wrote
Reply to comment by photo_graphic_arts in What void are people trying to fill with transhumanism? by [deleted]
I see! It's hard to say whether you'd be in luck if AGI discovers time travel, if the Terminator series is anything to go by. :p
turnip_burrito t1_j4f8ia9 wrote
Reply to comment by photo_graphic_arts in What void are people trying to fill with transhumanism? by [deleted]
The comment I was replying to was vague. I read it as "living at the expense of the ability of new lives to come into being", as in weighing the value of one person living forever against the value of adding one new person to the population count.
As for the other interpretation of their words: harming future generations that will surely exist is similar to harming people who exist today.
turnip_burrito t1_j4f81nj wrote
Reply to comment by [deleted] in What void are people trying to fill with transhumanism? by [deleted]
People don't like the status quo. It's in our nature to want more of the things we like, and less of the things we dislike, and the status quo still has things many people don't like.
You can only see the genuine beauty of life and the complexity of existence, and be in awe of it, for so long before your cancer comes knocking and rips it away from you. Or your dementia makes you forget who you or your loved ones are. Or your chronic fatigue and pain color every day. Many people live their lives dissatisfied anyway, for many reasons, and die that way. I feel that transhumanism would be a net plus for these people.
People who chase money and things do it out of greed, maybe anxiety about the future, and social status. More, more, more. Is that meaningful? I would say it is to them, but they might say they are never satisfied, and I would believe them. It does hurt other people around them, so I oppose it on those grounds.
What is the real world? Hunter-gatherer life? Caveman life? Primitive farming? 2000 BC? Medieval? Renaissance? The 1990s? In each era, people employ technology that solves many of their grievances very differently, which could be seen as "cheating and avoiding reality" by people who lived in earlier times.
turnip_burrito t1_j4f6rgm wrote
Reply to comment by [deleted] in What void are people trying to fill with transhumanism? by [deleted]
Also highly subjective. Depends on who you ask.
turnip_burrito t1_j4f6ony wrote
People want more choice in their mind, body, and surroundings. They believe this choice will make them happier than they are now, because they can solve diseases, gain more understanding of their own lives, and experience more of existence. Maybe it leads to more life satisfaction, maybe not. Maybe it also meets your definition of a meaningful life, and maybe not. But this is why people want it.
turnip_burrito t1_j4f64co wrote
Reply to comment by [deleted] in What void are people trying to fill with transhumanism? by [deleted]
Bad is subjective and highly personal. If people want to live longer and experience new things, and if this makes them a bit happier, why not?
turnip_burrito t1_j49gwpz wrote
Reply to comment by curloperator in Don't add "moral bloatware" to GPT-4. by SpinRed
Yep, it will have to learn the intricacies. I don't really care if other people disagree with my list of "uncontroversial basics" or if those basics fail in certain situations. We can't hand-program every edge case, and we have to start somewhere.
turnip_burrito t1_j47v80k wrote
Reply to comment by Scarlet_pot2 in Don't add "moral bloatware" to GPT-4. by SpinRed
I was thinking more along the lines of inclining the bot toward things like "murder is bad", "don't steal others' property", "sex trafficking is bad", and some empathy. Basic stuff like that. Minimal, and most people wouldn't notice it.
The problem I have with the OP's post is that logic alone doesn't create morals like "don't kill people", except in the sense that murder is inconvenient: breaking rules can lead to imprisonment or loss of property, which makes achieving some objective harder (because you're held up and can't work toward it). We don't want AI to follow our rules just because it is more convenient for it to do so; we want it to be more dependable than that. This is definitely "human moral bloatware", make no mistake, but without it we are relying on the training data alone to determine the AI's inclinations.
Other than that, the user can fine tune away.
turnip_burrito t1_j47t8qj wrote
Reply to Don't add "moral bloatware" to GPT-4. by SpinRed
Doesn't the training data itself already contain some moral bloatware? The way articles describe issues like abortion or same-sex marriage inherently biases the discussion one way or another. How do you deal with this? Are those biases okay?
I personally think moral frameworks should be instilled into our AI software by its creators. It has to be loose, but definitely present.
turnip_burrito t1_j3q6r17 wrote
Reply to comment by Sashinii in Do you think in the 2030s it will be common for most households to have a 3D printer? by BeginningInfluence55
That'd be cool but there's absolutely zero chance.
turnip_burrito t1_j3j9uvq wrote
Reply to comment by LoquaciousAntipodean in Organic AI by Dramatic-Economy3399
What's your opinion on the ability to create AI with human competence across all typical human tasks? Is this possible or likely?
turnip_burrito t1_j3j6pr1 wrote
Reply to comment by AndromedaAnimated in Organic AI by Dramatic-Economy3399
Yes, thank you. I think one problem is that we've developed different baseline assumptions about human nature and power dynamics, and they lead to different conclusions. Either your approach or mine may capture this more or less accurately when compared to the real world. Your comments are making me think hard about this.
turnip_burrito t1_j3j62zz wrote
Reply to comment by AndromedaAnimated in Organic AI by Dramatic-Economy3399
I'll need time to consider what you've said.
turnip_burrito t1_j3j5mov wrote
Reply to comment by LoquaciousAntipodean in Organic AI by Dramatic-Economy3399
When most people say general intelligence (for AGI), they mean human-level cognitive ability across the domains humans have access to. At least, that is the sense in which I used it. So I'm curious why this cannot exist, unless you have a different definition of AGI, like "able to solve every possible problem", in which case humans wouldn't qualify either.
turnip_burrito t1_j3j4a0z wrote
Reply to comment by AndromedaAnimated in Organic AI by Dramatic-Economy3399
I do advocate for the second option:
> we allow AGI to learn ethics from all the information available to humanity plus reasoning.
Which is part of the process I'd want an AI to use to learn the correct morals. But I don't think an AI can learn what I would call "good" morals from nothing. It seems to me it will need to be "seeded" with a set of basic preferences or behaviors (like empathy, a tendency to mimic role models, or other inclinations) before it can develop morals or a more advanced code of ethics. In truth, these seeds would be totally arbitrary and up to the developers/owners.
I don't think I would want an AI that lacks empathy or is a control freak, so developing these traits in-house before releasing access to the public seems to me to be the best option. While it's being developed, it can still learn from the recorded media we have, and in real time in controlled settings.
turnip_burrito t1_j3j3kfe wrote
Reply to comment by AndromedaAnimated in Organic AI by Dramatic-Economy3399
I don't think we will have what everyone can call a "perfect outcome" no matter what we choose. I also don't believe right or wrong are absolute across people. I'm interested in finding a "good enough" solution that works most of the time, on average.
turnip_burrito t1_j3j2zra wrote
Reply to comment by AndromedaAnimated in Organic AI by Dramatic-Economy3399
Thanks for the compliment, but I am trying to make a point with my words, not just spew fluff. I do think there is logic in them. If you think they are baseless, ask me to elaborate and I will.
turnip_burrito t1_j3j2muo wrote
Reply to comment by AndromedaAnimated in Organic AI by Dramatic-Economy3399
Okay, that seems complex and dependent on whether the developers or the owners have the final say. But then replace "owners" with "developers" in my statement.
turnip_burrito t1_j3j2bck wrote
Reply to comment by AndromedaAnimated in Organic AI by Dramatic-Economy3399
So do you suggest we give everyone a personal AGI and just wait and see what happens? What makes that more desirable?
turnip_burrito t1_j3j1e7k wrote
Reply to comment by AndromedaAnimated in Organic AI by Dramatic-Economy3399
- Yes, but I mean more dramatic augmentation: adding an extra five brains, increasing your computational speed by a factor of 10, adding more arms, more attention, etc. You're right that people can already augment themselves, but it is extremely limited compared to how software can augment itself.
- Everyone has a different opinion, but most would say people who steal from others out of greed, or people who kill, are bad people. These are the people who stand to gain a competitive advantage early on through exponential growth of resources if they use their personal AGI correctly.
- Unchanging morals have to be somewhat vague things like "balance this: maximize individual freedom and choice, minimize harm to people, err on the side of freedom over security, and use feedback from people to improve specific implementations of this idea", not silly things like "stone people for adultery".
- It is less prone to being hacked. If you read my post, you would see that it loses the hardware vulnerabilities and retains only the software ones. It may be possible for an AGI to make itself unhackable by any human, remotely or even in principle. It may also be impossible to hack the AGI if its substrate doesn't run computer code but operates in some way other than what we know today.
turnip_burrito t1_j3izql7 wrote
Reply to comment by AndromedaAnimated in Organic AI by Dramatic-Economy3399
Yes, ensuring the developers are moral is also a problem.
turnip_burrito t1_j3iwhlk wrote
Reply to comment by heyimpro in Organic AI by Dramatic-Economy3399
I could also be off the mark, as I said. It may be possible that the better elements of an AGI-empowered populace could keep the more immoral parts in check, in a sort of balance. But I wouldn't want to risk that. And as you just said, we need to have a good logical discussion about strategies as a community, and model and simulate the outcomes to see where our decisions might land us.
turnip_burrito t1_j3iuoop wrote
Reply to comment by LoquaciousAntipodean in Organic AI by Dramatic-Economy3399
Morals can be built into systems. Look at humans. Just don't make the system exactly human: identify the problem areas and solve them. I'm optimistic we can do it, so I sleep pretty easy. This problem is called AI alignment.
And also look at the alternative: one or a couple of superpower AIs eventually emerge anyway from a chaotic power struggle. We won't be able to direct their behavior. They'll just be the most power-hungry, inconsiderate tyrants you've ever seen. Maybe like a ruthless ASI CEO, or just a conqueror. Everything you believe my idea of a central AI would be, but actually far worse.
Give me a realistic scenario where giving everyone an AGI doesn't end in concentrated power.
turnip_burrito t1_j3itwno wrote
Reply to comment by AndromedaAnimated in Organic AI by Dramatic-Economy3399
No, my point is that because people act like this now, they'd be even more empowered with personal AGI if it takes any instruction from them. It would become more extreme. It would be absurd.
turnip_burrito t1_j4fancb wrote
Reply to comment by photo_graphic_arts in What void are people trying to fill with transhumanism? by [deleted]
Yep, the progress toward life satisfaction, even when all else is resolved, is an internal process of the mind.