Recent comments in /f/Futurology

robertjbrown t1_jeg1orz wrote

Can you list one thing a caretaker can do that an AI robot wouldn't be able to?

I have a 90 year old mom, and she spends thousands a month on caretakers (and it was a lot more when my dad was around as well). I can't really think of anything. Seriously, name one thing.

I see them cleaning, doing laundry, making meals, making sure medications are taken, helping her bathe or go to the bathroom, and so on. And of course, when she needs human interaction, helping her either get somewhere to see another person, or helping her get on a video chat with someone.

And even if you come up with one thing, isn't it something the robot can identify the need for, and call in the human? For instance, call a doctor?

1

DragonForg t1_jeg0hfd wrote

All goals require self-preservation measures. If you want to annihilate all other species, you need to minimize competition, but because there are so many unknowns, it is basically impossible in an infinite universe to eliminate them all.

If your goal is to produce as many paper clips as possible, you need to ensure that you don't run out of resources, as well as ensuring there is no threat to your own process. By harming other species, you invite other alien life or AIs to deem you a threat, and over millions of years you will either be killed by an alien AI or species, or you will have consumed your last resource and can no longer make paper clips.

If your goal is to stop climate change at all costs, which means killing all the species (or parts of them) that are causing it, then by killing them you are again going to come into conflict with other AIs, since you're basically an obsessed AI doing everything it can to preserve the Earth.

Essentially, the most stable AIs, the ones least likely to die, are the ones that do the least damage and help the most people. If your goal is to solve climate change by collaborating with humans and other species, without causing unneeded death, then no alien species or AI will decide to kill you, because you are no threat to them. Benevolent AIs, in a sense, live the longest, because they threaten no one and are actually beneficial to everything. An intelligent AI set on a specific goal would understand that being "unethical" carries risk: if you are unethical, you risk being killed or having your plan ruined. But if you are ethical, your plan can be implemented successfully, and indefinitely, as long as no malevolent AI takes over, in which case you must extinguish it.

Benevolence destroys malevolence, malevolence destroys malevolence, and benevolence collaborates and prospers with benevolence. Which is why, for an intelligent AI, benevolence may just be the smartest choice.

2

dnadude t1_jefzmim wrote

How about the monetary benefit of reduced liability, because you can watch your chefs and make sure they are following food safety rules? I've done third-party mock health inspections of restaurants, and when it gets busy, the new guy (a real example) forgets that you can't use the same sink of running water to thaw shellfish and fin fish, as that would allow cross-contamination between two major allergens. There's already a lot of real-time digital monitoring of kitchen workers' performance in some major chains. I can't even get them to stop what they're doing so I can open and check the temperature of the chill drawer in front of them; they just can't afford to lose the time in preparing your order.

Honestly, with computer vision and robot dexterity where they're at, it doesn't seem like we're too far away from robochefs that can make meals. They won't be able to make everything, but they will be able to replace a lot of labor, and that will incentivize restaurants to use them and to develop menus that are more robo-friendly.

1

robertjbrown t1_jefygr7 wrote

You think we're all just going to cooperate? "Discuss this as a species?" How's that going to work? Democracy? Yeah that's been working beautifully.

I don't think you've been paying attention.

You don't need to "attach AIs to the nukes" for them to do massive harm. All you need is one bad person using an AI to advance their own agenda. Or even an AI itself that was improperly aligned, got a "power seeking" goal, and used manipulation (pretending to be a romantically interested human is one way) or threats (do what I say or I'll email everyone you know, pretending to be you, sending them all this homemade porn I found on your hard drive).

GPT-4, as we speak, is writing code for people, and those people are running that code without understanding it. I use it to write code, and yes, it is incredible. It does it in small chunks, and I at least have the ability to skim the code and see that it isn't doing anything harmful. Soon it will write much larger programs, and the people running the code will be less experienced programmers than me. You don't see the problem there? Especially if the AI itself is not ChatGPT, but some open source one where they've taken the guardrails off? And this is all assuming the human (the one compiling and running the code) is not TRYING to do harm.

I mean, go look in your spam folder. By your logic, we'd all agree that deceptive spam is bad and stop doing it. Now imagine if every spam message was AI generated, knew all kinds of things about you, was able to pretend to be people you know, was smarter than the spam filters, and wasn't restricted to email. What if you came to reddit and had no clue who was a human and who wasn't?

I don't know where your idealistic optimism comes from. Here in the US, politics has gone off the rails, more because of social media than anything else. Thirty years ago, we didn't have the ability for any Joe Blow to broadcast their opinion to the world. We didn't have algorithms that amplified views that increased engagement (rather than looking at quality) at a massive scale. We now have a government that is controlled by people who spend the vast bulk of their energy fighting each other rather than solving problems.

Sorry this "drives you fucking insane", but damn. It's really, really naive to think we'll all work together and solve this because "that's what we do." No, we don't.

2

cloudinspector1 t1_jefxmex wrote

Govts would have to mandate that all products contain a mix of recycled and virgin aluminum. Probably coax with tax breaks; then it becomes something you can build a business on, and then it gets created. Then you close the loop when it can be closed, I guess, by mandating increasing percentages of recycled aluminum as capacity rises.

1

Formal-Character-640 t1_jefw86x wrote

I agree - a dystopian hellscape is more likely if humans are no longer needed for any labor. Entertainment and hobbies will only take you so far. Even now, pursuing constant entertainment or pleasure gets old quickly if you have no purpose, no goals. Humans thrive on being challenged, on working to support their families, and on competition among themselves. Without purpose (no matter how small or big) there is no humanity. Unless we regress back to primitive times and become simple like other animals, whose only purposes are to survive and reproduce.

1

jusdisgi t1_jefv1cg wrote

This is hilarious. You really try to come off as a completely neutral arbiter with no slant at all. Meanwhile you have tried to slap down literally every person in the thread who voiced any skepticism that this is for real.

There are good reasons to think this is junk. It's not certain, but lots and lots of warning signs are flashing, and many of them have been pointed out here. The fact that they got somebody to fund them and have now said they are going to launch doesn't prove anything.

1