AdditionalPizza
AdditionalPizza t1_iryh8h2 wrote
Reply to Everyone seems so worried about mis/disinformation created by AI in the future and what it could cause people to believe, but I feel the opposite is true. by sidianmsjones
>At this time in the world we don't quite have that in news media. Instead, the idea of 'fake news', disinformation, faked video, etc, are still seen as somewhat conspiratorial takes for most topics
I made a post about this, asking how prevalent and advanced bots are on social media. It's only a matter of time; we're sitting on a time bomb for the extinction of trust. I have no idea which side is better. Right now the grass looks greener on the other side of it all, when we stop trusting everything that's shoved in our face. The conflict and arguing suck so much right now; every little thing explodes into an argument online, not to mention some of the absolute backward steps our species has been taking in parts of the world. But you're right that the other side is going to suck too. When the deep fakes start dropping and the trust bomb goes off, I just don't know.
Let's just hope the internet shifts entirely to entertainment, like the movies, and nothing except reputable sources can be trusted. Though that Goldilocks scenario is hard to imagine now, what with how stupid the average person seems to be.
AdditionalPizza t1_irydgid wrote
Reply to comment by AsheyDS in How would you program Love into AI? by AutoMeta
>When we leave the biological aspects out of it, we're left with things like 'I love you like a friend' or 'I love this pizza', which are arguably more shallow forms of love that have less impulsive behaviors attached. You're typically more likely to defend your offspring, that you probably love without question, over a slice of pizza that you only claim to love.
What about adoption? I don't know from personal experience, but it's pretty taboo to claim an adopted child is loved more like a slice of pizza than like biological offspring, no?
I'm of the belief that love is more a level of empathy than anything inherently special in its own category of emotion. The more empathy you have for something, the better you know it, and the closer you are to it, the more love you have for it. We just use love to describe the upper boundaries of empathy. Parents have a strong feeling of empathy toward their children -among a cocktail of other emotions of course- because they created them and it's essentially like looking at a part of yourself. Could an AI not look at us as a parent, or as its children? At the same time, I can be empathetic toward other people without loving them. I can feel for a homeless person, but I don't do everything I possibly can to ensure they get back on their feet.
Is it truly only biological? Why would I endanger myself to protect my dog? That goes against any purely biological drive. Why would a parent of an adopted child risk their life for that child? A piece of pizza is way too low on the scale, and since it isn't sentient I think it may be impossible to actually love it, or have true empathy toward it.
>it's knowledge of love and responses to that emotion aren't quite the same as ours, or aren't 'naturally' derived.
This would be under the assumption that nothing artificial is natural. Which, fair enough, but that opens up a can of worms that just leads to whether or not the AI would even be capable of sapience. Is it aware, or is it just programmed to be aware? That debate, while fun, is impossible to actually have a solid opinion on.
As to whether or not an AI would be able to fundamentally love, well I don't know. My argument isn't whether or not it can, but more that if it can, then it should love humans. If it can't, then it shouldn't be programmed to fake it. Faking love would be relegated to non-sapient AI. This may be fun for simulating relationships, but a lot less fun when it's an AI in control of every aspect of our lives, government, health, resources...
>why does it matter if it loves you or not, if the outcome can appear to be the same? If the only functional difference is convincing it to love you without it being directed to, or just giving it a choice, then that sounds pretty unnecessary for something we want to use as a tool.
I may never know if that time comes. But the question isn't whether I would know, it's whether or not it has the capacity to, right? I don't grant humans any special privilege of being uniquely able to feel certain emotions. It will depend on how AI is formed, and whether or not it is just another tool for humankind. Too many ethical questions arise there, when for all we know an ASI may someday be born and raised by humans with a synthetic-organic brain. There may or may not come a time when AI stops being a tool for us and becomes a sapient, conscious being with equal rights. If it's sapient, we should no longer control it as a tool.
I believe that, given enough time, it's inevitable an AI would truly be able to feel those emotions, and most certainly more strongly than a human today can. That could be in 20 years or in 10 million years, but I wouldn't say never.
Sorry if that's all over the place; I typed it in sections at work.
AdditionalPizza t1_irx2frv wrote
Reply to comment by AsheyDS in How would you program Love into AI? by AutoMeta
>love is a bit more of a powerful emotion that (as we experience it) isn't necessary, especially considering the biological reasoning for it
Are you talking about love strictly for procreation? What about love for your family? If we give the reins to an AGI/ASI someday, I would absolutely want it to truly love me if it were capable. Now, you mention it could fake it so that we think it loves us. That sounds like betrayal waiting to happen, and it sounds like what OP was initially concerned about. The AI would have to be unaware that it's fake, but then what makes it fake? It's a question of sentience/sapience.
The problem here is that the question posed by OP seems to refer to a sapient AI, while your comment refers to something posing as conscious and therefore not sentient. If the AI is sapient, it had better have the ability to love, and not just fake it. However, if the AI is not sapient, there's zero reason to give it any pseudo-emotion; it would be better suited to giving statistical outcomes to make cold, hard decisions, or leaving the final decision to humans who experience real emotion.
AdditionalPizza t1_irwxipm wrote
Reply to comment by AutoMeta in How would you program Love into AI? by AutoMeta
>What to do with that knowledge could depend on w[h]ether or not you care or love that given person.
Do you have more empathy for the people you love, or do you love the people you have more empathy for?
If I had to debate this, I would choose the latter, as empathy can be defined. Perhaps love is just the amount of empathy you have toward another. You cannot love someone you don't have empathy for, but you can have empathy for someone you don't love.
Would we program an AI to have more empathy toward certain people, or equally for all people? I guess it depends on how the AI is implemented, whether it's individual bots roaming around, or if it's one singular AI living in a cloud.
AdditionalPizza OP t1_irwqjiw wrote
Reply to comment by Bierculles in Am I crazy? Or am I right? by AdditionalPizza
Now when I say this, I don't mean I want the theory to come to fruition because that'd be stupid:
I hope this problem gets worse quickly. We're in a limbo right now where most people are totally ignorant to the capabilities of these bots, and I think we all could use a wake up call on this soon. I would love to read some studies done on this and see some statistics.
AdditionalPizza OP t1_irwplgn wrote
Reply to comment by Equivalent-Ice-7274 in Am I crazy? Or am I right? by AdditionalPizza
Now I want to know what zero mad is haha.
AdditionalPizza OP t1_irwpfea wrote
Reply to comment by phriot in Am I crazy? Or am I right? by AdditionalPizza
>As for good vs. evil, I believe that most people are good. Therefore I think that most bots, being deployed by humans and not yet being intelligent in their own right, are either good or benign.
The problem with that logic:
>Of course, people with nefarious intentions could be deploying more bots than good or benign people.
Is precisely that.
There can be one bad person for every thousand good people, but that one person can automate countless "evil" bots. Yes, people could deploy good or benign chat bots, but if someone wanted to troll or spread misinformation, they would just deploy an army of chat bots across a wide swath of social media.
Anyway, I'm not defining good or evil here, just going along with those words to keep it simple. Evil in this situation can refer to any form of deception, from advertising to hate speech. If the bar for evil is simply not disclosing that it's a chat bot, I think that brings money and political gain into the mix, which closes the gap between good and bad people.
AdditionalPizza OP t1_irwn5ve wrote
Reply to comment by [deleted] in Am I crazy? Or am I right? by AdditionalPizza
Out of curiosity, what did that comment say?
AdditionalPizza OP t1_irtfyeu wrote
Reply to comment by BearStorms in Am I crazy? Or am I right? by AdditionalPizza
Not to mention how many other "enterprises", and at this point individuals, are working on this sort of thing now.
AdditionalPizza OP t1_irtfrbr wrote
Reply to comment by Davidoheat in Am I crazy? Or am I right? by AdditionalPizza
Not being sure what it meant, I searched it and skimmed an article.
A theory about the internet just being all bots and AI communicating back and forth while humans no longer take part? If so, that's exactly what I see in the future if we don't find a solution at some point. I don't really like the idea of removing more anonymity from the internet, but I don't know a better solution.
I've always wondered how a social media platform would work out if it required legitimate credentials to sign up.
AdditionalPizza OP t1_irtf9c9 wrote
Reply to comment by Vaellyth in Am I crazy? Or am I right? by AdditionalPizza
Let's hope for a more civilized revolution, or perhaps AI can shepherd us into better living standards.
I try to be optimistic about the future and its potential, but as a kid I didn't think the 2020s would be so brutal for cost of living. Not to mention people in power don't even have to try to hide the shitty deeds they do anymore; they just do them and have half the people chanting for more. We live in a strange world now.
AdditionalPizza OP t1_irtet61 wrote
Reply to comment by Kolinnor in Am I crazy? Or am I right? by AdditionalPizza
Haha, see, this was pretty convincing. My reply would've been something about being more concerned for my friends, and ultimately the general population. But also, I don't know if I'm just being paranoid, though my gut tells me I'm not. It feels like we're about to see the internet change drastically because of AI really soon, and people will need to be more aware.
AdditionalPizza OP t1_irsksyg wrote
Reply to comment by ThoughtSafe9928 in Am I crazy? Or am I right? by AdditionalPizza
Yeah, and then how many are GPT-3 level or, hey, even better? That's the question. When people think of bots, they think of mangled comments that make no sense, or ones that say beep boop.
AdditionalPizza OP t1_irskgha wrote
Reply to comment by rainy_moon_bear in Am I crazy? Or am I right? by AdditionalPizza
You make a good point with number 2. I don't know what to think about grammar errors, because theoretically a bot wouldn't make them, but the posts are often so stupid. I saw one the other day starting with "as a civil engineer" and then it had nothing to do with being a civil engineer. It's as if it's a bot specifically designed for social media posting, using buzzwords/memes, but still in beta.
You should make one, journal it all, and write a big post about it to wake people up. I'm tired of sounding like the crazy one in my group.
AdditionalPizza OP t1_irsjv0d wrote
Reply to comment by 4e_65_6f in Am I crazy? Or am I right? by AdditionalPizza
Yeah, exactly what I was thinking. I've chatted with the bots and they're super convincing now; unless you're trying to trip them up, they carry a conversation superbly.
AdditionalPizza OP t1_irsjh87 wrote
Reply to comment by SgtAstro in Am I crazy? Or am I right? by AdditionalPizza
Well I do question that, but these are often the subs everyone will see because they're default subs and the most popular.
Social media just encourages echo chambers and conflict. And I feel like bots are becoming a very large part of encouraging engagement from users.
AdditionalPizza OP t1_irsisvo wrote
Reply to comment by 4e_65_6f in Am I crazy? Or am I right? by AdditionalPizza
The only issue is if they just disengage from the conversation after a comment or two, like humans do all the time.
AdditionalPizza OP t1_irrgddh wrote
Reply to comment by 4e_65_6f in Am I crazy? Or am I right? by AdditionalPizza
I wonder if just assuming everything is fake until you acquire citations is the only way to go forward. It's truly exhausting.
AdditionalPizza OP t1_irrg9o0 wrote
Reply to comment by Equivalent-Ice-7274 in Am I crazy? Or am I right? by AdditionalPizza
I mean, I default to assuming it's fake or a bot until I'm sure it's not, or until it has no effect on my beliefs or opinions. If someone is asking for help with a setting on their phone, I don't care if it's a bot, so researching isn't needed. But if someone is telling me why the education system is failing, I might question it more.
But also everything in your comment could easily be done by a bot haha.
I'm more asking about the actual prevalence of bots in social media, especially the convincing ones.
AdditionalPizza OP t1_irr9pz0 wrote
Reply to comment by TheHamsterSandwich in Am I crazy? Or am I right? by AdditionalPizza
As in you believe all points are false?
AdditionalPizza t1_iramzom wrote
Reply to comment by DungeonsAndDradis in The last few weeks have been truly jaw dropping. by Particular_Leader_16
Yup. Considering video is now being done by AI with prompts, and music too, I wonder what will be next after entertainment mediums.
AdditionalPizza t1_iradytz wrote
Reply to comment by doodlesandyac in The last few weeks have been truly jaw dropping. by Particular_Leader_16
The general population has constantly moving goalposts for what impresses them about AI. They say AI will never be able to do something, and then when it does, they say OK, but it will never be able to do something else.
AdditionalPizza OP t1_is6xehj wrote
Reply to comment by Dark-Arts in What's your 10 year / pre-AGI predictions? by AdditionalPizza
What do you think that timeline looks like? I'm more interested in the "how" than the "what". For example, we see graphic artists panicking; though they haven't faced unemployment yet, it seems inevitable. So what's the next pillar to fall, and when? And then what?
Basically what are the significant steps here?