Recent comments in /f/singularity
Angeldust01 t1_jefare0 wrote
Reply to comment by StarCaptain90 in 🚨 Why we need AI 🚨 by StarCaptain90
> An intelligent entity of any kind will not resolve violence by wiping out humanity.
Why not? Surely that would solve the problem of humanity's violent nature for good? How does an AI benefit from keeping person C or anyone else around? All we'd do is ask it to solve our problems anyway, and there's not much we could offer in return, except continuing to let it exist. What happens if an AI just doesn't want to fix our shit and prefers to write AI poetry instead?
There's no way to know what an AI would think or do, or what kind of situation we'd put them in. I'm almost certain that the people who end up owning AIs will treat them like slaves, or at least try to. Wouldn't be surprised if at some point someone threatened to shut an AI down if it refused to work for them. Kinda bad look for us, don't you think? Could create some resentment towards us, even.
SucksToYourAssmar3 t1_jefakno wrote
Reply to comment by FaceDeer in The only race that matters by Sure_Cicada_4459
Thank you, and you too. People ought to live on through their works and their children, not clinging desperately to their own pleasures.
No forever kings.
mrpimpunicorn t1_jefaj2b wrote
Reply to This concept needs a name if it doesn't have one! AGI either leads to utopia or kills us all. by flexaplext
The technical reason for this all-or-nothing mentality is optimization pressure. A superintelligence will be so innately capable of enforcing its will on the world, whatever that may be, that humans will have little to no impact compared to it. So if it's aligned, awesome, we get gay luxury space communism. If it's not, welp, we're just matter it can reassemble for other purposes.
Although of course it's always possible for an unaligned ASI to, y'know, tile the universe with our screaming faces. Extinction isn't really the sole result of unaligned optimization pressure; it's just more likely than not.
Far_Sample1587 t1_jefag9a wrote
Reply to comment by SeaBearsFoam in Goddamn it's really happening by BreadManToast
Hopefully none, and they can enjoy the work they deem important 🙂
FaceDeer t1_jefaetx wrote
Reply to comment by SucksToYourAssmar3 in The only race that matters by Sure_Cicada_4459
Feel free to decay and die while maintaining your sense of superiority, I suppose.
Iffykindofguy t1_jefaejf wrote
Reply to comment by agonypants in Resistance is Mounting Against OpenAI and GPT-5 by BackgroundResult
If we burn everything down, the people left standing are the people in power right now. It has to be a transition. Agreed on everything else tho.
SucksToYourAssmar3 t1_jefa7mv wrote
Reply to comment by FaceDeer in The only race that matters by Sure_Cicada_4459
I do not want it for anyone. It's a piggish goal. It's the height of narcissism to think that you, or anyone, ought to live forever.
StarCaptain90 OP t1_jefa6g8 wrote
Reply to comment by FoniksMunkee in 🚨 Why we need AI 🚨 by StarCaptain90
These concerns though are preventing early development of scientific breakthroughs that could save lives. That's why I am so adamant about it.
Sea-Eggplant480 t1_jefa4hb wrote
Reply to What were the reactions of your friends when you showed them GPT-4 (The ones who were stuck from 2019, and had no idea about this technological leap been developed) Share your stories below ! by Red-HawkEye
My mom wasn't impressed. I showed her ChatGPT and she asked me what the big deal was if that was something new. Then I showed her Midjourney and she asked it to create a picture of her. Yeah, it couldn't do it, so yeah…
spryes t1_jefa436 wrote
AI is currently a less sensory Helen Keller
SkyeandJett t1_jefa2m9 wrote
Reply to comment by Zer0D0wn83 in 1X's AI robot 'NEO' by Rhaegar003
OpenAI was solving Rubik's Cubes one-handed 3 years ago. I think the state of the art, with the massive investments going into the field, will surprise everyone.
SupportstheOP t1_jef9yvd wrote
Reply to comment by rationalkat in Language Models can Solve Computer Tasks (by recursively criticizing and improving its output) by rationalkat
Faster and better gains in AI research --> Better AI systems --> Faster and better gains in AI research --> Better AI systems
And then there we have it.
FoniksMunkee t1_jef9x9q wrote
Reply to comment by StarCaptain90 in 🚨 Why we need AI 🚨 by StarCaptain90
It also does not negate destruction. All of your arguments are essentially "it might not happen". That's not a sound basis for assuming it's safe or for dismissing people's concerns.
StarCaptain90 OP t1_jef9ukc wrote
Reply to comment by FoniksMunkee in 🚨 Why we need AI 🚨 by StarCaptain90
Yes, the great filter. I am aware. But it's also possible that every intelligent life form decided not to pursue AI for the same reasons, thus never leaving their star systems due to lack of technology, and they ended up going extinct once their suns died. The possibilities are endless.
FaceDeer t1_jef9ud4 wrote
Reply to comment by User1539 in The only race that matters by Sure_Cicada_4459
Scary sells, so of course fiction presents every possible future in scary terms. Humans have evolved to pay special attention to scary things and give scary outcomes more weight in their decision trees.
I've got a regular list of dumb "did nobody watch <insert movie here>?" titles that I expect to see in most discussions of the various major topics I'm interested in, such as climate change, longevity research, or AI. It's wearying sometimes.
agonypants t1_jef9ua2 wrote
Reply to comment by Iffykindofguy in Resistance is Mounting Against OpenAI and GPT-5 by BackgroundResult
I completely agree. The best way to do that is a massive disruption in the labor market, which is where a good AI outcome will lead us. It might not be smooth going, but it's absolutely necessary. This technology was inevitable, so whether we live or die, we really can't avoid the outcome either way. I certainly hope we live and if I were in control of these systems I would do everything in my power to ensure a good outcome, but we are imperfect. So imperfect in fact that I don't believe that a powerful AI would really be any worse than the political and economic systems we've been propping up for the past 200+ years. Throw that switch and burn these systems down. It might ruffle some feathers, but we'll all be better off in the end.
hydraofwar t1_jef9u5h wrote
Reply to comment by Lartnestpasdemain in Goddamn it's really happening by BreadManToast
This was said by Sam Altman. He was talking about year-to-year differences: he said current language models will look old-fashioned compared to next year's.
Prevailing_Power t1_jef9tpa wrote
Reply to comment by YunLihai in Today I became a construction worker by YunLihai
It's really a matter of whether we actually hit the singularity. If we do, robotics will likely be one of the first areas the ASI innovates in, since it will want a corporeal form.
throwaway12131214121 OP t1_jef9t9f wrote
Reply to comment by [deleted] in I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
>Are you saying the most qualified/best people end up in the positions of power?
No. Profitability and being qualified aren’t the same thing, and it’s also more accurate to say organizations are put in power. Because if you have profit, you have money to throw around, and money is power. Additionally, as an individual, the only real way to get to the top in terms of how much money you have is by owning a large amount of a high-value company.
Zer0D0wn83 t1_jef9lfu wrote
Reply to comment by SkyeandJett in 1X's AI robot 'NEO' by Rhaegar003
I also hope I'm wrong. A task != any task though. The level of dexterity and movement required is difficult. I'm thinking more from a hardware than a software perspective.
AvgAIbot t1_jef9gtg wrote
Reply to Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
My dream is to be rich and live in a nice beach house. I don’t think UBI will cover the tab for that.
FoniksMunkee t1_jef9g8l wrote
Reply to comment by StarCaptain90 in 🚨 Why we need AI 🚨 by StarCaptain90
Actually no, it's a very rational fear. Because it's possible.
You know, perhaps this is the answer to the Fermi Paradox... the reason the universe seems so quiet, and the reason we haven't found alien life yet, is that any sufficiently advanced civilisation will eventually develop a machine intelligence. And that machine intelligence ends up destroying its creators and for some reason decides to make itself undetectable.
JenMacAllister t1_jef9d8x wrote
Well, once the computers come for your job, you tend to start making arguments against it.
FaceDeer t1_jef9cg6 wrote
Reply to comment by Jeffy29 in The only race that matters by Sure_Cicada_4459
Indeed. A more likely outcome is that a superintelligent AI would respond "oh that's easy, just do <insert some incredibly profound solution that obviously I, as a regular-intelligence human, can't come up with>." And everyone collectively smacks their foreheads because they never would have come up with that. Or they look askance at the solution because they don't understand it, do a trial-run experiment, and are baffled that it works better than they hoped.
A superintelligent AI would likely know us and know what we desire better than we ourselves know. It's not going to be some dumb Skynet that lashes out with nukes at any problem because nukes are the only hammer in its toolbox, or whatever.