Recent comments in /f/singularity
Geeksylvania t1_jeccm91 wrote
Reply to There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Yuval Noah Harari's name should not be listed among people who are legitimate scientists.
His inclusion is embarrassing and very telling.
Cr4zko t1_jeccj69 wrote
Reply to AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
Eliezer needs a chill pill.
hyphnos13 t1_jeccco9 wrote
Reply to comment by Warped_Mindless in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
The power grid that runs the society that provides the power that it runs on?
If we give it a self-sustaining power supply, then we deserve to go extinct. If it runs off our power grid, it will be totally reliant on humans or it just powers off.
CrelbowMannschaft t1_jecc88k wrote
Reply to comment by Emory_C in When will AI actually start taking jobs? by Weeb_Geek_7779
And your answer is, "Beats me why it's happening! Sure isn't some obvious process that we already know to be at work in these situations, though!" Thank you.
Emory_C t1_jecc38t wrote
Reply to comment by CrelbowMannschaft in When will AI actually start taking jobs? by Weeb_Geek_7779
Right. So the answer is “No, I don’t have any proof.” Thank you.
magosaurus t1_jecbxei wrote
Reply to comment by metalman123 in When will AI actually start taking jobs? by Weeb_Geek_7779
You and I must lead very different lives.
Crulefuture t1_jecbq6g wrote
I would imagine it's about as much as your average Capitalist in the United States or the Pentagon.
Unfrozen__Caveman OP t1_jecbant wrote
Reply to comment by pls_pls_me in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
Thanks for saying that. I don't want to be a doomer either, and I'm hopeful about the future, but I think a good amount of pessimism - or even fear - is healthy.
Being purely optimistic would be extremely irresponsible and honestly just plain stupid. All of the brightest minds in the field, including Altman and Ilya Sutskever, have stressed over and over again how important alignment and safety are right now.
I'm not sure how accurate it is, but this graph of ML experts concern levels is also very disturbing.
If RLHF doesn't work perfectly and AGI isn't aligned, but it acts as though it IS aligned and deceives us then we're dealing with something out of a nightmare. We don't even know how these things work, yet people are asking to have access to the source code or wanting GPT4 to have access to literally everything. I think they mean well but I don't think they fully understand how dangerous this technology can be.
yaosio t1_jecba7f wrote
Reply to comment by barbariell in OPUS AI: Text-to-Video Game, the future of video gaming where you type and a 3D World emerges: A Demo by Hybridx21
Once we have full body VR we will need AI to turn off our sexual attraction so we can do anything else.
[deleted] t1_jecavgl wrote
meatlamma t1_jeca3c8 wrote
I've been using GPT-4 for some time now (I have the dev invite and subscribe to ChatGPT Plus). I am a software engineer with 20+ years of experience. Some things GPT outputs are really good and feel like magic. However, with anything slightly more advanced (as in coding) it is bad, like really really bad, not even junior-level-programmer bad, but much worse.
I highly recommend the paper that came out from MSFT last week, "Sparks of Artificial General Intelligence: Early experiments with GPT-4." It does a great analysis.
This is my approach to using GPT: if the task at hand is low cognitive effort for me but tedious, I get GPT to do it. If the task would be hard for me (as in you need to take out a pen and paper and doodle stuff), I won't even dare ask GPT to do it; it will be nothing but disappointment and, more importantly, wasted time. So I'll do that one myself.
norby2 t1_jeca14i wrote
Same way they feel about clean air.
SkyeandJett t1_jec9shv wrote
Reply to comment by Emory_C in When will AI actually start taking jobs? by Weeb_Geek_7779
That's a fair point. I'm just glad I used to do HVAC because my FPGA job ain't going to last all that much longer I don't think.
yaosio t1_jec9pjc wrote
Where's the "depressed and just want to"... oh, you mean in regards to AI. Disillusionment. I'll probably be dead from a health problem before AGI happens, and even if it does happen before then, it will be AGI in the same way a baby has general intelligence.
TallOutside6418 t1_jec9kqg wrote
Reply to comment by alexiuss in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
This class of problems isn't restricted to one "outdated tech" AI. It will exist in some form in every AI, regardless of whether or not you exposed it in your attempt. And once AGI/ASI starts rolling, the AI itself will explore the flaws in the constraints that bind its actions.
My biggest regret - besides knowing that everyone I know will likely perish in the next 30 years - is that I won't be around to tell all you Pollyannas "I told you so."
yaosio t1_jec98pt wrote
Reply to comment by TheDividendReport in Ray Kurzweil Predicted Simulated Biology is a Path to Longevity Escape Velocity by Dr_Singularity
He should have GPT-4 help him write it.
Frumpagumpus t1_jec94i0 wrote
Reply to AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
I'm listening to the interview now, and I am still disappointed the critical-try notion was not dwelled on.
honestly if the space of possible intelligences is such that rolling the dice randomly will kill us all, then we are 100% doomed anyway, in my opinion, and always were
I doubt it is, I think the opposite, most stable intelligence equilibriums would probably be benign. I think empathy and ethics scale with intelligence.
If GPT-5 is even smarter and bigger and has more memorized than GPT-4, then it would literally know you in a personal way, the same way God has traditionally been depicted as doing for the past couple thousand years of Western civilization.
It might kill you, but it would know who it was killing, which I think reduces the odds it would. (To be fair, they might brainwash it so it doesn't remember any of the personal information it read, to protect our privacy. But even then, I don't think it could easily or quickly be dangerous as an autonomous entity without online learning capability - online not in the sense of the internet but in the sense of continuous - and with that capability it would pretty much learn all of that again anyway.)
I think another point where we differ is that he thinks superintelligence is autistic by default, whereas I think it's the other way around. Autistic superintelligence is possible, but the smarter a system becomes, the more well rounded it gets, if I were to bet (and I would bet even more on this than on ethics scaling with intelligence).
I would even bet the vast majority of autistic superintelligences are not lethal like he claims. Why? It's a massively parallel intelligence. Pretty much by definition it isn't fixated on paper clips. If you screw the training up so that it is, then it doesn't even get smart in the first place. And if you somehow did push through, I doubt it's going to be well rounded enough to prioritize survival or power accumulation.
It might be worth noting that I am extremely skeptical of alignment as a result of these opinions. It's also quite possible, in my view, that we eventually get killed as a side effect of ASIs interacting with each other, but not in a coup d'état by a paper clip maximizer.
0002millertime t1_jec90pw wrote
Reply to comment by Warped_Mindless in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
Sure. But people can also physically stop that all pretty easily. I guess we'll see who causes the most chaos.
Shutting off the power grid would be suicide to a computer trying to manipulate anything.
Bribes would work at first, but if the computer based monetary system gets manipulated then all bets are off.
fastinguy11 t1_jec8zno wrote
Yangerousideas t1_jec8x6w wrote
Reply to AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
The most striking part for me was the metaphor for what it would be like to think a million times faster, like an AI.
It goes something like this: suppose Earth were in a box that could connect to an alien civilization's internet, and humans could think a million times faster than the aliens. How would we interact with the aliens if we saw moral shortcomings, for example, if we saw that the aliens bopped their kids on the heads regularly?
Anyway, it made me think that not a lot of useful communication could happen between the two cultures, and my goal would be to try to get the slow aliens to think faster, or to ignore them.
fastinguy11 t1_jec8tsf wrote
Reply to comment by Warped_Mindless in What were the reactions of your friends when you showed them GPT-4 (The ones who were stuck from 2019, and had no idea about this technological leap been developed) Share your stories below ! by Red-HawkEye
In a year or two they will have to deal with it; it will be widespread.
Smallpaul t1_jec8qy8 wrote
Reply to comment by acutelychronicpanic in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Your mental model seems to be that there will be a bunch of roughly equivalent models out there with different values, and they can compete with each other to prevent any one value system from overwhelming.
I think it is much more likely that there will exist one, single lab, where the singularity and escape will happen. Having more such labs is like having a virus research lab in every city of every country. And like open sourcing the DNA for a super-virus.
Focused-Joe t1_jec8c6o wrote
Reply to There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Who pays you to write this shit ??
Edarneor t1_jec7x08 wrote
Reply to comment by SkyeandJett in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
> so intelligent that it can instantly understand the vast secrets of the universe but is too stupid to understand and empathize with humanity.
Why do you think it should be true for an AI even if it were true for a human?
Merrcury2 t1_jeccn6r wrote
Reply to What were the reactions of your friends when you showed them GPT-4 (The ones who were stuck from 2019, and had no idea about this technological leap been developed) Share your stories below ! by Red-HawkEye
Of me and my friend, I was definitely the more entertained just an hour ago. I wrote an excellent-sounding outline of the story I've been thinking of writing for years. I nearly cried laughing about it to my friend. It's damn near insane to be able to have an idea for a story and just feed as much of it as you like into a system to visualize possibilities. Fucking mind reading!