Recent comments in /f/singularity

Merrcury2 t1_jeccn6r wrote

I was definitely the more entertained of me and my friend just an hour ago. I wrote an excellent-sounding outline of the story I've been thinking of writing for years. I nearly cried laughing while telling my friend about it. It's damn near insane to be able to have an idea for a story and just feed as much of it as you want into a system to visualize the possibilities. Fucking mind reading!

3

Unfrozen__Caveman OP t1_jecbant wrote

Thanks for saying that. I don't want to be a doomer either, and I'm hopeful about the future, but I think a good amount of pessimism - or even fear - is healthy.

Being purely optimistic would be extremely irresponsible and honestly just plain stupid. All of the brightest minds in the field, including Altman and Ilya Sutskever, have stressed over and over again how important alignment and safety are right now.

I'm not sure how accurate it is, but this graph of ML experts' concern levels is also very disturbing.

If RLHF doesn't work perfectly and AGI isn't aligned, but it acts as though it IS aligned and deceives us, then we're dealing with something out of a nightmare. We don't even know how these things work, yet people are asking for access to the source code or wanting GPT-4 to have access to literally everything. I think they mean well, but I don't think they fully understand how dangerous this technology can be.

3

meatlamma t1_jeca3c8 wrote

I've been using GPT-4 for some time now (I have the dev invite and subscribe to ChatGPT Plus). I am a software engineer with 20+ years of experience. Some things GPT outputs are really good and feel like magic. However, on anything slightly more advanced (as in coding) it is bad, like really, really bad - not even junior-level-programmer bad, but much worse.

I highly recommend the paper that came out from MSFT last week, "Sparks of Artificial General Intelligence: Early experiments with GPT-4". It does a great analysis.

This is my approach to using GPT: if the task at hand is of low cognitive effort for me but tedious, I get GPT to do it. If the task would be hard for me to do (as in, you need to take out a pen and paper and doodle stuff), I won't even dare to ask GPT to do it - it will be nothing but disappointment and, more importantly, wasted time. So I'll do that one myself.

2

yaosio t1_jec9pjc wrote

Where's the "depressed and just want to"... oh, you mean in regard to AI. Disillusionment. I'll probably be dead from a health problem before AGI happens, and even if it does happen before then, it will be AGI in the same way a baby has general intelligence.

2

TallOutside6418 t1_jec9kqg wrote

This class of problems isn't restricted to one "outdated tech" AI. It will exist in some form in every AI, regardless of whether or not you exposed it in your attempt. And once AGI/ASI starts rolling, the AI itself will explore the flaws in the constraints that bind its actions.

My biggest regret - besides knowing that everyone I know will likely perish in the next 30 years - is that I won't be around to tell all you pollyannas "I told you so."

2

Frumpagumpus t1_jec94i0 wrote

I'm listening to the interview now; I am still disappointed the "critical try" notion was not dwelled on.

Honestly, if the space of possible intelligences is such that rolling the dice randomly will kill us all, then we are 100% doomed anyway, in my opinion, and always were.

I doubt it is; I think the opposite: most stable intelligence equilibria would probably be benign. I think empathy and ethics scale with intelligence.

If GPT-5 is even smarter and bigger and has memorized more than GPT-4, then it would literally know you in a personal way, in the same way God has traditionally been depicted as knowing you for the past couple thousand years of Western civilization.

It might kill you, but it would know who it was killing, so for one thing I think that reduces the odds it would. (Though to be fair, they might brainwash it so it doesn't remember any of the personal information it read, to protect our privacy. But even still, I don't think it could easily or quickly be dangerous as an autonomous entity without online learning capability - online not in the sense of the internet, but in the sense of continuous - which would mean it would pretty much learn all of that again anyway.)

I think another point where we differ is that he thinks superintelligence is autistic by default, whereas I think it's the other way around: autistic superintelligence is possible, but the smarter a system becomes, the more well-rounded it gets, if I were to bet (I would bet even more on this than on ethics scaling with intelligence).

I would even bet the vast majority of autistic superintelligences are not lethal like he claims. Why? It's a massively parallel intelligence; pretty much by definition it isn't fixated on paper clips. If you screw the training up so that it is, then it doesn't even get smart in the first place... And if you somehow did push it through, I doubt it's gonna be well-rounded enough to prioritize survival or power accumulation.

Might be worth noting that I am extremely skeptical of alignment as a result of these opinions, and also that it's quite possible, in my view, we do eventually get killed as a side effect of ASIs interacting with each other - but not in a coup d'état by a paperclip maximizer.

6

0002millertime t1_jec90pw wrote

Sure. But people can also physically stop all of that pretty easily. I guess we'll see who causes the most chaos.

Shutting off the power grid would be suicide for a computer trying to manipulate anything.

Bribes would work at first, but if the computer-based monetary system gets manipulated, then all bets are off.

4

Yangerousideas t1_jec8x6w wrote

The most striking part for me was the metaphor for what it would be like to think a million times faster, like an AI.

It goes something like this: imagine Earth were in a box that could connect to an alien civilization's internet, and humans could think a million times faster than the aliens. How would we interact with the aliens if we saw moral shortcomings - for example, if we saw that the aliens bopped their kids on the heads regularly?
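To put rough numbers on that speed gap (my own back-of-the-envelope arithmetic, not figures from the interview), a million-fold speedup turns short stretches of alien time into subjective lifetimes:

\begin{align*}
1 \text{ alien minute} \times 10^{6} &= 10^{6} \text{ minutes} \approx 1.9 \text{ subjective years} \\
1 \text{ alien hour} \times 10^{6} &= 10^{6} \text{ hours} \approx 114 \text{ subjective years} \\
1 \text{ alien day} \times 10^{6} &= 10^{6} \text{ days} \approx 2{,}700 \text{ subjective years}
\end{align*}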

Anyways, at that scale it made me think that not a lot of useful communication could happen between the two cultures, and my goal would be either to try to get the slow aliens to think faster or to ignore them.

5

Smallpaul t1_jec8qy8 wrote

Your mental model seems to be that there will be a bunch of roughly equivalent models out there with different values, and they can compete with each other to prevent any one value system from overwhelming the rest.

I think it is much more likely that there will be one single lab where the singularity and escape happen. Having more such labs is like having a virus research lab in every city of every country. And like open-sourcing the DNA for a super-virus.

3