Recent comments in /f/singularity

AGI_69 t1_jef3jz7 wrote

>In humans, for example, people with the highest intelligence tend to be more empathetic towards life itself and want to preserve it.

That's such a bad take. Humans evolved to cooperate and have empathy; AI is just an optimizer that will kill us all because it needs our atoms... unless we explicitly align it.

3

wowimsupergay OP t1_jef3iic wrote

Then you are a test subject in our experiment, my friend! Can you self-reflect on this thinking process? I'm serious. Think about translating your vision to words, and tell me what you come up with.

It's important not to give me a coherent sentence here. I just want a one-to-one translation of visions to tokens (words, subwords, whatever; see the sketch below for what I mean).

If you think you can make the tokenization process more coherent, that's okay as well. But I really just want you thinking in vision first.
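
A minimal sketch of what subword tokenization looks like, assuming the `tiktoken` library is installed; the encoding name is just one common choice, not anything specific to this experiment:

```python
# Minimal subword-tokenization sketch (assumes: pip install tiktoken).
import tiktoken

# "cl100k_base" is one common encoding; the choice is illustrative.
enc = tiktoken.get_encoding("cl100k_base")

text = "a red ball rolling down a grassy hill"
token_ids = enc.encode(text)

# Print each token id next to the text fragment it stands for,
# showing how a sentence splits into words and subwords.
for tok in token_ids:
    print(tok, repr(enc.decode([tok])))
```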

2

confused_vanilla t1_jef2u0e wrote

Reply to AI investment by Svitii

I always try to stay conservative with investments, but I did invest some in Microsoft and Nvidia. I would say only invest an amount you wouldn't miss if something goes wrong.

2

acutelychronicpanic t1_jef28jo wrote

I think we are past that. It might have worked 10 years ago.

My concern is that even models less powerful than ChatGPT (which can run on a single PC) can be linked together as components in systems that could achieve AGI; a sketch of that kind of wiring is below. Raw transformer-based LLMs may actually be safer than this, because they are so alien that they don't even appear to have a single objective function. What they "want" is so context-sensitive that they are more like a writhing mass of inconsistent alignments, a pile of masks, and this might be really good for us in the short term. They aren't even aligned with themselves. More like raw intelligence.
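
A minimal sketch of the "LLM as a component in a larger system" pattern described above; `call_llm` is a hypothetical stand-in for any local model's completion function, not a real API:

```python
# Sketch only: wraps a raw completion model in a goal-directed loop.
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a locally run model's completion call."""
    raise NotImplementedError("plug in a real model here")

def agent_loop(goal: str, max_steps: int = 10) -> str:
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # The raw model only completes text; the loop around it is
        # what turns those completions into goal-directed behavior.
        step = call_llm(history + "Next action:")
        history += f"Next action: {step}\n"
        if "DONE" in step:
            break
    return history
```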

I also think that approximate alignment will be significantly easier than perfect alignment. We have the tools right now; approximate alignment is possible. Given the power of current LLMs, combined with their lack of agency, we may surpass AGI without knowing it. The issue, of course, is that someone just has to set one up to put on the mask of a malevolent or misaligned AI. That's why I'm worried about concentrating power.

I'll admit I'm out of my depth here, but looking around, so are most of the actual researchers.

0

rootless2 t1_jef1xp1 wrote

Yeah, I agree. I think there are a lot of bad jobs out there where automation is too expensive (humans are the automatons), like the dishwasher example: someone has to load the machine or unload it. And there's the underlying question: is the service industry simply BS? You don't need to go eat at a restaurant, etc.

I worked in IT and really had no clue what we did in connection to the various business sections. A lot of it was checkmarking that things were up, or checkmarking just for the sake of it.

Or you have jobs that are deprecated, where only one person knows how the system works, but it's still critical and can't be automated. It's too old to be replaced.

1

FeepingCreature t1_jef1wb3 wrote

Also: at present we have no way to train a system to reason from instructions.

GPT does it because its training set contained lots of humans following instructions from other humans in text form, and then RLHF semi-reliably amplified those parts. But it's not "trying" to follow instructions; it's completing the pattern (see the sketch below). If there's an interiority there, it doesn't necessarily have anything to do with how instruction-following looks in humans, and we can't assume the same tendencies. (Not that human instruction-following is even in any way safe.)
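
A minimal sketch of "completing the pattern," assuming the Hugging Face `transformers` library and the base GPT-2 checkpoint (which had no instruction tuning or RLHF):

```python
# Sketch: a base language model continues text; it does not "obey" it.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model just extends this with whatever is statistically likely
# to follow, which may or may not look like following the instruction.
prompt = "Instruction: translate 'hello' to French.\nResponse:"
print(generator(prompt, max_new_tokens=20)[0]["generated_text"])
```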

> But that would be as simple as adding that clause to your query

And also every single other thing it could possibly do to reach its goal, and you'd have to get it right on the first try.

1

IcyBoysenberry9570 t1_jef1vzt wrote

I think this is the most likely scenario, at least for the developed world, but I don't think it will take 25 years, and I don't think people necessarily have to be hurt in the transition. If people are hurt, it will likely be because of the traditionalists and Luddites who are resistant to change. The people standing between us and a more fair and equitable future are the same people who stop us from having a more fair and equitable present.

2

StarCaptain90 OP t1_jef1u4u wrote

The idea that most people will do nothing is also just a theory. If you were not restricted by finances and could work in any field without worrying about money, would you be lazy and sit around all day? You could finally be an artist while still being able to support a large family, you could travel anywhere, and you could focus on yourself for once instead of on the cog that drives humanity around money. If humanity becomes lazy, then that's their dream life, because that is what they were looking for when they finally had freedom.

2