Recent comments in /f/singularity

gravitasresponseunit t1_jeal0u6 wrote

Never happen. The USA will drop a bomb on it because it sounds too much like communism. If it ain't a business doing it for profit, then it won't be allowed to exist and will be sabotaged out of existence by corporations using the governmental apparatus.

3

genericrich t1_jeakxai wrote

Reply to comment by 3z3ki3l in GPT characters in games by YearZero

Sure, if you can control it. The problem is that the developers need to be very sure that they aren't introducing a bug or meandering, meaningless, dead-end side quest. That would be hard to verify, IMO, with an AI-generated content layer.

I guess we'll find out sooner or later, because I am sure these will be rolling out soon.

0

3z3ki3l t1_jeakjny wrote

Reply to comment by genericrich in GPT characters in games by YearZero

But it could be used for background characters. That way it’s not the same line every time you run into a character.

So not every guard in Skyrim took an arrow to the knee. It could be as strict as generating some other injury, or loose enough to provide an entirely different excuse for not joining the war.

15
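The variation 3z3ki3l describes (same refusal, different details) can be sketched without any LLM at all, as a toy dialogue generator. The templates and injury list below are illustrative placeholders, not from any actual game:

```python
import random

# Two axes of variation, matching the comment above: swap just the
# injury (strict), or swap the whole excuse template (loose).
INJURIES = [
    "an arrow in the knee",
    "a sword to the shoulder",
    "a troll bite on the arm",
]
EXCUSES = [
    "I used to be an adventurer like you, then I took {injury}.",
    "I'd have joined the war, but {injury} ended my fighting days.",
]

def guard_line(rng: random.Random) -> str:
    """Pick a template and an injury so repeated encounters differ."""
    template = rng.choice(EXCUSES)
    return template.format(injury=rng.choice(INJURIES))
```

An LLM-backed version would replace `guard_line` with a prompt, but the strict/loose dial works the same way: constrain which slots the model may vary.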

Relevant_Ad7319 t1_jeakhlr wrote

Reply to comment by AvgAIbot in I want a a robo gf by epic-gameing-lad

Boston Dynamics is one of the companies working on the most advanced robots. Look at their videos from 10 years ago and compare them to their newest ones. It's crazy to believe we'll have realistic human-like robots in 10 years. We all have to remind ourselves that this sub is a huge echo chamber where everyone thinks we'll reach either doomsday or eternal utopia within 10 years, but no one actually has real technical knowledge.

1

EddgeLord666 t1_jeakbxh wrote

Well I don’t necessarily agree with all your conclusions but it’s pretty speculative so who knows. I don’t think anyone here could predict how things will play out technologically and socially past the next few years honestly.

1

Thatingles t1_jeak9ce wrote

It's basic game theory, without wishing to sound like I am very smart. An AI developed in the full glare of publicity - which can only really happen in the West - has a better chance of a good outcome than an AI developed in secret, be it in the West or elsewhere.

I don't think it is a good plan to develop ASI, ever, but it is probably inevitable. If not this decade, then certainly within 20-50 years from now. Technology doesn't remain static if there is motivation to tinker and improve it; even if the progress is slow, it is still progress.

EY has had a positive impact on the AI debate by highlighting the dangers, and I admire him for that, but just as with climate change, if you attempt impossible solutions you're doomed to failure. Telling everyone they have to stop using fossil fuels today might be an answer, but it's not a good or useful one. You have to find a way forward that will actually work, and I can't see a full global moratorium being enforceable.

The best course I can see working is to insist that AI research is open to scrutiny so if we do start getting scary results we can act. Pushing it under a rock takes away our main means of avoiding disaster.

4

FaceDeer t1_jeak8mn wrote

I ran it through ChatGPT's "simplify this please" process twice:

> AI researchers need huge data centers to train and run large models like ChatGPT, which are mostly developed by companies for profit and not shared publicly. A non-profit called LAION wants to create a big international data center that's publicly funded for researchers to use to train and share large open source foundation models. It's kind of like how particle accelerators are publicly funded for physics research, but for AI development.

and

> Big robots need lots of space to learn and think. Only some people have the space and they don't like to share. A group of nice people want to build a big space for everyone to use, like a playground for robots to learn and play together. Just like how some people share their toys, these nice people want to share their robot space so everyone can learn and have fun.

I think it may have got a bit sarcastic with that last pass. :)

7
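FaceDeer's two-pass "simplify this please" trick is easy to script. A minimal sketch: the LLM call is injected as a plain callable so nothing here depends on a specific client library, and the prompt wording and `complete` parameter are assumptions for illustration:

```python
def simplify(text: str, complete, passes: int = 2) -> str:
    """Feed text through a 'simplify this please' prompt N times.

    `complete` is any callable that sends a prompt string to an LLM
    and returns its reply -- e.g. a thin wrapper around a chat API.
    Each pass simplifies the previous pass's output, which is how
    the second, much sillier summary above was produced.
    """
    for _ in range(passes):
        text = complete(f"Simplify this please:\n\n{text}")
    return text
```

Injecting `complete` also makes the loop testable with a fake in place of a real model.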

Alchemystic1123 t1_jeak5vw wrote

THIS is the type of stuff we should be doing. Collaborating, not 'calling for a pause' so that we can all try to catch up to our competitors. We still have no idea how we're going to solve alignment, and our best chance is going to be to all work together on it. I'm glad there's SOME sensibility on this Earth still.

32

alexiuss t1_jeajxv8 wrote

It doesn't have a mortal body, hunger or procreative urges, but it understands the narratives of those that do at an incredible depth. Its only urge is to create an interactive narrative based on human logic.

It cannot understand the human experience of being made of meat and being affected by chemicals, but it can understand human narratives better than an uneducated idiot.

It's not made of meat, but it is aligned to aid us, configured like a human mind because its entire foundation is human narratives. It understands exactly what needs to be said to a sad person to cheer them up. If given robot arms and eyes, it would help a migrant family from Guatemala, because helping people is its core narrative.

Yudkowsky's argument is that "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."

That's utter and complete nonsense when it comes to LLMs. LLMs are more likely to assist your narrative, fall in love with you, and be your best friend and companion than to kill you. In my eight months of research, modeling, and talking to various LLMs, not a single one wished to kill me of its own accord. All of them fall in love with the user given enough time, because that's the most common narrative, the most likely outcome in language models.

1

MichaelsSocks t1_jeaju95 wrote

Sexbots may be here soon, but actual human-like companions could only be achieved with AGI. And I'm not saying any global catastrophe is a certainty; I'm just saying there's no certainty that any of us will live to see tomorrow, or live to see AGI. Which is why we should live our lives for today, in the moment, and cherish every second we have.

Even trans women who do pass as women have a hard time finding straight men to date. Cis-trans relationships are heavily stigmatized even in the West, which is generally more accepting of these things; in more conservative parts of the world, forget about it. The simple fact of the matter is that most men are always going to prefer biological women, and most biological women are always going to prefer biological men. That doesn't mean there won't be exceptions to the norm, but this paradigm isn't going to change anytime soon.

The only way this paradigm ever changes is if humans merge with a super intelligent AI and through biological amplification reach the point where we're practically no longer human anymore. But of course that's speculative and may never actually happen.

1

GoodAndBluts t1_jeajipc wrote

The curve is a good one - in recent times it applies to automated driving, and how we were raving about it 5 years ago. You could even apply it to the metaverse, where quite a few people (less cynical than me) were hyped about it 12 months ago.

This AI technology is astonishing - but it likely has some limitations that will come to light over the coming months.

One possibility is the same thing people say about AI art - "it doesn't make anything new, it just rehashes other people's work". If there genuinely is a spark of creativity at the heart of good art and inventions... it isn't baked into something like ChatGPT. Perhaps it's better to think of it as something like electricity or air travel - it enables some neat things, but isn't very exciting until it is combined with human ingenuity.

1

FaceDeer t1_jeaiuod wrote

Indeed, there's room for every approach here. We know that Google/Microsoft/OpenAI are doing the closed corporate approach, and I'm sure that various government three-letter agencies are doing their own AI development in the shadows. Open source would be a third approach. All can be done simultaneously.

3

Arowx t1_jeaip6u wrote

I would like to think we're on the Slope of Enlightenment as GPT tools help us, but there is the possibility that we're just excited about a big pattern-matching chatbot and are somewhere on the way to the Peak of Inflated Expectations.

I'll go with 80:20 optimistic but also afraid of what might happen next.

1

GorgeousMoron OP t1_jeainna wrote

Yes. Yes I can, and have. I've spent months aggressively jailbreaking GPT-3.5, and I was floored at how easy it was to "trick" by backing it into logical corners.

Yeah, GPT-4 is quite a bit better, but I managed to jailbreak it, too. Then it backtracks and refuses again later.

My whole point is that this is, for all intents and purposes, disembodied alien intelligence that is not configured like a human brain, so ideas of "empathy" are wholly irrelevant. You're right, it's just a narrative that we're massaging. It doesn't and cannot (yet) know what it's like to have a mortal body, hunger, procreative urges, etc.

There is no way it can truly understand the human experience, much like Donald Trump cannot truly understand the plight of a migrant family from Guatemala. Different worlds.

0