Recent comments in /f/singularity

visarga t1_jeehxgo wrote

You don't understand: even a model well tuned by OpenAI to be safe will, once it gets into the hands of the public, be fine-tuned to do anything they want. It doesn't matter what politicians do to regulate the big players.

The only solution to AGI danger is to release it everywhere at once, to balance out AGI with AGI. For example, the solution to AI-generated spam and disinformation is AI-based detection; humans can't keep up with the bots.

10

Wavesignal OP t1_jeehd7z wrote

Other interesting bits from the full transcript

Sundar on personalized assistants

> But I think, wow. Yeah, can it be a very, very powerful assistant for you? I think yes. Anybody at work who works with a personal assistant, you know how life changing it is. But now imagine bringing that power in the context of every person’s day-to-day lives. That is a real potential we are talking about here. And so I think it’s very profound.

> And so we’re all working on that. And again, we have to get it right. But those are the possibilities. Getting everyone their own personalized model, something that really excites me, in some ways, this is what we envisioned when we were building Google Assistant. But we have the technology to actually do those things now.

Sundar on AGI and AI safety

> It is so clear to me that these systems are going to be very, very capable. And so it almost doesn’t matter whether you’ve reached AGI or not. You’re going to have systems which are capable of delivering benefits at a scale we have never seen before and potentially causing real harm.

> So can we have an AI system which can cause disinformation at scale? Yes. Is it AGI? It really doesn’t matter. Why do we need to worry about AI safety? Because you have to anticipate this and evolve to meet that moment. And so today, we do a lot of things with AI, people have taken it for granted.

38

FeepingCreature t1_jeeh7mz wrote

Higher intelligence also means better execution of human skills, which makes it harder to verify. Once loss flows through deception, all bets are off.

I think it gets easier as the model figures out what you're asking for - and then it gets a lot harder as the model figures out how to make you believe what it's saying.

−3

Arowx OP t1_jeegrav wrote

Just asked Bing, and apparently AI can already write novels, with one report of a Japanese AI system that can write novels better than humans.

https://www.euronews.com/next/2022/11/08/ai-writing-is-here-and-its-worryingly-good-can-writers-and-academia-adapt

We would be inundated with millions of novels, from people who always wanted to write one and from companies targeting specific novels at profitable demographics.

We would need AI critics just to help sort the wheat from the chaff.

The thing is, it's like a DJ mixing records: it could generate some amazing new mixes, but if the pattern is not already out there, it's very unlikely to find new patterns.

1

Shemetz t1_jeegpal wrote

> Given the vastness of outerspace, ... why is it that we see no large evidence of completely destructive AIs?... I would expect us to see some evidence for it for other species. Yet we are entirely empty?... we should see widescale destruction if not a galaxy completely overridden by AI.

This counterargument doesn't work if we believe in the (very reasonable IMO) grabby aliens model.

Some facts and assumptions:

  • information moves at the speed of light
  • strong alien AIs would probably expand at some significant fraction of the speed of light; let's say 1%
  • civilizations probably develop extremely quickly, but in very rare conditions (that take a lot of time to occur); e.g. the Universe is 14 billion years old, Earth is 4.5 billion years old, and human-society-looking-at-the-stars is only 10,000 years old.
  • humanity appeared relatively early in the cosmic timescale; there are trillions of years in our future during which life should only become more common
  • "grabby" aliens would take control over their sections of space in a way that prevents new space civilizations from forming

-> If/when a "grabby alien AI" got created, it would spread across our galaxy - and eventually the universe - so quickly that it's incredibly unlikely for a young civilization to ever see it; from any given vantage point it's far more likely that such an AI either doesn't exist (yet) or has already expanded and taken control of that region. -> Since we appear to be safe, alone, and "early", we can't say AI won't take over the universe - in fact, we are well positioned to be the ones who develop that AI.
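The "too quick to see" step can be made concrete with a small back-of-the-envelope calculation. The sketch below is my own illustration, not part of the comment above, and the distance and speeds in it are assumed values: for a front expanding at a fraction v of light speed, an observer at distance d first sees it at time d/c and is overtaken at time d/v, so the warning window is d/v - d/c.

```python
# Back-of-the-envelope sketch (illustrative assumptions, not from the comment above):
# an expansion front launched at distance d becomes visible after d/c years and
# arrives after d/v years, so the warning window is d/v - d/c.

C = 1.0  # speed of light, in light-years per year

def warning_window_years(distance_ly: float, v_fraction_of_c: float) -> float:
    """Years between first seeing the expansion front and being overtaken by it."""
    time_until_visible = distance_ly / C                       # its light reaches us
    time_until_arrival = distance_ly / (v_fraction_of_c * C)   # the front itself reaches us
    return time_until_arrival - time_until_visible

if __name__ == "__main__":
    distance = 10_000  # light-years; an assumed, roughly within-galaxy distance
    for v in (0.01, 0.1, 0.5, 0.9, 0.99):
        window = warning_window_years(distance, v)
        print(f"v = {v:4.2f}c -> visible for ~{window:,.0f} years before it arrives")
```

The faster the front, the thinner the slice of time from which it is visible but has not yet arrived, which is why, in a "grabby" universe, a young civilization would most likely see an empty sky right up until takeover.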

2

AlFrankensrevenge t1_jeefdr0 wrote

The whole point is that the naysayers have to be able to guarantee it will NOT occur. If there is a 10% risk of annihilation, isn't that enough to take this seriously? Even a 1% chance? Would you just do nothing because 1% doesn't seem very high?

You mentioned a coin toss. I basically agree with that metaphor. Because there is so much uncertainty in all this, and we don't know what we don't know about AGI, we should treat a human apocalypse as a 50-50 chance. How much it can be reduced with much more sophisticated guard rails and alignment programming, I do not know, but if we can't even take it seriously and try, I guess as a species we deserve to die.

Remember that what you call the more ethical parties, "researchers", are working for the less ethical ones! Google, Meta, etc. Even OpenAI at this point is not open, and it is corporatized.

There is a long history of "researchers" thinking too much about how quickly they can produce the cool new thing they invent and not enough about long-term consequences. Researchers invented leaded gasoline, DDT, chlorofluorocarbon-based aerosols, etc., etc.

2

No_Ninja3309_NoNoYes t1_jeef1v1 wrote

For me the singularity equals simulation. If the singularity is possible, then simulation is possible. And if something is possible, you can't rule it out. So I hope we don't get the literal Singularity, because I don't want to be an NPC. There's a chance AI will be banned in several countries, which could slow progress considerably.

5

genericrich t1_jeeephy wrote

Yes, this is the problem.

Actually, there is a plan. The US DOD has plans, revised every year, for invading every country on Earth. Why do they do this? Just in case they need to, and it's good practice for low-level general staff.

Do you really think the US DOD doesn't have a plan for what to do if China or Russia develop an ASI?

I'm pretty sure they do, and it involves the US military taking action against the country that has one if we don't. If they don't have a plan, they are negligent. So odds are they have a plan, even if it is "Nuke the Data Center".

Now, if they have THIS plan for a foreign adversary, do you think they also have a similar plan for "what happens if a Silicon Valley startup develops the same kind of ASI we're afraid China and Russia might get, which we're ready to nuke/bomb if it comes down to it?"

I think they probably do.

It is US doctrine that no adversary that can challenge our military supremacy be allowed to do so. ASI clearly would pose that kind of challenge, so it can't be tolerated in anyone's hands but ours.

Going to be very interesting.

2

Shiningc t1_jeeedon wrote

At this point it's a cult. People hyping up LLMs have no idea what they're talking about; they're just eating up corporate PR and whatever dumb hype the articles churn out.

These people are in for a disappointment in a year or two. And I'm going to be gloating with "I told you so".

−3

CertainMiddle2382 t1_jeee42m wrote

I'm 40, and the planet is not fine. Methane emissions from thawing permafrost have been a worry since the 70s.

Everything that is happening now was predicted, and what follows is going to be much worse than the subtle changes we have seen so far.

All in all, Earth's entropy is increasing fast, extremely fast.

I know I will never convince you though, so whatever…

2

AlFrankensrevenge t1_jeedllt wrote

I agree with you when talking about an AI that is very good but falls far short of superintelligence. GPT-4 falls in that category. Even the current open-source AIs, modified in the hands of hackers, will be very dangerous things.

But we're moving fast enough that the superintelligence that I used to think was 10-20 years away now looks like 3-10 years away. That's the one that can truly go rogue.

1