Recent comments in /f/singularity
Professional_Copy587 t1_jeehzwu wrote
Reply to comment by Shiningc in Goddamn it's really happening by BreadManToast
Hopefully the sub returns to what it was; it was a reasonable subreddit before all this delusion.
visarga t1_jeehxgo wrote
Reply to comment by NonDescriptfAIth in The only race that matters by Sure_Cicada_4459
You don't understand: even a model well tuned by OpenAI to be safe, once it gets into the hands of the public, will be fine-tuned to do anything they want. It doesn't matter what politicians do to regulate the big players.
The only solution to AGI danger is to release it everywhere at once, to balance out AGI with AGI. For example, the solution to AI-generated spam and disinformation is AI-based detection; humans can't keep up with the bots.
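A toy sketch of what AI-based detection could look like, using the standard scikit-learn API with made-up data; a real deployment would need a large labeled corpus and a much stronger model than this.

```python
# Toy AI-text/spam detector: a classifier trained on labeled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus: in practice, a large set of texts labeled
# human-written (0) vs. AI-generated (1).
texts = [
    "had a great hike today, photos below",
    "As an avid enthusiast, I am thrilled to share this exciting opportunity!",
]
labels = [0, 1]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# Screen a new comment: probability that it is AI-generated.
print(detector.predict_proba(["some new comment to screen"])[0][1])
```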
falldeaf t1_jeehm0d wrote
Reply to comment by Relevant_Ad7319 in Language Models can Solve Computer Tasks (by recursively criticizing and improving its output) by rationalkat
I bet it will be possible with the multimodal version! Essentially just give it the ability to take screenshots and an API for choosing mouse positions. It'd be interesting to know whether that could work in a one-shot fashion.
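A rough sketch of that loop in Python, as a thought experiment: `pyautogui` is a real library that handles screenshots and mouse control, while `query_multimodal_model` is a hypothetical placeholder for whatever multimodal API this would require.

```python
import io

import pyautogui  # real library: cross-platform screenshots and mouse control


def query_multimodal_model(screenshot_png: bytes, goal: str) -> dict:
    """Hypothetical stand-in for a multimodal model call. Imagined to return
    a mouse action such as {"action": "click", "x": 412, "y": 305}, or
    {"action": "done"} once the goal is judged complete."""
    raise NotImplementedError("placeholder for a future multimodal API")


def run_task(goal: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        shot = pyautogui.screenshot()        # capture the current screen
        buf = io.BytesIO()
        shot.save(buf, format="PNG")         # serialize for the model call
        decision = query_multimodal_model(buf.getvalue(), goal)
        if decision["action"] == "done":     # model reports the task finished
            return
        if decision["action"] == "click":    # model chose a mouse position
            pyautogui.click(decision["x"], decision["y"])
```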
Queue_Bit t1_jeehgof wrote
Reply to comment by Automatic_Paint9319 in Goddamn it's really happening by BreadManToast
Haha yeah I bet they were better for your straight white male older relative
Wavesignal OP t1_jeehd7z wrote
Reply to Google CEO Sundar Pichai promises Bard AI chatbot upgrades soon: ‘We clearly have more capable models’ - The Verge by Wavesignal
Other interesting bits from the full transcript
Sundar on personalized assistants
> But I think, wow. Yeah, can it be a very, very powerful assistant for you? I think yes. Anybody at work who works with a personal assistant, you know how life changing it is. But now imagine bringing that power in the context of every person’s day-to-day lives. That is a real potential we are talking about here. And so I think it’s very profound.
> And so we’re all working on that. And again, we have to get it right. But those are the possibilities. Getting everyone their own personalized model, something that really excites me, in some ways, this is what we envisioned when we were building Google Assistant. But we have the technology to actually do those things now.
Sundar on AGI and AI safety
> It is so clear to me that these systems are going to be very, very capable. And so it almost doesn’t matter whether you’ve reached AGI or not. You’re going to have systems which are capable of delivering benefits at a scale we have never seen before and potentially causing real harm.
> So can we have an AI system which can cause disinformation at scale? Yes. Is it AGI? It really doesn't matter. Why do we need to worry about AI safety? Because you have to anticipate this and evolve to meet that moment. And so today, we do a lot of things with AI that people have taken for granted.
FeepingCreature t1_jeeh7mz wrote
Reply to comment by Sure_Cicada_4459 in The only race that matters by Sure_Cicada_4459
Higher intelligence also means better execution of human skills, which makes the output harder to verify. Once you have loss flowing through deception, all bets are off.
I think it gets easier, as the model figures out what you're asking for - and then it gets a lot harder, as the model figures out how to make you believe what it's saying.
Arowx OP t1_jeegrav wrote
Reply to comment by TFenrir in What if it's just chat bot infatuation and were overhyping what is just a super big chat bot? by Arowx
Just asked Bing, and apparently AI can already write novels, with one report of a Japanese AI system that writes novels better than humans.
We would be inundated with millions of novels from people who wanted to write a novel and companies that would target specific novels at profitable demographics.
We would need AI critics just to help sort the wheat from the chaff.
The thing is, it's like a DJ mixing records: it could generate some amazing new mixes, but if the pattern is not already out there, it's very unlikely to find new patterns.
Shemetz t1_jeegpal wrote
Reply to comment by DragonForg in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
> Given the vastness of outerspace, ... why is it that we see no large evidence of completely destructive AIs?... I would expect us to see some evidence for it for other species. Yet we are entirely empty?... we should see widescale destruction if not a galaxy completely overridden by AI.
This counterargument doesn't work if we believe in the (very reasonable IMO) grabby aliens model.
Some facts and assumptions:
- information moves at the speed of light
- strong alien AIs would probably move at some significant fraction of the speed of light; let's say 1%
- civilizations probably develop extremely quickly, but in very rare conditions (that take a lot of time to occur); e.g. the Universe is 14 billion years old, Earth is 4.5 billion years old, and human-society-looking-at-the-stars is only 10,000 years old.
- humanity appeared relatively early in the cosmic timescale; there are trillions of years in our future during which life should only become more common
- "grabby" aliens would take control over their sections of space in a way that prevents new space civilizations from forming
-> If/when a "grabby alien AI" gets created, it would spread around our galaxy, and eventually the universe, so quickly that it's incredibly unlikely for a young civilization to ever see it coming; it's much more likely for the alien AI either to not exist (yet) or to have already expanded and taken control, preventing observers like us from arising. -> Since we appear to be safe, alone, and "early", we can't say AI won't take over the universe; in fact, we are well-positioned to be the ones to develop that AI.
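To put rough numbers on the "unlikely to see it" step: light from the AI's origin reaches an observer at t = d/c, while its expansion front arrives at t = d/(f·c), so the window during which a young civilization could watch it approach is d/c · (1/f − 1). A quick Python sketch with illustrative numbers (note that the standard grabby-aliens argument takes f close to 1; at the 1% figure above, the window would actually be long):

```python
# Back-of-envelope: how long could a young civilization watch an expanding
# grabby AI before being overtaken? All numbers are illustrative assumptions.

def observation_window_yr(distance_ly: float, expansion_fraction: float) -> float:
    """Light from the origin arrives after d/c years; the expansion front
    arrives after d/(f*c) years. The difference is the visibility window."""
    t_light = distance_ly                      # years, since c = 1 ly/yr
    t_front = distance_ly / expansion_fraction
    return t_front - t_light

# Near-lightspeed expansion leaves only a sliver of cosmic time to see it:
print(observation_window_yr(1e9, 0.99))  # ~1.0e7 years at 1 Gly
# At 1% of lightspeed, the window would be enormous by comparison:
print(observation_window_yr(1e9, 0.01))  # ~9.9e10 years at 1 Gly
```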
arckeid t1_jeegitn wrote
Reply to comment by Relevant_Ad7319 in Language Models can Solve Computer Tasks (by recursively criticizing and improving its output) by rationalkat
I think this is a good way not just to build the AI, but to help humans stay in sync; to me it looks like the advancements are already coming so fast.
big_retard_420 t1_jeegiof wrote
Reply to comment by NonDescriptfAIth in The only race that matters by Sure_Cicada_4459
I aint reading all that
im happy for u tho or sorry that happened
AlFrankensrevenge t1_jeefdr0 wrote
Reply to comment by Gotisdabest in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
The whole point is that the naysayers have to be able to guarantee it will NOT occur. If there is a 10% risk of annihilation, isn't that enough to take this seriously? Even a 1% chance? Would you just do nothing because 1% doesn't seem very high?
You mentioned a coin toss. I basically agree with that metaphor. Because there is so much uncertainty in all this, and we don't know what we don't know about AGI, we should treat a human apocalypse as a 50-50 chance. How much it can be reduced with much more sophisticated guard rails and alignment programming, I do not know, but if we can't even take it seriously and try, I guess as a species we deserve to die.
Remember that what you call the more ethical parties, "researchers", are working for the less ethical ones! Google, Meta, etc. Even OpenAI at this point is not open, and it is corporatized.
There is a long history of "researchers" thinking too much about how quickly they can produce the cool new thing they invent and not enough about long-term consequences. Researchers invented leaded gasoline, DDT, chlorofluorocarbon-based aerosols, etc., etc.
No_Ninja3309_NoNoYes t1_jeef1v1 wrote
Reply to Goddamn it's really happening by BreadManToast
For me, the singularity equals simulation. If the singularity is possible, then simulation is possible. And if something is possible, you can't rule it out. So I hope we don't get the literal Singularity, because I don't want to be an NPC. There's a chance AI will be banned in several countries, which could slow progress considerably.
genericrich t1_jeeephy wrote
Reply to comment by ilikeover9000turtles in ASI Is The Ultimate Weapon, And We Are In An Arms Race by ilikeover9000turtles
Yes this is the problem.
Actually, there is a plan. The US DOD has plans, revised every year, for invading every country on Earth. Why do they do this? Just in case they need to, and it's good practice for low-level general staff.
Do you really think the US DOD doesn't have a plan for what to do if China or Russia develop an ASI?
I'm pretty sure they do, and it involves the US Military taking action against the country that has one if we don't. If they don't have a plan, they are negligent. So odds are they have a plan, even if it is "Nuke the Data Center".
Now, if they have THIS plan, for a foreign adversary, do you think they also have a similar plan for "what happens if a Silicon Valley startup develops the same kind of ASI we're afraid China and Russia might get, which we're ready to nuke/bomb if it comes down to it?"
I think they probably do.
It is US doctrine that no adversary be allowed to challenge our military supremacy. ASI clearly would challenge it, so it can't be tolerated in anyone's hands but ours.
Going to be very interesting.
Shiningc t1_jeeedon wrote
Reply to comment by Professional_Copy587 in Goddamn it's really happening by BreadManToast
At this point it's a cult. People hyping up LLMs have no idea what they're talking about; they're just eating up corporate PR and whatever dumb hype the articles write.
These people are in for a disappointment in a year or two. And I'm going to be gloating with "I told you so".
CertainMiddle2382 t1_jeee42m wrote
Reply to comment by TallOutside6418 in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
I'm 40, and the planet is not fine. Methane emissions from thawing permafrost have been worrying since the '70s.
Everything that is happening now was predicted, and what follows is going to be much worse than the subtle changes we have seen so far.
All in all, Earth's entropy is increasing fast, extremely fast.
I know I will never convince you though, so whatever…
SWATSgradyBABY t1_jeedmuo wrote
Reply to The Alignment Issue by CMDR_BunBun
You got some tech you'd like to share with the world?
AlFrankensrevenge t1_jeedllt wrote
Reply to comment by DangerZoneh in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
I agree with you when talking about an AI that is very good but falls far short of superintelligence. GPT4 falls in that category. Even the current open source AIs, modified in the hands of hackers, will be very dangerous things.
But we're moving fast enough that the superintelligence that I used to think was 10-20 years away now looks like 3-10 years away. That's the one that can truly go rogue.
Cartossin t1_jeedewy wrote
Reply to comment by scooby1st in Where do you place yourself on the curve? by Many_Consequence_337
I have a Scooby tattoo and a guy called Scooby1st got mad at me. fml
Ill_Regular_9339 t1_jeei51e wrote
Reply to Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
Nobody can predict the future