Recent comments in /f/singularity
Cartossin t1_je9n0ik wrote
Reply to comment by pls_pls_me in Do you guys think AGI will cure mental disorders? by Ok-Wing111
Maybe I'm biased because I'm generally surviving my mental health issues, but I'll take that trade. A bit of ADHD is a small price to pay to live through the transition to a digital age. Compared to today, I was born in the dark ages. I'd rather have jet planes and smartphones than be a bit happier with my day to day. The world is AMAZING.
acutelychronicpanic t1_je9mzk9 wrote
Reply to comment by Darustc4 in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
The problem is that it's impossible. Literally impossible to enforce this globally, unless you actively desire a world war plus an authoritarian surveillance state.
Compact models running on consumer PCs aren't as powerful as SOTA models obviously, but they are getting much better very rapidly. Any group with a few hundred graphics cards may be able to build an AGI at some point in the coming decades.
huskysoul t1_je9moic wrote
Reply to Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
Fear not the scythe but the reaper.
It isn’t AI that creates bad outcomes, but the system that inculcates and wields it. We fear AI because we already know how it will be utilized - to eliminate livelihoods, further marginalize vulnerable groups, and reinforce structural power and inequity.
Placing control of AI in the hands of privileged groups and individuals is what we should be concerned about, not whether or not it exists.
acutelychronicpanic t1_je9mn7y wrote
Reply to Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
Imagine thinking something could cause the extinction of all humans and writing an article about it.
Then putting it behind a pay wall.
Trackest t1_je9mlrd wrote
Reply to comment by acutelychronicpanic in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
First off, I do agree that in an ideal world, AI research would continue under a European-style, open-source and collaborative framework. Silicon Valley companies in the US are really good at "moving fast and breaking things," which is why most AI innovation is currently happening in the US. However, since AI is a major existential risk, I believe moving to a strictly controlled process like what we see with nuclear fusion at ITER and theoretical physics at CERN is the best model for AI research.
Unfortunately there are a couple points that may make this unfeasible in reality.
- Unlike nuclear fusion or theoretical physics, where profitability and application potential are extremely low during the R&D phase, every improvement in AI that brings us closer to AGI has extreme profit potential in the form of automating more and more jobs. Corporations have no motive to hand their AI research over to a non-profit international organization besides the goodness of their hearts.
- AGI and Proto-AGI models are huge national security risks that no nation-state would be willing to give up.
- Open-sourcing research will greatly increase the risk of misaligned models landing in the wrong hands, or of nations continuing research secretly. If AI research has to be concentrated within an international body, there would need to be a moratorium on large-scale AI research outside that organization. This may be a deal-breaker.
If we can somehow convince all the top AI researchers to quit their jobs and join this LAION initiative that would be awesome.
SkyeandJett t1_je9makr wrote
Reply to comment by Darustc4 in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
Yeah I'm MUCH more worried about being blown up in WW3 over AI dominance than a malevolent ASI deciding to kill us all.
Darustc4 OP t1_je9m2ce wrote
Reply to comment by SkyeandJett in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
I don't consider myself part of the EY cult, but I must admit that AI progress is getting out of hand and we really do NOT have a plan. Creating a super-intelligent entity with fingers in all the pies of the world, and humans having absolutely no control over it, is straight up crazy to me. It could end up working out somehow, but it could also very well devolve into the complete destruction of society.
czk_21 t1_je9ln9m wrote
Reply to comment by thecatneverlies in What are the so-called 'jobs' that AI will create? by thecatneverlies
well you don't need an expensive lifestyle, and you will probably be able to do amazing things in virtual reality
czk_21 t1_je9lhuz wrote
Reply to comment by Dubsland12 in What are the so-called 'jobs' that AI will create? by thecatneverlies
have you seen https://www.synthesia.io/?
you can create realistic-looking human avatars; in time there will be holograms, indistinguishable robots...
SkyeandJett t1_je9lgt7 wrote
Reply to comment by Dyeeguy in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
He's literally King Doomer. He and his cult are the ones that push that narrative.
Dyeeguy t1_je9lbiq wrote
Reply to Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
whoever that is is a GOD DAMN BOOMER!!!
Sad_Laugh_8337 t1_je9kvox wrote
Reply to Microsoft research on what the future of language models that can be connected to millions of apis/tools/plugins could look like. by TFenrir
>Visual ChatGPT is just an example of applying TaskMatrix.AI to the visual domain.
So I guess that answers my initial question -- AI models for tools.
Now apply this logic to:
https://twitter.com/yoheinakajima/status/1640934493489070080?s=46&t=18rqaK_4IAoa08HpmoakCg
I believe this could get us to strong Proto-AGI (just made that up). Why?
- AI models as agents for the specific cases mentioned in Yohei's Twitter post -- task keeping, planning, etc...
Very soon we will have an AI model that can perform nearly every task well by using fine-tuned tools. I believe this puts it into the category of strong Proto-AGI.
Arowx t1_je9ksjv wrote
Reply to comment by Dubsland12 in What are the so-called 'jobs' that AI will create? by thecatneverlies
Just wondering how well an AI with great face recognition and generation technology could appear to be empathising with someone's feelings?
acutelychronicpanic t1_je9ks0m wrote
Reply to comment by Trackest in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Here is why I respectfully disagree:
-
It is highly improbable that any one attempt at alignment will perfectly capture what humans value. For starters, there are at least hundreds of different value systems that people hold across many cultures.
-
The goal should not be minimizing the likelihood of any harm. The goal should be minimizing the chances of a worst-case scenario. The worst case isn't malware or the fracturing of society or even wars. The worst case is extinction/subjugation.
-
Extinction/subjugation is far less likely with a distributed variety of alignment models than with one single model. With a single model, the creators could do a bait-and-switch and become like gods or eternal emperors, with the AI aligned to them first and humanity second. Or they could just get it wrong. Even a minor misalignment becomes a big deal if all power is concentrated in one model.
-
If you have hundreds of attempts at alignment that are mostly good faith attempts, you decrease the likelihood that they share the same blindspots. But it is highly likely that they will share a core set of ideals. This decreases the chances of accidental misalignment for the whole system (even though the chances of having some misaligned AI increases).
Sorry for the wall of text, but I feel that this is extremely important for people to discuss. I want you to tear apart the reasoning if possible because I want us to get this right.
Zer0D0wn83 t1_je9kl1m wrote
Eliezer can fuck off. He's gone right off the deep end now.
PM_ME_ENFP_MEMES t1_je9k36c wrote
Reply to comment by arckeid in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Logistically that’s obviously very difficult but from a carbon footprint perspective, that’s ideal because your data centre has access to almost free cooling.
Azuladagio t1_je9jxsa wrote
You can forget about that in Germany. Those cringey old boomers don't even understand the Internet. How could they possibly understand AI?
basilgello t1_je9jv8o wrote
Slope of enlightenment. It took me a month to go through excitement/fear cycles and start actively refreshing my knowledge in NNs.
bigbeautifulsquare t1_je9juq2 wrote
Reply to comment by [deleted] in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Can you explain why the US must be the dominant force in everything? It's not as if it's intrinsically better than any other country.
turnip_burrito t1_je9jo7t wrote
Reply to comment by SnooWalruses8636 in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Ilya seems to be thinking more like a physicist than a computer scientist. This makes sense from a physics point of view.