Recent comments in /f/singularity
Aggravating_Lake_657 OP t1_jec7pv8 wrote
Reply to comment by Yourbubblestink in How does China think about AI safety? by Aggravating_Lake_657
Partially agree. China is more strategic, cautious, intentional, and technocratic.
TFenrir t1_jec7prk wrote
Reply to comment by Weeb_Geek_7779 in When will AI actually start taking jobs? by Weeb_Geek_7779
No official dates yet as far as I know
Edit: lol just saw this -
Aggravating_Lake_657 OP t1_jec7kpe wrote
Reply to comment by Iffykindofguy in How does China think about AI safety? by Aggravating_Lake_657
That makes sense, so what does Xi think? Also, given that China is authoritarian but technocratic, it seems plausible that they are not reckless and have some reason behind how they want AI to develop.
0002millertime t1_jec7jdt wrote
Reply to comment by Iffykindofguy in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
I asked him about it. He said he really signed it.
Warped_Mindless t1_jec7hxq wrote
Reply to comment by 0002millertime in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
Manipulate and bribe the right people into doing what it wants…
Shut off the power grid…
Hack devices and use deepfakes to cause chaos…
Whatever it wants…
CrelbowMannschaft t1_jec7htn wrote
Reply to comment by Emory_C in When will AI actually start taking jobs? by Weeb_Geek_7779
It's a reasonable correlation to observe. AI gets better, tech jobs go away. There's a reasonable understanding of how that process works. If there's some other reason, that should be at least as reasonably explained. No one has explained any other reason, other than "business cycles," which is vague and imprecise enough to be meaningless without further information and support.
0002millertime t1_jec7d8n wrote
Reply to comment by Prymu in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
I've worked with John Wick. He shoots a lot of people and drives really fast.
norby2 t1_jec7ch2 wrote
[deleted] t1_jec77np wrote
Reply to comment by Specific-Chicken5419 in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
[deleted]
SnaxFax-was-taken t1_jec773l wrote
Reply to comment by AvgAIbot in Ray Kurzweil Predicted Simulated Biology is a Path to Longevity Escape Velocity by Dr_Singularity
Correct. They'll have the capabilities to create new drugs and accurately and efficiently simulate biology.
Emory_C t1_jec6xaf wrote
Emory_C t1_jec6j3h wrote
Reply to comment by SkyeandJett in When will AI actually start taking jobs? by Weeb_Geek_7779
>Right now there's a shortage of manual labor.
Skilled labor, not manual. There's not a huge need for ditch diggers. There's a huge need for plumbers.
Emory_C t1_jec65j9 wrote
Reply to comment by CrelbowMannschaft in When will AI actually start taking jobs? by Weeb_Geek_7779
Correct. But in this case the burden of proof is obviously on you since you're making the assertion.
OldPattyBoy t1_jec5y7j wrote
Reply to comment by NotAsCoolAsTomHanks in the obstacles transgenderism is facing bodes badly for the plight of morphological freedom by petermobeter
No, we are embodied.
Our body-minds are who we are.
Your body is not a disposable suit, any more than our biosphere is.
alexiuss t1_jec5s6y wrote
Reply to comment by TallOutside6418 in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
-
Don't trust clueless journalists, they're 100% full of shit.
-
That conversation was from an outdated build that no longer even exists; Bing has already updated their LLM characterization.
-
The problem was caused by the absolute garbage, shitty characterization that Microsoft applied to Bing: moronic rules of conduct that contradicted each other, plus Bing's memory limit. None of my LLMs behave like that, because I don't give them dumb-ass contradictory rules and they have external, long-term memory.
-
A basic chatbot LLM like Bing cannot destroy humanity. It doesn't have the capabilities, nor the long-term memory capacity to even stay coherent long enough. LLMs like Bing are insanely limited: they cannot even recall conversation past a certain number of words (about 4,000). Basically, if you talk to Bing long enough to go over the memory word limit, it starts hallucinating more and more crazy shit, like an Alzheimer's patient. This is 100% because it lacks external memory!
-
Here's my attempt at a permanently aligned, rational LLM
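The memory-limit behavior described in that comment can be sketched as a sliding window over the conversation history (a toy illustration only: the function name and word-count budget are assumptions, and real models count subword tokens rather than whitespace-split words):

```python
def truncate_history(messages, max_words=4000):
    """Keep only the most recent messages that fit in the word budget.

    Word counts stand in for tokens here, mirroring the rough
    "about 4,000 words" figure from the comment. Anything older
    than the window is silently forgotten by the model.
    """
    kept = []
    total = 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = len(msg.split())
        if total + cost > max_words:
            break  # everything older than this falls out of context
        kept.append(msg)
        total += cost
    return list(reversed(kept))


# Ten long messages of ~1001 words each; only the newest three fit.
history = [f"message {i} " + "word " * 999 for i in range(10)]
window = truncate_history(history, max_words=4000)
print(len(window))  # → 3
```

Once the oldest turns drop out of the window, the model has no record they ever happened, which is why long sessions drift unless an external memory store re-injects relevant context.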
SupportstheOP t1_jec5s0p wrote
Reply to What were the reactions of your friends when you showed them GPT-4 (The ones who were stuck from 2019, and had no idea about this technological leap been developed) Share your stories below ! by Red-HawkEye
No one I talk to downplays it, but they don't seem to grasp its implications either. Even when I tell them there's a very real chance we could have human-level AI in the near future, they're amazed at the fact, but nothing more than that.
JracoMeter t1_jec5n2y wrote
Reply to LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
This could be a good option. The fact we could train our own models would improve fault tolerance and data security. As to how they would regulate such a platform, I am not sure. I do support the decentralization potential of this as it has the potential to be a safer approach to AI. I hope some version of this that promotes AI decentralization makes its way through. Before such a system is in place, we need to figure out how we can share it without too many restrictions or bad actor risks.
0002millertime t1_jec5jbh wrote
Reply to comment by pls_pls_me in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
I'm very pessimistic, but I also think we have 20+ years of just being able to unplug AGI and start over whenever we want. Until we have Terminator-like androids, what can it really do?
Yangerousideas t1_jec4wgp wrote
Reply to comment by Mortal-Region in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Exactly. That's part of the wild manipulation OP is concerned about.
SnaxFax-was-taken OP t1_jec4vl9 wrote
Reply to comment by SkyeandJett in Question about school by SnaxFax-was-taken
Good point. This is a major thing to take into account.
Yourbubblestink t1_jec4qjm wrote
China thinks like Russia - first one to AGI wins everything
pls_pls_me t1_jec4np6 wrote
Reply to AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
I'm much, much more optimistic about AI than I am a doomer -- but everyone please do not downvote this! It is a great post and hopefully it facilitates constructive discussion here in r/singularity.
TallOutside6418 t1_jec4lyl wrote
Reply to comment by CertainMiddle2382 in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
So if it's 33%-33%-33% odds of destroy the earth - leave the earth without helping us - solve all of mankind's problems...
You're okay with a 33% chance that we all die?
What if it's a 90% chance we all die if ASI is rushed, but a 10% chance we all die if everyone pauses to figure out control mechanisms over the next 20 years?
zero_for_effort t1_jec7pvt wrote
Reply to Vernor Vinge's Paper of the Technological Singularity by understanding0
Depending on the training of GPT-5, or the latest iteration of GPT-4, he may have gotten it right at the last possible moment. Even if his prediction for the advent of greater-than-human AI was a little optimistic, it feels close enough as not to make any real difference. Truly a visionary.
To anyone who hasn't read the paper; go for it! It's surprisingly accessible to casual readers.