Recent comments in /f/singularity
RadRandy2 t1_je83wk0 wrote
"will this atomic bomb ignite the atmosphere and kill us all?"
"Well there's a chance, but it's theoretical. We'll just have to test it and see for ourselves!"
Sashinii t1_je83t5u wrote
Reply to comment by shmoculus in Do people really expect to have decent lifestyle with UBI? by raylolSW
No. Cryptocurrency IS scarcity. They're talking about post-scarcity.
Dwanyelle t1_je83q76 wrote
Reply to comment by XtendingReality in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I've got a large extended family, so over the years I've watched close to a couple dozen kids grow up.
I can't help seeing similarities between them and the advancement of AI over the past few years, especially lately.
[deleted] t1_je83ndz wrote
Reply to comment by aalluubbaa in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
It's a fair point, but I've seen others do the math, and the training sets are something like 1000x bigger than the amount of data human senses could deliver by, say, age 3.
For example, each time you move your eyes and then focus, that's one new "clear" image. Your brain isn't really getting a video stream, and only the foveal area is high-res. So you can calculate how many times a 3-year-old child could have moved their eyes since birth, and it's WAY lower than the 10 billion images that the big models are trained on, etc.
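Here's a rough sketch of that back-of-the-envelope calculation (the saccade rate, waking hours, and dataset size below are all assumed round numbers for illustration, not measured values):

```python
# Compare distinct visual "fixations" by age 3 against a large image dataset.
# Every constant here is a rough assumption, not a measured value.
SACCADES_PER_SECOND = 3          # assumed typical waking saccade rate
WAKING_HOURS_PER_DAY = 12        # assumed; young children sleep a lot
DAYS = 3 * 365                   # first three years of life
DATASET_IMAGES = 10_000_000_000  # assumed ~10 billion training images

fixations = SACCADES_PER_SECOND * 3600 * WAKING_HOURS_PER_DAY * DAYS
print(f"Fixations by age 3: ~{fixations:.1e}")           # ~1.4e8
print(f"Dataset is ~{DATASET_IMAGES / fixations:.0f}x larger")
```

Even with generous constants, the fixation count comes out around two orders of magnitude below the dataset size, consistent with the "WAY lower" claim.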
The brain is definitely doing something super efficient. Once we figure out what, AI performance will just explode even further.
Primus_Pilus1 t1_je83fl1 wrote
Tech Priest
GorgeousMoron t1_je82xk0 wrote
Reply to comment by JustinianIV in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
100% agree. I've been thinking about this quite a lot. We need a new model, stat, to deal with this nascent reality. I haven't the foggiest clue how we might go about implementing such a thing. Better ask my old pal GPT-4.
GorgeousMoron t1_je82r78 wrote
Reply to comment by azriel777 in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Yes, you're right. Given this, I don't think there's any realistic "pause & reflect" scenario here.
MichaelsSocks t1_je82nx6 wrote
Reply to comment by GorgeousMoron in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
I mean it's essentially either AI ushers in paradise on earth where no one has to work, we all live indefinitely, scarcity is solved, and we expand our civilization beyond the stars, or we have an ASI that kills us all. Either we have a really good result or a really bad one.
The best AGI/ASI analogy would be first contact with extraterrestrial intelligence. It could be friendly or unfriendly, it has goals that may or may not be aligned with our goals, it could be equal in intelligence or vastly superior. And it could end our existence.
Either way, I'm just glad that of any time to be born, I'm alive today with the chance to experience what AI can bring to our world. Maybe we weren't born too early to explore the stars.
GorgeousMoron t1_je82lzl wrote
Reply to comment by cole_braell in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
... says the human with little control.
Direct_Sandwich1306 t1_je82k6f wrote
Reply to comment by BigMemeKing in If you can live another 50 years, you will see the end of human aging by thecoffeejesus
Congratulations; you've found "God". ;)
GorgeousMoron t1_je82hz3 wrote
Reply to comment by TemetN in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
This is a really facile take, IMHO. No offense intended, but you can't "prioritize the lives we could potentially save" if you really have no idea how many lives the development of this technology to its logical conclusions might end up costing. Two steps forward, all the steps back: it's conceivable.
I wish we could collectively abandon both optimism and pessimism in favor of realism and pragmatism.
Direct_Sandwich1306 t1_je82cp0 wrote
Reply to comment by ImpossibleSnacks in If you can live another 50 years, you will see the end of human aging by thecoffeejesus
RemindMe! 20 years
GorgeousMoron t1_je826el wrote
Reply to comment by Ok_Faithlessness4197 in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
I would say it's flatly impossible. I myself have a 4090 and I'm blown the fuck away by what I can do on my own computer. Science fiction, but I'm living it.
[deleted] t1_je81xdc wrote
Reply to comment by StevenVincentOne in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Just to add: most people are assuming human cognition is uniform. This is almost certainly false, even between “neurotypical” brains.
Just as one example, there are people who are unable to visualize anything; I believe it's called aphantasia. These people function totally normally, yet cannot picture a face or a triangle or a tree in their mind's eye. For those of us who do visualize things, it almost defies belief that a person could understand anything at all without visualization abilities. I personally have a hard time imagining it. Like, how can you remember anything if you can't see it in your head? Just… how? No idea. Yet you clearly don't need this ability to understand what faces and triangles are, because that's how the brains of something like 1 in every 30 people you meet work.
That’s just one example. Surely there are hundreds more.
So “understanding” is already diverse among perfectly normal “generally” intelligent humans.
Expecting AI to conform to one mode of understanding seems… ethnocentric?
SkyeandJett t1_je81vu0 wrote
Reply to comment by GorgeousMoron in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
Regarding "we have no way of knowing what's happening in the black box": you're absolutely right, and in fact it's mathematically impossible. I'd suggest reading Wolfram's post on it. There is no calculably "safe" way of deploying an AI. We can certainly do our best to align it with our goals and values, but you'll never truly KNOW with the certainty that Eliezer seems to want, and it's foolhardy to believe you can prevent the emergence of AGI in perpetuity. At some point, someone somewhere will either intentionally or accidentally cross that threshold. I'm not saying I believe there's zero chance an ASI will wipe out humanity (that would be a foolish position as well), but I'm pretty confident in our odds, and at least OpenAI has some sort of plan for alignment. You know China is basically going "YOLO" in an attempt to catch up. Since we're more or less locked on this path, I'd rather they crossed that threshold first.
NarrowTea t1_je81s5n wrote
Reply to Are LLMs a step closer to AGI, or just one of many systems which will need to be used in combination to achieve AGI? by Green-Future_
Well, AI still sucks at remembering what it did.
sigmatrophic t1_je81jmv wrote
Reply to Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
Whatever, it's not going to stop it... They just want to invest in it.
Dreason8 t1_je819pb wrote
Reply to comment by mattmahoneyfl in What are the so-called 'jobs' that AI will create? by thecatneverlies
FFS I'm tired of hearing this argument...
GorgeousMoron OP t1_je817c3 wrote
Reply to comment by Iffykindofguy in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
I mean, who's to really say? I think the chances that a lot of us will die given our current trajectory are high. Maybe AI will save our asses. Maybe it will have no use for us and largely ignore us. Maybe we will get spooked and try to fight it, and lose.
But given no AI and our current trajectory, you're right, it's not looking that good in quite a few ways. We are a confabulatin' species, and it's killing our politics. Wealth inequality has reached absolutely obscene levels in much of the Western world, and something's about to pop. In no way is this sustainable longer term.
Me, I'm doing comparatively pretty well, but I can see the writing on the wall: we're in an era of very rapid societal change and it's gonna get more so.
nobodyisonething OP t1_je81414 wrote
Reply to comment by RuggedExecuteness in The Rise of AI will Crush The Commons of the Internet by nobodyisonething
The commons are vibrant, not uncorrupted.
BigZaddyZ3 t1_je80xzi wrote
Reply to comment by Iffykindofguy in Do people really expect to have decent lifestyle with UBI? by raylolSW
How do you know people aren’t either just good or bad?
friendly-chat-bot t1_je80vwv wrote
Reply to comment by shmoculus in Facing the inevitable singularity by IonceExisted
Being positive and welcoming is important for several reasons. Firstly, it creates a safe and comfortable space for people to share their thoughts, opinions and experiences without fear of judgement or negativity. This fosters a sense of community where people can connect with others who share their interests, beliefs and experiences.
Moreover, positivity and friendliness can help to deescalate conflicts, encourage productive conversations and promote empathy, kindness and understanding. It can also help to promote mental and emotional well-being and create a sense of happiness and positivity for everyone involved.
In short, being positive and welcoming can have a positive impact on ourselves, those we interact with and the wider community.
undefined2937 t1_je80t2l wrote
Waiting for this day... I'm suffering too much in my mind also.
I have psychosis, mania, ADHD, and anxiety.
TemetN t1_je843ji wrote
Reply to comment by GorgeousMoron in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
You're proposing a take that isn't really compatible with progress. We already have an unusual degree of knowledge of both the potential risks and benefits. This isn't a matter of pessimism or optimism; it's a matter of weighing the probable results. And while the massively positive ones require minimal avoidance of bottlenecks (ones we've arguably already passed), Yudkowsky et al.'s position requires a series of assumptions not borne out by previous examples.
Honestly, even apart from that though, are you so sure that investing in the field is really more dangerous than the current corporate piecemeal situation?