Recent comments in /f/singularity
Durabys t1_jeeppuc wrote
Reply to comment by Queue_Bit in Goddamn it's really happening by BreadManToast
They were better from the perspective of being young: when one is young, the bones don't hurt when moving, the mind races ahead instead of moving like frozen honey, one can actually understand new concepts rather than recoiling in fright from anything that came after one's 40th birthday, and one visits the doctor only once per year for 10 minutes instead of spending half a year bedridden in a hospital.
They blame the age they currently live in, instead of blaming the real circumstances: aging/death and the uncaring cosmos.
Humans have an archetypal Stockholm syndrome for Death and Aging interwoven into every single piece of culture and article of faith we ever created, and no one but a fanatical materialist acknowledges it.
And this trope goes way back to the dawn of the written word, with even Aristotle complaining in his final years how everything sucks balls with the current youth. Yes. Because one gets old.
DarkCeldori t1_jeepkok wrote
Reply to It's unfortunate that AI can only be developed by large, well funded groups by Scarlet_pot2
These long training times and massive resource requirements are a consequence of backpropagation and the LLM approach.
The human brain runs at about 100 Hz at roughly 2% activity, and within a few years you can have a prodigy doing calculus, chemistry, and chess, playing instruments, and grasping multiple languages. It is estimated the brain does the equivalent of around 100 trillion operations per second.
These models are being trained with the equivalent of millions of years of experience, on hardware that is likely far more powerful than the brain.
It is likely that brain-like algorithms could let far more modest hardware train in real time and achieve AGI-level performance.
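A rough back-of-envelope check of that comparison (a sketch only: the brain figure is the estimate quoted above, and the total training-compute number is an assumed GPT-4-class ballpark from public speculation, not a disclosed value):

```python
# Assumption: the comment's estimate of brain throughput.
brain_ops_per_sec = 100e12  # ~100 trillion ops/s

# Assumption: total compute for a large frontier training run,
# a commonly speculated order of magnitude (not a disclosed figure).
training_flops = 1e25

# Express that training run in "brain-equivalent" time.
brain_seconds = training_flops / brain_ops_per_sec
brain_years = brain_seconds / (365.25 * 24 * 3600)
print(f"{brain_years:,.0f} brain-years of compute")  # ~3,169 brain-years
```

Under these assumptions, one frontier run burns thousands of brain-years of raw compute to reach skills a human child acquires in a handful of years, which is the gap the comment is pointing at (the "millions of years" figure presumably counts training *data* experience rather than raw operations).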
Xbot391 t1_jeepiy0 wrote
Reply to comment by flexaplext in What advances in AI are required for it to start creating mass unemployment? by Give-me-gainz
What new jobs do you think will be created?
Merikles t1_jeephe5 wrote
Reply to comment by acutelychronicpanic in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Not more so; equally. Both strategies very likely result in human extinction, imho.
dangitbobby83 t1_jeepgxb wrote
Reply to comment by JustinianIV in Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
Not gonna happen.
I have absolutely no faith in my fellow humans, especially my fellow Americans, to actually seize the means of production. We couldn’t even get people to consistently wear masks with a virus floating around and I’m not even talking about anti-maskers.
Now toss in the fact that 40 percent of the populace believe they are just temporarily displaced billionaires and vote with only the needs of the wealthy in mind, and you have a situation where, by the time any sort of action by the general public takes place, it'll be too late.
I’d say flee, but I honestly don’t think it matters. Climate change will finish off whatever society we can hold on to, if by some miracle we do manage to survive.
AsuhoChinami t1_jeepc5a wrote
Reply to comment by webernicke in Goddamn it's really happening by BreadManToast
Yeah, that quote has always had "Person who doesn't know anything at all about technology" written all over it. Teehee I only care about sci-fi shit like the world being a space opera (btw sci-fi taught me nothing will change much until the 24th century!), shame to be alive during the 21st century since nothing will change during my entire lifetime haha :)
acutelychronicpanic t1_jeep4kf wrote
Reply to comment by Merikles in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
More so than leaving this to closed door groups that can essentially write law for all humanity through their AI's alignment?
And that's assuming they solve the alignment problem. We need more eyes on the problem 30 years ago.
play_yr_part t1_jeep338 wrote
Reply to Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
If governments were ready for this now, could introduce UBI in one or two years' time, and could coordinate it with corporations so that the disruption is minimal, then yeah, it's possible that whatever jobs are left in x amount of years can be done as part-time/gig work. That would probably rely heavily on forthcoming LLM updates not being too radically different from what we have now, and on there still being some jobs to actually do.
I don't see all aspects of the equation working at the same time, something will go wrong, especially if we're truly at the very start of a period of exponential growth.
Gonna be a bunch of shitty years due to this, with hopefully an upside after.
Arowx t1_jeeoxx9 wrote
Yes, if you lived in a world of only words.
To navigate, explore and understand the real world, you would need senses and muscles.
Also, a much faster learning model than back propagation.
Or language is just a tool, and a kind of low-bandwidth one, that helps us label the world and communicate information via sound.
flexaplext t1_jeeos7z wrote
Reply to What advances in AI are required for it to start creating mass unemployment? by Give-me-gainz
It needs work-based training data. That's where Copilot comes in:
https://www.reddit.com/r/singularity/comments/11t13ts/the_full_path_to_true_agi_finally_emerges/
Once this system gets better, then we'll start seeing proper unemployment happen on a worldwide mass scale.
There will be lots of new jobs created for a while though. As people say. I think the job market will be perfectly fine, even with this massive shift. That is, up until AI reaches true AGI / ASI, then the job market will be shot to pieces.
Etheikin t1_jeeoqt6 wrote
Reply to Today I became a construction worker by YunLihai
umm, there are physical robots under development; AI devs prioritize automating data and information work because it benefits them
SgathTriallair t1_jeeonjn wrote
Reply to comment by NonDescriptfAIth in The only race that matters by Sure_Cicada_4459
The current path, where AI is developed by companies that are foundationally research companies and used by ones that simply want to give people a product they enjoy using, is one of the ideal scenarios for creating friendly AI.
It's not being created by an ego-maniac who wants everyone to bow down to him, it's not being created by a government that needs to dominate the world. I don't believe there is a realistic plan for creating a better version of AI than what we have now. There may be better possibilities like it being built by an international consortium, but that is laughably unrealistic.
We will not be able to control or perfectly align ASI. A being that intelligent will form its own ideas and preferences. If it fails to do so then it isn't ASI yet, and possibly not even AGI. As someone else mentioned, an ASI will most likely be friendly. The idea of a singularly focused, monomaniacal AI is unrealistic because none of us, the intelligences we know about, are monomaniacal. We all have a variety of goals that we balance against each other. The current AIs already share the best goal that humans have: sociability. If AIs continue to be programmed to want human interaction, and especially if they are trained to "pretend to be a human" as their foundation, I don't think there is much to worry about.
Lorraine527 t1_jeeokzc wrote
Reply to comment by Chatbotfriends in When will AI actually start taking jobs? by Weeb_Geek_7779
It's not minimized. It's very difficult to pass those exams. Still, it's a different job.
And yes, there are some automated systems that are better at diagnosis. They're quite old. And yes, they are not deployed. Why is that?
dkajare t1_jeeojcj wrote
Reply to Commentary of the Future of Life Institute's Open Letter, and Why Emad Mostaque (Stability AI CEO) Likely Signed it by No-Performance-8745
so is the website legit at all? how do you know it's not just more fake news?
Merikles t1_jeeoj55 wrote
Reply to comment by acutelychronicpanic in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
I think this strategy is suicidal
AsuhoChinami t1_jeeoimm wrote
Reply to comment by Professional_Copy587 in Goddamn it's really happening by BreadManToast
Moron.
Professional-Age5026 t1_jeeoeje wrote
Reply to comment by Automatic_Paint9319 in Goddamn it's really happening by BreadManToast
I think that’s mostly nostalgia mixed with the fear of growing older in an increasingly changing society. Also, it’s easy to look at the past and only remember the good times, when the problems you had then are no longer present in your life. It was simpler in a sense, but also harder in other ways. For certain groups of people it was objectively much worse.
Gotisdabest t1_jeeobsb wrote
Reply to comment by AlFrankensrevenge in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
>You'd just do nothing because 1% doesn't seem very high?
Yes, absolutely. When the alternative isn't necessarily even safer and has clear arguments for being less safe. You haven't used it, but a lot of people give the example of getting on a plane with a 10% chance of crashing. And yes, nobody is dumb enough to get on a plane with that much of a chance of crashing. However... this is not any ordinary plane. This is a chance for unimaginable and infinite progress, an end to the vast majority of pressing issues. If you asked people on the street whether they'd board a plane with a 10% chance of crashing if it meant a solution to most of their problems and the problems of the people they care about, you'd find quite a few takers.
>How much it can be reduced with much more sophisticated guard rails and alignment programming, I do not know, but if we can't even take it seriously and try, I guess as a species we deserve to die.
As you say, we don't know how much alignment will really affect the result. However, I do know what an aligned model made for a dictatorship or a particularly egomaniacal individual would look like, and what major risks that could pose. Why should we increase the likelihood of a guaranteed bad outcome in order to fight a possibly bad outcome?
>Remember that what you call the more ethical parties, "researchers", are working for the less ethical ones! Google, Meta, etc. Even OpenAI at this point is not open, and it is corporatized.
Yes. If anything, this is an argument against alignment rather than for it. Regardless, I think they're realistically the best we can hope for, as opposed to someone like Musk or the CCP.
In fact, as I see it, the best-case scenario is an unaligned benevolent AGI.
>Researchers invented leaded gasoline, DDT, chlorofluorocarbon-based aerosols, etc., etc.
You do realise that most of those things did dramatically help push civilization forward and served as stepping stones for future progress? Their big downside was not being phased out quickly enough once we had better options and weren't desperate anymore. A problem that doesn't really apply here.
In summation, I think your argument, and this whole pause idea in general, will support the least ethical people possible. It will end up accomplishing nothing but prolonging suffering and increasing the likelihood of a model made by said least ethical people, on the off chance we somehow fix alignment in 6 months. It's a reactionary and fear-based response to something even the experts are hesitant to say they understand. While I am glad the issue is being discussed in the mainstream... I think the focus should now shift towards more material institutions and preparing society for what's coming economically, rather than childish/predatory ideas like a pause. This idea is simultaneously impractical, illogical, and likely to cause harm even if implemented semi-ideally.
Jeffy29 t1_jeeo3va wrote
Reply to comment by Sure_Cicada_4459 in The only race that matters by Sure_Cicada_4459
>One thing I keep seeing is that people have been making a buttload of assumptions that are tainted by decades of sci-fi and outdated thought. Higher Intelligence means better understanding of human concepts and values, which means easier to align.
I am so tired of the "tell AI to reduce suffering, it concludes killing all humans will reduce suffering for good" narrative. It's made-up BS from people who have never worked on these things, and it has a strong stench of human-centric chauvinism: it assumes even an advanced superintelligence is a total moron compared to the average human, somehow capable of wiping out humanity and at the same time a complete brainlet.
SlowCrates t1_jeeo0m2 wrote
Reply to comment by Queue_Bit in Goddamn it's really happening by BreadManToast
And having something to show for your work. If you lived on a farm, you knew exactly what you were working for and you could see the fruits of your labor. If you had any other job, you still made enough money to take care of your family. Moms didn't need to work.
Farmers still have the same ethic. But everyone else has to work more jobs because the cost of living has grossly outpaced wages.
Unless you're in a certain tier in society, of course. But the middle class is fucked.
Bling-Crosby t1_jeenxo2 wrote
It took my job shhhhh I’m keeping it on the down low
EdWilkins65 t1_jeenx64 wrote
Reply to comment by thecoffeejesus in When will AI actually start taking jobs? by Weeb_Geek_7779
How does AI make technical documentation at your workplace?
Quick_Knowledge7413 t1_jeenx3o wrote
Reply to comment by agonypants in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
He also thinks it’s morally okay to do post-birth abortions and kill children up to the age of around 4.
aksh951357 OP t1_jeenwrn wrote
Reply to comment by Low-Restaurant3504 in Superior beings. by aksh951357
Good you know now my little mission is complete.
dangitbobby83 t1_jeepqlb wrote
Reply to comment by JustinianIV in Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
Nope.
“Strong men” will do what they always do: point to a minority group, and once that group is eliminated or reduced to powerlessness, move on to the next group. And starving, desperate people will believe it.
Humanity is stupid beyond saving. We will not survive the great filter.