Recent comments in /f/singularity
prolaspe_king t1_je8eyh9 wrote
All the things that make us human. Completely eliminate panic? Healthy anxiety? Depression? A complete cure is really unlikely, since a lot of these responses are automatic. I also do not want to meet anyone with none of these problems; I cannot imagine how sterile their personality would be.
Shack-app t1_je8ewej wrote
Reply to comment by Iffykindofguy in What are the so-called 'jobs' that AI will create? by thecatneverlies
AI won’t generate jobs. It will allow the best people in each field to produce far more with less work.
In some fields, like code, we have a shortage, so we can soak up the excess supply. In other fields, like bad romance novels, or college essay plagiarism for hire, or commanding robot armies, jobs will be in short supply.
SmoothPlastic9 t1_je8er6r wrote
Esport champion
AmericanDidgeridoo t1_je8epg2 wrote
Battery cells
cadred000 t1_je8emlf wrote
I think some people envision a world where you no longer have to work at a job performing a menial task and you will be free to pursue other things that are more interesting and rewarding than digging ditches or selling dog food.
tightchester t1_je8emcx wrote
Reply to comment by boreddaniel02 in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Probably just preparing H100 pipelines and infrastructure. "We think" doesn't tell us much.
Arowx t1_je8ej1p wrote
Reply to comment by Dubsland12 in What are the so-called 'jobs' that AI will create? by thecatneverlies
Counselors and agency staff sound like jobs with a strong knowledge- or rule-based component that a chat AI could do.
And as soon as someone figures out how to get GPT-5 to drive a Boston Dynamics Dog or Atlas, fixed-location security will go automated.
Mind you, could GPT-6 drive military drones or tanks?
Orc_ t1_je8ef0o wrote
Reply to comment by throwaway12131214121 in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
> What would have been impossible?
For a non-modern system to exist that somehow prevents all wars, genocides, famines, etc. It's High Fantasy.
> Lots of empires were outwardly expansive. None of them took over the world like capitalism did, because their expansionist goals were motivated simply by power fantasies of their leaders.
Power that came from CAPITAL. Capitalism has existed since the first civilization; it even existed in the USSR.
ActuatorMaterial2846 t1_je8e3lg wrote
So what happens is they compile a dataset: basically a big dump of data. For large language models, that is mostly text (books, websites, social media comments), essentially as many written words as possible.
The training is done through what's called a neural network, in this case one using the transformer architecture, running on a bunch of GPUs (graphics processing units) linked together. What happens inside the neural network whilst training is a bit of a mystery; 'black box' is the term often used, as the computations are extremely complex, so not even the researchers understand exactly what happens there.
Once the training is complete, the result is compiled into a program, often referred to as a model. These models can then be refined and tweaked to behave a particular way for public release.
This is a very, very simple explanation, and I'm sure an expert could explain it better, but in a nutshell that's what happens.
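To make the "training" step above concrete, here is a toy sketch in numpy. It is nothing like a real transformer (no attention, one weight matrix, a few dozen characters of "data"), but it shows the same core loop: feed examples in, measure how badly the model predicts the next token, and nudge the weights to do better. The corpus and all names are made up for illustration.

```python
import numpy as np

# Tiny "dataset": predict the next character from the current one.
corpus = "hello world hello there"
vocab = sorted(set(corpus))
stoi = {ch: i for i, ch in enumerate(vocab)}
V = len(vocab)

xs = np.array([stoi[a] for a in corpus[:-1]])  # current chars
ys = np.array([stoi[b] for b in corpus[1:]])   # next chars (targets)

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, size=(V, V))  # the model's only weights

def loss_and_grad(W):
    # Forward pass: turn each row of logits into probabilities (softmax).
    logits = W[xs]
    logits = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    # Cross-entropy loss: how surprised the model is by the true next char.
    loss = -np.log(probs[np.arange(len(ys)), ys]).mean()
    # Backward pass: gradient of the loss, scattered back into W.
    dlogits = probs
    dlogits[np.arange(len(ys)), ys] -= 1
    dlogits /= len(ys)
    dW = np.zeros_like(W)
    np.add.at(dW, xs, dlogits)
    return loss, dW

losses = []
for step in range(500):
    loss, dW = loss_and_grad(W)
    W -= 1.0 * dW  # gradient descent: adjust weights to reduce loss
    losses.append(loss)

print(f"loss: {losses[0]:.2f} -> {losses[-1]:.2f}")
```

The loss shrinking over the steps is the whole point of training; real systems do the same thing with billions of weights, attention layers instead of a lookup table, and GPU clusters instead of one numpy array.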
Substantial_Part_952 t1_je8dtll wrote
Reply to comment by Maskofman in Facing the inevitable singularity by IonceExisted
You can't trust anything online anymore. It's become obsolete.
icanbenchurcat t1_je8dq8i wrote
No one lives forever, no one. But with advances in modern science and my high level income, it's not crazy to think I can live to be 245, maybe 300. Heck, I just read in the newspaper that they put a pig heart in some guy from Russia. Do you know what that means?
BigZaddyZ3 t1_je8dh2a wrote
Reply to comment by Puzzleheaded_Pop_743 in Do people really expect to have decent lifestyle with UBI? by raylolSW
This thought process only works if you believe good and bad are completely subjective, which they aren’t.
There are two decently objective ways to define bad people:
- People who are a threat to the well-being of others around them (the other people being innocent, of course).
- People who are bad for the well-being of society as a whole.
For example, there’s no intelligent argument disputing the idea that a serial killer targeting random people is a bad person. It literally cannot be denied by anyone of sound mind. Therefore we can conclude that some people are objectively good and some objectively bad.
throwaway12131214121 t1_je8dg8m wrote
Reply to comment by Orc_ in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
What would have been impossible?
Lots of empires were outwardly expansive. None of them took over the world like capitalism did, because their expansionist goals were motivated simply by power fantasies of their leaders.
Capitalist motivations are different because they apply to everybody in society, and whoever carries them out best is automatically put into a position of power. So society is quickly controlled by capitalist forces and becomes hyperfocused on capitalist interests, and that doesn’t stop even when the people doing it die; they are just replaced with new capitalists.
That’s why so many elites in modern society are utter sociopaths: empathy gets in the way of profit a lot of the time, so if you have empathy, there are people who will outcompete you and you won’t get to the top.
Nervous-Patience-310 t1_je8d967 wrote
Reply to Facing the inevitable singularity by IonceExisted
Hope it can bring us some decent TV. I'm tired of rewatching Breaking Bad.
Aevbobob t1_je8d6cm wrote
How about a direct democracy where your personal AI represents you? Laws directly and immediately reflect the will of the people and are engineered by superintelligent minds with continuous access to the true will of the people on every topic.
Orc_ t1_je8czkd wrote
Reply to comment by throwaway12131214121 in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
> Yeah but every other system would also not have colonized the entire planet through countless continuous genocides and centuries of exploitation.
That would have been impossible. Furthermore, egalitarian systems are not inherently peaceful outwardly, or even ecological; Teotihuacan is an example.
UrusaiNa t1_je8czcl wrote
Reply to comment by [deleted] in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
ChatGPT recommended a rawhide bone (which has a risk of splintering) instead of the safer peanut-butter-based smart bones.
Don’t fuck with his dog.
hungariannastyboy t1_je8ciqk wrote
Reply to comment by Dwanyelle in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
>we still argue over whether some humans are fully human
uum what?
jlowe212 t1_je8cf2q wrote
Reply to comment by Cryptizard in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Humans are capable of being convinced of many things that are obviously false. Even otherwise smart humans fall into cognitive traps, and sometimes can be even more dangerous when those humans are confident in their own intelligence.
Aevbobob t1_je8c5kj wrote
Reply to The Limits of ASI: Can We Achieve Fusion, FDVR, and Consciousness Uploading? by submarine-observer
The things you list in your title aren’t forbidden by the laws of physics, they’re just engineering problems. If you’re going to speculate about the capabilities of a mind 100x smarter than you, it’d be worth considering what a dog thinks about chip manufacturing. Now, while you’re considering this perspective, consider how much smarter you are than your dog. It’s not 100x. It’s not even 10x.
A better discussion topic might be to point out that our DNA grew up in a linear world. Our primal intuition about change is that things will probably not change that much. In an exponential world, follow the evidence, not just your intuition. If you think an exponential will stop, make sure your reasoning, at its core, is not “it feels like it has to stop” or “it seems too good to be true”.
bennie_gee t1_je8c21f wrote
Interested! Recently read the first two. Would love to join!
Puzzleheaded_Pop_743 t1_je8bu0s wrote
Reply to comment by BigZaddyZ3 in Do people really expect to have decent lifestyle with UBI? by raylolSW
They can, but that is just a subjective perspective, because "good" and "bad" are just egoic projections. From a third-person point of view, if you view human behavior as part of a system, you can see that people behave immorally due to fear and ignorance.
Dubsland12 t1_je8btnd wrote
Mental health counselors, unemployment agency workers, security for the wealthy and military
Wyrdthane t1_je8bo1e wrote
No other government is going to "pause", so it won't accomplish anything.
Not-Banksy OP t1_je8ez6o wrote
Reply to comment by ActuatorMaterial2846 in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
Thanks for the explanation, much appreciated!