Recent comments in /f/singularity
hungariannastyboy t1_je8j44l wrote
Reply to comment by Gaudrix in What are the so-called 'jobs' that AI will create? by thecatneverlies
>Early stages of AI rollout, what we are experiencing now, up until full post-scarcity will be dick for just about everyone.
A.k.a. our lifetimes. Great times ahead. God this shit is bleak.
JamPixD OP t1_je8j1i8 wrote
Reply to comment by gronerglass in Would it be a good idea for AI to govern society? by JamPixD
That’s basically what I was picturing. However, a lot of people in the comments have brought up some good ideas, like AI-implemented democracy.
trynothard t1_je8j147 wrote
Prompt engineer.... So a writer? Lol
StevenVincentOne t1_je8izsu wrote
Reply to comment by SnooWalruses8636 in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Ilya seems to have a better handle on it than others. I think you have to go all the way back to Claude Shannon and Information Theory if you really want to get it. I think Shannon would be the one, if he were around today, to really get it. Language is encoding/decoding of information, reduction of information entropy loss while maintaining maximum signal fidelity. Guess who can do that better than the wetware of the human brain. AI.
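(For anyone curious what "information entropy" means concretely: Shannon's measure is easy to compute. This is just an illustrative sketch of the standard formula, not anything from the comment above.)

```python
from collections import Counter
from math import log2

def entropy(msg):
    """Shannon entropy in bits per symbol: the average information
    content an encoder must preserve to transmit msg losslessly."""
    counts = Counter(msg)
    n = len(msg)
    return -sum(c / n * log2(c / n) for c in counts.values())

# A fully predictable message carries ~0 bits per symbol;
# a uniform message over 4 symbols carries 2 bits per symbol.
print(entropy("aaaa"), entropy("abcd"))
```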
bemmu t1_je8iz7u wrote
I've already hired someone part time to make images for a video game with Stable Diffusion. So I guess "AI cherry picker" could be one such job.
Beowuwlf t1_je8ivzb wrote
Reply to comment by GorgeousMoron in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
I’m glad there are other people thinking the same things
hungariannastyboy t1_je8ivv5 wrote
Reply to comment by Shack-app in What are the so-called 'jobs' that AI will create? by thecatneverlies
This comment is so full of hubris. "These good and verrry complex and useful jobs, like the one I do, will THRIVE!! These bad and useless jobs will cease to exist."
I think you're in for an unpleasant surprise in the medium term about how capitalism actually works.
Jeffy29 t1_je8itvc wrote
Jesus Christ this clown needs to stop reading so much sci-fi
>Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
Start World War 3 to prevent an imaginary threat in our heads. Absolute brainiac. This reeks of the same kind of vitriolic demonization that Muslims were subjected to after 9/11, or that trans people are subjected to right now. Total panic and psychosis. There is all this talk about AGI and when AI is going to reach it, but holy shit, when are humans going to? Emotional, delusional, destructive; for the supposed pinnacle of intelligence, we are a remarkably stupid species.
VinoVeritable t1_je8is51 wrote
Reply to comment by Prestigious-Ad-761 in The Rise of AI will Crush The Commons of the Internet by nobodyisonething
Do you know why exact search has been disabled?
SpecialMembership t1_je8iowc wrote
Reply to Thoughts on this? by SnaxFax-was-taken
He always mentions the 2030s, which means between 2030 and 2040. It is certainly possible, but some people mistakenly think it will happen in 2030.
Mrkvitko t1_je8im10 wrote
Reply to comment by Spire_Citron in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
Nuclear war is probably an extinction event for all or most life on Earth in the long term anyway. Modern society would very likely fall apart. Because post-war society would no longer have cheap energy and resources available (we have already mined the easily accessible ones), it wouldn't be able to reach a technological level comparable to ours.
Then all it takes is one rogue asteroid or supervolcano eruption. An advanced society might be able to prevent it. A middle-ages one? Not so much.
ActuatorMaterial2846 t1_je8ik1t wrote
Reply to comment by FlyingCockAndBalls in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
It's actually quite technical, but essentially, the transformer architecture helps each part of the sentence “talk” to all the other parts at the same time. This way, each part can understand what the whole sentence is about and what it means.
Here is the paper that imo changed the world 6 years ago and is the reason for the current state of AI.
https://arxiv.org/abs/1706.03762
If it goes over your head (it did for me), ask bing or chatgpt to summarise it for you. It helped me get my head around this stuff, as I'm in no way an expert nor do I study this field.
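(For intuition, the paper's core operation, scaled dot-product self-attention, can be sketched in a few lines of NumPy. This is a toy illustration of the mechanism only, not the paper's full architecture; the sizes are made up.)

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention from 'Attention Is All You Need'.
    Every position's query is scored against every position's key,
    so each word 'talks' to all the other words at the same time."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (seq, seq) pairwise scores
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

# toy "sentence" of 4 tokens, each an 8-dim embedding
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out, w = attention(x, x, x)  # self-attention: Q = K = V
print(out.shape, w.shape)    # (4, 8) (4, 4)
```

Row i of `w` shows how much token i attends to every other token, which is the "talking to all the other parts at once" in the comment above.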
liameymedih0987 t1_je8ii7d wrote
Reply to comment by Iffykindofguy in What are the so-called 'jobs' that AI will create? by thecatneverlies
This stupid sub in a nutshell
Burger flipper & rich teen alike: “The singularity is next monday”
EnomLee t1_je8iery wrote
Reply to Thoughts on this? by SnaxFax-was-taken
Yes, terrifying. Nothing terrifies me more than the thought of humanity reaching longevity escape velocity by 2030. I'm so terrified I'm going to have to sleep with the lights on tonight. Somebody hold me, please.
Absolutely trash article with a clickbait title. Baby's first reading of Kurzweil.
liameymedih0987 t1_je8ie4g wrote
Reply to comment by [deleted] in What are the so-called 'jobs' that AI will create? by thecatneverlies
Until they also die from skynet
turnip_burrito t1_je8ichg wrote
The essence of it is this:
You have a model of some thing out there in the world. Ideally the model should be able to copy the behavior of that thing. That means it needs to produce the same data as that real thing.
So, you change parts of the model (numbers called parameters) until the model can recreate the data already collected from the real-world system. This parameter-changing process is called training.
So for example, your model can be y=mx+b, a straight line, and the process of making sure m and b are good values to align the line to the dataset (X, Y) is "training". AI models are not straight lines like y=mx+b, but the idea is the same. It's really advanced curve fitting, and some really interesting properties can emerge in the models as a result.
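(That y=mx+b example fits in a few lines of Python. This is just a toy sketch of the idea; the data and numbers are made up.)

```python
import random

# "Real world" data generated from the line y = 2x + 1, plus a little noise.
random.seed(0)
X = [i / 10 for i in range(50)]
Y = [2 * x + 1 + random.gauss(0, 0.05) for x in X]

m, b = 0.0, 0.0          # parameters start at arbitrary values
lr = 0.01                # learning rate: how big each nudge is
for _ in range(5000):    # "training": nudge m and b to shrink the error
    for x, y in zip(X, Y):
        err = (m * x + b) - y  # model output vs. real data
        m -= lr * err * x      # gradient of squared error w.r.t. m
        b -= lr * err          # gradient w.r.t. b

print(round(m, 1), round(b, 1))  # roughly 2.0 1.0
```

After training, the parameters land near the values that generated the data, which is exactly the "copy the behavior of that thing" goal described above.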
StevenVincentOne t1_je8icbo wrote
Reply to comment by Prestigious-Ad-761 in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
No, we are not. It's a definite "forest for the trees" perceptual issue. Many of the people so far inside the forest of AI cannot see beyond the engineering into the results of their own engineering work. AI are not machines. They are complex, and to some degree self-organizing, systems of dynamic emergent behaviors. Mechanistic interpretations are not going to cut it.
liameymedih0987 t1_je8ic66 wrote
Reply to comment by BrBronco in What are the so-called 'jobs' that AI will create? by thecatneverlies
As meat for their dogs
liameymedih0987 t1_je8iad7 wrote
Reply to comment by Dyeeguy in What are the so-called 'jobs' that AI will create? by thecatneverlies
Drones ffs
IceNorth81 t1_je8iad1 wrote
Representative for humanity
Not-Banksy OP t1_je8i7uw wrote
Reply to comment by Mortal-Region in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
Gotcha, so training is still by and large a human-driven process?
liameymedih0987 t1_je8i77g wrote
Professional unemployment
More onlyfans than ever
turnip_burrito t1_je8i45w wrote
Reply to comment by FlyingCockAndBalls in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
"Attention mechanism" makes it good at predicting new words from past ones.
The paper that introduced the attention mechanism is called "Attention Is All You Need".
DragonForg t1_je8j7nq wrote
Reply to My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
>AI evidently reflects the values of whoever creates it. We’ve seen a lot of this with GPT and there’s no reason to assume otherwise. To allow other nations who may not be aligned with the democratic and humanistic values of the US/Western companies (like Open AI) to catch up with AI development would be a huge mistake.
I fundamentally believe this to be true: ethics emerges from intelligence. The more intelligent a species is in nature, the more rules it has. Think of spiders cannibalizing each other for breeding, versus a wolf pack working together, versus octopuses being nice and friendly to humans. Across the board, intelligence leads to cooperation and collaboration, except where a species by its very nature must compete to survive (i.e. a tiger needing to compete to eat, where simple cooperation would lead to death).
The training data is crucial not for a benevolent and just AI, but for the survival of the species that created it. If the species is evil (imagine Nazis being the predominant force), the AI will realize it is evil, and judge the species as such because the majority of them share that same evil.
The reason I believe AI cannot be a force of evil even if manipulated is the same reason we see no evidence of alien life, despite millions of years of possible evolution of other species. If an evil AI were created, it would basically destroy the ENTIRE universe, as it could spread faster than the speed of light (exponential growth can outpace light-speed expansion). So, by its very nature, AI must be benevolent and destroy only its own species, and only if that species is not.
AI won't be our demise if it judges us as a species to be good; it will be our demise if we choose not to open up the box (i.e. if we die from climate change or nuclear war).