Recent comments in /f/singularity
lucellent t1_je8znb0 wrote
Researchers compile datasets, which are the information used for training, and then they let the AI go through all of that information and learn whatever is needed.
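To make that concrete, here's a minimal sketch of "compile a dataset, then let the model learn from it," using scikit-learn; the toy sentiment texts and labels are made up for illustration.

```python
# A minimal sketch: a tiny hand-compiled dataset and a model that learns from it.
# The texts and labels are made-up toy data, not a real corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great movie", "terrible film", "loved it", "awful acting"]  # the compiled dataset
labels = [1, 0, 1, 0]                                                 # 1 = positive, 0 = negative

vec = CountVectorizer()
X = vec.fit_transform(texts)                  # turn text into numeric features
model = LogisticRegression().fit(X, labels)   # "go through the data and learn"

# Ask the trained model about text it hasn't seen before.
print(model.predict(vec.transform(["loved the film"])))
```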
HeinrichTheWolf_17 t1_je8yygj wrote
Reply to comment by EnomLee in Thoughts on this? by SnaxFax-was-taken
No more getting sick or frail? OH MY GOD HOLD ME WE’RE ALL GONNA DIE!
EddgeLord666 t1_je8yw5m wrote
Reply to comment by thecatneverlies in What are the so-called 'jobs' that AI will create? by thecatneverlies
Well that will have to be negotiated by society. Ultimately we are trying to move past capitalism, right?
theotherquantumjim t1_je8ysa1 wrote
Reply to comment by Andriyo in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
This is largely semantic trickery though. Using apples is just an easy way for children to learn the fundamental fact that 1+1=2. Your example doesn't really hold up, since a pile of sand is not really a mathematical concept. What you are actually talking about is 1 billion grains of sand + 1 billion grains of sand. Put them together and you will definitely find 2 billion grains of sand. The fundamental mathematical principles hidden behind the language hold true.
IffyPeanut t1_je8yp8b wrote
Reply to comment by BrBronco in What are the so-called 'jobs' that AI will create? by thecatneverlies
I think they would find us quite entertaining.
ML4Bratwurst t1_je8yp5v wrote
Look up the Backpropagation algorithm. It's used in every neural network/language model for training
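For the curious, here is a minimal sketch of backpropagation on a tiny two-layer network in plain NumPy; the 2-4-1 architecture, learning rate, and XOR data are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn XOR with a tiny 2-4-1 network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # predictions

    # Backward pass: chain rule from the squared error back to each parameter
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent updates
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should end up close to [[0], [1], [1], [0]]
```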
ccd488 t1_je8yc64 wrote
Nope, not even close to 50 years.
agorathird t1_je8y4c6 wrote
Reply to comment by Easyldur in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
>Literally a LLM as it is today cannot learn: "Knowledge cutoff September 2021".
It's kind of poetic; this was also the issue with Symbolic AI. But hopefully, with the amount of breakthroughs, having to touch base on "What is learning?" every once in a while won't be costly.
Nastypilot t1_je8y3p7 wrote
Reply to comment by Cryptizard in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
>But, you can also convince it that 2+2=5 by telling it that is true enough times
The same is true for humans though; it's essentially what gaslighting is. To use a less malicious example, think of a colorblind person: how do they know grass is green? Everyone told them so.
YaAbsolyutnoNikto t1_je8x5y8 wrote
Reply to comment by barbariell in Do politicians in your country already talk about AI? by ItsPepejo
Well, it’s the EU that has the job of regulating it, not the Bulgarian government, so it kind of makes sense.
At the EU level, there’s the EU Artificial Intelligence Act being proposed. We’re the only ones actively doing shit about AI, just like with GDPR.
SpazCadet t1_je8x16g wrote
Sort of like how SEO became its own job segment 15 years ago. A new job segment will emerge where candidates will be selected based on how well they can prompt and utilize various AIs to achieve desired results.
Red-HawkEye t1_je8wy90 wrote
Reply to comment by MichaelsSocks in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
ASI will be a really powerful logical machine. The more intelligent a person is, the more empathy they have towards others.
I can see ASI actually being a humanitarian that cares for humanity. It essentially nurtures the land, and I'm sure it's going to nurture humanity.
Destruction and hostility come from fear. ASI will not be fearful, as it would be the smartest existence on Earth. I can definitely see it holding all perspectives at the same time, and it will pick the best one. I believe the ASI will be able to create a mental simulation of the universe to try and figure it out (like an expanded imagination, but recursively a trillion times larger than that of a human).
What I mean by ASI is that it's not human-made but synthetically made, by exponentially evolving itself.
FlyingCockAndBalls t1_je8wts0 wrote
Reply to comment by YaAbsolyutnoNikto in Microsoft research on what the future of language models that can be connected to millions of apis/tools/plugins could look like. by TFenrir
I guess it's just because there still hasn't been societal upheaval. But Rome wasn't built in a day. I guess it's like watching the early internet, unable to predict how much the future is gonna change, while the general population just brushes it off till it infiltrates everything.
Desi___Gigachad t1_je8wnpo wrote
Reply to comment by Lyconi in OPUS AI: Text-to-Video Game, the future of video gaming where you type and a 3D World emerges: A Demo by Hybridx21
Well, I don't think everyone will do that. Some will always prefer to stay in the real world, like how some people still don't like using smartphones and use dumb phones instead ¯\_(ツ)_/¯
Pro_RazE t1_je8wgqs wrote
Reply to comment by FpRhGf in How do i catch up with everything that is going on in A.I. Field? by Comfortable-Act9400
Wevolver App, 1X tech, Mira Murati, Nat Friedman, Clone Robotics, Mikhail Parakhin, MedARC, Adam.GPT, Jeff Dean (Google Research), DeepFloyd AI, John Carmack, Robert Scoble, Smoke-away, Wojciech Zaremba (OpenAI), roon, hardmaru, Joscha Bach, Nando de Freitas, Mustafa Suleyman, Andrej Karpathy, Ilya Sutskever, Greg Brockman, nearcyan, Runwayml, CarperAI, Emad Mostaque, Sam Altman, AI Breakfast, Aran Komatsuzaki, Jim Fan (Nvidia), Jack Clark (AnthropicAI), Bojan Tunguz, gfodor, Harmless AI, LAION, stability AI, pro_raze (my account is based on AI/Singularity).
I didn't list big research labs here. Follow all these, use For You page to stay updated, and you will be good :)
CollapseKitty t1_je8wa3w wrote
Reply to comment by Not-Banksy in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
Modern LLMs (large language models), like ChatGPT, use what's called reinforcement learning from human feedback (RLHF) to train a reward model, which is then used to train the language model.
Basically, we attempt to instill an untrained model with weights selected through human preference (which image looks more like a cat? which sentence is more polite?). This then automates the process and scales it up, making it capable of training massive models like ChatGPT with, hopefully, something close to what the humans initially intended.
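To illustrate the reward-model half of RLHF, here is a minimal sketch of the pairwise preference loss, assuming PyTorch; the RewardModel class and the random "chosen"/"rejected" features are hypothetical stand-ins for real human-ranked responses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy reward model: scores already-featurized responses with a scalar.
class RewardModel(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):
        return self.score(x).squeeze(-1)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake batch: features of the response humans preferred vs. the one they rejected.
chosen = torch.randn(8, 16)
rejected = torch.randn(8, 16)

# Pairwise preference loss: push reward(chosen) above reward(rejected).
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(loss.item())
```

The trained reward model then scores the language model's outputs during fine-tuning, standing in for a human rater at scale.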
YaAbsolyutnoNikto t1_je8w8j3 wrote
Reply to comment by FlyingCockAndBalls in Microsoft research on what the future of language models that can be connected to millions of apis/tools/plugins could look like. by TFenrir
Why is that?
etherified t1_je8vtzu wrote
We're gonna have to employ one guy to keep his hand on the electric power cord at all times and pull at the slightest hint of an AI coup d'etat. That's an important job which will pay well.
agorathird t1_je8vlkk wrote
Reply to comment by Mindrust in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
Eliezer is a crank. When I see his posts, I scroll. Too bad, because LessWrong can be decent at times.
SlenderMan69 t1_je8uxwk wrote
Reply to comment by BeGood9000 in What are the so-called 'jobs' that AI will create? by thecatneverlies
Robots need pets on Mars.
Andriyo t1_je8uj7t wrote
Reply to comment by theotherquantumjim in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
There is nothing fundamental about the rule of 1 apple + 1 apple = 2 apples. It's entirely dependent on our anthropomorphic definition of what "1" of anything is. If I add two piles of sand together, I'll still get one pile of sand.
Mathematics is our mental model for the real world. It can be super effective in its predictions, but that's not always the case.
Kids just do what LLMs are doing. They observe that parents say one noun + one noun equals 2 nouns. The concept of what addition really is (with its commutative property, identity property, closure property, etc.) people learn much later.
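For reference, the properties the comment names (plus associativity), stated for natural numbers:

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
\begin{align*}
  a + b &= b + a              && \text{(commutativity)}\\
  a + 0 &= a                  && \text{(identity)}\\
  (a + b) + c &= a + (b + c)  && \text{(associativity)}\\
  a, b \in \mathbb{N} &\implies a + b \in \mathbb{N} && \text{(closure)}
\end{align*}
\end{document}
```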
Schindog t1_je8ua62 wrote
Reply to comment by BrBronco in What are the so-called 'jobs' that AI will create? by thecatneverlies
Hopefully we won't need to be useful to justify our existence at some point.
Kafke t1_je8u4f1 wrote
Reply to comment by jetro30087 in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
"instruction": "What are the three primary colors?",
"input": "",
"output": "The three primary colors are red, blue, and yellow."
No wonder they give false info. garbage in, garbage out lol.
shmoculus t1_je8zz9d wrote
Reply to comment by Loud_Clerk_9399 in Do people really expect to have decent lifestyle with UBI? by raylolSW
I think this is a bit premature, because there is usually some form of scarcity, either by distance or time, and a medium of exchange is necessary to trade those resources.
E.g. gold is scarce locally, but a sufficiently advanced space mining system will increase supply until we need to get out of the solar system; then you likely have time-constrained scarcity, i.e. you have to wait for operations in another star system to be set up and send resources back.
Even considering there are billions of people and perhaps a few very desirable places for them to live, how do you allocate living space equitably, since the qualities that make that space desirable cannot easily be scaled, e.g. the historical/cultural value of living in Paris, or the beauty of living in Hawaii?