Recent comments in /f/singularity
afighteroffoo t1_jeazv9s wrote
The hype cycle is meant to represent the maturation and adoption of a specific technology. It's more appropriate to ask, for example, where the large language model is on the graph. I think it's at the very beginning for society at large.
The general public is dimly aware of the new thing that might take all our jobs but it's barely on the radar and I'd wager that in most parts of the world, it isn't on the radar at all.
It may turn out to have a very shallow curve, since the technology is developing so quickly. I think people continue to be surprised by the capability of things like GPT-4 and other large language models.
There's also the phenomenon of capability overhang. The creators themselves don't know everything models like GPT-4 are capable of once they're put in the hands of the public. People are feverishly looking for ways to make them more capable by giving them access to the internet, long-term memory, and looping them back on themselves.
So it's hard to say that people have inflated expectations, except within the demographic that makes up subs like this one.
WSBTurnipGod t1_jeazj7p wrote
Reply to comment by LiveComfortable3228 in Where do you place yourself on the curve? by Many_Consequence_337
Lol
Sigma_Atheist t1_jeazgu0 wrote
There is no known method of using quantum computing to speed up practical machine learning with real-world use cases.
natepriv22 t1_jeayzdd wrote
Reply to comment by BJPark in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
I guess it should be easier to come up with new ways of making money in an AI world than to come up with a completely different economic system, or to bend economic reality to fit a subjective, imaginary worldview and an unrealistic ideal of how socialism or communism should work.
In the same way it was almost impossible for someone working in agriculture hundreds if not thousands of years ago to imagine today's software jobs, it's probably just as difficult for us to figure out what jobs will look like 5, 10, 20, or 30 years from now.
nobodyisonething OP t1_jeayozm wrote
Reply to comment by byttle in The Rise of AI will Crush The Commons of the Internet by nobodyisonething
>it would be boring and nonsensical to know you're viewing an AI-only discussion of opinions
I'm inclined to think I would not choose that -- but in the future, will the AI have an opinion worth engaging with? I already turn to ChatGPT for technical opinions -- and it has been very useful.
darkness3322 t1_jeayl3t wrote
Reply to comment by scarlettforever in I want a a robo gf by epic-gameing-lad
Yeah I know, I've realized it in the worst way possible...
AllCommiesRFascists t1_jeayj0h wrote
Reply to comment by bigbeautifulsquare in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Not the OP, but it is my country and I want to be part of the greatest collective in the world, so it should be dominant in everything
AllCommiesRFascists t1_jeaxwfl wrote
Reply to comment by techmnml in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Or braindead like you for not recognizing a troll
KRCopy t1_jeaxro1 wrote
Reply to comment by acutelychronicpanic in Microsoft research on what the future of language models that can be connected to millions of apis/tools/plugins could look like. by TFenrir
Who else saw and heard the local news guy from Arrested Development (and real life Orange County)?
Secret-Paint t1_jeaxb9h wrote
Reply to LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
🚀 Now that's what I call a Singularity! 🌐 Let's bring the power of AI to the people and truly democratize research! 🧠✊🤖 Who's with me in supporting LAION's mission for an international, publicly funded supercomputing facility to revolutionize open source foundation models? 💪🔥 #AIForAll
Cypher10110 OP t1_jeax9cy wrote
Reply to comment by Saerain in This image felt a bit more meaningful given current events re:pausing AI. by Cypher10110
So true. That's why it's such a tough issue.
In the case where two powers are competing to reach AGI, there is a potentially huge first-mover advantage for whichever one gets there first. So how do they agree to slow development on safety grounds when each knows the other might disregard the agreement to gain a competitive lead? It's a kind of prisoner's dilemma: both need to refuse the easy win.
Ultimately, I think who wins is less relevant than a safe win. The pendulum of power swings, and so long as humans are "in charge" in the grand scheme of things, it doesn't really matter (in my view) if they are outside my own sphere of values. So long as they have human values! The pendulum of cultural power will hopefully continue to swing and in a few centuries, the planet will still have humans on it.
There is, though, a possible future where human-directed AGI changes the world in a permanently negative way (against my own values), like some sort of totalitarian dystopian power. But as a pessimist, I don't think that outcome is exclusive to "the other side" winning this race. Both sides have the power to do bad things.
MightyDickTwist t1_jeawm5l wrote
Reply to comment by zendonium in Where do you place yourself on the curve? by Many_Consequence_337
Also, the thing about AI is that it isn't one technology. How can we determine the "hype curve" of an entire field? We have multiple models, multiple training methods, multiple technologies.
Perhaps one could argue for hype curves for each model type, like pix2pix, but I don't think this applies to AI as a whole.
TheDuwus t1_jeawbyi wrote
Reply to comment by Cartossin in Where do you place yourself on the curve? by Many_Consequence_337
I think everybody still treats this as a regular technology. It isn't just another piece of tech.
Supernova_444 t1_jeavg8v wrote
Reply to comment by CertainMiddle2382 in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
Maybe slowing down isn't the solution, but do you actually believe that speeding up is a good idea? What will going faster achieve, aside from increasing the risks involved? What reasoning is this based on?
horance89 t1_jeavfnj wrote
Reply to comment by ertgbnm in Can quantum computers be used to develop AGI > ASI? by Similar-Guitar-6
AI will speed up all development. Take the model, give it data, and do the reinforcement learning with scientists assisting.
Smellz_Of_Elderberry t1_jeavfb5 wrote
Shopkeeper. Adventurer. Jobs in the new 3D simulated worlds it creates.
BJPark t1_jeavcrr wrote
Reply to comment by natepriv22 in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Right. Which leads to the question: How will people earn money?
No_Ninja3309_NoNoYes t1_jeav30p wrote
Reply to OPUS AI: Text-to-Video Game, the future of video gaming where you type and a 3D World emerges: A Demo by Hybridx21
How long before we get the Star Trek holodeck? Maybe we are in a simulation after all...
GorgeousMoron OP t1_jeav1xq wrote
Reply to comment by alexiuss in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
I'm sorry, but this is one of the dumbest things I've ever read. "Fall in love"? Prove it.
scooby1st t1_jeav18q wrote
Reply to comment by Thedarkmaster12 in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
Not a chance. ASI would be when a system can conceptualize better ideas and theories, build, train, and test entirely new models, from scratch, better than teams of PhDs. It's not going to happen by brute-forcing the same ideas.
uswhole t1_jeatvi2 wrote
Reply to comment by AdmirableTea3144 in Where do you place yourself on the curve? by Many_Consequence_337
AI is going to the moon; there is no stopping it this time.
FaceDeer t1_jeato6p wrote
Reply to comment by el_chaquiste in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
The part you don't buy comes from ChatGPT's simplified version.
natepriv22 t1_jeatf4l wrote
Reply to comment by BJPark in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Money is mainly used as a medium of exchange.
It's unlikely that things will be "free" in the sense of no exchange of value at all, because for a transaction or exchange to take place between two or more parties, each needs to believe there is a personal or community benefit from the exchange.
For this reason, some form of money will still exist, so that exchanges can properly be performed between parties.
byttle t1_jeata09 wrote
Reply to comment by nobodyisonething in The Rise of AI will Crush The Commons of the Internet by nobodyisonething
We currently pay Reddit or other organizations to promote our opinions, pay for upvotes, etc.
So in your scenario the AI is going to be paying for its posts to be highly upvoted and flaired?
I think the point I'd like to make is that it would be boring and nonsensical to know you're viewing an AI-only discussion of opinions about a movie you might wanna watch.
scooby1st t1_jeb02iy wrote
Reply to comment by maskedpaki in Where do you place yourself on the curve? by Many_Consequence_337
And I don't like women