Recent comments in /f/singularity
throwaway_890i t1_jegeto6 wrote
Reply to ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
That article is dated 22nd Feb 2023. Practically ancient news.
Here's one from 15th March 2023: https://www.theguardian.com/technology/2023/mar/15/uk-to-invest-900m-in-supercomputer-in-bid-to-build-own-britgpt
No_Ninja3309_NoNoYes t1_jegeqgy wrote
Reply to 🚨 Why we need AI 🚨 by StarCaptain90
There's one thing you learn pretty quickly about programming: programs almost never do what you want on the first try. So we can expect AI to fail in ways we can't predict, too. If it's a simple program with nothing at stake, it's no big deal. But if you expose young children, or adults with issues that are known or unknown at the moment, it could lead to bad outcomes. Furthermore, organised crime and terrorist groups are threats we shouldn't underestimate.
If history has taught us anything, it's that almost anything can be turned into a weapon, and every weapon will be used sooner or later. Personally, I need AI, but not at any cost. For example, if third-world countries suffer because they can't compete, I think we have to fix that issue first.
scarlettforever t1_jegejrw wrote
Reply to This concept needs a name if it doesn't have one! AGI either leads to utopia or kills us all. by flexaplext
It is called "Dear God ASI, please let us go to Heaven, we don't want to go to Hell."
NiranS t1_jegeim1 wrote
Reply to ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
BrexChat
[deleted] t1_jegeh23 wrote
Sandbar101 t1_jegeeip wrote
Christmas of 25
raishak t1_jegebas wrote
Language is an encoding scheme for our intelligence. It is enough to model our intelligence, I'm sure. But I don't think it is enough to build an intelligent agent. The agency that humans have is, I think, old and not rooted in our intelligence; rather, it uses our intelligence. It's a carefully tuned array of interconnected processes in equilibrium that respond to disturbances in our environment, all encoded by our genetics. I suspect that part will be much harder to get right, as the nuance involved in building an agent like a biological social animal, for example, is no doubt tremendous. Evolution has had a long time to work out the issues through trial and error. This is the part that unfortunately has the most potential to go terribly wrong.
PickleLassy t1_jege8s6 wrote
If the US wins, it makes sense for the US to instantly neutralize all other threats, because if others can still create an ASI, it would still be dangerous.
throwaway12131214121 t1_jege01d wrote
Reply to comment by zekex944resurrection in ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
Governments are competent when competency is necessary to serve their interests. Updating government websites isn’t usually their top priority or even a priority.
Absolute-Nobody0079 t1_jegdyuh wrote
Reply to comment by aalluubbaa in What if language IS the only model needed for intelligence? by wowimsupergay
TBH I really don't pay much attention to my own thinking process, besides the fact that I'm really good at visual thinking.
1loosegoos t1_jegdsy8 wrote
Reply to Should AIs have rights? by yagami_raito23
No, because even if they became conscious, that doesn't necessarily mean they would have emotions and/or experience pain.
spriggankin t1_jegdrzn wrote
Reply to comment by visarga in HuggingGPT - Solving AI Tasks with ChatGPT and its Friends in HuggingFace by visarga
Seeing as ChatGPT is now banned where I'm from as of a few hours ago, I need to find an alternative :/
Agreeable_Bid7037 t1_jegdqbj wrote
Language is a way to express ideas. Ideas come from what we experience and think about, and in order to experience things, and to know that you, as opposed to someone else or everything else, are experiencing them, you need senses: visual, auditory, tactile, etc.
LLMs may be able to replicate our thinking patterns by analysing our texts, but for them to be truly intelligent they need to be able to see as well, and understand what we are referring to when we say "dog" or "animal".
Unfrozen__Caveman t1_jegdmm2 wrote
Reply to comment by scarlettforever in Today I became a construction worker by YunLihai
I'm not trying to make fun of OP, just saying this isn't the kind of stuff I would've expected to read 5 years ago if we were talking about the singularity.
I'm not sure if it's sad or a good thing.
AdditionalPizza OP t1_jegdkmk wrote
Reply to comment by greatdrams23 in Over-under that Google already has AGI? Try and rationalize the release of Bard. by AdditionalPizza
That speaks to why it's so unusual for Google to release Bard in the state it's in.
MolybdenumIsMoney t1_jegd87f wrote
Reply to comment by just_thisGuy in The only race that matters by Sure_Cicada_4459
As long as powerful AIs require huge data centers with hundreds of millions of dollars worth of GPUs, this is impossible. Nuclear war would destroy the global chip supply chain, so only salvaged processors could be used.
plasticbubblegum t1_jegd6d1 wrote
Reply to comment by zestoki_gubitnik in Today I became a construction worker by YunLihai
Well, firstly, not every robot needs to carry a lot of weight or run for prolonged periods without recharging. Not every robot will be built the same, and the ones that are built that way will probably be designed well, so they're not disposable like cheap phones. Robots that replace manual labor should do what we do and use machinery/tools instead. They just need to be very efficient and flexible at moving in the physical world, not hyper-strong. But yes, there is a big problem: working with your hands will still be cheaper for back-breaking labor, for a while. I was just saying we don't need batteries that last for days without recharging, since we can build the world around recharging. There are already automatic vacuum cleaners that recharge themselves. Also, recycling is going to be more important in the future.
AdditionalPizza OP t1_jegd5x9 wrote
Reply to comment by internet_czol in Over-under that Google already has AGI? Try and rationalize the release of Bard. by AdditionalPizza
Hmm. I would have to assume they probably don't need that data at this point, considering the hoarding they've already done. But it could just be a basic preliminary test to see how the public interacts with bots like this, because people don't always post publicly the darkest stuff they might ask an AI.
I still question why they would risk so much with such an inferior model though.
SkyeandJett t1_jegd2en wrote
Reply to comment by MrEloi in When do you guys think chatgpt 5 is gonna come out ? by Klaud-Boi
You hit the nail on the head. Individual users and groups are cobbling together what could in fact be considered AGI as we speak. Anyone whose AGI prediction is later than 2024 might want to adjust it; any sort of delay is ill-advised. Most of these models still use GPT-4 at their core, but I suspect once they're refined you could get away with something like Dolly for all but the most demanding problems, and that's assuming someone doesn't bootstrap a self-improvement loop that actually takes off.
As an example:
visarga OP t1_jegd2cs wrote
Reply to comment by spriggankin in HuggingGPT - Solving AI Tasks with ChatGPT and its Friends in HuggingFace by visarga
HuggingFace is the GitHub of AI. It hosts 166,392 AI models and 26,787 datasets. It has implementations of all the models in its own framework and is usually the starting codebase for research papers. You can also interact with many models right on the website, in the "Spaces" section.
You can also see it as an App Store for AI: you can shop for models and then include them in your project with five lines of code.
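As a rough illustration of the "five lines of code" claim, here is a minimal sketch using the `transformers` library's `pipeline` API; the task and model name are just examples picked for the sketch, not anything specific to the comment above:

```python
from transformers import pipeline

# Download a model from the HuggingFace Hub and wrap it in a ready-to-use
# pipeline. The model name here is one public example checkpoint.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Run inference: returns a list of dicts with "label" and "score" keys.
print(classifier("HuggingFace makes reusing models easy."))
```

Swapping in a different model from the Hub is usually just a matter of changing the `model` string, which is what makes the "App Store" comparison apt.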
greatdrams23 t1_jegcwjn wrote
Reply to comment by SharpCartographer831 in Over-under that Google already has AGI? Try and rationalize the release of Bard. by AdditionalPizza
No. Companies know that winner takes all. Google, Amazon, Facebook, Twitter. If you let your rivals catch up, you lose. Forever.
ShowerGrapes t1_jegcv9f wrote
Reply to comment by aalluubbaa in What if language IS the only model needed for intelligence? by wowimsupergay
written language came about long after cities and industry and all that. it was the development of agriculture that freed up time to create more interesting things (initially with clay), and then the wheel (to carry more stuff around), that led to the need to keep track of the massive hoards that local warlords (i.e. 'heroes') were collecting from the subjugated masses. our first written documents were lists of things kept in treasuries and trade lists between cultures.
so you have it backward, but the point might still be valid.
Nanaki_TV t1_jegcpii wrote
Reply to I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
Profit is not at the expense of someone else; that's such a naive understanding of economics. Exchanges only happen when both parties profit. Otherwise, why would you make the exchange if you didn't value the good or service over what you're giving up?
You may need to review your definitions. And I expect downvotes given this sub's anti-capitalist stance. Shame.
delphisucks t1_jegcige wrote
Reply to comment by tiselo3655necktaicom in Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
salty much
Freedom_Alive t1_jegezou wrote
Reply to ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
One that gives the "Truth" about WMDs