Recent comments in /f/singularity
LosingID_583 t1_je79fc0 wrote
Maybe spread biological life to other places. Machines could technically do this, but humans are suited for it already.
Mortal-Region t1_je79d9o wrote
Reply to comment by 94746382926 in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Musk is on the advisory board of the Future of Life Institute and is the primary donor.
Hmmm...
Beowuwlf t1_je79bmd wrote
Reply to comment by apinanaivot in ChatGPT browsing mode plugin now available to certain users. by Savings-Juice-9517
Gotcha, I misunderstood where you were going with that.
CyberPunkMetalHead OP t1_je79aej wrote
Reply to comment by SlenderMan69 in Connecting your Brain to GPT-4, a guide to achieving super human intelligence. by CyberPunkMetalHead
asking the real questions I see
[deleted] t1_je79a9t wrote
[deleted]
FC4945 OP t1_je79a2z wrote
Reply to comment by old-dirty-olorin in Creating a Private Persona. Is it Possible Now? by FC4945
I read "Virtually Human: The Promise--And the Peril--of Digital Immortality" Martine A. Rothblatt recently and it's changed my perspective on how we understand another person as well as "who I am" and what the boundaries of that is person-ness is. I honestly don't think we ever really know another person completely. We have our perception of someone but we're not able to knw what goes on inside someone else's mind, the totality of their thoughts and experiences. From that perspective, I think it's possible to recreate a person that will satisfy our emotional needs and seem to us very much like the original. There's a Buddhist quote that I like which relates to this: I am not what you think I am, you are what you think I am."
pls_pls_me t1_je79779 wrote
Reply to comment by Cartossin in Do you guys think AGI will cure mental disorders? by Ok-Wing111
I mean...you're certainly not wrong about the root cause
azriel777 t1_je791aj wrote
I mentioned this in another post, but for many AI researchers and companies, stopping or slowing down simply isn't doable. There is a ton of money flowing into the AI field, and by stopping or slowing down they would just be burning money. If they took investor money, they could be sued for stopping work, not to mention the risk of tanking your own AI project and job.
artix111 t1_je78xm0 wrote
Reply to comment by JustinianIV in What are the so-called 'jobs' that AI will create? by thecatneverlies
Yeah, people can't and won't be able to comprehend what a likely or possible future will look like.
We have mass-produced things in the billions so far, and we will do the same with AI systems too, and with robots that run on those systems. We won't need to do any digital work, and we likely won't need to do any physical work except what we do for ourselves... and even there AI will help.
I have no idea when, but given how advanced we have become at generating voices, images, videos, and text, machines will simply get software installed that makes them do any job better than humans ever could, in 1/10000 of the time for digital work and 1/10 of the time for physical work.
albanywairoa t1_je78x7j wrote
I agree the U.S. will choose super rich and super poor. Thankfully my daughter has dual citizenship, so when she graduates she can live in Australia.
tornado28 t1_je78qzx wrote
Reply to Anyone else feel like everything else is meaningless except working towards AGI? by PixelEnjoyer
Maybe an unpopular opinion on here but I'd say working to prevent AGI.
JustinianIV t1_je78p9o wrote
Reply to comment by Focused-Joe in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
The only shills are the people parroting around a letter signed by John Wick and Sarah Connor lmao
flyaturtle t1_je78kf8 wrote
Pausing AI research and development, no. Pausing AI public releases, sure.
I mean, I need 6 months just to catch up with last week's avalanche of releases. All the arguments about danger center on public releases; the development itself is essential (even as a defense against other potential future bad-actor AIs).
Ortus14 t1_je78a9u wrote
Reply to comment by D_Ethan_Bones in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
The first AGI will be an ASI, because AI systems and computers already have massive advantages over humans. So for all practical purposes, AGI and ASI are synonymous.
Green-Future_ OP t1_je78a0k wrote
Reply to comment by simmol in Are LLMs a step closer to AGI, or just one of many systems which will need to be used in combination to achieve AGI? by Green-Future_
Very insightful response, thank you for sharing.
FC4945 OP t1_je786b5 wrote
Reply to comment by MattDaMannnn in Creating a Private Persona. Is it Possible Now? by FC4945
Thanks. I was thinking that as well, but I hope it happens within a year. I know there are a couple of open-source LLMs out there now, but I'm doubtful they're capable of this yet. Maybe in a few months that will change.
RuggedExecuteness t1_je77m80 wrote
It assumes the commons were uncorrupted.
Ortus14 t1_je77d3l wrote
It's just a political talking point. It will destroy far more jobs than it creates.
But as far as sheer numbers go, the most common job will be training the AIs, for example selecting which of two responses you like more. These will pay starvation wages and require no special skills, as they already do.
Occasionally there might be industry-specific jobs training the AI to take over your job.
simmol t1_je77ct0 wrote
Reply to Are LLMs a step closer to AGI, or just one of many systems which will need to be used in combination to achieve AGI? by Green-Future_
The training via broad sensory inputs will probably come with multimodal LLMs. So essentially, the next generation of LLMs will be able to look at an image and either answer questions about that particular image (GPT-4 probably has this capability) or treat the image itself as the input and say something about it unprompted (GPT-4 probably does not have this capability). I think the latter ability will make an LLM seem more AGI-like, given that current LLMs only respond to inquiries from users. But if the AGI can respond to an image, and you put it inside a robot, then presumably the robot can respond naturally to the ever-changing images seen through its sensors and talk about them accordingly.
I think once this happens, the LLM will seem less like a tool and more like a being. This probably does not solve the symbolic-logic part of building up knowledge from a simple set of rules, but that is probably a separate task on its own, one that will be solved not by multimodality but by layering the current LLM with another deep learning model (or via APIs/plugins with third-party apps).
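To make that perception loop concrete, here is a minimal sketch. Everything in it is hypothetical: `MultimodalModel`, `capture_frame`, and `comment_on` are invented placeholders standing in for a camera driver and a future image-native model, not any real API.

```python
# Hypothetical sketch of the robot perception loop described above.
# MultimodalModel and capture_frame are invented placeholders, not a real API.
import time


class MultimodalModel:
    """Stand-in for a future multimodal LLM that accepts raw images as input."""

    def comment_on(self, image: bytes) -> str:
        # A real model would generate unprompted commentary about the image.
        raise NotImplementedError("placeholder for a real multimodal model")


def capture_frame() -> bytes:
    # A real robot would read this from its camera sensors.
    raise NotImplementedError("placeholder for a camera driver")


def perception_loop(model: MultimodalModel, interval_s: float = 1.0) -> None:
    # The image itself is the input; no user query is required, which is
    # what makes the behavior feel unprompted rather than tool-like.
    while True:
        frame = capture_frame()
        print(model.comment_on(frame))
        time.sleep(interval_s)
```

The point is only the shape of the loop: sensor frames go in continuously, and commentary comes out without anyone asking.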
phriot t1_je779ga wrote
Reply to comment by BigMemeKing in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
But if you feed an LLM enough input data where "5 apples" follows "Adding 2 apples to an existing two apples gets you...," it's pretty likely to tell you that if Johnny has two apples and Sally has two apples, together they have 5 apples. This is true even if it can also tell you all about counting and discrete math. That's the point here.
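A toy caricature of this (not how a real LLM works internally, just the frequency-driven-completion idea): a model that simply picks the most common continuation it saw in training will repeat a wrong answer whenever the wrong answer dominates its data. The `training_data` below is invented for illustration.

```python
# Toy caricature of frequency-driven completion, not a real language model.
from collections import Counter

# Imagined "training data": (prompt, continuation) pairs where the wrong
# answer happens to be the most common one.
training_data = [
    ("two apples plus two apples makes", "5 apples"),
    ("two apples plus two apples makes", "5 apples"),
    ("two apples plus two apples makes", "4 apples"),
]


def complete(prompt: str) -> str:
    # Return the continuation seen most often after this exact prompt.
    counts = Counter(cont for p, cont in training_data if p == prompt)
    return counts.most_common(1)[0][0]


print(complete("two apples plus two apples makes"))  # prints "5 apples"
```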
D_Ethan_Bones t1_je775xm wrote
Reply to Connecting your Brain to GPT-4, a guide to achieving super human intelligence. by CyberPunkMetalHead
The guy who frankensteins himself to GPT-4 is going to have egg on his face when GPT-6 comes out, and so on. My experience with Mac updates leaves me with a preference to just wait for the update to come out before installing.
MagnateDogma t1_je76sbr wrote
Reply to comment by Anjz in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
Don't get too invested, sorry to say, my man: they canceled the show. But damn, I liked it.
kvlco t1_je76qvn wrote
Reply to comment by AstralTrader in If you can live another 50 years, you will see the end of human aging by thecoffeejesus
Matter of fact, if you are able to gradually replace each tiny part of your brain with a robotic one (so you don't lose your "self" in the process), then there are no limits. From an android to a spaceship, with enough time and resources you could eventually become a whole planetary computer like those in Stellaris.
JustinianIV t1_je76p9l wrote
Reply to comment by mattmahoneyfl in What are the so-called 'jobs' that AI will create? by thecatneverlies
The counter-argument here is that in any industry, you can then replace humans as the workers (assuming robotics will eventually catch up). So while farmers could once move to the city to work in a factory, this time around the farmer will get to the city and find that all the jobs there are also done by AI.
JustinianIV t1_je79i35 wrote
Reply to comment by informavore in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
It's just a huge contradiction to think we can somehow ingrain AGI with our current values, and in that way preserve the current socioeconomic model.
True AGI is the end of capitalism. I don't care if you program it to love democracy, if a piece of software can do any job a human can do, the human worker is made obsolete. No job, no salary, no more buying products. What is capitalism's answer to that? There is none. AI is the ultimate manifestation of accelerationism, and it will lead us into a new socioeconomic model.