Lawjarp2
Lawjarp2 t1_j8mt0mn wrote
Reply to We don't need AGI for the Singularity to happen. We need ultra-powerful Simulators. by IluvBsissa
Get to AGI. Then use the AGI to build the supercomputers that can do all that, if the AGI itself can't.
Lawjarp2 t1_j8mhaya wrote
GTP?
Maybe that's why you don't get anything wrong
Lawjarp2 t1_j8f72dd wrote
Reply to comment by turnip_burrito in The new Bing AI hallucinated during the Microsoft demo. A reminder these tools are not reliable yet by giuven95
Google handles over 8 billion searches a day. Even at 99.99% accuracy, that can mean a lot of lawsuits when it messes up.
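For scale, a quick back-of-the-envelope on those two numbers (the variable names here are just illustrative):

    queries_per_day = 8_000_000_000                 # roughly 8 billion searches a day
    accuracy = 0.9999                               # "99.99% accurate"
    bad_answers = queries_per_day * (1 - accuracy)
    print(f"~{bad_answers:,.0f} wrong answers per day")   # on the order of 800,000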
Lawjarp2 t1_j8e2261 wrote
We can even call it an S-curve, maybe.
Lawjarp2 t1_j8e1m4t wrote
Reply to comment by kermunnist in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
To be truly general, and not just a broad collection of narrow intelligences, it needs to have a concept of self, which is widely believed to give rise to sentience.
It could have sentience and still be controlled. Is it ethical? I'd like to think it's as ethical as having pets or farming and eating billions of animals.
As these models get better they will eventually be given true episodic memory (a sense of time, if you will) and the ability to rethink. A sense of self should arise from that.
Lawjarp2 t1_j8d1fut wrote
Reminds me of LaMDA saying it has a wife and children. A good example of why it's not AGI and why scale alone is not the answer.
It's just an LLM and is giving you answers that look human. It has no sentience, no ability to rethink, no innate sense of time and space, no emotions.
It can take on a character that makes sense in its world model and keep it going. It can still take your job if it's made better, but it will never be AGI, leaving some people dependent on a non-existent UBI for a while.
Lawjarp2 t1_j87k2tq wrote
Reply to Everybody is always talking about AGI. I'm more curious about using the tools that we have now. by levoniust
Why do that when AGI can do it for you? Lol.
But seriously, why? Why put so much effort into it when newer, more advanced models can do it for you with less effort?
Lawjarp2 t1_j7ye9sl wrote
Reply to comment by qa_anaaq in Generative AI comes to User Interface design! This is crazy. by RegularConstant
Because there were always things that only humans could do. With AGI, it can adapt faster than we can, so every new job could also be done by AGI.
Society is based on necessity, and if the middle class is no longer necessary, which seems to be the case, why not go back to feudalism? It truly doesn't matter whether everyone gets to live well and spawn future generations or only a few do. It will all be forgotten in the end.
Lawjarp2 t1_j7ydz0s wrote
Reply to comment by burnt_umber_ciera in Generative AI comes to User Interface design! This is crazy. by RegularConstant
People confuse full replacement with unemployment. You don't need AGI if you're not replacing everyone. There will be large unemployment by the end of this decade, large enough to make governments do something about it.
Lawjarp2 t1_j7xyct2 wrote
Reply to The copium goes both ways by IndependenceRound453
Yes. This is so true. When your own job is about to be automated, people start making timelines that push the event out to a time when they're safe from its impact: some a few years so they can find a new job, some decades so they can retire. But all of it, even if the predictions are true, is copium. Then again, humans/LLMs are designed to think that way: find a positive path to their goals even if it's absurd.
Lawjarp2 t1_j7sywcu wrote
Reply to Based on what we've seen in the last couple years, what are your thoughts on the likelihood of a hard takeoff scenario? by bloxxed
This feels like a soft takeoff scenario. You wouldn't have time to comprehend it if it were a hard takeoff. LLMs are not self-improving, and they are not AGI.
Lawjarp2 t1_j7ais55 wrote
Reply to comment by Borrowedshorts in Major leak reveals revolutionary new version of Microsoft Bing powered by ChatGPT-4 AI by Phoenix5869
GPT-3 can code too. It's probably a safety feature: they probably don't want it generating copyrighted code from search results, so they blocked all code in search.
Lawjarp2 t1_j7a92e9 wrote
Reply to Major leak reveals revolutionary new version of Microsoft Bing powered by ChatGPT-4 AI by Phoenix5869
Lame. Look at the screenshot. It didn't answer 'how to implement bubble sort in python' and instead asked the user to go search on Bing.
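For reference, here's roughly what it was being asked for; a minimal bubble sort sketch in Python (just an illustration, not what Bing would have produced):

    def bubble_sort(items):
        """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
        n = len(items)
        for i in range(n):
            swapped = False
            for j in range(n - 1 - i):            # the last i elements are already in place
                if items[j] > items[j + 1]:
                    items[j], items[j + 1] = items[j + 1], items[j]
                    swapped = True
            if not swapped:                       # no swaps means the list is already sorted
                break
        return items

    print(bubble_sort([5, 2, 9, 1, 3]))           # [1, 2, 3, 5, 9]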
Lawjarp2 t1_j79ywmt wrote
Reply to comment by Durabys in Possible first look at GPT-4 by tk854
You are right. Search is their cash cow. No matter who replaces it, Google loses money.
Lawjarp2 t1_j75l9rj wrote
Reply to Possible first look at GPT-4 by tk854
So GPT-4 could be a highly optimised version of GPT-3, cheap enough to integrate as part of search. Unless, of course, Bing with GPT will be a paid version.
Lawjarp2 t1_j71qg5d wrote
Reply to The next Moravec's paradox by CharlisonX
It will actually be pretty amazing at blue-collar jobs. It's in fact bad at white-collar jobs. The reason you see so much more automation in white-collar jobs right now is:
(1) Easy-to-automate blue-collar jobs are already automated.
(2) Data is not easily available, or the cost of training on it is very high.
(3) It's cheaper to hire humans than to build expensive robot bodies.
Most blue-collar jobs rely on experience, which is exactly what neural networks should be good at. It's just that training to live in the real world is slow and hard.
Lawjarp2 t1_j6nsjc9 wrote
Reply to comment by EyeLikeTheStonk in Pakistan mosque blast: 100 confirmed dead in marathon search of rubble by Sahar1224
The Pakistani Taliban wants part of Pakistan to join Afghanistan. Now that the OG Taliban is in power, thanks to Pakistan, little Tali has gotten bolder.
Lawjarp2 t1_j6m2r9k wrote
Reply to Canadian universities have been conducting joint research with Chinese military scientists for years by No-Drawing-6975
They were on the brink of finding a hammer-and-sickle-shaped maple leaf.
Lawjarp2 t1_j6jd4cg wrote
Jelly in the belly cause normal people can use it too.
Lawjarp2 t1_j6cd4rp wrote
Reply to New York Times [July, 1997] 'Computer needs another century or two to defeat Go champion' LMAOOO this is so hilarious to read looking back by Phoenix5869
It did happen in the next century.
Lawjarp2 t1_j67v9pc wrote
Reply to opinion: more competition increased the speed of development but will decrease the priority of safety by truthwatcher_
That shows why the slow and safe approach won't work. Google tried to play it safe, and now their cash cow, search, is itself under threat. Not only is it possible for others to overtake them, it's also now possible that they may never catch up if they lose search.
Same scenario but with China or other countries. Eventually you get to a point where the only way to be safe is to be fast.
Lawjarp2 t1_j67uxy1 wrote
Reply to I don't see why AGI would help us by TheOGCrackSniffer
That's what alignment is about. It's not making it woke or something :p
But in fairness, a truly independent being that prioritises its own survival will never obey us. There is no need to build a survival instinct into any AGI, though.
Lawjarp2 t1_j8wfai6 wrote
Reply to If 98% of people disappeared, would things tend towards greater freedom and progress? by kimjongun-69
Yes