Recent comments in /f/singularity

JustinianIV t1_je79i35 wrote

It's just a huge contradiction to think we can somehow instill our current values in an AGI and thereby preserve the current socioeconomic model.

True AGI is the end of capitalism. I don't care if you program it to love democracy: if a piece of software can do any job a human can do, the human worker is made obsolete. No job, no salary, no more buying products. What is capitalism's answer to that? There is none. AI is the ultimate manifestation of accelerationism, and it will lead us into a new socioeconomic model.

13

FC4945 OP t1_je79a2z wrote

I read "Virtually Human: The Promise--And the Peril--of Digital Immortality" Martine A. Rothblatt recently and it's changed my perspective on how we understand another person as well as "who I am" and what the boundaries of that is person-ness is. I honestly don't think we ever really know another person completely. We have our perception of someone but we're not able to knw what goes on inside someone else's mind, the totality of their thoughts and experiences. From that perspective, I think it's possible to recreate a person that will satisfy our emotional needs and seem to us very much like the original. There's a Buddhist quote that I like which relates to this: I am not what you think I am, you are what you think I am."

1

azriel777 t1_je791aj wrote

I mentioned this in another post, but stopping or slowing down simply isn't doable for many AI researchers or companies. There is a ton of money flowing into the AI field, and by stopping or slowing down they would just be burning it. If they took investor money, they could be sued for halting work, not to mention the risk of tanking your own AI project and job.

10

artix111 t1_je78xm0 wrote

Yeah, people can’t and won’t be able to comprehend what a likely/possible future will look like.

We’ve mass-produced things by the billions so far, and we will do the same with AI systems and with robots that run on them. We won’t need to do any digital work, and we will likely not need to do any physical work besides what we do for ourselves… and even there AI will help.

I have no idea when, but given how advanced we’ve gotten at generating voices, images, videos, and text, machines will be able to do a lot of jobs better than we can. They will just get software installed that lets them do any job better than humans ever could, in 1/10000 of the time for digital work and 1/10 of the time for physical work.

5

flyaturtle t1_je78kf8 wrote

Pausing AI research and development, no. Pausing AI public releases, sure.

I mean, I need 6 months just to catch up with last week’s avalanche of releases. All the arguments about danger center on public releases; the development itself is essential (even as a defense against other potential future bad-actor AIs).

0

Ortus14 t1_je77d3l wrote

It's just a political talking point. It will destroy far more jobs than it creates.

But as far as sheer numbers go, the most common job will be training the AIs, for example selecting which of two responses you like more. These jobs will pay starvation wages and require no special skills, as they already do.
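For a concrete picture of what that labeling work produces, here is a minimal sketch of a pairwise-preference record of the kind used in RLHF-style fine-tuning. The field names and values are illustrative assumptions, not any vendor's actual schema:

```python
# A hypothetical example of the data a human preference labeler
# produces: given one prompt and two model responses, the worker
# records which response they liked more. Field names are
# illustrative, not any particular company's schema.
comparison = {
    "prompt": "Explain photosynthesis to a ten-year-old.",
    "response_a": "Plants eat sunlight...",
    "response_b": "Photosynthesis is the process by which...",
    "preferred": "b",           # the labeler's choice
    "labeler_id": "worker_0042",
}

# Many thousands of records like this become training data for a
# reward model, which then steers the base model during fine-tuning.
```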

Occasionally there might be industry-specific jobs training the AI to take over your job.

4

simmol t1_je77ct0 wrote

Training via broad sensory inputs will probably come with multimodal LLMs. Essentially, next-generation LLMs will be able to look at an image and either answer questions about that particular image (GPT-4 probably has this capability) or treat the image itself as the input and say something about it unprompted (GPT-4 probably does not have this capability). I think the latter ability will make the LLM seem more AGI-like, given that current LLMs only respond to user inquiries. And if you put a model that can respond to an image inside a robot, then presumably the robot can respond naturally to the ever-changing image coming from its sensors and talk about it accordingly.

I think once this happens, the LLM will seem less like a tool and more like a being. This probably does not solve the symbolic-logic part, building up knowledge from a simple set of rules, but that is probably a separate task of its own that will be solved not by multimodality but by layering the current LLM with another deep learning model (or via APIs/plugins with third-party apps).
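A rough sketch of that sensor-driven loop in Python, with both the camera and the multimodal model stubbed out as hypothetical placeholders (no real robot or LLM API is assumed):

```python
import time

def camera_frame() -> bytes:
    # Hypothetical robot sensor: returns the current camera image.
    return b"\x00" * (640 * 480)  # placeholder frame

def describe(image: bytes) -> str:
    # Hypothetical stand-in for a multimodal-LLM call that takes an
    # image as its only input and volunteers a remark about it.
    return f"A static scene, {len(image)} bytes of pixels."  # placeholder

# The loop described above: instead of waiting for a user's question,
# the model is continuously fed the robot's sensor image and speaks up
# whenever its commentary on the scene changes.
last_remark = ""
while True:
    remark = describe(camera_frame())
    if remark != last_remark:  # crude "something changed" check
        print(remark)          # or route to text-to-speech
        last_remark = remark
    time.sleep(1.0)            # poll the camera once a second
```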

11

phriot t1_je779ga wrote

But if you feed an LLM enough input data where "5 apples" follows "Adding 2 apples to an existing two apples gets you...," it's pretty likely to tell you that if Johnny has two apples and Sally has two apples, together they have 5 apples. This is true even if it can also tell you all about counting and discrete math. That's the point here.
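A toy sketch of why: a pure next-token predictor replays whichever continuation dominates its training text, with no notion of arithmetic. This frequency counter is nothing like a real LLM in scale, and the corpus contents are made up, but the failure mode is the same:

```python
from collections import Counter

# Toy next-token predictor: completes a prompt with whatever token
# most often followed it in the training text.
training_corpus = [
    ("two apples plus two apples is", "5"),  # wrong, but frequent
    ("two apples plus two apples is", "5"),
    ("two apples plus two apples is", "5"),
    ("two apples plus two apples is", "4"),  # correct, but rare
]

counts: dict[str, Counter] = {}
for prompt, next_token in training_corpus:
    counts.setdefault(prompt, Counter())[next_token] += 1

def complete(prompt: str) -> str:
    # Pick the statistically most common continuation, with no
    # understanding of what the words actually mean.
    return counts[prompt].most_common(1)[0][0]

print(complete("two apples plus two apples is"))  # -> "5"
```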

2

kvlco t1_je76qvn wrote

As a matter of fact, if you were able to gradually replace each tiny part of your brain with a robotic one (so you don't lose your "self" in the process), then there would be no limits. From an android to a spaceship, with enough time and resources you could eventually become a whole planetary computer like those in Stellaris.

1

JustinianIV t1_je76p9l wrote

The counter-argument here is that in any industry, AI can replace the human workers (assuming robotics eventually catches up). So while farmers could once move to the city to work in a factory, this time around the farmer will get to the city and find all the jobs there are also done by AI.

9