Recent comments in /f/singularity

prolaspe_king t1_je8eyh9 wrote

All the things that make us human, like panic? Healthy anxiety? Depression? Completely curing them is really unlikely, since a lot of these are automatic. I also do not want to meet anyone with none of these problems; I cannot imagine how sterile their personality would be.

1

Shack-app t1_je8ewej wrote

AI won’t generate jobs. It will allow the best people in each field to produce far more with less work.

In some fields, like code, we have a shortage, so we can soak up the excess supply. In other fields, like bad romance novels, college essay plagiarism for hire, or commanding robot armies, jobs will be in short supply.

1

Arowx t1_je8ej1p wrote

Counselors and agency staff sound like jobs with a strong knowledge- or rule-based component that a chat AI could do.

And as soon as someone figures out how to get GPT-5 to drive a Boston Dynamics robot dog or Atlas, fixed-location security will go automated.

Mind you, could GPT-6 drive military drones or tanks?

3

Orc_ t1_je8ef0o wrote

> What would have been impossible?

For a non-modern system to exist that somehow prevents all wars, genocides, famines, etc. That's High Fantasy.

> Lots of empires were outwardly expansive. None of them took over the world like capitalism did, because their expansionist goals were motivated simply by power fantasies of their leaders.

Power that came from CAPITAL. Capitalism has existed since the first civilization; it even existed in the USSR.

3

ActuatorMaterial2846 t1_je8e3lg wrote

So what happens is they compile a dataset: basically a big dump of data. For large language models, that is mostly text: books, websites, social media comments. Essentially as many written words as possible.

The training is done through what's called a neural network, usually built with a transformer architecture, running on a bunch of GPUs (graphics processing units) linked together. What happens inside the neural network during training is a bit of a mystery; 'black box' is the term often used, as the computations are extremely complex. Not even the researchers understand exactly what happens here.

Once the training is complete, the result is packaged into a program, often referred to as a model. These models can then be refined and tweaked to behave a particular way for public release.

This is a very very simple explanation and I'm sure there's an expert who can explain it better, but in a nutshell that's what happens.
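To make the "predict the next token from a pile of text" idea concrete, here is a deliberately tiny sketch. A real LLM learns next-token probabilities with a transformer trained by gradient descent on GPU clusters; this toy just counts which character most often follows another in a small corpus. All names and the corpus are illustrative, but the training objective (model the next token given what came before) is the same in spirit.

```python
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """'Training': count, for each character, how often each other
    character follows it. An LLM learns these statistics (over tokens,
    not characters) with a neural network instead of raw counts."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(model: dict, prev: str) -> str:
    """'Inference': greedily return the most likely next character,
    or an empty string if we never saw `prev` during training."""
    followers = model.get(prev)
    if not followers:
        return ""
    return max(followers, key=followers.get)

# Toy "dataset dump" standing in for books, websites, comments, etc.
corpus = "the the the cat."
model = train_bigram_model(corpus)
print(predict_next(model, "t"))  # 'h' ("th" dominates this corpus)
```

The gap between this and GPT-4 is enormous (context beyond one character, learned representations, attention), but it shows why more and better text data directly improves the model: the statistics it learns are only as good as the dump it was trained on.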

25

BigZaddyZ3 t1_je8dh2a wrote

This thought process only works if you believe good and bad are completely subjective, which they aren’t.

There are two decently objective ways to define bad people.

  1. People who are a threat to the wellbeing of others around them (the other people being innocent of course.)

  2. People that are bad for the well-being of society as a whole.

For example, there’s no intelligent argument that disputes the idea that a serial killer targeting random people is a bad person. It literally cannot be denied by anyone of sound mind. Therefore we can conclude that some people are objectively good and objectively bad.

1

throwaway12131214121 t1_je8dg8m wrote

What would have been impossible?

Lots of empires were outwardly expansive. None of them took over the world like capitalism did, because their expansionist goals were motivated simply by power fantasies of their leaders.

Capitalist motivations are different because they apply to everybody in society, and whoever carries them out best is automatically put into a position of power. So society is quickly controlled by capitalist forces and becomes hyperfocused on capitalist interests, and that doesn’t stop even when the people doing it die; they are just replaced with new capitalists.

That’s why so many elites in modern society are utter sociopaths; empathy gets in the way of profit a lot of the time, and if you have empathy, there are people who will outcompete you and you won’t get to the top.

2

Aevbobob t1_je8d6cm wrote

How about a direct democracy where your personal AI represents you? Laws directly and immediately reflect the will of the people and are engineered by superintelligent minds with continuous access to the true will of the people on every topic.

1

Aevbobob t1_je8c5kj wrote

The things you list in your title aren’t forbidden by the laws of physics, they’re just engineering problems. If you’re going to speculate about the capabilities of a mind 100x smarter than you, it’d be worth considering what a dog thinks about chip manufacturing. Now, while you’re considering this perspective, consider how much smarter you are than your dog. It’s not 100x. It’s not even 10x.

A better discussion topic might be to point out that our DNA grew up in a linear world. Our primal intuition about change is that things will probably not change that much. In an exponential world, follow the evidence, not just your intuition. If you think an exponential will stop, make sure your reasoning, at its core, is not “it feels like it has to stop” or “it seems too good to be true”.

3