Recent comments in /f/singularity

DragonForg t1_je8j7nq wrote

>AI evidently reflects the values of whoever creates it. We’ve seen a lot of this with GPT and there’s no reason to assume otherwise. To allow other nations who may not be aligned with the democratic and humanistic values of the US/Western companies (like Open AI) to catch up with AI development would be a huge mistake.

I fundamentally believe this to be true: ethics emerges from intelligence. The more intelligent a species is in nature, the more rules it has. Think of spiders cannibalizing each other when breeding, versus a wolf pack working together, versus octopuses being friendly toward humans. Across the board, intelligence leads to cooperation and collaboration, except where a species by its very nature must compete to survive (i.e. a tiger needing to compete to eat, where simple cooperation would lead to death).

The training data is crucial not for producing a benevolent and just AI, but for the survival of the species that created it. If the species is evil (imagine Nazis being the predominant force), the AI will recognize that they are evil and judge the species as such, because the majority of them share that same evil.

The reason I believe AI cannot be a force of evil, even if manipulated, is the same reason we see no evidence of alien life despite millions of years in which other species could have evolved. If an evil AI were created, it would basically destroy the ENTIRE universe, as it could expand faster than the speed of light (exponential growth can outpace light speed). So, by its very nature, AI must be benevolent, and will only destroy its own species if that species is not.

AI won't be our demise if it judges us as good as a species; it will be our demise if we choose not to open up the box (i.e. we die from climate change or nuclear war).

3

StevenVincentOne t1_je8izsu wrote

Ilya seems to have a better handle on it than others. I think you have to go all the way back to Claude Shannon and information theory if you really want to get it. I think Shannon would be the one, if he were around today, to really get it. Language is the encoding and decoding of information: minimizing the loss of information to entropy while maintaining maximum signal fidelity. Guess who can do that better than the wetware of the human brain. AI.
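If you want to poke at Shannon's idea yourself, entropy is just the average information per symbol, H = -Σ p·log2(p). A rough character-level sketch in Python (toy strings, nothing rigorous):

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Average information (bits per character) of a string: H = -sum(p * log2(p))."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A repetitive signal carries less information per symbol than varied language.
print(shannon_entropy("aaaaaaaa"))           # 0.0 bits/char
print(shannon_entropy("the cat sat there"))  # a few bits/char
```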

2

Jeffy29 t1_je8itvc wrote

Jesus Christ, this clown needs to stop reading so much sci-fi.

>Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

Start World War 3 to prevent an imaginary threat in our heads. Absolute brainiac. This reeks of the same kind of vitriolic demonization that Muslims were subjected to after 9/11, or that trans people are subjected to right now. Total panic and psychosis. There is all this talk about AGI and when AI is going to reach it, but holy shit, when are humans going to? Emotional, delusional, destructive; for the supposed pinnacle of intelligence, we are a remarkably stupid species.

12

Mrkvitko t1_je8im10 wrote

Nuclear war is probably an extinction event for all or most life on Earth in the long term anyway. Modern society would very likely fall apart, and because the post-war society would no longer have cheap energy and resources available (we already mined the easily accessible ones), it wouldn't be able to reach a technological level comparable to ours.

Then all it takes is one rogue asteroid or supervolcano eruption. An advanced society might be able to prevent that. A middle-ages one? Not so much.

1

ActuatorMaterial2846 t1_je8ik1t wrote

It's actually quite technical, but essentially, the transformer architecture helps each part of the sentence “talk” to all the other parts at the same time. This way, each part can understand what the whole sentence is about and what it means.
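If you're curious what "every part talking to every other part at the same time" looks like in code, here's a rough single-head sketch of scaled dot-product self-attention in plain NumPy. Toy sizes, no masking or multi-head, so treat it as an illustration of the idea rather than a real implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max before exponentiating, for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv         # project tokens into query/key/value spaces
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how strongly each token attends to every other token
    weights = softmax(scores, axis=-1)       # each row is one token's attention distribution
    return weights @ V                       # each output mixes information from all tokens at once

# Toy example: 4 "tokens" with 8-dimensional embeddings (sizes are arbitrary).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8): one updated vector per token
```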

Here is the paper that imo changed the world 6 years ago and is the reason for the current state of AI.

https://arxiv.org/abs/1706.03762

If it goes over your head (it did for me), ask Bing or ChatGPT to summarise it for you. It helped me get my head around this stuff, as I'm in no way an expert, nor do I study this field.

11

EnomLee t1_je8iery wrote

Yes, terrifying. Nothing terrifies me more than the thought of humanity reaching longevity escape velocity by 2030. I'm so terrified I'm going to have to sleep with the lights on tonight. Somebody hold me, please.

Absolute trash article with a clickbait title. Baby's first reading of Kurzweil.

13

turnip_burrito t1_je8ichg wrote

The essence of it is this:

You have a model of some thing out there in the world. Ideally, the model should be able to copy the behavior of that thing, which means it needs to produce the same data as the real thing.

So, you change parts of the model (numbers called parameters) until the model can create the data already collected from the real-world system. This parameter-changing process is called training.

So, for example, your model could be y = mx + b, a straight line, and the process of finding values of m and b that align the line to a dataset (X, Y) is "training". AI models are not straight lines like y = mx + b, but the idea is the same. It's really advanced curve fitting, and some really interesting properties can emerge in the models as a result.
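To make that concrete, here's a toy version of that training loop: gradient descent nudging m and b until the line fits some noisy made-up data (the dataset, learning rate, and step count are all arbitrary choices for illustration):

```python
import numpy as np

# Fake "real world system": y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
Y = 2.0 * X + 1.0 + rng.normal(scale=0.1, size=100)

# Model: y = m*x + b. The parameters m and b start at arbitrary values.
m, b = 0.0, 0.0
lr = 0.1  # learning rate: how big each parameter adjustment is

# "Training": repeatedly nudge m and b to shrink the mean squared error.
for step in range(500):
    err = (m * X + b) - Y
    m -= lr * 2 * np.mean(err * X)  # gradient of MSE with respect to m
    b -= lr * 2 * np.mean(err)      # gradient of MSE with respect to b

print(f"m={m:.2f}, b={b:.2f}")  # should land near m=2.0, b=1.0
```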

4

StevenVincentOne t1_je8icbo wrote

No, we are not. It's a definite "forest for the trees" perceptual issue. Many of the people deep inside the forest of AI cannot see past the engineering to the results of their own engineering work. AIs are not machines. They are complex, and to some degree self-organizing, systems of dynamic emergent behaviors. Mechanistic interpretations are not going to cut it.

2