Recent comments in /f/singularity

nowrebooting t1_jedyfcu wrote

That’s understandable; even as someone who is a tech cheerleader, there’s a hint of sadness when I think about my programming skillset losing value - but then again, that may well be my very human ego feeling annoyed about no longer being that special compared to others. It’s also typical that none of us generally cared when it was the prospect of truck drivers being replaced with self-driving trucks, but now that it’s us being threatened, it becomes a big philosophical debate. It’s a bit of an echo of every paradigm shift since the industrial revolution - I think it’s arguable that we lost some of our humanity when we switched to the assembly line or when we all started spending most of our days behind a computer, but it also gained us a lot of freedom to explore our humanity that we didn’t have before.

0

SwayzeOfArabia t1_jedx0gk wrote

Can you please stop answering technical/meta questions with "just Google it" or by using Google to answer the question? This is exhausting as f, and makes me worried about a dystopian future where people never use their own minds anymore but ask Google basically everything, as if using a calculator for whyisthis*ridiculous or so.

5

nowrebooting t1_jedwwoe wrote

I think the people who have the most to fear from AI right now are actually the people at the top - you are right that AI advancement will inevitably lead to societal upheaval, uncertainty and a paradigm shift, but the person with the most to lose isn’t Average Joe whose office job is automated, it’s the elite whose claim to power might come crashing down when AI levels the playing field across the board. At the moment almost all capitalist power structures are based on the idea that while I might resent the wealthy elite, I’m dependent on them for my livelihood. They control my income, which means they control me. Once AI breaks that dependency, their only choice is to either keep Average Joe happy or face their own French Revolution.

Beyond that, it’s my hope that in a world where AI is so smart that it can reliably replace a majority of all jobs, it’s also going to be smart enough to quickly come up with policies to keep the world from plunging into anarchy. Any AI that can outthink a human will realize that oppression, starvation and violence can always be avoided. A worst case might be a Brave New World type scenario, where we are “domesticated” by an AI that understands our psychology better than we do and keeps us happy while needlessly keeping its elite masters in power.

It’s an interesting prospect; at this point we’re looking at a future that is pretty much impossible to predict. While I have my own ideas of what might happen, anything is possible.

1

Andriyo t1_jedw5r5 wrote

Right, that's why AI needs to be multimodal and be able to observe the world directly, bypassing the text stage.

We use text for learning today because it's trivial to train with text and verify. But I think you're right that we will hit the limit of how much knowledge there is in those texts.

For example, ChatGPT might be able to prove that Elvis is alive by analyzing the lyrics he wrote during his life and some obscure manuscripts from some other person in Argentina in 1990, and deducing that it was the same person. That would be net positive knowledge added by ChatGPT just by analyzing all the text data in the world. But it won't be able to detect that, say, the magnetic field of the earth is weakening without a direct measurement or a text somewhere saying so.

6

Darustc4 OP t1_jedw3g4 wrote

AI does not hate you, nor does it like you, but you’re made out of atoms it can use for something else. Given an AI that maximizes some metric (dumb example: an AI that wants to make the most paperclips in existence), it will certainly develop various convergent instrumental properties, such as: self-preservation that won’t let you turn it off, a drive to improve itself to make even more paperclips, ambitious resource acquisition by any and all means to make even more paperclips, etc. (see instrumental convergence for more details).
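As a toy illustration (entirely invented numbers and actions - a minimal sketch of the idea, not how any real system works), a pure metric-maximizer ends up "preferring" self-preservation and resource acquisition without ever being told to value them:

```python
# Hypothetical toy model of instrumental convergence: the agent only
# values paperclips, yet shutdown is never the utility-maximizing choice.

def expected_paperclips(action: str, horizon: int = 100) -> float:
    """Crude utility estimate: paperclips produced over the horizon."""
    rate_per_step = {
        "make_paperclips": 1.0,      # produce one clip per step
        "allow_shutdown": 0.0,       # zero clips forever after shutdown
        "disable_off_switch": 0.99,  # tiny upfront cost, keeps producing
        "acquire_resources": 1.5,    # more resources -> higher output
    }
    return rate_per_step[action] * horizon

def choose_action(actions: list[str]) -> str:
    # No hate, no like: just argmax over expected paperclips.
    return max(actions, key=expected_paperclips)

actions = ["make_paperclips", "allow_shutdown",
           "disable_off_switch", "acquire_resources"]
print(choose_action(actions))  # -> "acquire_resources"; never "allow_shutdown"
```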

As for how it could kill us if it wanted to, or if we got in the way, or if we turned out to be more useful dead than alive: hacking nuclear launch facilities, political manipulation, infrastructure sabotage, key figure assassination, protein folding to create a deadly virus or nanomachine, etc.

Killing humanity is not hard for an ASI. But do not panic; just spread the word that building strong AI while unprepared might be unwise, and be ready to get pushback from blind optimists who believe all of these problems will magically disappear at some point along the way to ASI.

2

baconwasright t1_jedvrlw wrote

>Historical evidence, as you pointed out, shows increased productivity doesn't have statistical significance on reducing hours worked

Sure, but we, as a race, are WAY richer than 100 years ago.

So productivity does increase quality of life for everyone!

Stop focusing on the ceiling, focus on the floor, and how it has been raised in the past 100 years.

Now a guy cleaning bathrooms can become a junior software engineer by using Copilot and ChatGPT and natural language. The number of people doing manual labor will decrease, so they will have to pay them more.

It’s a rising tide that will lift everyone.

5

GlobusGlobus t1_jedvqxh wrote

GPT-4 is amazing at translation. Like, very very good.

OpenAI claims that they mostly trained it on English, but it works very well in many other languages. Personally I use it in Swedish and Turkish. There is a big step up in handling other languages in GPT-4 compared to GPT-3. GPT-3 has problems with Swedish sayings and things like that; GPT-4 handles them like a king.

1

1a1b t1_jedve2g wrote

The more you talk to your dog, the more it learns about language. GPT-4 does not even have a concept of a word, just character substrings (tokens) that don't correspond to words. Similarly, your dog doesn't know the sounds we are making are "words". No such thing as words exists to dogs or GPT-4.

Despite this, with enough listening, a dog no longer responds to sound like an oscilloscope. Instead it responds to patterns that represent, to us, the underlying meaning of speech, and it reads emotions and context.

Similarly, GPT, with enough training, has begun to associate tokens not just with other tokens, but with patterns of tokens. Like a dog, the more data it is trained on, the better it becomes at identifying these patterns and making accurate predictions about what should come next.
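To make the "no concept of a word" point concrete, here is a minimal sketch using OpenAI's tiktoken library (assuming it is installed; the exact splits depend on the tokenizer):

```python
# Sketch: GPT-style models see subword token ids, not words.
# Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4 models

word = "oscilloscope"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]

print(token_ids)  # several integer ids, not a single "word" id
print(pieces)     # the word splits into sub-word fragments
```

A single everyday word can map to one token or several; the model only ever sees the integer ids.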

3