Recent comments in /f/singularity

ShaneKaiGlenn t1_je6w8mf wrote

>Do you think the last humans to die have already been born?

Definitely not. Everyone will still eventually die, even if they can no longer die of natural causes. Now instead of a graceful demise at the end of a long life, we'll just die in gruesome ways via accidents and murder.

And if you are buying into the transhumanist obsession of a digital eternity in which you transfer your consciousness into a machine, well, I've got bad news for you... it won't be "you", only a copy. You will still experience death the same exact way as you otherwise would.

Not to mention, your digital consciousness would still be susceptible to death via viruses, destruction of data infrastructure, etc.

Death will come for us all, mercifully. Eternal life would become mundane rather quickly.

7

dr_doug_exeter t1_je6w5wh wrote

Isn't that already how corporations under capitalism treat people? Humans are not seen as individuals by the system, just tools/resources to be used up and rotated out when they're no longer useful. Yes, it is dehumanizing, but place the blame where it belongs: we were dehumanized long before computers were even a thing.

1

Easyldur t1_je6w2av wrote

I agree with this, in that LLMs are models of language and knowledge (information? knowledge? debatable!), but they are really not models of learning.

Literally, an LLM as it is today cannot learn: "Knowledge cutoff September 2021".

But LLMs certainly display many emergent abilities beyond the mere "predict a list of possible upcoming tokens and choose one at random".
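For context, here is a minimal sketch of that base mechanism — scoring candidate tokens and sampling one. The `logits` dict and token names are made up for illustration; a real model scores every entry in its vocabulary:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Turn raw token scores into probabilities and sample one token.

    `logits` is a hypothetical dict of candidate tokens -> raw scores.
    Higher temperature flattens the distribution (more random output).
    """
    # Softmax with temperature scaling.
    exps = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Weighted random choice: "random", but heavily biased by the model's scores.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(sample_next_token({"cat": 2.0, "dog": 1.0, "the": 0.1}))
```

So "choose one at random" undersells it a bit: the sampling is weighted by everything the model has learned.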

The fact that even OpenAI, in their demos, uses very human-like prompts to instruct the model toward a task suggests there is something emergent in an LLM beyond "write random sentences".

Also, ChatGPT and its friends are quite "meta". They are somehow able to reflect on themselves. There are some interesting examples where asking an LLM to reflect on its previous answer a couple of times produces better and more reliable information than a one-shot answer.
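That reflect-on-your-own-answer loop can be sketched in a few lines. Here `llm` is a hypothetical callable (prompt in, answer out) — swap in any real chat API; the prompt wording is just an assumption:

```python
def reflect(llm, question, rounds=2):
    """Ask a question, then repeatedly ask the model to critique and
    revise its own previous answer.

    `llm` is a hypothetical prompt -> answer callable.
    `rounds` is how many critique passes to run after the first answer.
    """
    answer = llm(question)
    for _ in range(rounds):
        # Feed the previous answer back in and ask for a corrected version.
        answer = llm(
            f"Question: {question}\n"
            f"Your previous answer: {answer}\n"
            "Check that answer for mistakes and give a corrected, improved answer."
        )
    return answer
```

Each pass gives the model a chance to catch errors it made in the one-shot attempt, which is roughly why these chains tend to be more reliable.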

I am quite sure that once they figure out how to wire these emergent capabilities into some form of continuous training, the models will become quite good at distinguishing "truth" from "not-truth".

10

Arowx t1_je6uri2 wrote

I think the numbers were 80% of jobs will be impacted by AI by 10% or more.

So mostly the same old job, only you have to work with or manage AIs that do your own job*.

*If companies can do the same work with fewer people, less time, and more AI, what happens to the excess people or time?

Do we need an AI Pay Law, where if your job is partially replaced by an AI, you should be allowed to work less or maintain your current wage level?

1

gantork t1_je6ud28 wrote

I don't think it's an obsession, it's just an observation. Now that we have science and understand a little about how the brain works, it's reasonable to think it might just be a biological computer.

And that doesn't have to be something negative or dehumanizing. If anything I find it beautiful and fascinating that something so incredible was able to emerge from nothing thanks to evolution.

8