Lawjarp2

Lawjarp2 t1_j8e1m4t wrote

To be truly general, and not just a wide collection of narrow intelligences, it needs to have a concept of self, which is widely believed to give rise to sentience.

It could have sentience and still be controlled. Is it ethical? I'd like to think it's as ethical as having pets or farming and eating billions of animals.

As these models get better they will eventually be given true episodic memory (a sense of time, if you will) and the ability to rethink. A sense of self should arise from that.

3

Lawjarp2 t1_j8d1fut wrote

Reminds me of LaMDA saying it has a wife and children. A good example of why it's not AGI and why scale alone is not the answer.

It's just an LLM and is giving you answers that look human. It has no sentience, no ability to rethink, no innate sense of time and space, no emotions.

It can take on a character that makes sense in its world model and keep it going. It can still take your job if made better. But it will never be AGI, leaving some people dependent on a non-existent UBI for a while.

16

Lawjarp2 t1_j7ye9sl wrote

Because there were always things that humans could do. With AGI, it can adapt faster than we can, so every new job could also be done by AGI.

Society is based on necessity, and if the middle class is no longer necessary, which seems to be the case, why not go back to feudalism? It truly doesn't matter whether everyone gets to live well and spawn future generations or only a few do. It will all be forgotten in the end.

3

Lawjarp2 t1_j7xyct2 wrote

Yes. This is so true. When their own job is about to be automated, people start making timelines that push the event to a time where they are safe from its impact: some a few years out so they can find a new job, some decades out so they can retire. Even if the predictions turn out true, all of it is copium. But humans (and LLMs) are designed to think that way: imagining a positive path to their goals even when it's absurd.

1

Lawjarp2 t1_j75l9rj wrote

So GPT-4 could be a highly optimised version of GPT-3, cheap enough to integrate as part of search. Unless, of course, Bing with GPT will be a paid version.

13

Lawjarp2 t1_j71qg5d wrote

It will actually be pretty amazing at blue collar jobs. In fact it's bad at white collar jobs. The reasons you see so much more automation in white collar jobs right now are:

(1) The easy-to-automate blue collar jobs are already automated.

(2) Data for blue collar work is not easily available, or the cost of training on it is very high.

(3) It's cheaper to hire humans than to build expensive robot bodies.

Most blue collar jobs rely on experience, which is exactly what neural networks should be good at. It's just that training to live in the real world is slow and hard.

1

Lawjarp2 t1_j67v9pc wrote

That shows why the slow and safe approach won't work. Google tried to play it safe, and now their cash cow, search, is itself under threat. Not only is it possible for others to overtake them, it's also now possible that they may never catch up if they lose search.

Same scenario, but with China or other countries. Eventually you get to a point where the only way to be safe is to be fast.

3

Lawjarp2 t1_j67uxy1 wrote

That's what alignment is about. It's not making it woke or something :p

But in fairness, a truly independent being that prioritizes its own survival will never obey us. There is no need, though, to build a survival instinct into any AGI.

4