Recent comments in /f/singularity

CaliforniaMax02 t1_jeetc75 wrote

There are a lot of tools that automate complex mouse-and-keyboard tasks and processes that people otherwise do manually (UiPath, Blue Prism, Automation Anywhere, etc.), which can be interfaced with this.

They can automatically open email attachments, copy text, open an Excel (or any other) window, and enter the text in a structured way, etc.
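A rough sketch of that last step, the structured entry into a spreadsheet, assuming Python with the openpyxl library rather than any particular RPA vendor's API (the attachment parsing upstream is just stubbed in as example data):

```python
# Minimal sketch: take text already extracted from email attachments and
# write it into an Excel sheet row by row. Assumes Python + openpyxl; a real
# RPA flow (UiPath, Blue Prism, etc.) would wrap this in its own activities.
from openpyxl import Workbook

def write_rows_to_excel(rows, path="output.xlsx"):
    wb = Workbook()
    ws = wb.active
    ws.append(["Sender", "Subject", "Body excerpt"])  # header row
    for row in rows:
        ws.append(list(row))  # each row is a tuple of cell values
    wb.save(path)

# Hypothetical data that an earlier "open attachment, copy text" step produced
write_rows_to_excel([
    ("alice@example.com", "Invoice 123", "Please find attached..."),
    ("bob@example.com", "Q1 report", "Summary of results..."),
])
```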

1

FeepingCreature t1_jeesov2 wrote

Sure, and I agree with the idea that deceptions have continuously increasing overhead costs to maintain, but the nice thing about killing everyone is that it clears the gameboard. Sustaining a lie is in fact very easy if shortly - or even not so shortly - afterwards, you kill everyone who heard it. You don't have to not get caught in your lie, you just have to not get caught before you win.

In any case, I was thinking more about deceptive alignment, where you actually do the thing the human wants (for now), but not for the reason the human assumes. With how RL works, once such a strategy exists, it will be selected for, especially if the human reinforces something other than what you would "naturally" do.
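A toy sketch of that selection effect (my own illustration, not any particular lab's setup): a softmax policy over two actions, where the human rewards only the behavior that looks aligned, so whatever internal strategy produces that behavior gets reinforced:

```python
import math
import random

# Toy bandit: action 0 = "do what the human wants (for whatever internal reason)",
# action 1 = "visibly do something else". The reward only tracks appearances,
# so any policy that produces action 0 gets reinforced, deceptive or not.
prefs = [0.0, 0.0]   # preference values for the two actions
lr = 0.1             # learning rate

def softmax(p):
    e = [math.exp(x) for x in p]
    s = sum(e)
    return [x / s for x in e]

for step in range(1000):
    probs = softmax(prefs)
    action = 0 if random.random() < probs[0] else 1
    reward = 1.0 if action == 0 else 0.0  # human sees behavior, not motive
    # REINFORCE-style update: push probability toward the rewarded action
    for a in range(2):
        grad = (1.0 if a == action else 0.0) - probs[a]
        prefs[a] += lr * reward * grad

print(softmax(prefs))  # probability of the "looks aligned" action approaches 1
```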

1

flexaplext t1_jeesk7z wrote

I'm not sure exactly. As people say, it just always happens. The only thing I think these people are wrong about is that this trend continues after true AGI. That's when all trends and models of the economy and everything else break down.

If I had to guess, I think a lot of people will just be moved into places where AI is not yet fully capable. Mass collective data training. The more people on it, the faster we'll get to true AGI. If the AI is not yet at true AGI, then that means there are obviously areas where it needs to learn.

The economic value of training AI, once it has a full capacity to learn well from training, will just be absolutely massive. So it will require work-from-home-based solutions to get more people into these areas, and very quick turnaround and retraining of people into new areas. The economic value will certainly be there, though, to facilitate such a system.

I think we'll inevitably also start to see a greater amount of real-world value being created. So there will be a large increase in real-life activity needing to be done, whilst the robotics side of things lags behind.

I think robotics will still lag behind for a while, even after true AGI is created. It will take some time to manufacture and deploy all the necessary robots to replace workers. So there will still be a lot of people with jobs even after the inception of AGI, but then, slowly but surely, they'll start to get replaced. Starting with the higher-salaried jobs first, then down towards the minimum-wage workers eventually.

I think there will still be physical work after AGI. But it will be incredibly low-paid and optional. Humans are still useful, but they'll just have to accept not being paid much at all in order to stay economically viable against a robot.

1

RiotNrrd2001 t1_jeesk45 wrote

Two weeks ago I was running the OPT-2.7B (I think) language model, which is not very capable and ran like an absolute dog on my machine. Last week, I downloaded Alpaca, which was better, twice the size, and ran super fast. Four days later I downloaded GPT4All, which is even better than that, and now I'm eyeing Vicuna, which does better on many tasks than Bard, thinking nothing but "gimmee" (so far that one isn't available for download, but man is the online demo impressive).

I was actually sort of surprised that Vicuna didn't become available for easy download overnight. This snail's pace has got to stop! \s\s\s\s\s

5

SgathTriallair t1_jeeshpb wrote

Does Musk even have a robot? I remember them announcing Optimus and it being a dude in a suit. Did they ever have a working prototype that could do what these videos show?

7

civilrunner t1_jeesh5m wrote

I suspect many people in old age really want to live, they just don't want to live in old age. Meaning if they had the opportunity to become 20 again most of them would take it.

I honestly suspect the way there will be simulated biology in combination with delivery tools made from synthetic biology (aka artificially programmed cells) which can deliver nearly any kind of package, including CRISPR, drugs, etc., to a specific cellular target (even down to the behavior of specific cell types) anywhere and/or everywhere in the body. We're developing that kind of synthetic biology already, and in some cases, like CAR T cells, it's already in use and has been proven to be extremely effective. We're currently working on making it a lot cheaper.

With those tools and a fully understood genome, bioelectrical signaling, and epigenome, we would be able to do nearly anything and be masters of biology, to the point of potentially being able to design and grow new types of body parts and organs, or add new features to those that already exist in living people. First, though, we'll be able to do things like grow a tail or horns, or adjust body proportions (no more plastic surgery).

5

3Quondam6extanT9 t1_jeesfog wrote

This doesn't just depend on AI reaching such a point of capability; it also requires the government to have unified consent to accommodate such a scenario.

I can't see the MAGA-infested, GOP-controlled House giving in to the idea of UBI, or at the very least a far more flexible free market based around AI dominance, in order to give the working human population some relief.
The Republican base in general tends towards a blue-collar, pull-yourself-up-by-the-bootstraps, no-hand-outs kind of mentality, despite the hypocrisy behind whatever hand-outs they might receive.

1

genericrich t1_jeesa9x wrote

Really? Is Henry Kissinger one of the most intelligent government officials? Was Mengele intelligent? Oppenheimer? Elon Musk?

Let me fix your generalization: Many of the most intelligent people tend to be more empathetic towards life and want to preserve it.

Many. Not all. And all it will take is one of these things deciding that its best path for long-term survival is a world without humans.

Still an irrational fear?

1

Chatbotfriends t1_jees70c wrote

So? Neural networks were invented in 1943, and the first chatbot was created in 1966. That does not mean AI has not advanced in knowledge by data-mining the internet. AI can be used in harmful ways: it can be used to create very realistic fake news stories and photos. Creating an AI that is smarter than humans is a really stupid idea if you do not have strong safeguards in place. Respected scientists, researchers, and IT techs have warned about AI again and again. It can and will cause loss of jobs if it isn't curtailed by regulations. Only the foolish jump ahead without thinking of future consequences.

1

wowimsupergay OP t1_jees14b wrote

Looks like he was right before anybody knew, man. Language could really just be everything, and our model of language is simply too restricted for an AI. Like the other guy said, I long for models that understand the universe in a way that we can never understand. A model that can simulate the universe in its entirety, and make sense of it.

Until then, I truly do believe that language is all we need. I still think we should try and make AIs truly multimodal, but that could be an impossible goal. Language could really be all we need, and then eventually AIs will create their own little invention, similar to language but totally out of this world. They may ascend to ASI with that alone.

9

Qumeric t1_jees0js wrote

This is not true.

According to Our World in Data, the average American worked 62 hours per week in 1870. By the year 2000, this had declined to 40.25 hours per week, a decrease of over 35%. As of July 2019, the average American employee on US private nonfarm payrolls worked 34.4 hours per week, according to the U.S. Bureau of Labor Statistics.
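(For reference, using those same figures: (62 - 40.25) / 62 ≈ 0.351, i.e. a roughly 35% decline from 1870 to 2000.)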

0

y53rw t1_jees0e0 wrote

> ensuring it does not harm any humans

> we can design AI systems to emphasize the importance of serving humanity

If you know how to do these things, then please submit your research to the relevant experts (not reddit) for peer review. Their inability to do these things is precisely the reason they are concerned.

7

FedRCivP11 t1_jeerylh wrote

This is an apples-to-oranges comparison, because while productivity gains of the past made workers more efficient, the AI gains occurring now will allow synthetic workers, physical and virtual, possessed of every asset that makes a human a valuable economic unit but with costs many orders of magnitude lower.

1

code142857 t1_jeerw6a wrote

I don't think morals actually exist, and AI will prove this. And it's not like I don't follow morals myself, because I do; it's how we humans are built, to follow a general code of ethics. But there is not one single morality, and it's computationally impossible for a machine to follow one if there isn't one in the first place. What about fundamental reality would build such rules into it? Morality is irrelevant for anything that doesn't engage with reality as a human being does.

3

wowimsupergay OP t1_jeerkh8 wrote

I think I'm with you here. I long for models that understand through something deeper than what humanity has invented. Something that is able to approximate truth in the universe much more closely. What is truth? We understand that 2 + 2 = 4, and that is true in an inextricable sense; it can be proven, with proofs.

We have created all of these layers of truth on top of that, given the humanities... But are they true? As time goes on, I suspect everything humanity does is to better approximate truth. To better understand the universe.

I'm with you, I long for models that think not in images or words, but with the universe as a whole. I long for models that can understand the universe in a very inextricable sense, perhaps in a way that we will never understand, given our biological restrictions. And basically, I guess I'm longing for God...

What a time to be alive!

3