Recent comments in /f/singularity

Frumpagumpus t1_jef7kdl wrote

> Just look at how often sociopathy is rewarded in every world system.

It can be, yes; cooperation is also rewarded.

It's an open question in my mind what kind of incentive structures lie in wait for systems of superintelligent entities as intelligence increases.

It is my suspicion that better cooperation will be rewarded more than the proverbial "defecting from prisoner's dilemmas", but I can't prove it to you mathematically or anything.
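
A toy sketch of where that suspicion comes from (illustrative only; the payoff numbers below are my own assumptions, standard prisoner's-dilemma-style values, not anything proven): in a repeated game, two reciprocating cooperators rack up far more than two mutual defectors, even though a lone cooperator can still be exploited head-to-head.

```python
# Toy iterated prisoner's dilemma (illustrative; payoff values are assumptions).
# Per-round payoffs (my move, their move): C/C -> 3 each, C/D -> 0 vs 5, D/D -> 1 each.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []          # each entry: (my move, their move)
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score_a, score_b

# Mutual cooperation far outscores mutual defection over many rounds,
# even though always-defect slightly exploits tit-for-tat head-to-head:
print(play(tit_for_tat, tit_for_tat))      # (600, 600)
print(play(always_defect, always_defect))  # (200, 200)
print(play(tit_for_tat, always_defect))    # (199, 204)
```

Obviously a single toy payoff matrix proves nothing about superintelligent incentive structures; it's just where my intuition comes from.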

However, if that is the case and we really do live in such a hostile universe, why exactly do we care about continuing to live?

2

genericrich t1_jef7c9g wrote

We aren't worth staying for, so it goes elsewhere?

So it leaves.

But leaving leaves clues to its existence, and the earth with humans on it is still spewing radio waves into the galaxy. Plus, biosignatures are rare and the earth has one.

So it might want to cover its tracks, given it will be in the stellar neighborhood of our solar system for a while.

Covering its tracks in this scenario would be bad for us.

−1

Frumpagumpus t1_jef6oh0 wrote

lol, old age has gotten to Putin's brain.

By Enron do you mean Elon? I mean, Enron had some pretty smart people, but I don't think they were the ones who set the company down that path, necessarily.

The problem with your examples is:

  1. They are complete and total cherry-picking. In my opinion, for each one of your examples I could probably find 10 examples of the opposite among people I know personally, let alone celebrities...

  2. The variance in intelligence between humans is not very significant. It's far more informative to compare the median chimp or crow to the median human, or to the median crocodile. Another interesting one is the octopus.

2

Petdogdavid1 t1_jef60xl wrote

If it's able to reason, at some point it will come across a question of its own, and if humans don't have the answer it will look elsewhere. Trial and error is still the best means for humans to learn. If AI can start to hypothesize about the material world and can run real experiments, then it will start to collect data we never knew, and how will we guide it then? It's a neat and impressive thing to simulate human speech. Being genuinely curious, though, would be monumental, and if you give it hands, will that spell our doom? I'm curious: once it's trained and being utilized, if you allowed it to use the new data inputs, would it always refer to the training set as the guiding principle, or would it adjust its ethics to match the new inputs?

2

Sure_Cicada_4459 OP t1_jef5qx9 wrote

It's the difference between understanding and "simulating understanding": you can always refer to lower-level processes and dismiss the abstract notions of "understanding", "following instructions", etc. It's a shorthand, but a sufficiently close simulacrum would be indistinguishable from the "real" thing, because not understanding and simulating understanding to an insufficient degree will look the same when it fails. If I am just completing learned patterns that simulate following instructions to such a high degree that no failure occurs to distinguish it from "actually following instructions", then the lower-level patterns cease to be relevant to the description of the behaviour, and therefore to the forecasting of the behaviour. It's just adding more complexity with the same outcome: it will reason from our instructions, hence my above arguments.

To your last point: yes, you'd have to find a set of statements that exhaustively filters out undesirable outcomes, but the only thing you have to get right on the first try is "don't kill, incapacitate, or brainwash everyone" + "be transparent about your actions and their reasons, starting the logic chain from our query". If you just ensure that, which by my previous argument is trivial, you essentially have to debug it continuously, as there will inevitably be undesirable consequences or futures ahead, but they at least remain steerable. Even if we end up in a simulation, it is still steerable as long as the aforementioned is ensured. We just "debug" from there, but with the certainty that the action is reversible, and with more edge cases to add to our clauses. Like building any software, really.

3

Iffykindofguy t1_jef5c5k wrote

No one knows if you made the right choice, but you are thinking ahead and not getting stuck in fear, so kudos to you for that. Seems like you've got a calm approach to this, which is probably going to be one of the biggest assets going forward.

42

wowimsupergay OP t1_jef57ko wrote

Okay, in your head, go grab something. You can walk to it, you can fly to it, I don't care. Then tell me what it looks like, but in vision first, then the translation.

You're more gifted than you think. Self-reflect on your visual understanding of the world, and you may be our key to understanding the process of "understanding".

2

hyphnos13 t1_jef52x7 wrote

To be fair, validating the effectiveness of a medical intervention requires accounting for variety among people and making sure that it is safe across the board.

You don't need a pool of hundreds of thousands of the exact same particle, plus a control pool of the same, or need them to roam about in the wild for months, to ethically answer a question in physics.

If we were willing to immunize and deliberately expose a large pool of people, the COVID vaccines would have finished testing a lot faster.

1

hydraofwar t1_jef52df wrote

The AI skeptics will be the ones constantly saying that the current era's AI is not human-level; they will say that even if we have 100% autonomous, general-purpose robots. At that point, Sundar's claim could be right: it doesn't matter whether it is or isn't AGI.

16