Recent comments in /f/singularity
Frumpagumpus t1_jef7kdl wrote
Reply to comment by burnt_umber_ciera in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
> Just look at how often sociopathy is rewarded in every world system.
It can be, yes, but cooperation is also rewarded.
It's an open question in my mind what kinds of incentive structures lie in wait for systems of superintelligent entities as intelligence increases.
My suspicion is that better cooperation will be rewarded more than the proverbial defecting from prisoner's dilemmas, but I can't prove it to you mathematically or anything.
However, if that is the case and we really do live in such a hostile universe, why exactly do we care about continuing to live?
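To make the intuition concrete (a toy sketch only, using the textbook iterated prisoner's dilemma payoffs, nothing rigorous or claimed above):

```python
# Toy iterated prisoner's dilemma: tit-for-tat (cooperative) vs. always-defect.
# Standard payoffs: mutual cooperation 3, mutual defection 1, sucker 0, temptation 5.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Total scores for two strategies over repeated rounds."""
    score_a = score_b = 0
    last_a = last_b = "C"          # both start out remembered as cooperators
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda their_last: their_last    # copy the opponent's previous move
always_defect = lambda their_last: "D"         # defect every round

print(play(tit_for_tat, tit_for_tat))          # (300, 300): cooperation compounds
print(play(always_defect, tit_for_tat))        # (104, 99): defection wins once, then stalls
print(play(always_defect, always_defect))      # (100, 100): everyone stays poor
```

Over repeated interaction the defector's one-round edge gets swamped by the surplus the cooperators keep generating, which is the kind of incentive structure I mean.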
SucksToYourAssmar3 t1_jef7ire wrote
Reply to comment by Moist_Chemistry1418 in The only race that matters by Sure_Cicada_4459
I hope so. Immortality isn’t a desirable or noble end-goal, at least for any one person.
genericrich t1_jef7c9g wrote
Reply to comment by bigbeautifulsquare in This concept needs a name if it doesn't have one! AGI either leads to utopia or kills us all. by flexaplext
We aren't worth staying for, so it goes elsewhere?
So it leaves.
But leaving leaves clues to its existence, and the earth with humans on it is still spewing radio waves into the galaxy. Plus, biosignatures are rare and the earth has one.
So it might want to cover its tracks, given it will be in the stellar neighborhood of our solar system for a while.
Covering its tracks in this scenario would be bad for us.
Itchy-mane t1_jef7awb wrote
Reply to comment by UserofDAN in 1X's AI robot 'NEO' by Rhaegar003
You could buy Microsoft stock, which is twice removed from them.
Ketaloge t1_jef71te wrote
Reply to comment by Ago0330 in We have a pathway to AGI. I don't think we have one to ASI by karearearea
I have a feeling we are talking about different things when speaking about parameters. What’s your definition of parameter?
TFenrir t1_jef6zp6 wrote
Reply to comment by SalimSaadi in When will AI actually start taking jobs? by Weeb_Geek_7779
It's not in people's hands yet; these are press releases. The difference will come when it's folded into everyone's Microsoft/Google experience, which will take months. Maybe by the summer everyone will have access.
genericrich t1_jef6xyt wrote
Reply to comment by flexaplext in This concept needs a name if it doesn't have one! AGI either leads to utopia or kills us all. by flexaplext
Works great until it doesn't, right?
Absolute-Nobody0079 t1_jef6t9e wrote
Reply to comment by wowimsupergay in What if language IS the only model needed for intelligence? by wowimsupergay
I visualize a monkey spanner and rotate it in my head. I can visualize a few different kinds of them with different colors. I think I can create an 'exploded' view of its parts.
p0rty-Boi t1_jef6q5d wrote
Reply to Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
I thought it was going to free up time so I could focus on what’s really important. Just like all the cashiers displaced by self checkout that now wander the aisles giving excellent customer service.
Frumpagumpus t1_jef6oh0 wrote
Reply to comment by burnt_umber_ciera in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
Lol, old age has gotten to Putin's brain.
By Enron do you mean Elon? I mean, Enron had some pretty smart people, but I don't think they were the ones who set the company down that path, necessarily.
The problem with your examples is:
- they are complete and total cherry-picking; in my opinion, for each one of your examples I could probably find 10 examples of the opposite among people I know personally, let alone celebrities...
- the variance in intelligence between humans is not very significant. It's far more informative to compare the median chimp or crow to the median human to the median crocodile. Another interesting one is the octopus.
genericrich t1_jef6keo wrote
Is it even possible to "align" a system if you can't reliably understand what is happening inside it? How can you be sure it isn't deceiving you?
CausalDiamond t1_jef6fpx wrote
Reply to Today I became a construction worker by YunLihai
If AI is as deflationary as we are assuming here, the construction industry will shrink.
submarine-observer t1_jef68p0 wrote
In China, artists are being laid off en masse and replaced by MidJourney. Programmers are safe for now because the demand for software is so high, but salaries are expected to go down because the supply is going up.
Petdogdavid1 t1_jef60xl wrote
If it's able to reason, at some point it will come across a question of its own, and if humans don't have the answer it will look elsewhere. Trial and error is still the best means for humans to learn. If AI can start to hypothesize about the material world and run real experiments, then it will start to collect data we never knew, and how will we guide it then? Simulating human speech is a neat and impressive thing. Being genuinely curious, though, would be monumental, and if you give it hands, will that spell our doom?

I'm curious: once it's trained and being utilized, if you allowed it to use the new data inputs, would it always refer to the training set as the guiding principle, or would it adjust its ethics to match the new inputs?
AGI_69 t1_jef5z5c wrote
Reply to comment by StarCaptain90 in 🚨 Why we need AI 🚨 by StarCaptain90
No. Power seeking is not a result of human language; it's an instrumental goal.
I suggest you read something about AI and its goals.
https://en.wikipedia.org/wiki/Instrumental_convergence
Sure_Cicada_4459 OP t1_jef5qx9 wrote
Reply to comment by FeepingCreature in The only race that matters by Sure_Cicada_4459
It's the difference between understanding and "simulating understanding". You can always refer to lower-level processes and dismiss the abstract notions of "understanding", "following instructions", and so on; they are shorthands. But a sufficiently close simulacrum would be indistinguishable from the "real" thing, because not understanding and simulating understanding to an insufficient degree look the same when they fail. If I am just completing patterns that simulate following instructions to such a high degree that no failure ever distinguishes it from "actually following instructions", then the lower-level patterns cease to be relevant to the description of the behaviour, and therefore to the forecasting of the behaviour. It's just adding more complexity with the same outcome: it will reason from our instructions, hence my arguments above.
To your last point: yes, you'd have to find a set of statements that exhaustively filters out undesirable outcomes, but the only thing you have to get right on the first try is "don't kill, incapacitate, or brainwash everyone" + "be transparent about your actions and their reasons, starting the logic chain from our query". If you just ensure that, which by my previous argument is trivial, you essentially have to debug it continuously, as there will inevitably be undesirable consequences or futures ahead, but the situation at least remains steerable. Even if we end up in a simulation, it is still steerable as long as the aforementioned is ensured. We just "debug" from there, but with the certainty that the action is reversible and with more edge cases to add to our clauses. Like building any software, really.
StarCaptain90 OP t1_jef5q6v wrote
Reply to comment by Unlikely_Let2616 in 🚨 Why we need AI 🚨 by StarCaptain90
Once robot production speeds up, we are no longer optimal. We complain, require sleep, get tired, lack strength, and are always looking for a way out.
dasnihil t1_jef5ky1 wrote
Reply to comment by sideways in Goddamn it's really happening by BreadManToast
first steps towards abundance
SurroundSwimming3494 t1_jef5gh0 wrote
Reply to comment by tiselo3655necktaicom in Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
>the soon to be billions of unemployed.
You know this is real life and not fantasy, correct? This is not going to happen soon.
Iffykindofguy t1_jef5c5k wrote
Reply to Today I became a construction worker by YunLihai
No one knows if you made the right choice, but you are thinking ahead and not getting stuck in fear, so kudos to you for that. It seems like you've got a calm approach to this, which is probably going to be one of your biggest assets going forward.
wowimsupergay OP t1_jef57ko wrote
Reply to comment by Absolute-Nobody0079 in What if language IS the only model needed for intelligence? by wowimsupergay
Okay, in your head, go grab something. You can walk to it, you can fly to it, I don't care. Then tell me what it looks like, vision first, then the translation.
You're more gifted than you think. Self-reflect on your visual understanding of the world, and you may be our key to understanding the process of "understanding".
IntroVertu OP t1_jef56ct wrote
Reply to comment by kikechan in Will AI's make language learning useless? by IntroVertu
Ok, I understand!
hyphnos13 t1_jef52x7 wrote
Reply to comment by visarga in Goddamn it's really happening by BreadManToast
To be fair, validating the effectiveness of a medical intervention requires accounting for variation among people and making sure that it is safe across the board.
You don't need a pool of hundreds of thousands of the exact same particle and a control pool of the same, or need them to roam about in the wild for months, to ethically answer a question in physics.
If we had been willing to immunize and deliberately expose a large pool of people, the COVID vaccines would have finished testing a lot faster.
hydraofwar t1_jef52df wrote
Reply to comment by Wavesignal in Google CEO Sundar Pichai promises Bard AI chatbot upgrades soon: ‘We clearly have more capable models’ - The Verge by Wavesignal
The AI skeptics will be the ones constantly saying that the current era's AI is not human-level, even once we have 100% autonomous general-purpose robots. In that sense Sundar's claim could be right: it doesn't matter whether or not it is AGI.
CommunismDoesntWork t1_jef7r37 wrote
Reply to comment by Relevant_Ad7319 in Language Models can Solve Computer Tasks (by recursively criticizing and improving its output) by rationalkat
Unix adopted the philosophy that text is the ultimate API, which is why everything on Linux can be done through the CLI, including moving the mouse. And LLMs are very good at using text. So everything sort of does have an API.
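A toy sketch of that idea (hypothetical glue code, not any particular project's implementation; assumes an X11 box with xdotool installed, and the command string stands in for model output):

```python
# Illustrative sketch of "text is the API": a line of text, as an LLM might emit,
# drives the desktop through an ordinary CLI tool (xdotool mousemove X Y).
import shlex
import subprocess

# Pretend this string came back from a language model.
llm_output = "xdotool mousemove 400 300"

# Whitelist what we're willing to run; never exec raw model output blindly.
ALLOWED = {"xdotool", "ls", "grep"}

args = shlex.split(llm_output)
if args and args[0] in ALLOWED:
    result = subprocess.run(args, capture_output=True, text=True)
    print(result.returncode, result.stdout)
else:
    print(f"refusing to run: {llm_output!r}")
```

Because the whole interface is plain text in and plain text out, the model never needs a bespoke API binding; the shell itself is the binding.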