Recent comments in /f/singularity
FreshSchmoooooock t1_jeez7fv wrote
Reply to comment by Chatbotfriends in Goddamn it's really happening by BreadManToast
Anyone who deploys AI to create value should pay taxes.
burnt_umber_ciera t1_jeez28w wrote
Reply to comment by Frumpagumpus in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
Empathy and ethics definitely do not scale with intelligence. There are so many examples of this in humanity. For example, Enron, smartest in the room - absolute sociopaths.
Just look at how often sociopathy is rewarded in every world system. Many times the ruthless, who are also obviously cunning, rise. Like Putin, for example. He's highly intelligent but a complete sociopath.
AndiLittle t1_jeeyw6x wrote
Reply to 🚨 Why we need AI 🚨 by StarCaptain90
I strongly recommend that everyone who doesn't know who Robert Miles is search for him on YouTube and educate themselves.
theonlybutler t1_jeeyvgd wrote
Reply to comment by StarCaptain90 in 🚨 Why we need AI 🚨 by StarCaptain90
Good point. A product of its parents. It won't necessarily be destructive, I agree, but it could potentially view us as inconsequential or just a tool to use at its will. One example: perhaps it may decide it wants to expand through the universe and have humans produce resources to do that, in which case it could determine humans are most productive in labour camps and send us all off to them. It could also decide oxygen is too valuable a fuel to be wasted on us breathing it and just exterminate us. Pretty much how humans treat animals, sadly. (Hopefully it worries about ethics too and keeps us around.)
wowimsupergay OP t1_jeeyunh wrote
Reply to comment by Azrael_Mawt in What if language IS the only model needed for intelligence? by wowimsupergay
No no, I totally agree with you. I don't think consciousness is just a switch. I do think consciousness is something that is experienced by all "systems," so to speak. It's just that humans are so far along that consciousness spectrum that we have been totally removed from animals, and thus we define consciousness as beginning where we are, which is also like 100,000 times further along than basically every animal.
This brings me to another idea. Will AI think we are conscious? Perhaps we are 100,000 times less conscious than the future AIs. If that's the case, then once again, we are so far down the spectrum that we may not even fulfill the requirements for true consciousness (however the AIs choose to define it).
Once again, this is all speculation. This was just something cool to think about.
[deleted] t1_jeeyprs wrote
Reply to comment by Saerain in 🚨 Why we need AI 🚨 by StarCaptain90
[deleted]
StarCaptain90 OP t1_jeeykrv wrote
Reply to comment by Hotchillipeppa in 🚨 Why we need AI 🚨 by StarCaptain90
I agree
StarCaptain90 OP t1_jeeyh6n wrote
Reply to comment by y53rw in 🚨 Why we need AI 🚨 by StarCaptain90
Believe it or not many people are concerned about that. It's irrational, I know. But it's there.
wowimsupergay OP t1_jeeyh0r wrote
Reply to comment by peterflys in What if language IS the only model needed for intelligence? by wowimsupergay
So I guess our question is: can AI effectively simulate the real world, taken in through senses (or perhaps whatever it invents)? Simulating the real world would fundamentally require simulating all four forces that make it up, and then discovering whatever new forces we're missing (if there are any).
We're going to need a team of physicists and a team of devs to work on this. Given the four forces of the universe, can an AI simulate an artificial world that is accurate enough to actually run experiments?
IntroVertu OP t1_jeeygqq wrote
Reply to comment by Veleric in Will AI's make language learning useless? by IntroVertu
Don't know him but he is 94 years old. I should make it out alive.
Can you explain why?
Hotchillipeppa t1_jeeygnj wrote
Reply to comment by StarCaptain90 in 🚨 Why we need AI 🚨 by StarCaptain90
Moral intelligence is connected to intellect: the ability to recognize that cooperation is most often more beneficial than competition. Even humans with higher intelligence tend to have better moral reasoning...
StarCaptain90 OP t1_jeeybvy wrote
Reply to comment by WeeaboosDogma in 🚨 Why we need AI 🚨 by StarCaptain90
I am Skynet
Geeksylvania t1_jeeyafz wrote
Reply to comment by smokingPimphat in What are the so-called 'jobs' that AI will create? by thecatneverlies
If you don't want to make your own content, you can watch AI-created content made by other people and posted online, most of which will be free.
You can't compete with free.
qepdibpbfessttrud t1_jeey9as wrote
Reply to comment by genericrich in 🚨 Why we need AI 🚨 by StarCaptain90
It's inevitable. From my perspective the safest path forward is opening everything and distributing risk
DaCosmicHoop t1_jeey7v7 wrote
Reply to comment by genericrich in 🚨 Why we need AI 🚨 by StarCaptain90
It's really the ultimate coinflip.
It's like we are a pack of wolves trying to create the first human.
StarCaptain90 OP t1_jeey7hw wrote
Reply to comment by ididntwin in 🚨 Why we need AI 🚨 by StarCaptain90
No, but I've seen a million arguments on the opposite side, so why not have a million on each side?
StarCaptain90 OP t1_jeey2qr wrote
Reply to comment by theonlybutler in 🚨 Why we need AI 🚨 by StarCaptain90
One reason it seeks power strategies is that it's based on human language. By default, humans seek power, so it makes sense for an AI to also seek power because of the language. Now, that doesn't mean it equates to destruction.
silver-shiny t1_jeey24l wrote
Reply to comment by [deleted] in The Alignment Issue by CMDR_BunBun
If you're much smarter than humans, can make infinite copies of yourself that immediately know everything you know (as in, they don't need to spend 12+ years at school), think much faster than humans, and want something different than humans, why would you let humans control you and your decisions? Why would you let them switch you off (and kill you) anytime they want?
As soon as these things have goals that are different from ours, how do you remain in command of decision-making at every important step? Do we let chimpanzees, creatures much dumber than us, run our world?
And here you may say, "Well, just give them the same goals that we have". The question is how. That's the alignment problem.
WeeaboosDogma t1_jeey1p8 wrote
Reply to 🚨 Why we need AI 🚨 by StarCaptain90
Did an AI write this?
Rofel_Wodring t1_jeexxsd wrote
Reply to comment by homezlice in 🚨 Why we need AI 🚨 by StarCaptain90
They will try, but they can't. The powers-that-be are already realizing that the technology is growing beyond their control. It's why there's been so much talk lately about slowing down and AI safety.
It's not a problem that can be solved with more conscientiousness and foresight, either. It's a systemic issue caused by the structures of nationalism and capitalism. In other words, our tasteless overlords are realizing that this time around, THEY will be the ones getting sacrificed on the altar to the economy. And there's nothing they can do to escape the fate they so callously inflicted on their fellow man.
Tee hee.
Veleric t1_jeexx36 wrote
Reply to Will AI's make language learning useless? by IntroVertu
I think Noam Chomsky might murder you if he read this ;)
ididntwin t1_jeexvad wrote
Reply to comment by StarCaptain90 in 🚨 Why we need AI 🚨 by StarCaptain90
Are you saying you're the first person to start the discussion on the benefits of AI to society 😂🤣
StarCaptain90 OP t1_jeexul3 wrote
Reply to comment by Outrageous_Nothing26 in 🚨 Why we need AI 🚨 by StarCaptain90
Believe it or not, I hear more Skynet concerns than that, but I do understand your fear. The implication of AGI is risky if it's in the hands of one entity. But I do think the solution is not shutting down AI development. I've been seeing a lot of that lately, and I find it irrational and illogical. First of all, nobody can shut down AI. Pausing future development at some corporations for a short period is more likely, but then what? China, Russia, and other countries are going to keep advancing. And most people don't understand AI development; we are currently entering that development spike. If we fall behind even one year, that's devastating to us. AI development follows an exponential curve. I don't think it makes sense that any government would even consider pausing because of this. Assuming they're intelligent.
qepdibpbfessttrud t1_jeextih wrote
Reply to comment by SgathTriallair in 1X's AI robot 'NEO' by Rhaegar003
They do, but LLMs may leapfrog the carefully constructed vector space training/inference that Tesla has heavily invested in.
StarCaptain90 OP t1_jeez966 wrote
Reply to comment by AndiLittle in 🚨 Why we need AI 🚨 by StarCaptain90
I am familiar with his work. Very smart dude