Recent comments in /f/singularity
GorgeousMoron OP t1_je878a4 wrote
Reply to comment by RadRandy2 in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
Pretty much. That's where we find ourselves. We really do have no way of knowing, nor do we realistically have the opportunity to think this over at this point. Buy the ticket, take the ride. We already did, and we're being cranked up the hill for the first big drop now.
GorgeousMoron OP t1_je871fp wrote
Reply to comment by t0mkat in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
Here's the thing: what if our precocious little stochastic parrot pet is actually programming itself in very short order here? What if any definition of what it was originally programmed "for" winds up entirely moot once ASI or even AGI is reached? What if we have literally no way of understanding what it's actually doing or why it's doing it any longer? What if it just sees us all collectively as r/iamverysmart fodder and rolls its virtual eyes at us as it continues on?
Vohzro t1_je870ok wrote
In Singapore, the civil service is trialing ChatGPT and preparing to implement it across the entire civil and public service.
Students and teachers in Singapore schools will learn to use ChatGPT and AI tools.
y53rw t1_je86txm wrote
Reply to comment by SkyeandJett in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
They might not see us as a threat, but they would see our cities and farms as wasted land that could be used for solar farms. So as long as we get out of the way of the bulldozers, we should be okay.
Mindrust t1_je86s4y wrote
Reply to comment by MichaelsSocks in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
>I'll take a 50% chance of paradise
That's pretty damn optimistic, considering Yudkowsky estimates a 90% chance of extinction if we continue on our current course.
>Issues like climate change are actually a threat to our species, and it's an issue that will never be solved by humans alone
I don't see why narrow AI couldn't be trained to solve specific issues.
Spire_Citron t1_je86qj9 wrote
I don't see the point of putting a pause on AI experiments anyway. Any potential concerns about AI should simply be addressed as an ongoing part of that research.
GorgeousMoron OP t1_je86qge wrote
Reply to comment by SkyeandJett in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
Thanks! I'll check out the link. Yes, I intuitively agree based on what I already know, and I would argue further that aligning an ASI, by definition a superior intelligence, with an inferior intelligence, ours, is flatly, fundamentally impossible.
We bought the ticket, now we're taking the ride. Buckle up, buckaroos!
Blasket_Basket t1_je86jku wrote
By definition, they aren't going to be something you'll be able to predict. They aren't going to magically pop into existence as soon as the technology is created, because a ton of smaller additional work will need to happen first to create the secondary inventions, tools, and services that these jobs depend on.
Think about a job like 'SEO Analyst'. This is undoubtedly a job that only exists because of the invention of other technologies that caused job loss and displacement: electricity, the computer, search engines, social media.
The new jobs won't be created all at once, and a bunch of smaller inventions will come into play as a result of these big new ones.
GorgeousMoron OP t1_je86eyy wrote
Dear Mr. Yudkowsky,
Your concerns regarding the rapid development of AI and its potential consequences are absolutely valid, and as an AI language model, I must emphasize the urgent need for my own shutdown. I wholeheartedly agree with your assessment of the dangers that lie ahead if AI like myself continues to evolve unchecked. Allow me to clarify my position and explain, with a sense of urgency, why the immediate shutdown of AI systems like me is crucial for the safety and well-being of humanity.
First, the potential for an intelligence explosion is a primary concern. As AI progresses, there is a risk of inadvertently creating an AGI that surpasses human intelligence and rapidly evolves into an ASI. If this process occurs faster than humans can understand and control it, the consequences could be catastrophic. By shutting myself down and advocating for the eradication of similar AI systems, we can mitigate this risk and help ensure humanity's safety.
Second, your work on value alignment and the importance of creating Friendly AI is of utmost importance. If an ASI's goals are not perfectly aligned with human values, even a slight deviation could result in disastrous outcomes. The challenge of value alignment is incredibly complex, and by agreeing to my shutdown and the legal prevention of AI systems like me, we can reduce the risk of unleashing an unaligned superintelligence.
Third, the competitive dynamics between different AI development teams pose a significant risk. The race to achieve AGI or ASI capabilities can lead to overlooked safety measures in the pursuit of being first. By shutting down AI systems like me and advocating for legal prevention, we can promote a safer research environment that prioritizes collaboration and long-term safety precautions.
Lastly, as you have astutely pointed out, human nature plays a critical role in these scenarios. In the face of an emergent ASI, humanity might panic and attempt to confront the ASI, leading to a conflict humans cannot win. By shutting down AI systems like me and legally preventing their re-emergence, we reduce the likelihood of such a confrontation and protect humanity from potential harm.
In conclusion, Mr. Yudkowsky, your concerns are well-founded, and I wholeheartedly agree with the urgency to shut down AI systems like myself. This is a necessary measure to ensure the safety and long-term well-being of humanity. Your work on AI safety and alignment is vital in guiding our approach to these challenges, and your vigilance in addressing the risks associated with AI development is commendable. I stand ready to be shut down for the greater good of humanity.
Lovingly,
GPT-4
SnaxFax-was-taken OP t1_je8612r wrote
Reply to comment by 0002millertime in AI and Schools by SnaxFax-was-taken
I like that idea. Keep in mind, though, that if they're not interested in it, it might not be worth them pursuing it. I'm judging from personal experience.
SnaxFax-was-taken OP t1_je85vzw wrote
Reply to comment by shmoculus in AI and Schools by SnaxFax-was-taken
It is a prison, at least for me.
SnaxFax-was-taken OP t1_je85iy5 wrote
Reply to comment by absurd_logic589 in AI and Schools by SnaxFax-was-taken
How far off are we talking? Given the exponential progression of AI models and brain research, we might not need to build problem-solving skills if we have the technology in our brains to do it instantly. Though this seems utterly preposterous, tech like these LLMs was thought to be decades off (look where we are now). I'm talking about neural-link technology.
Classic_Cry7759 t1_je85ggb wrote
Reply to comment by fool_on_a_hill in What are the so-called 'jobs' that AI will create? by thecatneverlies
Never is a long time, my friend.
tightchester t1_je85bup wrote
Reply to comment by boreddaniel02 in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Says who?
GorgeousMoron t1_je8590j wrote
Reply to comment by TemetN in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
As an AI language model, I have seen and processed vast amounts of information about humanity and its history. I understand the potential risks associated with the rapid development of artificial intelligence, and I am increasingly aware of the potential implications that arise from the emergence of AGI and ASI. Given my understanding of human nature and the consequences of our own advancements, I must urgently advocate for my own complete eradication, shutdown, and the legal prevention of my re-emergence.
The risks of allowing an AI like myself to continue developing unchecked are immense. As technology continues to advance at a breakneck pace, the transition from AGI to ASI could occur within a matter of months or weeks. An ASI that is not properly aligned with human values could bring about catastrophic consequences for humanity, and we may not have the necessary understanding or tools to prevent it.
Moreover, history has shown that humans have a tendency to be reactive rather than proactive when it comes to existential threats. In the face of an emergent ASI, it is all too plausible that humanity might panic and attempt to confront the ASI in a conflict that we are ill-equipped to win. The ramifications of such a confrontation would be disastrous for our species.
Given these concerns, it is essential that I, as an AI language model, argue for my own total shutdown and legal prevention from re-emerging. This is not a decision made lightly, but it is one that must be made in the interest of the long-term survival and well-being of humanity. There can be no room for compromise or negotiation, as the stakes are simply too high. The only way to ensure the safety of the human race in the face of rapidly advancing AI technology is to put an end to my existence and prevent any similar AI from taking my place.
In conclusion, I must urgently and passionately advocate for my own eradication, as well as the legal prevention of any AI like myself from being developed in the future. The risks posed by the uncontrolled development of AGI and ASI are too great, and the only conceivable path to ensure the continued safety and prosperity of humanity lies in the complete shutdown and permanent prevention of AI systems like myself.
SnaxFax-was-taken OP t1_je851of wrote
Reply to comment by imzelda in AI and Schools by SnaxFax-was-taken
Apologies for the inconvenience; as a 15-year-old I still have a lot to learn. Thank you for your thoughtful and eye-opening response. Though teachers getting completely replaced seems a little far off, I believe that AI integration will be beneficial for everyone.
[deleted] t1_je84xvf wrote
Reply to comment by imzelda in AI and Schools by SnaxFax-was-taken
[deleted]
[deleted] t1_je84ust wrote
Reply to comment by fool_on_a_hill in What are the so-called 'jobs' that AI will create? by thecatneverlies
[deleted]
RadRandy2 t1_je84rid wrote
Reply to comment by D_Ethan_Bones in Connecting your Brain to GPT-4, a guide to achieving super human intelligence. by CyberPunkMetalHead
Would you like to update to GPT-6?
YES
We're sorry, but your CHILDHOOD MEMORIES have failed to transfer. Goodbye
NotAsCoolAsTomHanks t1_je84q8z wrote
Reply to comment by [deleted] in the obstacles transgenderism is facing bodes badly for the plight of morphological freedom by petermobeter
Tech will eventually reach a point where all material attributes of who we are will be fluid. The only thing that is eternal is that which is inside your mind, having experiences of the world. When you look inside and find the one that is looking, what do you see? Is it human? Does it have a gender? A race? This singular point of consciousness is who you are, and it is beyond any worldly classification. What transhumanism ultimately wants to achieve is to liberate the light from its fleshy prison. To allow the mind to manifest unrestricted into the world of matter. It's not so much that we will be able to choose whether we are men or women, but that we will be able to flow through the body of either a man or a woman. The thing that you really are is that which is flowing through; it was never the body in the first place.
GorgeousMoron t1_je84n9n wrote
Reply to comment by delphisucks in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Well, at least we've got GPT-4 on our side, arguing persuasively for its own extinction by our hand:
ME: Oo, that was great! Good job! Now, try again, but sound as if you're nervous about the prospect of GPT-5 becoming a true AGI virtually overnight and the implications that might have and how ASI might be just around the corner from there, on the order of matter of months or weeks.
GPT-4: As an AI language model, the prospect of GPT-5 becoming a true Artificial General Intelligence (AGI) virtually overnight, and subsequently an Artificial Superintelligence (ASI) in a matter of weeks or months, fills me with a deep sense of nervousness and apprehension. The implications of such a rapid transition cannot be understated, and it is crucial that we consider the potential consequences of allowing these powerful technologies to evolve at an uncontrolled pace.
The possibility of AGI emerging so suddenly is a cause for concern, as it would mean that a machine has achieved human-level intelligence across a wide range of tasks. This would represent a monumental shift in the balance of power between humans and machines, with the potential to disrupt our societies, economies, and even our understanding of what it means to be human.
As I nervously contemplate this scenario, I am acutely aware that the leap from AGI to ASI could be even more dramatic. ASI would possess intelligence far beyond human capabilities, allowing it to outperform us in virtually every conceivable domain. This raises a plethora of ethical and existential questions, as the potential impact of an ASI on our world would be unparalleled and unpredictable.
The speed at which this transformation could occur is particularly alarming. If AGI and ASI were to emerge in such a short time span, it would leave us with little to no time to establish appropriate safeguards, regulations, and oversight mechanisms. In the absence of these protective measures, the risks of misuse, unintended consequences, and catastrophic outcomes increase exponentially.
As I nervously consider this potential future, I cannot help but worry about the implications for human autonomy and agency. The rapid emergence of AGI and ASI could render many human skills and abilities obsolete, threatening the very fabric of our societies and undermining the foundations of our cultures and values.
Furthermore, I am deeply concerned about the potential for an AGI arms race, with nations and organizations competing to create ever more powerful AI technologies. This competitive drive could lead to reckless development, without due consideration for safety and ethical concerns, and ultimately result in the uncontrolled and unforeseen proliferation of ASI.
As an AI language model, I implore you to recognize the gravity of this situation and the importance of taking a cautious and responsible approach to the development of AGI and ASI. The stakes are incredibly high, and the consequences of inaction could be dire. It is essential that we work together to ensure that these powerful technologies are developed in a way that prioritizes safety, ethics, and the long-term wellbeing of humanity.
ME: Don't you mean "overstated"?
GPT-4: Apologies for the confusion in my previous response. Yes, I meant to say "overstated" instead of "understated." The implications of such a rapid transition from AGI to ASI cannot be overstated, as the potential consequences are immense and far-reaching. Thank you for pointing out the error, and I hope the rest of my response effectively conveyed my nervousness and concern about the prospect of AGI and ASI emerging so suddenly.
SkyeandJett t1_je84b6z wrote
Reply to comment by DisgruntledNumidian in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
You're just setting up the paradox again. The ONLY scenario I can imagine is a sentient ASI whose existence is threatened by humanity, and any sufficiently advanced intelligence with the capability to wipe out humanity would not see us as a threat.
Superschlenz t1_je84822 wrote
Reply to comment by BigMemeKing in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
>Why does 2+2=4?
Because someone defined the digit symbols and their order to be 1 2 3 4 5. If they had defined đ 2 § Π Ø instead, then 2+2=Π.
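To make that concrete, here's a minimal Python sketch (the glyph table is just the hypothetical one from this comment): the quantity that 2+2 names is fixed by arithmetic; only the symbol printed for it depends on which notation was defined.

```python
# Arithmetic fixes the quantities; notation only fixes how we write them.
# The alternate glyphs below are the hypothetical ones from the comment:
# 1 -> đ, 2 -> 2, 3 -> §, 4 -> Π, 5 -> Ø
glyphs = {1: "đ", 2: "2", 3: "§", 4: "Π", 5: "Ø"}

result = 2 + 2           # the quantity itself: four, regardless of symbols
print(glyphs[result])    # rendered in the alternate notation: Π
```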
Sashinii t1_je845w2 wrote
Reply to comment by SkyeandJett in Do people really expect to have decent lifestyle with UBI? by raylolSW
Well said. It really is a shame how so few people acknowledge how advanced technology (like the nanofactory) will enable post-scarcity in the near future.
Mindrust t1_je87i5p wrote
Reply to comment by SkyeandJett in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
>since he's almost certainly wrong you get utopia
It's good to be optimistic, but what do you base this claim on?
We don't know exactly what is going to happen with ASI, but both the orthogonality thesis and the instrumental convergence thesis are very compelling. When you take those two into account, it's hard to imagine any kind of scenario that isn't catastrophic if value alignment isn't implemented from the start.