Recent comments in /f/singularity

confused_vanilla t1_jebkw28 wrote

Mostly shrugs and "I can still tell it's an AI writing that" or "It won't ever be as good as a human". Straight up denial from almost everyone. A few thought it was cool though and started playing around with it.

40

scooby1st t1_jebjb4l wrote

The word you're looking for is astroturfing.

Small amount of redditors can influence thousands of the morons until the "discussion" is a bunch of people in a circlejerk where everyone gets to be mad and validated.

I sincerely hope more people in the world have the ability to critically think than I am seeing on the internet.

I disagree with that open letter because the US doesn't have the ability to control China from doing the same research without going to war. So it's prisoner's dilemma and we don't have much choice to either continue advancing the technology ourselves or shoot ballistic missiles at China if they start doing the same and start getting scarily good at it. We'd rather not get to that point.

28

ninjasaid13 t1_jebjax5 wrote

>Quite frankly, I trust the morality of Google/Microsoft/OpenAI far more than I do the morality of our pandering, corrupt, tech-illiterate "leaders."

are you talking about U.S. leaders or leaders in general?

0

AvgAIbot t1_jebivu0 wrote

Hey there, great question! It's still a topic of debate, but let's break down the potential relationship between quantum computers and AGI (Artificial General Intelligence) for you.

Quantum computers utilize qubits, which can exist in multiple states simultaneously, allowing them to perform many calculations at once. This is known as quantum superposition. They also take advantage of a concept called entanglement, which helps to achieve exponentially faster computation speeds.

Now, for AGI, we're talking about machines that possess the ability to understand, learn, and apply knowledge across a wide range of tasks and domains, similar to human intelligence. Current AI systems, like the one you're interacting with right now, are narrow AI and are specialized in specific tasks.

There's a hypothesis that quantum computing could play a significant role in the development of AGI due to its potential to tackle complex problems and optimize algorithms in ways that classical computers can't. However, it's important to note that quantum computers are still in their infancy and face several technical challenges, such as error correction and scaling up the number of qubits.

Moreover, AGI is not just about computational power but also about creating algorithms and frameworks that can truly replicate human-like intelligence. Quantum computing may provide an acceleration in the development of AGI, but it won't single-handedly solve the problem.

In summary, while quantum computers could potentially contribute to the development of AGI, it's important to understand that they are just one piece of the puzzle. We still need to make significant advances in AI algorithms and our understanding of intelligence itself to fully realize AGI. So, while it's an exciting prospect, it's not a guaranteed outcome.

2

squareoctopus t1_jebhihh wrote

The difference between “I just bought the most cancerous social network and made it even worse, so I want you to stop AI for 6 months because it can be damaging” and “let’s work together”.

Gavin Fucking Musk, Elon Fuckin Belson

6

Caffdy t1_jebgim4 wrote

I don't think we will be able to realize when AI cross the rubicon, it already exhibit misleading, cheating and lying behaviors akin to us, an ASI can very well manipulate anyone and any test/safety protocol to operate covertly and undermine our power as an species; it will be too late when we finally realize

5

Caffdy t1_jebfvjx wrote

> The prize isn't necessarily just getting rich, its also creating a society where being rich doesn't matter so much

This phrase, this phrase alone say it all. Getting rich and all the profits in the world won't matter when we will be a inch-step close to extintion; from AGI to Super Artificial Intelligence it won't take long; we are a bunch of dumb monkeys fighting over a floating piece of dirt in the blackness of space, we're not prepared to understand and undertake on the risks of developing this kind of technology

−1

Mortal-Region t1_jebev1o wrote

Yeah, bonehead move -- seems they just set up a simple web form.

But what's suspicious to me is that the letter specifically calls for a pause on "AI systems more powerful than GPT-4." Not self-driving AI or stable diffusion or anything else. GPT-4 is the culprit that needs to be paused for six months.

Then, of course, there's the problem that there's no way to pause China, Russia, or any other authoritarian regime. I've never heard a real solution to that. It's more like the AI-head-start-for-authoritarian-regimes letter.

27