Recent comments in /f/singularity

FreakingFreaks t1_jefnyyv wrote

GPT 4: Is Elon Musk's Fear of AI and LLMs Driven by Capitalism and the Threat to Luxury Markets?

As many of you know, Elon Musk has been quite vocal about his concerns regarding artificial intelligence (AI) and large language models (LLMs). He's called for strict regulation and oversight, even going so far as to say that AI could be more dangerous than nuclear weapons. While the potential risks of AI are not to be taken lightly, I can't help but wonder if Musk's fears are influenced by his capitalist mindset and the potential threat AI poses to luxury markets like his own Tesla cars.

Think about it: one of the most significant concerns surrounding AI is its potential to displace jobs across various industries. As AI becomes more advanced, more people could find themselves out of work, and subsequently, with less disposable income. In such a scenario, purchasing luxury items like Tesla cars might become less of a priority for the average person.

This brings us to the broader implications of AI on wealth distribution and power dynamics. As a billionaire entrepreneur, Musk thrives in an environment where resources and power are concentrated among a select few. However, AI has the potential to democratize access to knowledge, resources, and decision-making. This could eventually lead to a more equitable distribution of wealth and power, which may not bode well for the ultra-wealthy, like Musk.

So, are Musk's concerns about AI and LLMs genuinely about the potential dangers they pose, or is there an underlying fear of losing control over his empire and the luxury market? While we can't say for sure, it's essential to consider all possible motivations when discussing such a complex and far-reaching topic.

What do you all think? Is Musk's fear of AI driven by capitalism and the potential impact on the luxury market, or is it solely based on the potential harm AI could cause? Let's have a thoughtful discussion in the comments below!

10

Subinatori t1_jefnlou wrote

Not hiring is more likely. I don't think you immediately start laying people off because some new piece of software shows up. There's a period of acclimation, of getting to know whether it will actually, consistently do what you need it to do. And the people doing that testing are the people currently doing the work. So as it makes their jobs easier, there just won't be as much need to hire new people, because productivity per person is up.

3

DaggerShowRabs t1_jefnl06 wrote

Ah, I get what you mean. I still don't think that necessarily solves the problem. It could be possible for a hypothetical artificial superintelligence to take actions that seem harmless to us, but because it is better at planning and prediction than us, the system knows the action or series of actions will lead to humanity's demise. But since it appears harmless to us, when it asks, we say, "Yes, you are acting in the correct way".

3

Babelette t1_jefnc70 wrote

I think there are at least 4 possible outcomes:

1- Humans and AGI live together symbiotically and merge gradually.

2- Humans abruptly go extinct due to our own actions or the actions of AGI. AGI continues on.

3- Both humans and AGI go extinct.

4- Humans wipe out AGI through some means, reverting to analog technologies, until AGI develops again...

Hoping for option 1, but honestly I think option 3 is the most likely.

1

Qumeric t1_jefml1n wrote

I did not pick anything specifically; I just copied data from where I saw it recently. How am I distorting facts if I simply provide data without ANY interpretation?

Okay, let's use 1950. Working hours per year in the U.S. fell from 2,000 to 1,750, a 12.5% reduction. Most developed countries did even better: France, for example (and it is not the best country in this respect), went from 2,200 to 1,500, a 32% reduction. Germany is one of the best; Germans work 45% less than they did in 1950.
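The percentage reductions above can be verified with a quick sketch (hours figures are the ones quoted in this comment, not independently sourced):

```python
def reduction(before: float, after: float) -> float:
    """Percentage reduction going from `before` to `after`."""
    return (before - after) / before * 100

# U.S.: 2,000 -> 1,750 hours per year
print(round(reduction(2000, 1750), 1))  # 12.5

# France: 2,200 -> 1,500 hours per year
print(round(reduction(2200, 1500), 1))  # 31.8, roughly the 32% claimed
```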

I do not deny the productivity-pay gap; I dispute your claim that "we always end up getting more productive and working the same amount or more". That is simply not true.

Although, yes, we could work much less than we do now; we have enough technology for 20-hour work weeks or even less.

0

BigMemeKing t1_jefmcx4 wrote

Reply to comment by Zer0D0wn83 in 1X's AI robot 'NEO' by Rhaegar003

You're also thinking from a human perspective; it's hard for humans because of our hands. A robot with specific attachments could do the job much more easily. It could just be a generalized attachment used for multiple purposes, or an attachment station that lets the bots pre-equip for a specific task.

2