Recent comments in /f/singularity

johanknl t1_jecpodm wrote

Of course. I never claimed otherwise. Whether something is well-produced, or has negative effects on a community, can absolutely be argued in objective terms.

The fact that a rule like "good colour balance in a painting is important" is itself subjective does not mean we cannot then go on to say that someone objectively produced a better result under that subjective ruleset.

That only works when you focus in on narrow details like that, though. Saying that something is "beautiful" is not in any way objective.

1

pleasetrimyourpubes t1_jecpgjd wrote

Many "Twitter famous AI people"* have turned on him for the TIMES article / Lex interview, when just a few days ago they were bowing at his feet. Yud is gonna for sure expand his blocklist since he is quite famously thin skinned.

Lex's tweet about weak men gaining a little power was almost certainly about Yud, because Yud wanted to leave the youth with the wisdom that "AI is going to kill you before you grow up."

The TIME article was completely asinine.

*who may or may not know shit about AI.

9

journalingfilesystem t1_jecoua6 wrote

Here are my two cents. In a sense, it has already begun. We have seen massive tech layoffs recently. There are other factors behind them, but the idea that fewer coders with AI tools can be more effective than more coders without them is probably one that companies are taking into account.

From my own experience, institutions have a good deal of inertia. It takes time and resources to change the way a company does things. People stick with what they are used to as long as it works, even in the presence of newer, more efficient options. If a new option is a big enough improvement people will switch, but it won't be an instant process.

Add to this all the technologies that have been hyped as the next great thing and turned out either to go nowhere or to be much less revolutionary than promised. The hype cycle is something experienced decision-makers have learned to largely ignore.

Basically, I think what is going to happen is this: AI tech will continue to advance at a rapid pace. Then a few nimble, forward-thinking companies will start using it in a major way. It will then take a few quarters of financial reports for most other companies to realize that this is the real deal. Only then will we start seeing really dramatic changes to the job market.

33

acutelychronicpanic t1_jecoprq wrote

My mental model is based on this:

Approximate alignment will be much easier than perfect alignment. I think it's achievable to have AI with superhuman insight that is well enough aligned that it would take deliberate prodding or jailbreaking to get it to model malicious action. I would argue that in many domains, GPT-4 already fits this description.

Regarding roughly equivalent models, I think the intelligence required to take action in the world increases exponentially as you attempt to do more complicated things or act further into the future. My intuition comes from the difficulty of predicting the future in chaotic systems, and society is one such system. I don't think 10x intelligence will necessarily lead to a 10x increase in competence; I strongly suspect we underestimate the complexity of the world. This may buy us a lot of time by flattening the peaks in the global intelligence landscape, enough that humans using narrow AI and proto-AGI may have a good chance.
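To make that intuition concrete, here's a minimal sketch (my own toy example, nothing AI-specific; the logistic map, the r = 4.0 parameter, and the 1e-12 perturbation are just illustrative choices): in a textbook chaotic system, the gap between two nearly identical starting states roughly doubles every step, so each extra step of foresight costs about one more bit of initial-state precision.

```python
# Toy illustration: exponential divergence in a chaotic system.
# The logistic map with r = 4.0 is fully chaotic; its Lyapunov exponent
# is ln(2), so nearby trajectories separate roughly 2x per iteration.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-12  # two initial states differing by one part in 10^12
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step}: separation ~ {abs(a - b):.1e}")

# Typical output: ~1e-9 at step 10, ~1e-6 at step 20, ~1e-3 at step 30,
# and order 1 by around step 40, i.e. twelve digits of precision bought
# only ~40 steps of lookahead, no matter how smart the predictor is.
```

Doubling the prediction horizon would require squaring the initial precision, and no amount of raw intelligence changes that scaling.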

What I do know is that, regardless of whether the AI alignment problem can be solved, the largest institutions currently working on AI are not themselves well aligned with humanity. The ones that would keep working despite a global effort to slow AI especially cannot be trusted.

I'm willing to read any resources you want to point me to, or any arguments you want to make. I'd rather be corrected if possible.

1

AmputatorBot t1_jecn5e8 wrote

It looks like OP posted an AMP link. These should load faster, but AMP is controversial because of concerns over privacy and the Open Web.

Maybe check out the canonical page instead: https://www.businessinsider.com/ai-researcher-quit-google-openai-bard-training-on-chatgpt-report-2023-3


I'm a bot | Why & About | Summon: u/AmputatorBot

1

agonypants t1_jecn2g1 wrote

Yudkowsky literally suggests that it would be better to have a full-scale nuclear war than to allow AI development to continue. He's a dangerous, unhinged fucking lunatic and Time Magazine should be excoriated for even publishing his crap. EY, if you're reading this - trim your goddamn eyebrows and go back to writing Harry Potter fan-fic or tickling Peter Thiel's nether regions.

7

AlFrankensrevenge t1_jecmsuz wrote

Image generators and self-driving cars don't create the same kind of extensive risk that GPT-4 does. GPT-4 is much more directly on the path to AGI and superintelligence. Even now, it will substantially impact something like 80% of jobs, according to OpenAI itself. The other technologies are a big deal, but they don't ramify through the entire economy to the same extent.

4

smokingPimphat t1_jecmp4j wrote

I don't think this will ever really happen in any large-scale way, for a few reasons:

People don't actually know what they want in most cases. This is especially true with regard to creativity.

Very few people want to think about what they want to see; they are happy to choose from available options. I will admit that AI-generated content will be part of those options at some point in the future, but that is a long way off, and people are not going to just stop making all the things they make now. It is far more likely that humans will create most things, and AI will then optionally be used to customize them in various ways, should someone actually desire it.

To take your example of a movie-generation AI: IMO it's far more probable that Disney will make a movie and you will optionally be able to ask an AI to do things like extend a plot point or make the fights longer. But even that is not something most people are going to be willing to do. They just want to see someone else's story; they aren't going to write their own.

There are so many things that people "could" do themselves but choose to pay others to do. If machines can be leveraged to offer more options, that is probably a good thing, but to think that the entire entertainment industry will be replaced is, IMO, silly. There will always be humans in the mix, as the machine will never truly know what we want when we barely know it ourselves.

1

Readityesterday2 t1_jecmjn6 wrote

I'll share my take on why it hasn't happened.

Jobs are not just about skills. They are also about responsibilities. And you can't fire AI, or hold it accountable. Unless you wanna hear "I'm just a model, dumbass, what did ya think?" 😂

Seriously. There’s more to a hire than churning butter.

−2