genericrich

genericrich t1_j0vp0hm wrote

Nobody knows what "it" is, to begin with. AI is constantly being redefined as "stuff computers don't do yet".

If we agree that *all* LLMs are doing is applying a sophisticated statistical algorithm to choose the next relevant word, and that "extended LLMs" will be doing the same tricks with different techniques...does it make sense to at some point declare "we have AGI" when all we'll really have is a bunch of layered models doing the same sort of applied statistics?
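For what it's worth, here's a toy sketch of that "choose the next word" loop. The table and word names are made up purely for illustration (a real LLM scores a huge vocabulary with a transformer, not a lookup table), but the control flow is the point: score candidate next tokens, sample one, repeat.

```python
import random

# Toy next-token model: for each context word, a distribution over next words.
# These probabilities are invented just to make the loop concrete.
NEXT_WORD_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.3, "model": 0.2},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "model": {"predicts": 1.0},
}

def sample_next(word: str) -> str:
    """Pick the next word by sampling from the model's distribution."""
    dist = NEXT_WORD_PROBS.get(word, {"<end>": 1.0})
    words = list(dist.keys())
    weights = list(dist.values())
    return random.choices(words, weights=weights, k=1)[0]

def generate(max_words: int = 5) -> str:
    """Generate text one word at a time -- applied statistics all the way down."""
    out, current = [], "<start>"
    for _ in range(max_words):
        current = sample_next(current)
        if current == "<end>":
            break
        out.append(current)
    return " ".join(out)

print(generate())
```

Whether you stack one of these or a hundred of them in layers, each step is still just picking the statistically likely next token, which is the question above.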

Is that all human beings are? Is cognition just computation?

1

genericrich t1_j0ma7wr wrote

Researchers don't sell products in the marketplace. Amoral corporations, the living embodiment of the system we've designed to centralize and amass capital, are the ones who sell products. The researchers may "design" for ethics, but the corporations put products into production, and if they have to make a tradeoff between profit and ethics, I'll give you one guess which way they're going to jump.

We've seen it now in Midjourney, etc., where they just snarfed up a bunch of copyrighted images to power their lookalike/derivative-works machine (which is very cool tech, etc.), but which abuses copyright at scale. They retcon their liability by saying you can't copyright these images either, but they know full well people are going to do just that for book covers, printed works, etc. It can't be stopped. By the time the glacial courts get around to addressing it, the world will have changed, and at best there will be some minor changes to the law that won't help anyone whose rights have already been violated.

Not saying we shouldn't try, but the deck is stacked against us by capitalism. Corporations are never going to be ethical until they're forced to be, and that takes far too long for meaningful course correction.

6

genericrich t1_j0iez6j wrote

Scary scenario: What is the US government's policy on AGI? The DOD has plans, revised yearly, for invading every country on Earth, just in case. Think they've overlooked this?

What do they do if they suspect Google has one in a lab? Or OpenAI? Or some lab in China?

AGI is a game changer in geopolitics. Would US government policy want to just "allow" China to have one, if it didn't have one?

What's China's similar policy towards the US?

2

genericrich t1_j0ceu46 wrote

Why not worry? Don't you give a shit about the people who WILL die? Isn't it worth sacrificing a few of our excessive comforts so that millions won't die?

If you're too selfish for this, just say so. Or maybe you just did, idk.

−4