turnip_burrito

turnip_burrito t1_j95ezks wrote

We're talking about Gato, a generalist agent....

Not ChatGPT. Context, man!

For what it's worth, though, I'll add a bit of what I think about ChatGPT, and LLMs in general: IMO, if they get any smarter in a couple of different ways, they also become an existential risk, because roleplay text generation combined with the ability to interface with APIs is a dangerous mix. We should restrict use of those too until we understand them better.

1

turnip_burrito t1_j95d9xr wrote

I agree, and it does make me nervous that we may not have alignment solved by then.

Hey AI researchers on this sub. I know you're lurking here.

Please organize AI safety meetings in your workplace. Bring your colleagues to conference events on AI existential safety. Talk with your bosses about making it a priority.

Thanks,

Concerned person

7

turnip_burrito t1_j94f95b wrote

> They do not have some magic semiconductor technology that is unknown to the public. They just have a lot of money.

Well, I certainly don't have proof that they don't have magic semiconductor technology and aren't secretly benefiting from advanced tech companies.

So we can't reasonably negate their argument 100%. After all, they could be right. We've been checkmated, and outvoted, it looks like. If popular opinion is anything to go by, we should reconsider our position and maybe change our minds?

0

turnip_burrito t1_j93ljw0 wrote

Yeah right. You're telling us the military has better LLM AI tech than Google, OpenAI, DeepMind, Microsoft, Nvidia, and Apple? The entities that have the hardware and software engineering experts on their payroll? The ones that openly publish research papers and collaborate, which increases their research efficiency?

The only way the military would have better tech is if the scientists at these companies willingly sent their discoveries only to the military, or if the military had a small number of secret hypergeniuses who are somehow smarter than all the known geniuses at these tech giants, without needing to collaborate. That sounds like a sci-fi movie.

1

turnip_burrito t1_j90rf1d wrote

Basically, in my eyes the US government has dropped the ball on AI. For some reason it isn't competing with corporations for AI researchers, which means researchers are instead being pulled into tech companies with a profit motive. Ground-breaking AI research papers come from people working at Google AI Research, DeepMind, Meta, Nvidia, and maybe a couple of others I'm forgetting. University researchers are often mixed in among the authors of those papers, but even so. The 2017 transformer architecture (the T in GPT), for example, was published by then-Google employees (and one University of Toronto guy who was working at Google).

The result is AI for profit. What better way to misalign our AI than using it to make money? This accelerates AI development but increases existential risk.

2

turnip_burrito t1_j90q5t3 wrote

Humanity needs someone to control the transition to the singularity so that it has a better chance of turning out in our favor. I'd rather it be OpenAI than many other groups of people.

And it goes without saying that not attempting to control the transition to the singularity will have wildly more unpredictable results (which we'd all probably like to avoid).

1

turnip_burrito t1_j902d9k wrote

You may not be, but think of how many people there are, spanning every combination of wise/foolish and smart/dumb.

There's someone out there who's the right combination of smart enough to make the AI do shitty things and foolish enough to use it to do that.

On top of that, the search AI is just outputting pretty disturbing things. I think the company is within its rights to withhold the service because of that.

0

turnip_burrito t1_j8zzr3s wrote

Reply to comment by crazycalvin22 in Microsoft Killed Bing by Neurogence

When kids on reddit are more concerned about having a waifu bot or acting out edgelord fantasies with a chatbot than ensuring humanity's survival or letting a company use their search AI as a search AI. smh my head

4

turnip_burrito t1_j8zysj7 wrote

Reply to comment by [deleted] in Microsoft Killed Bing by Neurogence

They are right. These algorithms can already generate code and interact with external tools. It's been demonstrated in real life. I want to make this clear: it has been done.
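
To make it concrete, here's a minimal sketch of the pattern those demos use: the model emits structured text, and a wrapper script parses it as a tool call and executes it. Everything here is stubbed out and hypothetical (`fake_llm`, `http_get`, and the JSON format are just for illustration), but it's the basic shape of LLM tool use.

```python
import json

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; real demos wire in an actual LLM API.
    # This hard-coded JSON "tool call" is hypothetical output, for illustration.
    return json.dumps({"tool": "http_get", "args": {"url": "https://example.com"}})

def http_get(url: str) -> str:
    # A real demo would actually fetch the URL; stubbed to stay self-contained.
    return f"<html from {url}>"

# Whitelist of tools the wrapper is willing to run on the model's behalf.
TOOLS = {"http_get": http_get}

def run_one_step(prompt: str) -> str:
    # Parse the model's text output and dispatch the named tool with the
    # model-chosen arguments.
    call = json.loads(fake_llm(prompt))
    tool = TOOLS[call["tool"]]  # anything not whitelisted raises an error
    return tool(**call["args"])

print(run_one_step("Fetch the page and summarize it."))
```

Once a wrapper like this exists, the model's text output is no longer just text; it has real-world side effects, which is exactly the worry.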

I don't want to see a slightly smarter version of this AI actually trying to hack Microsoft or the electrical grid just because it was prompted to act out an edgy persona by a snickering teenager.

Or mass-posting propaganda online in a very convincing way, to the point where 90% of social media posts on anonymous message boards come from bots like this.

It's very easy to do this. The only thing holding it back from achieving these results consistently is that it's not yet smart enough.

Best to keep it limited to being a simple search engine. If they gave it enough flexibility to act as a waifu AI, it would also be able to do the other things I mentioned.

1