Recent comments in /f/singularity

Alternative_Fig3039 t1_jed8mc5 wrote

Can someone explain to me, an idiot, not whether a superintelligent AI could wipe us out (that I can comprehend easily enough), but why? And how? Let's say, as he does in the article, we cross this threshold and build a superintelligent AI; then we all die, within what seems like weeks, days, minutes? Would it nuke us all? It's not like we have robot factories lying around that it could manufacture Sentinels in or something. I understand, in theory, that we can't really comprehend what superintelligence is capable of because we ourselves are not superintelligent. But other than launching our current WMDs, what infrastructure exists for AI to eliminate us? I'm talking about the near future. In 50-100 years things might be quite different. But this article makes it sound like we'll be dead in 3 months. I'd really appreciate an even-headed answer; not gonna lie, this freaked me out a bit. Not great to read right before bed.

1

Scarlet_pot2 OP t1_jed8dir wrote

These articles are talking about our modern society. Our technology is at the point where it takes a lot of effort to make modest improvements (in most areas). For most of history, innovations didn't cost much to discover, like how to make a bow or how to smith metal. If you think all inventions were made by wealthy people, you are delusional. It wasn't the king who learned how to make chainmail armor, and it wasn't the noble who learned how to raise bigger crops.

P.S. Your insults don't help your point at all.

−1

tiselo3655necktaicom t1_jed7uvh wrote

>Most human innovations were made by small groups, or even a single person, without much capital. Think of the wheel, agriculture, electricity, the light bulb, the first planes, Windows OS. The list goes on and on.

You have a childlike naivety about business and live in a fantasy world.

"Data shows US inventors aren’t just good at science—they come from rich families" (2017)

"Entrepreneurs come from families with money" (2015)

3

TMWNN t1_jed7ucn wrote

There absolutely are human NPCs, who react in predictable ways without intelligence.

A recent Reddit post discussed something positive about Texas. The replies? Hundreds, maybe thousands, of comments by Redditors, all with no more content than some sneering variant of "Fix your electrical grid first", referring to the harsh winter storm of two years ago that knocked out power to much of the state. It was something to see.

If we can dismiss GPT as "just autocomplete," then I can dismiss all those Redditors the same way /u/AvgAIbot did: as NPCs.

CC: /u/lurking_intheshadows

4

Scarlet_pot2 OP t1_jed7tts wrote

Fine-tuning isn't the problem. If you look at the Alpaca paper, they fine-tuned the LLaMA 7B model on GPT-3-generated outputs and achieved GPT-3-like results for only a few hundred dollars. The real cost is the base training of the model, which can be very expensive. Having enough compute to run it afterward is an issue too.
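To see why fine-tuning is cheap relative to base training, here's a minimal sketch using the common "6ND" rule of thumb (training FLOPs ≈ 6 × parameters × tokens). The token counts are illustrative assumptions, not figures from the Alpaca paper:

```python
# Back-of-envelope training-compute comparison using the "6 * N * D" heuristic
# (FLOPs ~ 6 x parameter count x training tokens). Token counts are assumed.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

# Base pre-training of a 7B-parameter model on ~1 trillion tokens (assumed):
base = training_flops(7e9, 1e12)

# Fine-tuning the same model on ~10 million tokens of instruction data (assumed):
finetune = training_flops(7e9, 1e7)

print(f"pre-training: {base:.2e} FLOPs")
print(f"fine-tuning : {finetune:.2e} FLOPs")
print(f"ratio       : {base / finetune:,.0f}x")
```

Under these assumed token counts, base training needs on the order of 100,000 times more compute than the instruction fine-tuning pass, which is roughly why the fine-tuning bill can be a few hundred dollars while pre-training costs millions.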

Both problems could be helped if there were a free online system for donating compute that anyone was allowed to use.

1

Scarlet_pot2 OP t1_jed747y wrote

Okay, now that's just incorrect. Most human innovations were made by small groups, or even a single person, without much capital. Think of the wheel, agriculture, electricity, the light bulb, the first planes, Windows OS. The list goes on and on.

It's only recently that innovations have required super teams and large capital. I'm saying we should crowdsource funds, with free resources to learn from together, donated compute, etc. It's totally possible, but modern people aren't very good at forming groups. Maybe it's because people are too tired from work, or they have become much less social. For whatever reason, we could still improve AI progress and decentralize AI if people learned to talk and collaborate again.

0

Scarlet_pot2 OP t1_jed67k5 wrote

True, Alpaca is competent, but we need more models, and better and larger ones. A distributed system where people donate compute could also let people run larger models: maybe not 175 billion parameters, but maybe 50-100B, as long as everyone donating compute isn't using it at the same time.
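For a sense of scale, a quick sketch of the memory needed just to hold the weights of models in these size ranges, assuming 2 bytes per parameter (fp16/bf16); quantization (e.g. 4-bit) would lower these numbers:

```python
# Back-of-envelope memory required to hold model weights for inference.
# Assumes 2 bytes per parameter (fp16/bf16); ignores activations and KV cache,
# so real requirements are somewhat higher.

def weight_memory_gb(params: float, bytes_per_param: float = 2.0) -> float:
    """Gigabytes needed to store the raw weights of a model."""
    return params * bytes_per_param / 1e9

for n in (7e9, 50e9, 100e9, 175e9):
    print(f"{n / 1e9:>5.0f}B params -> ~{weight_memory_gb(n):,.0f} GB at fp16")
```

So a 7B model fits on one consumer GPU (~14 GB), while 50-100B models need ~100-200 GB of combined memory, which is why pooling donated hardware across many participants comes up at all.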

That being said, more small models like Alpaca / LLaMA are needed too. If sufficient resources and training were made available to anyone, models like that could be created and released more often.

1