Recent comments in /f/singularity

visarga t1_jedl81q wrote

I think the social component of AI is picking up steam. What I mean is the culture around AI: how to train, fine-tune, test, and integrate AIs into applications, and how to mix and match AI modules. This used to be the domain of experts. Now everyone is assimilating this culture, and we see an explosion of creativity.

The rapid rate of AI advancement is overlapping with the rapid rate of social adoption of AI, making the field seem to advance even faster.

12h later edit: This paper just came out: HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace. AI is orchestrating AI by itself. What can you say?
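For a feel of what that looks like in practice, here's a tiny hand-written sketch of the chaining idea using the transformers library. To be clear: in HuggingGPT the plan itself comes from ChatGPT; here the two-step plan and the photo.jpg input are made up for illustration.

```python
# Sketch of HuggingGPT-style chaining: one model's output feeds the next.
# In the paper, ChatGPT decides which Hugging Face models to call; here
# the two-step "plan" is hard-coded and photo.jpg is a placeholder input.
from transformers import pipeline

# Step 1: caption an image (transformers picks a default model for the task)
captioner = pipeline("image-to-text")
caption = captioner("photo.jpg")[0]["generated_text"]

# Step 2: hand the caption to a second model, translating it to French
translator = pipeline("translation_en_to_fr")
print(translator(caption)[0]["translation_text"])
```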

19

CypherLH t1_jedkz91 wrote

I suspect the ongoing layoff wave in tech is at least partly AI-related. Perhaps not explicitly, but most of the people making those hire/fire calls at tech companies are well aware of AI developments and have probably at least played around with ChatGPT and similar tools. A lot of those jobs won't be coming back.

1

420BigDawg_ t1_jedkm4y wrote

I can see us noticing it in the monthly census, probably by September 2023. We'll see tech-bro office jobs fall off a cliff, and writer strikes by October 2023. Too many writers in the media, from journalists to sitcom writers, feel they are at risk of losing their jobs.

This will give Elon an undeserved "I told ya so!" moment, and he'll probably joke about running for president (🚨)

1

systranerror t1_jedkgw2 wrote

Yeah I definitely believe that he's finished it, but the traditional publishing industry can no longer keep up with the speed this stuff is moving at. He might have to change large sections of the book or start predicting further out somehow. To be honest, I find his "computronium" idea the weakest of his predictions, so I hope he doesn't go that route!

2

sideways t1_jedkfko wrote

You don't really know what level GPT-5 is going to be at.

Regardless, you're right - we're not going to leapfrog right over the scientific method with AI. Experimentation and verification will be necessary.

But ask yourself how much things would accelerate if there were an essentially limitless army of postdocs capable of working tirelessly and drawing from a superhuman breadth of interdisciplinary research...

67

Kafke t1_jedkbke wrote

Some children's TV shows or media programs stating incorrect information does not make it correct. The additive primaries are RGB; the subtractive primaries are CMY. The idea that RBY are the primary colors is a popular misconception, but it is incorrect. It has its roots in art classes that predate the proper scientific investigation of color, light, and modern technology. If your goal is art history, then yes, people in the past incorrectly believed that the primary colors (both additive and subtractive) were RBY. They were wrong, just as the people who believed the earth was flat were wrong.
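To make the additive/subtractive distinction concrete, here's a quick sketch. Treating pigment mixing as a channel-wise multiply is an idealization (real inks are messier), but it shows why the two systems have different primaries:

```python
# Additive mixing: light sources sum, so red + green light gives yellow.
# Subtractive mixing: pigments filter light; modeled (crudely) here as a
# channel-wise multiply, so cyan + magenta ink gives blue.

def mix_additive(c1, c2):
    # Combine light: channel-wise sum, clipped to 1.0
    return tuple(min(a + b, 1.0) for a, b in zip(c1, c2))

def mix_subtractive(c1, c2):
    # Combine pigments: each channel keeps only what both pigments reflect
    return tuple(a * b for a, b in zip(c1, c2))

RED, GREEN = (1, 0, 0), (0, 1, 0)
CYAN, MAGENTA = (0, 1, 1), (1, 0, 1)

print(mix_additive(RED, GREEN))        # (1, 1, 0) -> yellow
print(mix_subtractive(CYAN, MAGENTA))  # (0, 0, 1) -> blue
```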

1

ptxtra t1_jedk9yh wrote

That doesn't help with reasoning. It only connects multiple AIs with code. If the AI gives an unreasonable answer to a prompt and forgets the context, you can't fix that by chaining it to other AIs.

1

Stinky_the_Grump23 t1_jedjzqe wrote

I have very young kids and I'm already wondering what our discussions will be around when reminiscing about the days "before AI", like I used to ask my dad who grew up without cars or electricity in a village of illiterate farmers. The crazy thing is, we have no real idea where AI is taking us, good or bad. I don't think our future has ever looked so uncharted as it does right now.

76

Louheatar t1_jedjx7p wrote

I can also share some insight from a tech startup I worked at until a week ago; no layoffs yet. There was a lot of fear in the air, but in general everyone, including the leadership, the developers, and the artists, lived in a state of serious denial. They just kept laughing at whatever AI tools were popping up, sharing whatever buggy and weird gifs they found (instead of the actually great material the tools can produce), trying to maintain their old worldview, I guess.

While (generative) AI poses a fundamental business risk to the company and to most of the jobs there, there were absolutely no deep conversations about it. Nobody is taking it seriously, because they don't want to. Instead of accepting and adapting, they're doubling down on their obviously wrong path and burning everyone out while doing it.

4

visarga t1_jedjvrn wrote

> The next phase shift happens when artificial systems start doing science and research more or less autonomously. That's the goal. And when that happens, what we're currently experiencing will seem like a lazy Sunday morning.

At CERN in Geneva they have 17,500 PhDs working on physics research. Each of them is at GPT-5 level or higher, and yet it takes years and huge investments to get one discovery out. Science requires testing in the real world, and that is slow and expensive. Even an AGI would need to use the same scientific method people use; it can't theorize without experimental validation. Putting the world in your experimental loop slows down progress.

I keep reminding people of this because we see lots of magical thinking along the lines of "AGI to ASI in one day" that ignores the experimental validation steps necessary to achieve that transition. Not even OpenAI researchers can guess what will happen before they start training; scaling laws are our best attempt, but they are very vague. They can't tell us which content is more useful, or how to improve a specific task. Experimental validation is needed at all levels of science.

Another good example of what I mean: the COVID vaccine was ready in one week but took six months to validate. With all the doctors focused on this one single question, it still took half a year, while people were dying left and right. We can't predict complex systems in general; we really need experimental validation in the loop.

72

Longjumping_Feed3270 t1_jedj04k wrote

If the media quotes a name, they will only quote one that they can safely assume their audience recognizes.

Elon Musk is such a name.

All the other ones? I have never heard of them, and I'm an AI-curious software engineer, though not deeply entrenched in the AI community.

0% chance that the average reader knows any of them.

2

FoniksMunkee t1_jediwl9 wrote

Reply to comment by [deleted] in GPT characters in games by YearZero

Yes, API costs are a good point I hadn't thought of.

The other issue is that games move slowly in some respects. There are games that started development before ChatGPT was commonly known and that won't be finished until after ChatGPT 5 is out, and there is no way these tools will be integrated during that process.

They would also have to convince Sony, MS, and Nintendo to let their SDKs into a model. I don't think MS would necessarily have a problem with that... but there's a ton of third-party libraries that would need to come on board, not to mention the question of how you deal with existing legacy code. There are still more companies in the AAA industry running custom engines than using commercial engines like UE4/UE5.

Then comes the R&D, then comes the new game... so what, 5 years before we see widespread adoption in the AAA market?

2

akat_walks t1_jediqoe wrote

I tried explaining it in a work meeting a few days ago. I talked about how it will change the way our department works forever going forward. Blank stares. They had no idea what I was talking about; I think some of them thought I had made it up. I'll be giving a presentation on it in a couple of weeks. Wish me luck!

3