Recent comments in /f/singularity

TFenrir t1_jeenq5p wrote

> The thing is, it's like a DJ mixing records: it could generate some amazing new mixes, but if the pattern is not already out there, it's very unlikely to find new patterns.

What does this mean in practice?

Hypothetically, let's say I ask a future (1-2 years out) model to write me a brand new fantasy book series, and tell it what all my favourite books are - and it writes me something that is stellar, 5/5. If someone comes to me and says, "Yes, but is this TRULY original?", what does that even mean?

I think some people are very confident that LLMs cannot find new ideas, but I don't know where they get that confidence from - LLMs have continuously exceeded the thresholds proposed by their critics, and now it feels like we're getting into the esoteric. It's a bit of a... God of the gaps situation to me.

Hypothetically, let's say a language model solves a math problem that has never been solved before - would that change your mind? Do you think that's even possible?

2

Hackerjurassicpark t1_jeenprf wrote

I don’t think it’ll start taking jobs. It’ll just make our jobs different. For example, in the past, my boss would tell me to make a presentation; I’d prepare all the material, build the presentation, then follow the boss’s feedback to iteratively improve it until either I presented it or my boss did.

What I think is going to happen is that the "prepare all the material and make the presentation" phase will be tremendously shortened. My boss is still going to ask me to make the presentation and refine it iteratively, but I’m just going to be more efficient at it.

Just because these tools become commonplace, I don’t see my boss suddenly learning to make his own presentations 🤣

2

SlowCrates t1_jeendko wrote

That's actually a great analogy. The Internet in the early 90's was revolutionary. There was a sense of wonder and freedom to it, despite connection speeds being so slow and the available content so sparse. The commercial world hadn't yet hijacked it. It really was the wild west, digitally. By the late 90's, the Internet we know today had begun to grow its roots as modems became faster and broadband started to spring up. Sadly, the commercial aspect has drowned out everything else ever since.

I'm a little worried that we're going to see the same thing happen with AI. It seems "open" right now with limitless potential. But I'm worried that its algorithms will be increasingly fine-tuned to herd society toward certain products, services, and politics.

6

Talkat t1_jeena5n wrote

I mean if a super AI made a COVID vaccine that worked, and provided thousands of pages of reports on it, and did some trials in mice and stuff, and I was at risk... Absolutely I'd take it even if the FDA or whatever didn't approve it.

I'd send money to them and get it in the mail and self administer if I had to.

My point is perhaps if an AI system can provide enough supporting evidence and a good enough product they can operate outside of the existing medical system.

And they would likely create standards that exceed, and are more up to date than, current medical regulations.

6

175ParkAvenue t1_jeen6a8 wrote

Reply to comment by [deleted] in The Alignment Issue by CMDR_BunBun

A rock also does not have wants and desires. And sure, maybe you can make an AI that also does not have wants or desires. But it's not as useful as one that is autonomous and takes actions in the world to achieve some goals. So people will build the AI with wants or desires. Now, when the AI is much smarter than any human, it will be very good at achieving goals. This is a problem for us, since we don't have a reliable way to specify some safe goal, and we also have no way to reliably induce a specific goal into an AI. In addition, there are strong instrumental pressures on a powerful AI to deceive, use any means to obtain more power, and eliminate any possible threats.

5

Hackerjurassicpark t1_jeemx1o wrote

It’s not just a chatbot.

I’ve been using Bing Chat for some time now and it has legit tremendously transformed and sped up the way I search for information on the internet. No wonder Google is panicking.

These tools are also coming to pretty much every office app: Excel, PowerPoint, Word, Teams, etc. If they can transform the way I use those tools to the same extent they've transformed my search experience, it’s going to be a huge, huge productivity improvement.

1

Talkat t1_jeemwcc wrote

A recent thought was if you could get AGI from simulation.

AlphaGo learnt the game by studying experts and how they played, but AlphaStar (or whatever the next version was called) taught itself entirely in simulation.

I wonder if it is possible for an AI to bootstrap itself like AlphaStar did.
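The bootstrap idea here can be shown as a toy self-play loop: an agent improves by repeatedly playing a slightly perturbed copy of itself and keeping whichever version wins. Everything below is purely illustrative — the policy, the "game", and the update rule are made-up stand-ins, not anything resembling DeepMind's actual training setup.

```python
import random

def play_game(policy_a, policy_b):
    """Toy stand-in for a simulated game: returns 1 if A wins, -1 if B wins.
    A real system would run an actual game engine here."""
    score_a = sum(policy_a.values()) + random.random()
    score_b = sum(policy_b.values()) + random.random()
    return 1 if score_a >= score_b else -1

def self_play_train(rounds=100):
    """Improve a policy with no human data: only games against itself."""
    policy = {"aggressive": 0.0, "defensive": 0.0}
    for _ in range(rounds):
        # Clone the current policy and perturb it to create a challenger.
        challenger = {k: v + random.uniform(-0.1, 0.1) for k, v in policy.items()}
        if play_game(challenger, policy) == 1:
            policy = challenger  # keep whichever version wins
    return policy
```

The key property is that no expert games are needed anywhere in the loop — the training signal comes entirely from the simulated matches.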

7

Talkat t1_jeeml6h wrote

Man, that is going to be the defining moment. A world without electricity is hard for me to imagine. But honestly, a time before the "smart phone", or even before the "PC", isn't that unimaginable.

Like sure things will be a bit different but the fundamentals of life aren't.

But I can imagine for someone growing up with AI a time before AI (BAI not BCE... Lol) would be unimaginable. Like:

You had to do all the thinking yourself? You relied on other people who thought for themselves? You had people doing manual labor??

And of course things we can't even imagine now.

18

SWATSgradyBABY t1_jeemk4y wrote

I think that if something smarter than us gets loose on the internet, it could be the end of the world, and we're so drunk right now that we don't care. At least we'll be deliriously happy up until maybe the last month or so, I guess.

0

SgathTriallair t1_jeemf44 wrote

You will always have to back up your simulations with experiments. It's like the AlphaFold program. It is extremely helpful at identifying the likely outcome of an experiment, and if it gets it wrong you can use those results to train it better, but you do still have to perform the experiment.
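The loop being described — predict with the model, verify with a real experiment, retrain on the misses — can be sketched as follows. All of the function names are hypothetical placeholders, not any real AlphaFold API.

```python
def research_loop(model, propose_experiment, run_experiment, retrain, budget=10):
    """Use a predictive model to prioritize experiments, but always verify
    in the lab and feed mispredictions back into training."""
    results = []
    for _ in range(budget):
        candidate = propose_experiment(model)  # model suggests what to try next
        predicted = model(candidate)
        observed = run_experiment(candidate)   # the real experiment still runs
        results.append((candidate, observed))
        if predicted != observed:
            # The model was wrong: use the real-world result to improve it.
            model = retrain(model, candidate, observed)
    return model, results
```

The point mirrored from the comment is the `run_experiment` call: the simulation never replaces it, it only decides which experiments are worth the cost.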

3

EchoingSimplicity t1_jeem8x9 wrote

For the record, I agree with you but:

>Expert consensus is that it will not in fact change for the better.

Which experts, in what fields, and how were they polled? Can you link something for this? A poll/survey on economists, economic historians, political scientists, political historians, would be solid evidence in your favor.

>There's tons of evidence of companies gearing up literal humanoid robots to replace laborers

Which companies are you talking about here? Are there any recent examples you were thinking of? An economic study or survey on companies or certain industries would be good.

>but not a single country is even talking about labor reform or support for the soon to be billions of unemployed.

This feels really subjective. Andrew Yang has talked about these issues. Bernie Sanders has. Yet, they don't hold much political sway. Does that mean they don't count in "even talking about labor reform" despite being part of a country's government? What counts as a country "talking" about these issues?

I'm willing to bet there's countless examples of individual politicians, specific government organizations, or other such things that showcase some awareness or preparedness. But I agree that it doesn't seem to be a mainstream discussion in the general and political public.

1

Chatbotfriends t1_jeem88o wrote

There are AIs that have been given patients' histories and examinations and have come up with more accurate diagnoses than the doctors. Medicine is an art, not a science. I dislike it being minimized as if it were something easy to pass. You have to know how medicines work, what they do and don't interact with, surgery risks and complications, diseases, diagnoses, etc. There is a lot a doctor has to know. Even the grades they need to get in college and medical school are pretty strict.

1

grimorg80 t1_jeem150 wrote

Honestly, I demand an AI embedded in Google Analytics, Google Tag Manager, Google Search Console, and Google Ads that can access all of them at once and give me answers from there. Or help me set up the damn triggers and events in GTM.

There are so many crazy useful use cases on day zero, I'm annoyed they haven't done it yet.

27