
turnip_burrito t1_j9wwo5t wrote

Exponential growth of AI capability isn't a law of nature. It only looks obvious in hindsight, and it depends on a lot of little things going right in a conducive R&D environment. We're not guaranteed to follow any exponential.

Some people on this sub are going to be disappointed when we don't have AGI in 5 or 10 years. Or maybe, by the time 2030 actually rolls around, they'll have forgotten that they predicted AGI by then.

13

turnip_burrito t1_j9v5wje wrote

Read the rest of the discussion. "Art" has several different definitions, and we were each using a different one. That's what led to the disagreement.

>English better, or stop bickering with people when you can't even write coherently.

Was that necessary? I see now that you're either a troll or, failing that, just a strange person. My written English is fine, and I'm sorry if you have trouble reading it.

1

turnip_burrito t1_j9v56hc wrote

>I can explain everything with standing EM waves

Bullshit.

Explain the existence of electrically neutral particles like neutrinos and why they're able to interact at all with other particles.

> Ugly = wrong, just look at history of bad / wrong theories.

No, ugly = ugly and wrong = wrong. Physics has no obligation to be elegant to humans. The Standard Model is incomplete (it doesn't explain dark matter/energy, quantum gravity, or the matter-antimatter imbalance) and inelegant, but its predictions have held up so far for the rest of particle physics. In the sense of incompleteness, it could be considered "wrong". However, it is effective at predicting everything we are able to test here on Earth, so in that sense it is "right".

In fact, scientists at the LHC have been trying very hard, to no avail, to find deviations from the Standard Model.

> Hasn't produced anything useful, another hallmark of something very wrong.

Bad predictions and inconsistency with reality are the hallmarks of something wrong. Subatomic physics isn't really that useful yet (we have no practical use for gluons, neutrinos, etc.), but we still test theories of it.

3

turnip_burrito t1_j9t8awh wrote

Well, it sure beats the current hell of people slaving away at a job and losing their health just to earn barely enough money to support their families. Shortages of medical care, single parents raising their children alone, lack of sleep...

You think a world without required work would be bad? Well, a world with required work is worse. Way worse.

I'd take well-fed, work-free "purposelessness" over that any day of the week.

3

turnip_burrito t1_j9sr0vp wrote

I'll give you a rough example off the top of my head, since I'm too lazy to look up concrete ones. I've asked it a version of this question (not exact, but you'll get the idea):

"Say that hypothetically, we have this situation. There is a bus driven by a bus driver. The bus driver's name is Michael. The bus driver is a dog. What is the name of the dog?"

This is just a simple application of transitivity, which people intuitively understand:

Michael <-> Bus driver <-> Dog

So when I ask ChatGPT what the name of the dog is, ChatGPT should say "Michael".

Instead ChatGPT answers with "The bus driver cannot be a dog. The name of the bus driver is given, but not the name of the dog. So there's not enough information to tell the dog's name."

It just gets hung up on certain things and doesn't acknowledge the clear path from A to B to C.
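
For what it's worth, here's a toy Python sketch (my own illustration, nothing to do with how ChatGPT works internally) of the transitive lookup a person does in their head: follow the identity links until you hit a known name.

```python
# Toy illustration of the transitivity in the puzzle (names are hypothetical).
# "The dog" is the bus driver, and the bus driver's name is Michael,
# so chasing the identity chain yields "Michael".

names = {"bus driver": "Michael"}   # entities with known names
same_as = {"dog": "bus driver"}     # identity links: dog <-> bus driver

def name_of(entity: str) -> str:
    # Follow identity links until we reach an entity with a known name.
    while entity in same_as:
        entity = same_as[entity]
    return names.get(entity, "not enough information")

print(name_of("dog"))  # -> Michael
```

Two dictionary lookups and you're done; ChatGPT instead gets stuck on the prior that a bus driver can't be a dog.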

6