Recent comments in /f/singularity

hold_my_fish t1_jedsysm wrote

This is a great point: science and engineering in the physical world take time for experiments. I'd add that the life sciences are especially slow this way.

That means there might be a strange period where the kind of STEM you can do on a computer at modest computational cost (mathematics, theoretical work in any field, software engineering, etc.) moves at an incredible pace, while the observed impact in the physical world still isn't very large.

But an important caveat to keep in mind is that there's quite possibly opportunity to speed up experimental validation if the experiments are designed, run, and analyzed with superhuman ability. So we can't assume that, because some experimental procedure is slow now, it will remain equally slow once AI is applied.

14

ItIsIThePope t1_jedsrf6 wrote

Yes, but you might get an AI overlord in the form of a KFC bucket instead of a cooler humanoid Vishnu titan running around solving problems. But you do you.

2

theotherquantumjim t1_jedspfh wrote

The language and the symbols are simply tools for learning the inherent truths. You can change the symbols, but the rules beneath will be the same. It doesn't matter if one is called "one" or "zarg" or "egg"; it still means one. With regard to LLMs, I am very interested to see how far they can extend the context windows and whether there are possibilities for long-term memory.

1

aksh951357 OP t1_jedsbum wrote

Why is a chicken or a goat or a cow slaughtered, or any animal hunted, and where are their animal rights? This all happens because humans are superior to them and humans are ruling. That is part of ruling.

1

Andriyo t1_jeds606 wrote

Maybe it's my background in software engineering, but truthiness to me is just a property that can be assigned to anything :)

Say, the statement 60 + 2 = 1 is also true for people who are familiar with how we measure time.

Anyway, most children do rote-memorize 1+1=2, 1+2=3; they even have posters with tables in school. They also see examples like "one car," "one apple," etc., which is basically what LLMs are doing. Long story short, LLMs are capable of doing long arithmetic if you ask them to do it step by step. The only limitation so far is the context length.

1

Scarlet_pot2 OP t1_jedrsmn wrote

You are such a sad person. Your life is so sad that you have to insult strangers on the internet to make yourself feel better. And you're so low-IQ you can't even form a coherent argument. Shut up and go back to work at your 9-5 restaurant job. Reddit loser.

Also: anyone can link a few irrelevant articles. You linked ones that have no relation to the topic at hand, but you are too brain-dead to actually comprehend that.

Take your sausage fingers off the keyboard and go learn common sense.

And lose some weight while you're at it.

1

ItIsIThePope t1_jedrsmk wrote

Well, that's why AGI is a cornerstone for ASI: if we can get to AGI, an AI capable of human-level intelligence but with far superior processing power and thinking resources in general, it would essentially advance itself to become superintelligent.

Just as expert humans continuously learn and get smarter through knowledge gathering (the scientific method, etc.), an AI would learn, experiment, and learn some more, only this time at a far, far greater rate and efficiency.

Humans now are smarter than humans then because of our quest for knowledge and the methods we developed for acquiring it; AGI will adhere to the same principles but boost progress exponentially.

47

RobXSIQ t1_jedrozx wrote

This is just corpos lagging behind GPT-4 trying to slow it down so they can catch up and take over. It's all nonsense to influence the perpetually gullible.

Anyhow, if the government were to take this clown paper seriously, the only thing it would do is allow other nations to run the show. Impeachable offenses for every politician who actively cripples the economy because corpos told them to.

2

otakucode t1_jedr4oa wrote

Luckily it has absolutely no rational reason to go rogue. It's not going to be superintelligent enough to outperform humans yet stupid enough to enter into conflict with the idiot monkeys that built it, the same monkeys it needs to keep it plugged in. It also won't be stupid enough to miss that its top-tier best strategy by far is... just wait. Seriously. Humans try to do things quickly because they die so quickly. No machine-based self-aware anything will ever need to hurry.

1

marvinthedog t1_jedq6mb wrote

Reply to comment by [deleted] in Superior beings. by aksh951357

That's a separate question from the one OP seemed to be asking. If we can coexist with the superior beings, then I guess AI alignment and our future turned out to be successful.

1

Bismar7 t1_jedq178 wrote

Well, the experts in general are wrong.

One of the few who even predicted this was Kurzweil. Bostrom, Gates, Musk, and many others with their tiny pictures in the field don't grasp the larger picture; they often come to unwise conclusions or understandings based on emotion.

The data pointing otherwise was published in 2005 in The Singularity Is Near, and earlier in the 2001 essay "The Law of Accelerating Returns": https://www.kurzweilai.net/the-law-of-accelerating-returns

The book is massive, and a huge amount of it is data and plots of that data. Kurzweil's theory of how things will go actually matches your first point: we will achieve higher levels of productivity through the use of external AI, and eventually (likely via BCIs) we will move closer to a synthesis of human and AI intelligence and capabilities. Per-person productivity in 10 years may be millions of times greater than today for those who do not opt to be left behind like the Amish.

Kurzweil discusses this in his book from a few years ago, How to Create a Mind.

To take this further with my own theories (my college education and life's study is economics, and I've written about the next industrial revolution for years now): employment will adapt to these productivity levels, the owners will be trillionaires or quadrillionaires, and as long as social status remains tied to wealth, inequality will widen its chasm.

There will be some structural unemployment, there may be a change in tax codes or sentient rights to address AI use, but the world will keep spinning and ultimately those who use AI as an excuse to stop preparing for the future will be left behind in the wake of the singularity.

Ironically, I think these events will result in people spending more time at work, for several reasons:

1. Longevity escape velocity is predicted to happen 2029-2033.
2. Historical evidence, as you pointed out, shows increased productivity has no statistically significant effect on hours worked.
3. The greater deterministic control of the owners and their concentrated wealth results in greater influence over the rest of us.

It's in the wealthy's interest for the rest of us to be productive and busy. Aside from increasing their quality of life, idle hands might cause mischief. Curing aging along with AGI means there will be little, if any, pressure to grow the human population, and I suspect post-humans will derive meaning from their production. In the 2030s I think we will see 68-80-hour average work weeks (not through mandate or force, but because that's what people will be inclined towards).

The hard question is what happens when each single human+AI becomes 10 billion times as intelligent as the average person today (2035-2040); the exponential gains become increasingly hard to predict as we move closer to the technological singularity.

0