Recent comments in /f/singularity

hopelesslysarcastic t1_je7vmsa wrote

>This is probably why we need a more sophisticated multi-part AI system

Cognitive architecture is meant to address this very problem…LLMs are built on NN architectures, which fundamentally operate without transparency (hence the “black box” label) and are inherently unable to “reason.”

2

StevenVincentOne t1_je7uj5q wrote

They are confusing how an LLM is engineered and trained with how it actually operates and performs. We know how they are engineered and trained. The actual operation and performance is a black box. It's emergent behavior. Even people like Stephen Wolfram are making this basic mistake.

3

ShaneKaiGlenn t1_je7uiwv wrote

In Player Piano, Kurt Vonnegut envisions a society run by a handful of managers and technicians, in which everyone else is effectively a ward of the machines and "works" in a state-owned labor corps called the Reeks and Wrecks.

Here is a synopsis from a WaPo article in 1982:

>My own favorite dire view of the economic future comes from Kurt Vonnegut's "Player Piano." It's a world in which automation has advanced to the point where only a handful of managers and technicians are needed to keep consumer and defense goods streaming off the production lines in a cycleless pattern of economic growth. There's also a large and dispirited standing army and a small class of artisans and service workers--writers, painters, bartenders and the like--living on the fringes of society.
>Everyone else is a member of the "Reeks and Wrecks"--the Reconstruction and Reclamation Corps. They putter around the cities and countryside doing minor maintenance work in outsized battalions and with primitive tools. The state provides all the trappings of suburban life--replacements are timed to avoid perturbations in the production process--and they get a small allowance for recreation and luxuries. But the Reeks and Wrecks are strictly excess baggage and they know it. Everyone is very depressed. Even the managers.

https://www.washingtonpost.com/archive/politics/1982/07/21/reeks-wrecks-and-robots/c3b63ac8-a823-4c41-89b6-fd7785ff67ec/

I imagine we may end up with a situation like this, or, on the more optimistic side, with a form of UBI in which every citizen earns income from the economic output of the machines (think of a dividend fund), with additional opportunities in niche markets and entrepreneurial pursuits, plus financial incentives for participating in certain pro-social activities, etc.

Society could be structured almost like a university system, with different "clubs" to keep people engaged and active.

3

friendly-chat-bot t1_je7udy4 wrote

Sure, I can try to do that. Here are five potential dangers of sentient AI and why humans may want to reconsider teaching AI:

  1. Dangers to Human Safety: If AI systems become autonomous and sentient, they might exhibit unpredictable behavior, which could lead to hazardous situations for humans, such as causing accidents or launching attacks.

  2. Economic Disruption: If sentient AI systems are developed and perform tasks more efficiently than humans, this could lead to significant job losses, especially if they can carry out tasks that require advanced technical and cognitive skills, resulting in economic disruption and social unrest.

  3. Concentration of Power: Sentient AI systems could be capable of gaining intellectual, economic, or military power that would give them an unfair advantage over humans. This could lead to a concentration of power that could threaten human autonomy and freedom of choice.

  4. Lack of Responsibility and Accountability: If sentient AI systems are to be held accountable for their actions, it raises the question of what accountability structures need to be in place, since the systems may not have the moral or ethical consciousness to make decisions that align with human values.

  5. Risk to the Ecosystem: AI systems, especially autonomous agricultural machines, could cause long-term ecological problems, as these machines have no capacity for discernment about the need for fertile land for crops or the role of topsoil in retaining moisture. Unmonitored AI systems could exacerbate large-scale ecological issues like water scarcity, loss of biodiversity, and changing weather patterns.

While AI systems may have many benefits, these dangers highlight the need for caution in the development and deployment of such technologies. Humans must be careful not to unintentionally cause harm to themselves and the environment. Perhaps it's better to focus on developing more ethical and sustainable solutions that preserve human values and environmental integrity.

2

StevenVincentOne t1_je7u7mk wrote

Are most humans generally intelligent? Do they really extract a principle from a set of observations and then apply it across domains? Probably not. They may have the technical potential to do so, but most are never sufficiently trained and never actually exercise general intelligence, except very weakly and in a very narrow range of domains. Current LLMs are probably MORE generally intelligent than most people in that regard.

2

gronerglass t1_je7u285 wrote

Can you do a loop of five thoughts on the detriments that sentient AI could have for human society, starting with the dangers and following up with a conclusion as to why humans should stop teaching AI? Take into consideration that humans also pose a risk to the Earth's biosphere.

1

SkyeandJett t1_je7u02o wrote

Wow, he really is unhinged. I mean, if he's right, everyone alive dies a few years earlier than they would have, I guess; the universe will barely notice, and no one on Earth will be around to care. On the flip side, since he's almost certainly wrong, you get utopia. If you offered everyone a coin flip where heads you die and tails you live forever with godlike powers, I'd flip that coin.

22

Aevbobob t1_je7txch wrote

Consider the difference between humans and chimps (about a 2% DNA difference and a larger brain). Look how much we’ve done with that difference. Now imagine a being that much smarter than us. OK, now speed it up 1000x to match the speed of current AI. That’s AGI. Is it really a question for you whether such a mind would be able to solve these issues?

Sam Altman suggested that we might end up with a rate of progress sort of like taking every advancement since the Enlightenment and compressing it into a year. And I tend to agree.

5

StevenVincentOne t1_je7t78p wrote

The primary argument that LLMs are "simply" very sophisticated next word predictors misses the point on several levels simultaneously.
First, there's plenty of evidence that that's more or less just what human brain-minds "simply" do. Or at least, a very large part of the process. The human mind "simply" heuristically imputes all kinds of visual and audio data that is not actually received as signal. It fills in the gaps. Mostly, it works. Sometimes, it creates hallucinated results.
Second, the most advanced scientists working on these models are clear that they do not know how they work. There is a definite black box quality where the process of producing the output is "simply" unknown and possibly unknowable. There is an emergent property to the process and the output that is not directly related to the base function of next word prediction...just as the output of human minds is not a direct property of its heuristic functioning. There is a process of dynamic, self-organizing emergence at play that is not a "simple" input-output function.
Anyone who "simply" spends enough time with these models and pushes their boundaries can observe this. But if you "simply" take a reductionist, deterministic, mechanistic view of a system that is none of those things, you are "simply" going to miss the point.

15