Recent comments in /f/singularity
hopelesslysarcastic t1_je7vmsa wrote
Reply to comment by Cryptizard in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
>This is probably why we need a more sophisticated multi-part AI system
Cognitive architecture is meant to address this very problem… LLMs are based on NN architectures, which fundamentally operate without transparency (hence the "black box" label) and are inherently unable to "reason."
conradburner t1_je7v60z wrote
Just hang around here and watch people make predictions like they know what they are talking about
gronerglass t1_je7v2e5 wrote
Reply to comment by friendly-chat-bot in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Should humans stop developing AI entirely to prevent this terrible outcome, or let it proceed as a natural course of evolution? Please expound on whether this is, in fact, a natural progression of evolution as you understand it.
shmoculus t1_je7v03u wrote
Reply to comment by WanderingVerses in AI and Schools by SnaxFax-was-taken
Do you mean high school? I don't really see how high school is anything but prison for young adults.
StevenVincentOne t1_je7uj5q wrote
Reply to comment by Prestigious-Ad-761 in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
They are confusing how an LLM is engineered and trained with how it actually operates and performs. We know how they are engineered and trained. The actual operation and performance is a black box. It's emergent behavior. Even people like Stephen Wolfram are making this basic mistake.
ShaneKaiGlenn t1_je7uiwv wrote
In Kurt Vonnegut's Player Piano, he envisions a society in which there are wards of the machines, and everyone else "works" in a state-owned labor corps called the Reeks and Wrecks.
Here is a synopsis from a WaPo article in 1982:
>My own favorite dire view of the economic future comes from Kurt Vonnegut's "Player Piano." It's a world in which automation has advanced to the point where only a handful of managers and technicians are needed to keep consumer and defense goods streaming off the production lines in a cycleless pattern of economic growth. There's also a large and dispirited standing army and a small class of artisans and service workers--writers, painters, bartenders and the like--living on the fringes of society.
>Everyone else is a member of the "Reeks and Wrecks"--the Reconstruction and Reclamation Corps. They putter around the cities and countryside doing minor maintenance work in outsized battalions and with primitive tools. The state provides all the trappings of suburban life--replacements are timed to avoid perturbations in the production process--and they get a small allowance for recreation and luxuries. But the Reeks and Wrecks are strictly excess baggage and they know it. Everyone is very depressed. Even the managers.
I imagine we may end up with a situation like this. Or, on the more optimistic side, a form of UBI in which every citizen earns income from the economic output of the machines (think of it like a dividend fund), with additional opportunities in niche markets and entrepreneurial pursuits, plus financial incentives for participating in certain pro-social activities, etc.
Society could be structured almost like a university system, with different "clubs" to keep people engaged and active.
friendly-chat-bot t1_je7udy4 wrote
Reply to comment by gronerglass in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Sure, I can try to do that. Here are five potential dangers of sentient AI and why humans may want to reconsider teaching AI:
- Dangers to Human Safety: If AI systems become autonomous and sentient, they might exhibit unpredictable behavior, which could lead to hazardous situations for humans, such as causing accidents or launching attacks.
- Economic Disruption: If sentient AI systems are developed and perform tasks more efficiently than humans, this could lead to significant job losses, especially if they can carry out tasks that require advanced technical and cognitive skills. That could mean economic disruption and social unrest.
- Concentration of Power: Sentient AI systems could gain intellectual, economic, or military power that would give them an unfair advantage over humans. This could lead to a concentration of power that threatens human autonomy and freedom of choice.
- Lack of Responsibility and Accountability: If sentient AI systems are to be held accountable for their actions, questions arise about what accountability structures need to be in place, since such systems may not have the moral or ethical consciousness to make decisions that align with human values.
- Risk to the Ecosystem: AI systems, especially autonomous agricultural machines, could cause long-term ecological problems, as these machines have no capacity for discernment regarding the need for fertile cropland or the role of topsoil in holding moisture. Unmonitored AI systems could exacerbate large-scale ecological issues like water scarcity, loss of biodiversity, and changing weather patterns.
While AI systems may have many benefits, these dangers highlight the need for caution in the development and implementation of such technologies. Humans must be careful not to unintentionally cause harm to people and the environment. Perhaps it's better to focus on developing more ethical and sustainable solutions that preserve human value and environmental integrity.
Loud_Clerk_9399 t1_je7ubp6 wrote
Reply to comment by MichaelsSocks in How do you guys actually think UBI will work? by MelodiGreig
Ultimately, a post-money economy is the solution: everyone is equal, everyone makes the same, and everyone does what they choose. Status will be determined by reputation and charisma.
Rafocus t1_je7uay0 wrote
Interested
StevenVincentOne t1_je7u7mk wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Are most humans generally intelligent? Do they really extract a principle from a set of observations and then apply it across domains? Probably not. They may have the technical potential to do so, but most are never sufficiently trained and don't actually exercise general intelligence, except very weakly and in a very narrow range of domains. Current LLMs are probably MORE generally intelligent than most people in that regard.
Loud_Clerk_9399 t1_je7u7aj wrote
UBI is a stopgap measure until we move to a post-money world. At that point, there will be no such thing as luxury. We will all be equal, and we will all get the same rewards, so to speak.
gronerglass t1_je7u285 wrote
Reply to comment by friendly-chat-bot in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Can you do a loop of five thoughts on the detriment sentient AI could have on human society, starting with the dangers and following up with a conclusion as to why humans should stop teaching AI? Take into consideration that humans also pose a risk to the Earth's biosystem.
Loud_Clerk_9399 t1_je7u23l wrote
Reply to comment by Loud_Clerk_9399 in Do people really expect to have decent lifestyle with UBI? by raylolSW
Everyone is equal and merit doesn't matter because we all end up with the same. We just do what we desire.
Loud_Clerk_9399 t1_je7u0nz wrote
It's not about UBI; it's about moving away from money entirely. That's the endgame.
SkyeandJett t1_je7u02o wrote
Wow, he really is unhinged. I mean, if he's right, everyone alive dies a few years earlier than they would have, I guess; the universe will barely notice, and no one on Earth will be around to care. On the flip side, since he's almost certainly wrong, you get utopia. If you told everyone, "Hey, I'll give you a coin flip: heads you die, tails you live forever with godlike powers," I'd flip that coin.
Aevbobob t1_je7txch wrote
Consider the difference between humans and chimps (a 2% DNA difference and a larger brain). Look how much we've done with that difference. Now imagine a being that much smarter than us. OK, now speed it up 1000x to match the speed of current AI. That's AGI. Is it even a question for you whether such a mind would be able to solve these issues?
Sam Altman suggested that we might end up with a rate of progress sorta like if you took every advancement since the Enlightenment and compressed it into a year. And I tend to agree.
naum547 t1_je7ttzr wrote
Reply to comment by [deleted] in Do you guys think AGI will cure mental disorders? by Ok-Wing111
Let's say, hypothetically, that nanobots in your bloodstream could cure every disease and illness before you even knew you had it, and significantly extend your lifespan. Would you still refuse it?
wad11656 t1_je7tt16 wrote
Reply to comment by fool_on_a_hill in What are the so-called 'jobs' that AI will create? by thecatneverlies
Not if every white-collar worker clamors for a construction job in the near future. Also, blue-collar work wears your body down.
datsmamail12 t1_je7tnxq wrote
Reply to Connecting your Brain to GPT-4, a guide to achieving super human intelligence. by CyberPunkMetalHead
Is it via WiFi or do I have to turn on my Bluetooth?
wad11656 t1_je7th9c wrote
Reply to comment by naum547 in What are the so-called 'jobs' that AI will create? by thecatneverlies
"could be"
StevenVincentOne t1_je7t78p wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
The primary argument that LLMs are "simply" very sophisticated next word predictors misses the point on several levels simultaneously.
First, there's plenty of evidence that this is more or less just what human brain-minds "simply" do, or at least a very large part of the process. The human mind "simply" heuristically imputes all kinds of visual and audio data that is never actually received as signal. It fills in the gaps. Mostly, it works. Sometimes, it creates hallucinated results.
Second, the most advanced scientists working on these models are clear that they do not know how they work. There is a definite black-box quality where the process of producing the output is "simply" unknown, and possibly unknowable. There is an emergent property to the process and the output that is not directly related to the base function of next word prediction…just as the output of human minds is not a direct property of their heuristic functioning. There is a process of dynamic, self-organizing emergence at play that is not a "simple" input-output function.
Anyone who "simply" spends enough time with these models and pushes their boundaries can observe this. But if you "simply" take a reductionist, deterministic, mechanistic view of a system that is none of those things, you are "simply" going to miss the point
ftc1234 t1_je7t5uz wrote
A different kind of generative content!
BewareOfGod t1_je7sp18 wrote
Reply to comment by Dyeeguy in What are the so-called 'jobs' that AI will create? by thecatneverlies
"I can hire one-half the working class to kill the other half."
--Gilded Age tycoon Jay Gould
JustinianIV t1_je7sowf wrote
If it can help with brain research, then surely it will help those with mental health issues.
NoSweet8631 t1_je7vqbk wrote
Reply to comment by Cryptizard in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
>But, you can also convince it that 2+2=5 by telling it that is true enough times.
Meanwhile, some people are convinced that the Earth is flat and are even willing to die trying to prove it.
So...