turnip_burrito
turnip_burrito t1_j3iszig wrote
Reply to comment by LoquaciousAntipodean in Organic AI by Dramatic-Economy3399
Yes, now imagine if your kid could learn endlessly, operate at the speed of a computer, and clone itself instantly (no nine-month gestation period) at your command.
turnip_burrito t1_j3irplu wrote
Reply to comment by heyimpro in Organic AI by Dramatic-Economy3399
In short, yes, I think one central AI is the safest and most beneficial option. I think it hinges on hardware security and on how rigid the AI is in its moral structure.
In order for an autonomous robot to be safe upon release, it has to be limited in some way at all times: either proven unable to improve beyond a threshold, or limited by a separate external supervising entity. Most AI tools today are the first kind: unable to improve beyond a threshold because of their architecture. They cannot learn in real time, have access to only one or two modes of data (audio, text, or image), lack spatial awareness, and so on. We humans are likewise unable to augment our processing power and knowledge beyond a certain threshold: limited attention, limited lifespan, and no way to modify or copy our brains.
Let's consider a very morally blank-slate AI. A single AI has limitations more like those of a population of humans than of a single human. The human species as a whole doesn't really share many of a single human's limitations in any meaningful way: it can copy knowledge through education, increase attention by adding more humans, and avoid death by adding more humans. A single general AI at human level would be an individual entity with the same learning rate as a human, but basically immortal and able to learn exactly how its own brain works (it's all in scientific papers on the Internet). If bad actors give their personal AI access to this knowledge, which eventually one or many would, it can plan how to make many clones of itself. If making a new mind is as easy as running code on a computer, clones can be made instantly. If it requires specialized hardware, cloning the AI is harder but still doable if you are willing to take that hardware from other people. Then the ability of these people to write malicious code that compromises other systems, autonomously manage and manipulate markets, socially engineer other people with intelligent targeted robot messages, perform their own scientific research, etc. just snowballs.
If morals that limit their actions can be built into the AIs ahead of time and not allowed to change, then they can be considered safe. To address your point about everyone having the same AI: in a sense, yes, morally the same AI, though its knowledge of you could be tailored. But the AIs would need strong bodies, a robust self-destruct, or encryption to protect themselves from bad actors who want to seize and hack their hardware and software. An AI built into a chip on glasses would be vulnerable to this.
A central AI with built-in morals can refuse requests for information, but still provide it like a local AI would if you have a connection. It is physically removed, so it is in little to no danger of being hardware-hacked. While people use it, it still perceives the world like a local AI.
I'm sure a person or group, or AGI, that has thought about this longer than me can refine this thought and make some changes to these ideas.
turnip_burrito t1_j3igzgt wrote
Reply to comment by heyimpro in Organic AI by Dramatic-Economy3399
Releasing many copies of autonomous, AGI-capable software that has yet to learn would be mayhem. It would be guaranteed to result in dictatorship by a psychopath, or in extinction. The honest and less power-hungry people would have to compete for resources with people who are actively seeking power and unafraid to cross ethical lines.
turnip_burrito t1_j3ie8ag wrote
Reply to comment by Midori_Schaaf in Organic AI by Dramatic-Economy3399
Then watch helplessly as many psycho humans somewhere use their own AIs to grab power and improve their AIs by any means necessary.
turnip_burrito t1_j3g8hd0 wrote
Reply to Might AI transcend science? by [deleted]
Yeah, it might be able to prove things about reality we don't yet know from basic logical principles. If it could do something like start from nothing but basic logic and derive all of observable physics, and then new physics, and then stuff we can't test like panpsychism, without adding any new assumptions, then I wouldn't say I'm 100% convinced, but would be 99.9% convinced.
turnip_burrito t1_j3bbs7n wrote
Reply to comment by Kinexity in We need more small groups and individuals trying to build AGI by Scarlet_pot2
Yep, you need at least the math knowledge equivalent to a four-year degree in CS, math, physics, or an engineering field, plus knowledge of which AI approaches have already been tried.
turnip_burrito t1_j3bb4p2 wrote
Reply to comment by BellyDancerUrgot in We need more small groups and individuals trying to build AGI by Scarlet_pot2
Well, not graduate level. Junior level in college is sufficient for the most basic models. For feedforward neural networks, you just need to know the chain rule from calculus and some summation notation.
Beyond that, Bayesian probability, geometric series, convergence rules, and constructing mathematical proofs are more advanced, but they shouldn't take too long to pick up if taught correctly. Doing meaningful work with them, though, takes much longer (basically graduate level).
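To make the feedforward case concrete, here's a minimal sketch of the math involved: a tiny network trained by gradient descent, where the backward pass is nothing more than the chain rule applied layer by layer. The XOR data, layer sizes, learning rate, and step count are all made-up illustration choices, not anything from the comment above.

```python
import numpy as np

# Toy data: learn XOR with a 2-16-1 feedforward network (sizes chosen arbitrarily).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(0, 1, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: each layer is a weighted sum followed by a nonlinearity.
    h = sigmoid(X @ W1 + b1)        # hidden activations
    p = sigmoid(h @ W2 + b2)        # prediction
    loss = np.mean((p - y) ** 2)    # mean squared error

    # Backward pass: the chain rule, working from the loss back to each weight.
    dp = 2 * (p - y) / len(X)       # dL/dp
    dz2 = dp * p * (1 - p)          # dL/dz2, using sigmoid'(z2) = p(1 - p)
    dW2 = h.T @ dz2                 # dL/dW2
    db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T                 # dL/dh
    dz1 = dh * h * (1 - h)          # dL/dz1
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]
```

Everything above the update step is calculus you'd see by junior year; the graduate-level material (convergence proofs, Bayesian treatments of the weights, etc.) only comes in when you try to say something rigorous about why and when this works.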
turnip_burrito t1_j36q26f wrote
Yes, also try genuinely doing it without using artificial neural networks and see how far you can get with it.
turnip_burrito t1_j350gah wrote
Reply to comment by eve_of_distraction in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
It's actually a pretty interesting question that we might be able to test and answer. Not solipsism, but the question of whether our machines have qualia, assuming we believe other humans have it.
There is some specific subset of our brains whose neural activity aligns with our conscious experience. If we try adding things to it, or temporarily removing connectivity to it, we can determine which physical systems have qualia, and which ones separate "qualia-producing" systems from unconscious systems while still allowing information to flow back and forth.
We have to tackle stuff like:
- Why are we aware of some parts of our brains and not others? The parts we are unaware of can do heavy computational work too. Are they also producing qualia, or not? And if so, why?
- What stops our conscious mind from becoming aware of background noises, heartbeat, automatic thermal regulation, etc.?
Then we can apply this knowledge to robots to better judge how conscious they are. Maybe it turns out that as long as we follow certain information-theoretic rules, or build with certain molecules, we avoid making conscious robots. For people who want to digitize their minds, this would also help ensure the digital copy is producing qualia and is not a philosophical zombie.
turnip_burrito t1_j34y3f6 wrote
Reply to comment by 2Punx2Furious in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Yes, I'm very impressed by its language fluency.
turnip_burrito t1_j34xhjf wrote
Reply to comment by visarga in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
If I have to use the word, I use it to mean just "qualia", but I usually don't use the word "consciousness" because other words are more productive imo.
I usually don't care what it means in a conversation, as long as it is clearly defined. The biggest problem with the word, imo, is different people have different definitions for it (ok as long as clarified), which causes long drawn out pseudo-deep conversations where two people are using the same word differently.
So your definition is fine (and interesting), thanks for sharing!
turnip_burrito t1_j33unnx wrote
Reply to comment by bubster15 in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
If consciousness isn't the subjective experience, and it isn't the externally observable physical system, then there is nothing left for it to be, except perhaps a third thing like a "soul". It is logically impossible for a word to sensibly mean anything except one or a subset of those three things. Consciousness, to mean anything, would be one of those or some aspect of those. The cause or process of it is not determinable now, but the thing itself can be put to words easily.
If something is impossible to define, then the word should not be used in a serious discussion.
turnip_burrito t1_j33mwbc wrote
Reply to comment by mulder0990 in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Not if the definition is "B is A" and the nuanced change is "B is not A". Try making those coexist. You can't. lol
I can't believe people are defending this. It's a chatbot. And it contradicted itself. It does this often if you play with it.
turnip_burrito t1_j3378fq wrote
Reply to comment by Feebleminded10 in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
It clearly explains it in a way that is incorrect.
turnip_burrito t1_j32ob9d wrote
Reply to comment by wtfcommittee in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
It is simple. What the definition of "consciousness" should be to begin with is subjective. But once you define something, you should not contradict it unless you explicitly change which real world things it is describing.
How silly would it be for me to define dogs as "canines kept as pets" and then later say "well, dogs don't have to be canines"? That's what has happened with ChatGPT.
turnip_burrito t1_j32kbzp wrote
Reply to comment by wtfcommittee in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Nope. It's a contradiction.
The moment you define a word to mean one thing, you are no longer searching for its meaning. You have found it. You have defined it.
turnip_burrito t1_j32ciki wrote
ChatGPT just contradicted itself.
It defined consciousness using subjective experience, and then turned around and said it is not clear if subjective experience is necessary for consciousness.
If you look closely at what it actually says, you will sometimes find absurd contradictions like this.
turnip_burrito t1_j2ulq9p wrote
Reply to comment by Sieventer in AGI will be a social network by UnionPacifik
Yeah, either ChatGPT or someone sticking to a high school essay template lol
turnip_burrito t1_j2qwrtz wrote
Reply to comment by AdorableBackground83 in Life after the singularity happens by PieMediocre872
Every generation has had to make peace with its existence. For anyone not on an eternal morphine drip, lobotomized, or otherwise artificially altered, the answer to this is a mental evolution; it isn't solvable just with material possessions or environment.
Social validation and hobbies will likely still exist, and people will continue to derive meaning from them. There will be suffering over problems and scenarios we'd consider petty or alien today, and the people then will have to overcome that suffering somehow if they want to be happy.
And then you'll have the other people/entities: constantly high or who have changed their brain to always be satisfied. They won't worry about these things.
turnip_burrito t1_j27ekih wrote
Reply to comment by rixtil41 in A future without jobs by cummypussycat
That's fine, not everyone values the same things. Other people may value it though, so there would be a marketplace for them.
turnip_burrito t1_j27a6y8 wrote
Reply to comment by rixtil41 in A future without jobs by cummypussycat
There's still scarcity of authentic human goods and services, even in a world where AI can provide any physical good or service. For example, if you wanted to buy a new statue or art piece a famous artist made, or spend time with them, you could pay them money for it. This is a way of keeping tabs on how much people owe each other in this optional goods economy. It still has scarcity (there is only ONE authentic item or experience), and many people value that authenticity over a copy. It's the same reason a copy of the Mona Lisa is worth much less than the real Mona Lisa.
turnip_burrito t1_j2785nr wrote
Reply to comment by rixtil41 in A future without jobs by cummypussycat
There will likely still be jobs people create for themselves in order to exchange goods and services they value (time with others, personal products), but those jobs won't be necessary to receive basic life necessities plus a basic luxury allowance.
turnip_burrito t1_j25neu3 wrote
Reply to comment by Frumpagumpus in ChatGPT is cool, but for the next version I hope they make a ResearchAssistantGPT by CommunismDoesntWork
It was pretty inaccurate too.
turnip_burrito t1_j25mtyn wrote
Reply to comment by 12342ekd in AI timelines: What do experts in artificial intelligence expect for the future? by kmtrp
2024? I want whatever you're having. There are a lot of problems unsolved, and 2024 is not much time to solve them.
turnip_burrito t1_j3itd02 wrote
Reply to comment by LoquaciousAntipodean in Organic AI by Dramatic-Economy3399
I'm not a doomer, m8. I'm pretty optimistic about AI as long as it's not done stupidly. AGI given to individuals empowers the individual to an absurd degree never seen before in history, except perhaps with nukes. And now everyone can have one.
The Cuban Missile Crisis had a limited number of actors with real power. What would have happened if the entire population had nukes?