turnip_burrito
turnip_burrito t1_iv9wn3o wrote
Reply to comment by apple_achia in In the face on the Anthropocene by apple_achia
First, I agree it would be sad to watch people isolate until the end of time in VR by themselves.
I was also working off the assumption that this kind of technology gets built only after some sort of superintelligent AI exists. That's really the only scenario where such a VR situation makes sense to discuss; there's absolutely no way it could be built beforehand. And such a super AI would, if it doesn't slaughter the human race, have the capacity to solve the climate crisis.
If such a thing were invented before climate change and AI are solved... somehow... then yes, that would be a threat to humanity's survival. The equivalent of a man quitting his job and living off savings until he loses his marriage, kids, house, and food.
The way forward after this, for any human beings who want to keep making an impact on the world at large, is I believe to choose the kind of world in which they want to live. All kinds can coexist.
Some will stay normal human beings, which is perfectly fine. This group can spend time doing things in the real world with friends and family.
Some may jump in and out of virtual reality. It doesn't have to be by themselves. They can experience the universe as it is in base reality, or extend their experience to new ones not present in base reality.
Some will want to continue research and development to augment their own capabilities. They'd have to become superintelligent themselves in order to keep aiding humanity's technological progress; then they could match the machines' speed.
Others will do some weird mix of things beyond imagining.
At all points, there will be some who are more prone to isolation than others.
There are and will be options for all people to make a meaningful emotional impact in others' lives if we choose. We just have to want it.
turnip_burrito t1_iv9uiwy wrote
Reply to comment by apple_achia in In the face on the Anthropocene by apple_achia
Technically some electrons or something in the matrix would be shifted around, and the power draw might change. But yes you'd have less of an impact physically on things around you. I don't think that's a great metric for importance/worthiness though. In the grand scheme of things, the universe is too big and all our ripples will fade into physical insignificance, undetectable by those in the future. Yes you will have made a ripple, but no one will be able to tell.
I personally find nature interesting so I'd like to learn more about it, and observe it. The real world has meaning to me in that way. But I understand if others don't. We'll all have the same impact in the end, might as well enjoy the time we have in a way true to ourselves.
Also, I'd be sad to see people live their lives as solitary existences, whether in the real world or in virtual reality. In both cases I'd hope they share time and experiences with other people they care about. I can only hope, though.
turnip_burrito t1_iv9t62y wrote
Reply to comment by apple_achia in In the face on the Anthropocene by apple_achia
Meaning is actually entirely subjective. It depends completely on the individual. If they feel like something is meaningful, then to them it is, even if to you it is meaningless.
Like I don't give a shit about people who do speedruns of games for fun. To me it's meaningless. It's not the most productive way to spend time, to put it lightly. But to the people playing, and the other people watching who enjoy it, it has meaning. Same for soap operas, or kpop bands. To me they're boring as hell. But learning to live with the meaning others derive from these things is important. It's part of what makes the human experience, and the human condition, so varied and interesting.
turnip_burrito t1_iv9s7uh wrote
Reply to comment by apple_achia in In the face on the Anthropocene by apple_achia
Don't shoot the messenger, man.
You're being pretty snobbish for someone who lives a fairly artificial lifestyle yourself. True nature lovers would avoid modern economic systems, urban norms, electronics, movies, and any music that's not just vocal singing. Unless you live in the woods as a hunter-gatherer, you're building walls to remove yourself from nature. Why are you trying to hide away from the natural way of life, like these VR blob cells?
turnip_burrito t1_iv9rvi6 wrote
Reply to comment by apple_achia in In the face on the Anthropocene by apple_achia
Some people actually do, believe it or not. I have asked some people whether they would and they said yes. I wouldn't personally choose to live out my life that way, but I don't think it's our place or our right to tell them they're wrong.
The crazy thing about people is they like different things. Wild, I know.
turnip_burrito t1_iv9582l wrote
Reply to comment by visarga in Ray Kurzweil hits the nail on the head with this short piece. What do you think about computronium / utilitronium and hedonium? by BinaryDigit_
You're right about the definition of thinking. I shouldn't have used the word "think".
I should amend my earlier statement. Replace instances of "thinking" in my statement with "computation". Qualia and survival behaviors due to computation are two circles of a Venn diagram, and humans sit in the overlap. This is what I meant to say.
Here is my argument for why we shouldn't treat computation and qualia as the same: computation can still result in survival, regardless of whether the entity actually feels anything or not.
It is reasonable to extrapolate current trends and hypothesize a robot that survives in its environment as well as any thinking human. But I would hesitate to say it feels anything. Does it feel like anything to be a robot? It's just performing Boolean operations. Even rows of dominos can be arranged to perform Boolean operations, and a long enough chain of self-righting dominos could carry out sophisticated computations (very, very slowly). But I wouldn't grant dominos the status of feeling; that would be preposterous. If you don't like the dominos example, just replace it with a mechanical Turing-complete system of your choice. It seems to me, then, that intelligent computation (which can be used for survival) and feeling (qualia) are two different matters.
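To make the dominos point concrete, here's a toy sketch (my own illustration with made-up helper names, not something from the thread): every gate below is as "mindless" as a falling domino, yet NAND alone composes into arbitrary computation, including arithmetic useful for survival.

```python
# Toy illustration: each gate is as "mindless" as a falling domino,
# yet composing them yields arbitrary computation (NAND is universal).

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Everything else is built purely from NAND.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return or_(and_(a, not_(b)), and_(not_(a), b))

def half_adder(a: bool, b: bool):
    """Add two bits: returns (sum, carry). No feeling required anywhere."""
    return xor_(a, b), and_(a, b)

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> sum {int(s)}, carry {int(c)}")
```

Nothing in that composition looks like a place where "feeling" enters, which is the intuition behind separating intelligent computation from qualia.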
However, it is also possible that computation and qualia are never separate, even outside biological brains. In that case, panpsychism would be true. But how could we know? For now, we can't, and we may never be able to.
Tl;dr: I think you are incorrectly assuming that survival computation and the ability to feel a subjective experience only ever occur together, as in a person or animal. I'm suggesting there is also a possibility they can exist separately. There is a third possibility too: that all computation (particle interactions) in the universe coincides with qualia, which is a form of panpsychism. Panpsychism doesn't require that everything has a "mind"; it can just posit the "mindlike aspect" of qualia.
turnip_burrito t1_iv6x18r wrote
Reply to Is Twitter Secretly "Going AI"? by MythOfMyself
This is unlikely. Not everything happens for a reason related to AI. And Musk has a reputation for making poorly thought-out decisions when it comes to some things.
turnip_burrito t1_iv6vmk9 wrote
Reply to comment by visarga in Ray Kurzweil hits the nail on the head with this short piece. What do you think about computronium / utilitronium and hedonium? by BinaryDigit_
Panpsychism can also mean that a mindlike aspect, such as qualia (not necessarily mind itself), is fundamental and ubiquitous. We truly don't know whether qualia is ubiquitous, since we have not so far found, and may never find, a way to test it. In this way, panpsychism is not misguided, but neither is it a scientifically testable concept.
Thinking is what keeps complex agents alive in the environment, not necessarily qualia. Thinking (computation) and feeling may be separate, or may always coincide.
turnip_burrito t1_iv56wrq wrote
Reply to comment by TheHamsterSandwich in Ray Kurzweil hits the nail on the head with this short piece. What do you think about computronium / utilitronium and hedonium? by BinaryDigit_
Checkmate, AItheists.
turnip_burrito t1_iv56f3b wrote
Reply to comment by Down_The_Rabbithole in Ray Kurzweil hits the nail on the head with this short piece. What do you think about computronium / utilitronium and hedonium? by BinaryDigit_
Sounds like you just need to belieeeeeeve.
We can run computronium off of these farts if they are plentiful enough
turnip_burrito t1_iv567l5 wrote
Reply to comment by Kinexity in Ray Kurzweil hits the nail on the head with this short piece. What do you think about computronium / utilitronium and hedonium? by BinaryDigit_
"Energy gradient" in English terms, I think.
turnip_burrito t1_iv3pcyt wrote
Reply to Reading bedtime stories to your kids is hard work. Now AI will do it for you! by blazedemavocados
Next we should have an AI that plays catch with your kids and asks them how their day was, so you, their real parent, can keep playing WoW VR 24/7 without having to talk to them
turnip_burrito t1_iv3olh9 wrote
Reply to comment by GoGayWhyNot in Merger of consciousnesses by sonderlingg
Just so everyone understands: this is a sci-fi article, not real.
Fun read though.
turnip_burrito t1_iv3mztv wrote
Reply to comment by BinaryDigit_ in Ray Kurzweil hits the nail on the head with this short piece. What do you think about computronium / utilitronium and hedonium? by BinaryDigit_
Did you mean to say panpsychism? Also, the answer is no. It doesn't validate it, unless it can find a way to logically prove it is true.
turnip_burrito t1_iv3mrtd wrote
Reply to comment by Yozhur in How do you think an ASI might manifest? by SirDidymus
Yeah, we should remember it's a fancy magic trick: smoke and mirrors that give the illusion of life and a veneer of feeling.
turnip_burrito t1_iv2u3xz wrote
Reply to How do you think an ASI might manifest? by SirDidymus
It depends on what the initial AGIs are tasked to do. Whatever their ultimate goal is, building an accurate world model and a fast mind are instrumental goals needed to accomplish it.
Let's assume someone gives some AGI somewhere the freedom to just exist and collect experience. I expect an AGI to begin collecting real and virtual data until it is fairly good at modeling the physical world and human interactions. It will know cause and effect, and understand human intentions. It will also try to upgrade its intelligence (cloning itself, adding more hardware, editing its own code, etc.) because faster processing and better algorithms will make it better at achieving the ultimate goal.
Now we get to the tricky part of HOW it does these things. The ultimate goal of the AGI, its core impulse, will be determined by its builders, and this will cause it to reach ASI level in different ways. I think its intelligence-gathering phase will result in an AGI that is (surprisingly!) well-aligned to the expressed intentions of the builders. Let's look at four cases of the builders' expressed intentions:
- "You, the AGI, will always do as I/some human moral role model/human philosopher would intend." The AI's actions will be bounded by its internal models of the human. It will try to understand the humans better and refine its model of their intentions. It will likely not overreach in a destructive way unless explicitly told to. Whether this is good or bad depends on whose ideals and words it is meant to follow. It is clear which person/people has control of the AI in this scenario. Summary: Good or bad ending (depends on humans in control)
- "Help humans reach their full potential while adhering to this literature/list of ethics." The AGI will understand the meaning behind these words and work with humans to increase its capabilities. It will take actions to improve only if they are not deemed harmful according to its ethics. As an ASI, it will reflect the same ethical constraints used on its ancestral AGI. It isn't quite as clear which human/group maintains control in this scenario. Summary: Good or bad ending (depends on initial list of ethics)
- "Maximize my company's profits." The AGI will again understand exactly what this means. Profits are gained when revenue is higher than operating costs. The AGI will take underhanded and stealthy actions to increase this one company's profits (stocks, coercion) and basically lock humanity into a never-ending corporate dictatorship. Even the owners of the company will not be safe, since logically they could change the company to thwart the AGI. Humans will live very restrictive lives by today's standards. Now consider if the company's industry doesn't require human consumers (i.e., it isn't service-based). With no moral code except profit, the resulting ASI will force humanity into extinction as it creates automated routines to play the part of consumers. Basically, you get something like an everlasting paperclip factory or a grey goo scenario. Summary: Very bad ending
- "Help my country/company/group/friends/me take over everything." It will do whatever it can to put you in a position of ultimate authority, no matter the cost. This would lead to widespread human suffering if the controlling human party doesn't specify otherwise. This AGI may, even as an ASI, remain under the control of that group of people, since it is by definition part of "everything". What happens next might still be up to the creators. Summary: Bad or good ending (depends on humans in control, but better to avoid)
Sorry for the essay. Hopefully you find something worth thinking about in this.
turnip_burrito t1_iuzyziw wrote
Reply to comment by Plouw in Google’s ‘Democratic AI’ Is Better at Redistributing Wealth Than America by Mynameis__--__
It is a difficult problem. I don't know what the solution is either.
turnip_burrito t1_iuzt1dx wrote
Reply to comment by pcake1 in What would the "elite" be doing if they thought AGI was about to happen? by sideways
This sounds oddly specific. Whatever could you be implying?
turnip_burrito t1_iuzidr3 wrote
Reply to comment by 4e_65_6f in What would the "elite" be doing if they thought AGI was about to happen? by sideways
Yeah, real estate on a planet would be pretty scarce for a while, until it gets seized by the AGI anyway.
turnip_burrito t1_iuzcr3d wrote
Reply to comment by Plouw in Google’s ‘Democratic AI’ Is Better at Redistributing Wealth Than America by Mynameis__--__
Open-source near-AGI sounds like a bad idea. The technology has infinite impact in any well-funded group's hands. I'd much rather have a closed-doors team or teams (likely sharing many of my values) develop and use it first than expose it to the world and risk a group with values I disagree with controlling the world. Or risk having multiple AIs all competing with each other for power.
turnip_burrito t1_iuyy4qb wrote
Reply to comment by theferalturtle in Google’s ‘Democratic AI’ Is Better at Redistributing Wealth Than America by Mynameis__--__
Yeah the absurdity of their "meritocratic power" mindset is astounding.
turnip_burrito t1_iuyvhlb wrote
Reply to comment by sadboyleto2 in What will the creation of ASI lead to? by TheHamsterSandwich
That would be bad for humanity, since the only reason we enjoy life at all is the positive irrational aspects of our personalities. Without these, we wouldn't want or feel like doing anything; it'd be an empty existence. Sure, reducing trauma to a degree is a good thing and should be done, but being completely stripped of emotion and becoming purely rational and logical is antithetical to what many people would want.
We should strive to avoid that kind of future goal for an AI.
turnip_burrito t1_iuy6ry5 wrote
Reply to comment by Down_The_Rabbithole in Google’s ‘Democratic AI’ Is Better at Redistributing Wealth Than America by Mynameis__--__
That's not quite accurate as a description of AI in general; what you described is just today's dominant "curve fitting" AI, which doesn't generalize outside of the training distribution. This particular mathematical modeling style is, as you've said, problematic.
However, it is possible to build a different type of AI that runs simulations step by step, starting from sounder and minimally biased assumptions, in order to make predictions outside of the existing data distribution.
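As a toy illustration of that distinction (my own sketch, with an assumed drag-physics model and made-up parameter values, not anything from the original comment), compare a polynomial fit extrapolated beyond its training window against a step-by-step simulation built from the underlying assumptions:

```python
# Curve fitting vs. step-by-step simulation, extrapolating the velocity of a
# falling object with air drag far beyond the observed "training" window.
import numpy as np

g, k = 9.8, 0.5  # assumed priors: gravity and drag coefficient

def simulate_velocity(t_end: float, dt: float = 0.01) -> float:
    """Integrate dv/dt = g - k*v step by step (Euler); valid at any horizon."""
    v = 0.0
    for _ in range(int(t_end / dt)):
        v += (g - k * v) * dt
    return v

# "Training data": velocities observed only during the first 2 seconds.
t_train = np.linspace(0, 2, 20)
v_train = np.array([simulate_velocity(t) for t in t_train])

# Curve-fitting model: a cubic polynomial fit to that observed window.
coeffs = np.polyfit(t_train, v_train, 3)

t_test = 10.0  # far outside the training distribution
print("polynomial extrapolation:", np.polyval(coeffs, t_test))  # far off the truth
print("stepwise simulation:     ", simulate_velocity(t_test))   # ~19.5
print("true terminal velocity:  ", g / k)                       # 19.6
```

The fitted curve matches the data it saw and then fails off-distribution, while the simulation keeps working because it steps forward from the assumed dynamics rather than from the data's shape.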
turnip_burrito t1_iuvnh3f wrote
Reply to comment by swazhr in What will the creation of ASI lead to? by TheHamsterSandwich
No one should force people to live in a place with X politics, so yes, it's entirely possible most of those places would be almost empty or not exist at all. No manipulation would be performed to make people stay. The balance of how many resources should be given to these societies can be determined by the ASI as it observes and talks with people, though a radical abundance of resources will likely make this a non-issue for sustaining less technological societies.
People with a very niche ideal society would have to live with the fact that no one else wants to live there with them. If there are not enough residents to make that niche society function as they'd prefer, then the oddball would need to either try to integrate into whatever is available, or go live in VR land with virtual residents of their favorite society. However, if enough people existed in total, the chance of such a society existing would be higher.
Eventually, people would independently sort themselves so that they spend most of their time in whichever most ideal population clusters exist, without being forced to do anything.
turnip_burrito t1_iv9yaau wrote
Reply to comment by apple_achia in In the face on the Anthropocene by apple_achia
Yes, that's correct. Another (less likely?) scenario is an AGI completely controlled by people, with no actual AGI autonomy, or only very limited autonomy. In that case we could use it to accelerate technological progress and make the things you listed easier.