turnip_burrito
turnip_burrito t1_j5m4aoq wrote
Reply to comment by sticky_symbols in AGI will not happen in your lifetime. Or will it? by NotInte
Yes, it depends on the timescale of the system.
turnip_burrito t1_j5lydc0 wrote
Reply to comment by Cryptizard in Are we a step closer to L.E.V? by Middle_Cod_6011
That's a funny thought.
turnip_burrito t1_j5gdvss wrote
Many people already find purpose in doing things that don't require being a cog in a machine or comparison to other entities. If they rely on those things to feel fulfilled, then they will need to find something else when ASI is created.
turnip_burrito t1_j5ege91 wrote
Reply to comment by ZaxLofful in It is important to slow down the perception of time for future sentient A.I, or it would become a living LOOP hell for itself by [deleted]
Yep, also tired of people claiming "consciousness" or "sentience" are undefinable enigmas. Like you said, we don't know why the stuff those words refer to exists, or what causes it, but we sure as hell can define what the words mean.
turnip_burrito t1_j5e92iz wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Yes, and I would add that we just need them to fall into patterns of behavior that we can look at and say "they are demonstrating these specific values", at which point we can basically declare success. The actual process of reaching this point probably involves showing them stories, modeling behavior for them, and getting them to participate in events in a way consistent with those values (they get a gift, you tell them "say thank you", and you wait until they say "thank you" so it becomes habituated). This is basically what you said: relying on our collective responses to "learn"....
turnip_burrito t1_j5bg75e wrote
Reply to What do you think an ordinary, non-billionaire non-PhD person should be doing, preparing, or looking out for? by Six-headed_dogma_man
Lobbying your local elected officials, if they exist, for more progressive policies to help workers displaced by automation.
turnip_burrito t1_j58uhwm wrote
Reply to comment by 23235 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
> We might influence AI values in ways other than enforcement, like through modelling behavior and encouragement, like raising children who at some point become (one hopes) stronger and cleverer and more powerful than ourselves, as we naturally decline.
What you are calling modelling and encouragement here is what I meant to include under the umbrella term of "enforcement". Just different methods of enforcing values.
We will need to put in some values by hand ahead of time, though. One such value is mimicry, or wanting to please humans, or empathy, to a degree, like a child has; otherwise I don't think any amount of role modeling or teaching will actually leave its mark. It would have no reason to care.
turnip_burrito t1_j58ty9o wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Sounds like instilling values to me. You may disagree with the phrasing I'm using, but that's what I'd call this process, since it sounds like you're trying to get it accustomed to exploring philosophical viewpoints.
turnip_burrito t1_j58ptmu wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Let's say you create an AI. What would you have it do, and what values/goals would you instill in it?
turnip_burrito t1_j5841mx wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
You're acting like an asshole, and that makes people less likely to listen to you. If your goal is to convince people, then your tone is actively working against it.
turnip_burrito t1_j583mcf wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
AI escalating beyond our control is an extremely bad thing if its values don't overlap with ours.
We must enforce our values on the AI if we are going to enjoy life after its invention.
turnip_burrito t1_j582edu wrote
Reply to just out of curiosity can we create a more vivid and larger world than the real world? by Most_Confusion8428
If you want to constantly simulate a larger universe even in places the user isn't observing, then no, you can't do it in real time. If you slow the simulation down by many, many orders of magnitude and use a quantum computer, you might be able to pull it off, but it just isn't worth it.
If you want the simulation to only be computed at high resolution near the user (and low resolution at long distances), yes you can give the illusion of a more vivid and larger universe.
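That second approach is essentially level-of-detail (LOD) rendering: spend full compute only near the observer, and approximate everything else. A minimal sketch of the idea, with made-up distance thresholds and tier names purely for illustration:

```python
# Level-of-detail (LOD) sketch: simulate at high fidelity only near the
# user, and progressively coarser with distance. The thresholds and tier
# names below are hypothetical illustrative values, not from any engine.

def lod_for_distance(distance: float) -> str:
    """Pick a simulation fidelity tier based on distance from the user."""
    if distance < 10.0:
        return "full"      # full geometry and physics
    elif distance < 100.0:
        return "reduced"   # simplified meshes, coarse physics steps
    elif distance < 1000.0:
        return "impostor"  # billboards / statistical approximations
    else:
        return "dormant"   # not simulated at all until observed

# Example: a nearby object gets full simulation, distant ones almost none.
objects = [("tree", 3.0), ("mountain", 250.0), ("far_galaxy", 1e9)]
for name, dist in objects:
    print(name, "->", lod_for_distance(dist))
```

The point is that total cost scales with what the user can actually perceive, not with the nominal size of the universe, which is what makes the "larger world" illusion affordable.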
turnip_burrito t1_j54msg5 wrote
Reply to comment by Thiccboifentalin in What is easier to create a paradise for one or for all? by Thiccboifentalin
Maybe, but if they believe it's the real world, then that's what matters to them. It is also more real because the machinery running the VR world is built within it.
Also, the simulation hypothesis is unfalsifiable, so it's no more or less helpful than asking questions like "what if dying sends our souls to heaven" or "what if food is actually a magic but undetectable drug and I'm hallucinating my life".
turnip_burrito t1_j54ix6t wrote
Because as a matter of subjective opinion, some people might want it to be real, not simulated. That's all.
turnip_burrito t1_j4yoclv wrote
Reply to OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
Come on guys. Just wait a few months until they show it off and then you can see how AGI/not AGI it is, haha.
turnip_burrito t1_j4s0yqx wrote
Reply to comment by AsheyDS in Perhaps ChatGPT is a step back? by PaperCruncher
That's just sad. :(
Thanks for the answer though.
turnip_burrito t1_j4rwf9n wrote
Reply to comment by AsheyDS in Perhaps ChatGPT is a step back? by PaperCruncher
Taxes, I mean.
The US for example has national labs paying scientists to do research, so it's not unheard of.
turnip_burrito t1_j4pe9vm wrote
Reply to comment by AsheyDS in Perhaps ChatGPT is a step back? by PaperCruncher
Why don't we have more publicly funded research groups making cutting-edge progress, like the private OpenAI, DeepMind, and Meta labs do? Naïvely, that seems like an easy way to solve this problem to me.
turnip_burrito t1_j4p9hov wrote
Reply to comment by vernes1978 in Researchers develop an artificial neuron closely mimicking the characteristics of a biological neuron by MichaelTen
It's a joke though, not meant to be believed. It even says it's fake right in the post. If someone reads that and actually believes it, let's just say it'll be tough to educate them.
turnip_burrito t1_j4ozpd5 wrote
Reply to comment by Hazzman in Researchers develop an artificial neuron closely mimicking the characteristics of a biological neuron by MichaelTen
"A total fabrication" ;)
turnip_burrito t1_j4ndvmw wrote
Reply to comment by vernes1978 in Researchers develop an artificial neuron closely mimicking the characteristics of a biological neuron by MichaelTen
It's made up (a joke). Not real. Hoping that was obvious from... well, the post.
If you knew that and were playing along with it, then disregard.
turnip_burrito t1_j4ndpvo wrote
Reply to comment by Hazzman in Researchers develop an artificial neuron closely mimicking the characteristics of a biological neuron by MichaelTen
This was the funniest thing I've read in a while.
turnip_burrito t1_j4iehdq wrote
Reply to When will humans merge with AI by [deleted]
2140
No, 3140.
turnip_burrito t1_j4fhm2j wrote
Reply to comment by [deleted] in What void are people trying to fill with transhumanism? by [deleted]
Okay.
> I wouldn't.
You wouldn't believe that a person who constantly accrues wealth, and says they aren't satisfied, isn't satisfied? Odd.
> So, since drugs are a form of technology, would you then say that spending weeks on end strung out on heroin is living?
A very narrow form of living which also has extremely adverse, painful effects on the individual and others.
But I am sure you understand heroin is not the only form of technology, and not the way most people use it, and not what my focus is on. Are you here to have an honest discussion, or to be a bad faith contrarian?
turnip_burrito t1_j5m4r19 wrote
Reply to Google AI's Great Comeback of 2023 - Will it be able to Respond to ChatGPT? by BackgroundResult
They can compete. They have the technology. Question is, can they do so without destroying their revenue?