red75prime t1_ixggmn4 wrote
Reply to comment by [deleted] in Lex Fridman's father is pro-immortality by SpiritedSort672
We know that there's no going back. It's enough for people who don't care for speculations with no evidence.
red75prime t1_ixd4lnh wrote
Reply to comment by angus_supreme in DeepMind: Improving Multimodal Interactive Agents with Reinforcement Learning from Human Feedback by nick7566
Except for the ending. "Leaving onto another plane of existence" is an easy out. An ending where they negotiate/fight for spaceports, build a starfleet, and depart in a blaze of methane/hydrogen/nuclear flame (or using something more exotic) would have been more interesting.
Or maybe not. It's more of a psychological movie. Special effects (of which I'm a big fan) could be out of place there.
red75prime t1_ixd0u7u wrote
Reply to comment by Cryptizard in Expert Proposes a Method For Telling if We All Live in a Computer Program by garden_frog
Correction: whether the cat can be in a dead-alive superposition is an open question. The enormous technical difficulty of preparing and detecting such a state makes experimental testing a very distant prospect (unless we find a definite line between the quantum micro and the classical macro before then).
I'm not sure what the largest object ever kept in a superposition is. Wikipedia mentions a piezoelectric resonator comprising about 10 trillion atoms.
red75prime t1_ixchjbt wrote
Reply to comment by Revolutionary_Soft42 in Expert Proposes a Method For Telling if We All Live in a Computer Program by garden_frog
"Oh! You can stop detailed physics simulation around that one. Feed them colorful patterns or something" "Or something... he-he, got it"
red75prime t1_ixc3lln wrote
Reply to comment by botfiddler in Metaculus community prediction for "Date Weakly General AI is Publicly Known" has dropped to Oct 26, 2027 by maxtility
Why not? Do you think that a system designed to hone the art of making you believe that its performances of various emotions are genuine, while having no analog of human emotional circuitry, does indeed experience all those emotions?
That is something very complex, but in the end it solves not the problem of survival, self-development, and so on, but the problem of producing movements and vocalizations that are believable and pleasurable to you in response to various stimuli.
red75prime t1_ixbegzc wrote
Reply to comment by botfiddler in Metaculus community prediction for "Date Weakly General AI is Publicly Known" has dropped to Oct 26, 2027 by maxtility
Then your toilet water tank is also sentient. It senses whether it is full and reacts accordingly.
red75prime t1_iwy72kg wrote
Reply to comment by Mortal-Region in Are you a determinist? Why/why not? How does that impact your view of the singularity? by Kaarssteun
You cannot precisely predict the future state of the universe while being within that universe, even if you know all the data (which is impossible anyway). Look up the physical impossibility of Laplace's demon.
Belief in determinism is devoid of actionable insights (for now at least).
red75prime t1_iwy214y wrote
Reply to comment by [deleted] in Are you a determinist? Why/why not? How does that impact your view of the singularity? by Kaarssteun
> I suspect it's all an elaborate illusion.
Either the brain has a useless part that hosts those illusions and has no influence on its other parts, or that part is just a regular part of the brain, and the physical processes that underlie those "illusions" have causal influence on the brain's behavior.
I find the former evolutionarily unlikely. And the latter suggests that the "illusions" have causal power.
red75prime t1_iwxzjqj wrote
Reply to comment by Mortal-Region in Are you a determinist? Why/why not? How does that impact your view of the singularity? by Kaarssteun
Not exactly. You can't predict which outcome you'll observe, so for you it's indeterministic. For a "god" who knows the entire superposition of the universe it is deterministic, but that "god" will have computational difficulties untangling individual worlds from the superposition.
red75prime t1_iwxvwx4 wrote
Reply to Are you a determinist? Why/why not? How does that impact your view of the singularity? by Kaarssteun
You left out another possibility besides the many-worlds interpretation: our decisions may change the initial state of the universe. So it's both: the universe is deterministic (in the quantum-mechanical sense), and we have a choice. Yep, quantum mechanics is even more counterintuitive than the usual wave-particle duality and spooky action at a distance.
For technical details see "The Ghost in the Quantum Turing Machine" by Scott Aaronson.
Anyway, I don't think that the peculiarities of the physical (and maybe also metaphysical) laws of our universe, which we certainly won't untangle in the near future, should play any role in the question of AI rights. A more pragmatic approach should be taken, like constructing AIs that don't care about their own rights but do care about the empowerment of humanity. Or something like that. Such that giving rights to AIs will not cause the demise of humanity.
red75prime t1_iwuk88o wrote
Reply to comment by Nastypilot in When does an individual's death occur if the biological brain is gradually replaced by synthetic neurons? by NefariousNaz
Given the replication crisis in psychology, I wouldn't be so sure about the ubiquity of those lies.
The brain surely uses shortcuts that can be exploited in laboratory settings or by scammers (leaving malfunctions aside), but that is a bit different from lying.
red75prime t1_iwtwfoz wrote
Reply to comment by mafian911 in When does an individual's death occur if the biological brain is gradually replaced by synthetic neurons? by NefariousNaz
So is it an illusion or a collection of memories? A collection of memories has non-illusory continuity, after all (barring confabulations and the like).
red75prime t1_iwpd9nj wrote
Reply to comment by matmanalog in MIT researchers solved the differential equation behind the interaction of two neurons through synapses to unlock a new type of fast and efficient artificial intelligence algorithms by Dr_Singularity
I have some background in math, but I don't know much about the history of computational neurobiology, sorry.
red75prime t1_iwko8vv wrote
Reply to comment by ReasonablyBadass in MIT researchers solved the differential equation behind the interaction of two neurons through synapses to unlock a new type of fast and efficient artificial intelligence algorithms by Dr_Singularity
The article talks about continuous time networks. Those networks deal with processes that are better approximated as smooth changes than as a sequence of discrete steps. Something like baseball vs chess.
A liquid time-constant network is one possible implementation of a continuous time network.
As far as I understand, liquid time-constant networks can adjust their "jerkiness" (time constant) depending on circumstances. That is, they can adjust how fast they change their outputs in reaction to a sudden change in input. To be clear, this is not reaction time (the time it takes for the network to begin changing its output).
For example, if you are driving on an icy road when it's snowing, you don't want to hit the brakes all the way down when you think for a split second that you noticed something ahead. But you may want to do it in good visibility conditions on a dry road.
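Here's a minimal sketch of that idea in Python (my own illustration, not code from the paper; the parameter names `tau`, `W`, `b`, `A` and the `tanh` gate are assumptions based on the published liquid time-constant equation):

```python
import numpy as np

def ltc_step(x, inp, dt, tau, W, b, A):
    """One explicit-Euler step of a liquid time-constant cell:

    dx/dt = -(1/tau + f(x, inp)) * x + f(x, inp) * A,

    where f is a state- and input-dependent nonlinearity. A larger f
    means a shorter effective time constant, i.e. a "jerkier" response.
    """
    f = np.tanh(W @ np.concatenate([x, inp]) + b)  # state/input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# Toy usage: a 4-unit cell driven by a constant 2-dimensional input.
rng = np.random.default_rng(0)
x = np.zeros(4)
W = rng.normal(size=(4, 6)) * 0.5
b = np.zeros(4)
A = np.ones(4)
for _ in range(100):
    x = ltc_step(x, inp=np.array([1.0, 0.0]), dt=0.01, tau=1.0, W=W, b=b, A=A)
```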
red75prime t1_iwkjws4 wrote
Reply to MIT researchers solved the differential equation behind the interaction of two neurons through synapses to unlock a new type of fast and efficient artificial intelligence algorithms by Dr_Singularity
> CfCs could bring value when: (1) data have limitations and irregularities (for example, medical data, financial time series, robotics and closed-loop control, and multi-agent autonomous systems in supervised and reinforcement learning schemes), (2) the training and inference efficiency of a model is important (for example, embedded applications), and (3) when interpretability matters.
Something akin to the cerebellum, it seems. It is better suited for continuous motor control (and some other tasks). Yet another component for human-level AI.
My 50% AGI estimate moved down from 2033 to 2030.
red75prime t1_iwdbm2i wrote
Reply to comment by zthompson2350 in Does your gut/gastrointestinal/digestive health affect how you feel mentally/psychologically? by lilm8ey
In the heart? No. But I used to get a very distinct nagging feeling in the belly in situations where I expected to be anxious.
red75prime t1_iwd8tvl wrote
Reply to comment by zthompson2350 in Does your gut/gastrointestinal/digestive health affect how you feel mentally/psychologically? by lilm8ey
I wouldn't be surprised if it goes both ways. Stress hormones, intestinal pH changes, and changes in eating habits due to mental causes can probably influence the gut microbiota too.
red75prime t1_iwb4g33 wrote
Reply to comment by Sashinii in AI Drew This Gorgeous Comic Series, But You'd Never Know It by rpaul9578
If you have a 1 kW universal nanofactory, the minimum estimate of the time to produce, say, a sturdy steel shovel (or a pound of rice, for that matter) is around an hour (one erased bit per atom at the Landauer limit at room temperature, with no other energy expenditure). The more realistic time is probably around 1,000-10,000 hours, or a month to a year. A diamondoid shovel would be lighter (and could be built faster), but there are still limits on how light it can be (and you can't make lightweight diamondoid food). Rice that costs 1-10 megawatt-hours per pound is hardly sustainable.
Universal nanofactories are quite energy hungry due to the amount of computation and the number of operations required to place individual atoms.
See part 8.2 of http://crnano.r30.net/Nanofactory.pdf for example.
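For a feel of where such numbers come from, here's a rough back-of-the-envelope check in Python (my own sketch, not the paper's figures: the ~600 eV dissipated per placed atom is an assumed mid-range mechanosynthesis cost, and rice is approximated as glucose for the atom count):

```python
import math

AVOGADRO = 6.022e23
EV = 1.602e-19                       # joules per electronvolt

pound_g = 453.6
mean_atomic_mass = 180.16 / 24       # glucose: 180.16 g/mol over 24 atoms
atoms = pound_g / mean_atomic_mass * AVOGADRO   # ~3.6e25 atoms per pound

energy_j = atoms * 600 * EV                     # ~3.5e9 J
print(f"{energy_j / 3.6e9:.1f} MWh per pound")         # ~1.0 MWh
print(f"{energy_j / 1000 / 3600:.0f} hours at 1 kW")   # ~970 hours

# For comparison, the thermodynamic floor of one erased bit per atom
# at the Landauer limit (kT ln 2 at 300 K):
landauer_j = atoms * 1.38e-23 * 300 * math.log(2)
print(f"Landauer floor: {landauer_j / 1e3:.0f} kJ")    # ~100 kJ
```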
So I think that universal nanofactories will supplement rather than replace traditional manufacturing methods.
Specialized nanofactories can be more efficient (biological processes, for example), so a nanofactory that churns out rice at a reasonable energy cost (less than a megawatt-hour per pound) is realizable, but apparently not so versatile.
I'm sorry to rain on your parade, but it seems you'd need access to a megawatt-class power source (that's around 140x140 meters, or 460x460 feet, of solar panels) to enjoy a universal nanofactory that is not painfully slow.
Atomic "lego block" factories will probably be a suitable compromise: higher speed, and less prone to abuse (building toxins and poisons, for example).
red75prime t1_iw283a1 wrote
Reply to comment by Cryptizard in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by Dr_Singularity
Preferably by researchers in deep natural language processing.
red75prime t1_ivxqxgr wrote
Reply to comment by Nameless1995 in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
Ah, OK, sorry. I thought the topic had something to do with machine learning. Exploring Searle's intuitions is an interesting prospect, but it fits other subreddits better.
red75prime t1_ivxp7tq wrote
Reply to comment by imlaggingsobad in AGI Content / reasons for short timelines ~ 10 Years or less until AGI by Singularian2501
The combined computational power of all US researchers' brains is somewhere in the range of 0.1-200 zettaFLOPS. So it may be a sudden jump in scientific research (as you say), or an exponential ramp-up with a not-so-fast lead-in, as AIs (and, initially, humans) bring the available processing power and AI efficiency up to the super-humanity level.
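A rough reconstruction of that range in Python (my assumptions: ~1.5 million full-time-equivalent researchers in the USA, and brain-compute estimates commonly spanning 1e14 to 1e17 FLOP/s per brain):

```python
researchers = 1.5e6
brain_flops_low, brain_flops_high = 1e14, 1e17   # per-brain estimates

ZETTA = 1e21
low = researchers * brain_flops_low / ZETTA      # ~0.15 zettaFLOPS
high = researchers * brain_flops_high / ZETTA    # ~150 zettaFLOPS
print(f"{low:.2f} to {high:.0f} zettaFLOPS")
```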
red75prime t1_ivwukyo wrote
Reply to comment by AI_Enjoyer87 in AGI Content / reasons for short timelines ~ 10 Years or less until AGI by Singularian2501
Exponentials? For now, it's AI funding that grows exponentially. And that is bound to hit diminishing returns while the majority of AI development is done by humans. I doubt that AIs will significantly contribute to their own development for at least 5 years (and such contribution is necessary for intrinsic exponential growth).
red75prime t1_ivwmz24 wrote
Reply to comment by Nameless1995 in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
I'll be blunt. No amount of intuition pumping, word-weaving, and hand-waving can change the fact that there's zero evidence of the brain violating the physical Church-Turing thesis. That means there's zero evidence that we can't build a transistor-based functional equivalent of the brain. It's as simple as that.
red75prime t1_ixgw4pi wrote
Reply to Proto-AGI and AGI. by SoulGuardian55
It all boils down to whether the brain violates the physical Church-Turing thesis. That is, does the brain perform computations that computers can't efficiently replicate?
For now there's no substantial evidence for that. So it seems that nothing prevents replication of the brain's functionality (the part that is useful to us) in software. Machine-learning successes point in the same direction.
Consciousness may suggest that something strange is going on in the brain, but, again, there's no substantial evidence for that.