red75prime

red75prime t1_ixgw4pi wrote

It all boils down to whether the brain violates the physical Church-Turing thesis. That is, does the brain perform computations that computers can't efficiently replicate?

For now there's no substantial evidence for that. So it seems that nothing prevents replicating the brain's functionality (the part that is useful to us) in software. Machine learning successes point in the same direction.

Consciousness may suggest that something strange is going on in the brain, but, again, there's no substantial evidence for that.

4

red75prime t1_ixd4lnh wrote

Except for the ending. "Leaving onto another plane of existence" is an easy out. An ending where they negotiate/fight for spaceports, build a starfleet, and depart in a blaze of methane/hydrogen/nuclear flame (or something more exotic) would be more interesting.

Or maybe not. It's more of a psychological movie, and special effects (of which I'm a big fan) could be out of place there.

6

red75prime t1_ixd0u7u wrote

Correction: whether the cat can be in a dead-alive superposition is an open question. The enormous technical difficulties of maintaining and detecting such a state make experimental testing a very distant prospect (unless we find a definite line between the quantum micro and the classical macro before then).

I'm not sure what the largest object ever kept in a superposition is. Wikipedia mentions a piezoelectric resonator comprising about 10 trillion atoms.

1

red75prime t1_ixc3lln wrote

Why not? Do you think that a system designed to hone the art of making you believe its performances of various emotions are genuine, while having no analog of human emotional circuitry, does indeed experience all those emotions?

Such a system is certainly very complex, but in the end it is solving not the problem of survival, self-development, and so on, but the problem of producing movements and vocalizations that you find believable and pleasurable in response to various stimuli.

3

red75prime t1_iwy214y wrote

> I suspect it's all an elaborate illusion.

Either the brain has a useless part that hosts those illusions and has no influence on its other parts, or that part is just a regular part of the brain, and the physical processes that underlie those "illusions" have causal influence on its behavior.

I find the former evolutionarily unlikely. And the latter suggests that the "illusions" have causal power.

1

red75prime t1_iwxvwx4 wrote

You left out another possibility besides the many-worlds interpretation: our decisions may change the initial state of the universe. So it's both: the universe is deterministic (in the quantum-mechanical sense), and we have a choice. Yep, quantum mechanics is even more counterintuitive than the usual wave-particle duality and spooky action at a distance.

For technical details see "The Ghost in the Quantum Turing Machine" by Scott Aaronson.

Anyway, I don't think that peculiarities of the physical (and maybe also metaphysical) laws of our universe, which we certainly will not untangle in the near future, should play any role in the question of AI rights. A more pragmatic approach should be taken, like constructing AIs that don't care about their own rights but care about the empowerment of humanity, or something like that, such that giving rights to AIs will not cause the demise of humanity.

1

red75prime t1_iwko8vv wrote

The article talks about continuous-time networks. Those networks deal with processes that are better approximated as smooth changes than as sequences of discrete steps: something like baseball vs. chess.

A liquid time-constant network is one possible implementation of a continuous-time network.

As far as I understand, liquid time-constant networks can adjust their "jerkiness" (time constant) depending on circumstances. That is, they can adjust how fast they change their outputs in reaction to a sudden change in input. To be clear, this is not reaction time (the time it takes for the network to begin changing its output).

For example, if you are driving on an icy road when it's snowing, you don't want to hit the brakes all the way down when you think for a split second that you noticed something ahead. But you may want to do it in good visibility conditions on a dry road.
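To make the adjustable-time-constant idea concrete, here is a minimal Euler-integration sketch of a liquid time-constant cell. The shapes, the `tanh` gate, and the parameter names are my illustrative choices, not the exact formulation from the paper; the point is only that the effective time constant depends on the current input and state.

```python
import numpy as np

def ltc_step(x, I, dt, tau, W, b, A):
    """One Euler step of a toy liquid time-constant cell.

    The effective time constant 1 / (1/tau + f) shrinks when the
    nonlinearity f responds strongly to the input, so the state can
    change quickly; with a weak response the cell relaxes slowly.
    """
    # Gate depends on both the hidden state and the current input.
    f = np.tanh(W @ np.concatenate([x, I]) + b)
    # LTC-style ODE right-hand side: leak with input-dependent rate,
    # driven toward the bias vector A.
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx
```

With strong input the gate `f` saturates near 1 and the state moves toward `A` quickly; with weak input the cell mostly leaks with rate `1/tau`, which is the "jerkiness" adjustment described above.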

19

red75prime t1_iwkjws4 wrote

> CfCs could bring value when: (1) data have limitations and irregularities (for example, medical data, financial time series, robotics and closed-loop control, and multi-agent autonomous systems in supervised and reinforcement learning schemes), (2) the training and inference efficiency of a model is important (for example, embedded applications) and (3) when interpretability matters.

Something akin to the cerebellum, it seems. It is better suited for continuous motor control (and some other tasks). Yet another component for human-level AI.

My 50%-probability AGI estimate moved from 2033 down to 2030.

20

red75prime t1_iwb4g33 wrote

If you have a 1 kW universal nanofactory, the minimum estimate of the time to produce, say, a sturdy steel shovel (or a pound of rice, for that matter) is around an hour (one erased bit per atom at the Landauer limit at room temperature, with no other energy expenditure). The more realistic time is probably 1,000-10,000 hours, that is, a month to a year. A diamondoid shovel would be lighter (and could be built faster), but there are still limits on how light it can be (and you can't make lightweight diamondoid food). Rice that costs 1-10 megawatt-hours per pound is hardly sustainable.

Universal nanofactories are quite energy hungry due to the amount of computation and the number of operations required to place individual atoms.

See part 8.2 of http://crnano.r30.net/Nanofactory.pdf for example.

So I think universal nanofactories will supplement rather than replace traditional manufacturing methods.

Specialized nanofactories can be more efficient (compare biological processes), so a nanofactory that churns out rice at a reasonable energy cost (less than a megawatt-hour per pound) is realizable, but apparently not so versatile.

I'm sorry to rain on your parade, but it seems you need access to a megawatt-class power source (around 140 × 140 meters, or 460 × 460 feet, of solar panels) to enjoy a universal nanofactory that is not painfully slow.
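As a back-of-envelope check on the megawatt-hour figure, here is a hedged sketch. The atom count for a pound of rice and the overhead factor above the Landauer floor are my own rough assumptions for illustration, not numbers taken from the linked paper.

```python
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 300.0                           # room temperature, K
landauer = k_B * T * math.log(2)    # minimum energy to erase one bit, ~2.9e-21 J

atoms = 3.6e25     # rough atom count in a pound of rice (mostly CH2O units; my estimate)
overhead = 1e5     # assumed factor above the Landauer floor for the computation
                   # and mechanosynthesis bookkeeping per placed atom (my guess)

energy_J = atoms * landauer * overhead
energy_MWh = energy_J / 3.6e9       # 1 MWh = 3.6e9 J
hours_at_1kW = energy_J / 1000 / 3600
print(f"{energy_MWh:.1f} MWh, {hours_at_1kW:.0f} hours at 1 kW")
```

With that assumed overhead the result lands in the 1-10 MWh per pound and 1,000-10,000 hour range quoted above; a different overhead shifts it proportionally.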

Atomic "Lego block" factories will probably be a suitable compromise: higher speed, and less prone to abuse (building toxins and poisons, for example).

0

red75prime t1_ivxp7tq wrote

The combined computational power of all US researchers' brains is somewhere in the range of 0.1-200 zettaFLOPS. So it may be a sudden jump in scientific research (as you say), or an exponential ramp-up with a not-so-fast lead-in, as AIs (and, initially, humans) bring the available processing power and AI efficiency up to the super-humanity level.
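The 0.1-200 zettaFLOPS range can be reproduced with rough inputs: a researcher headcount around 1.5 million and the usual wide spread of brain-compute estimates. Both numbers are my guesses for illustration, not figures from a source.

```python
researchers = 1.5e6       # rough US researcher headcount (assumption)
# Common low/high estimates of one brain's compute (assumption):
brain_flops_low, brain_flops_high = 1e14, 1.3e17
zetta = 1e21              # 1 zettaFLOPS = 1e21 FLOPS

low = researchers * brain_flops_low / zetta
high = researchers * brain_flops_high / zetta
print(f"{low:.2f}-{high:.0f} zettaFLOPS")
```

The two-thousand-fold spread comes almost entirely from the uncertainty in per-brain compute, not from the headcount.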

3

red75prime t1_ivwukyo wrote

Exponentials? For now, it's AI funding that grows exponentially. And that is bound to hit diminishing returns while the majority of AI development is done by humans. I doubt that AIs will significantly contribute to their own development for at least 5 years (which is necessary for intrinsic exponential growth).

−5

red75prime t1_ivwmz24 wrote

I'll be blunt. No amount of intuition pumping, word weaving, and hand waving can change the fact that there's zero evidence of the brain violating the physical Church-Turing thesis. That means there's zero evidence that we can't build a transistor-based functional equivalent of the brain. It's as simple as that.

2