turnip_burrito t1_iuv45tj wrote

I agree with this contract idea. It is a good proposal to protect yourself and others from your own actions. Very sensible.

If we ever reach a point where we know how to artificially create conscious beings, then we should (as you've pointed out) have a set of rules to prevent abuse. To add something new to the discussion: there is also a possibility of material or energy resource shortages (resulting in lower quality of life for you, others, or the new beings) if too many conscious beings are allowed to exist at one time, so the creation of new beings will need to be regulated somehow.

3

turnip_burrito t1_iuv0eym wrote

Humans all have different ideas on how life should be lived. An ASI would recognize this. Assuming it is human-aligned, I think the proper route for an ASI to take would be to let every individual choose which society of like-minded people they want to live in:

Want to live in a non-automated society with X politics? This land or planet will be where you live, free from automation.

Late 20th century culture and technology? Over there. You will die at the age of 70-ish without the new anti-aging treatments, but it's your choice.

Want to live in a VR world? Here you go. Let the ASI know whenever you want out.

Want to become luxury gay space communists whose material prosperity increases every year, powered by an ASI-managed Dyson sphere? This way.

Want to live without technology and with no government, off the grid? Here's this place you can live in. Send a signal when you get tired of living like a caveman, or not, it's your call.

Want to move to a different society because the one you're in right now doesn't fit or is abusive? Ask the AI and it will help you migrate to a different society.

Each society should be easy to migrate to/from, but protected from other societies. Want to nuke a different society? Or release a supervirus? The AI will quietly prevent it as much as it can, minimizing violence and other interference while it does so. There have to be some rules like this, and the ASI can figure them out by considering human preferences.

The amount of interference should be minimal to allow a lot of human freedom and liberty (likely even more than anyone alive has now) while still ensuring protection (also more than anyone has now).

It would do this without forcing everyone to live the same way.

Then the multitude of human preferences can be accommodated. Humanity can continue to explore and live out the future of its choosing, with minimal infringements on freedoms.

12

turnip_burrito t1_iuuey6g wrote

I don't know how small you'd have to go to see 50/50 odds. But I can suggest how a physicist would start approaching the problem, if you're interested.

Blood cells are still far too big to show these effects. In the lab, you typically measure quantum effects with roughly atom-scale things: electrons, nuclei, and atoms are all smaller than 10^(-9) meters. In contrast, red blood cells are roughly 10^(-6) to 10^(-5) meters, so at least ten thousand times longer. Humans are larger still, order 1 meter, and by volume the difference is even bigger. To have an object made of many particles tunnel, you need every particle inside it to tunnel at once. The chance of 1 particle tunneling is much higher than the chance of 10 tunneling together, and enormously higher than the chance of 100, and so on.
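To put rough, made-up numbers on that: if each particle independently had a one-in-a-million chance of tunneling, the joint probability for N particles would be that probability raised to the Nth power. A quick sketch (p_one is purely illustrative):

```python
import math

# Illustrative only: a made-up single-particle tunneling probability.
p_one = 1e-6
for n in (1, 10, 100):
    log10_joint = n * math.log10(p_one)  # joint probability = p_one ** n
    print(f"{n:>3} particles: joint probability ~ 10^{log10_joint:.0f}")
```

Even at 100 particles you're already around 10^(-600), and a cell contains vastly more particles than that.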

If you're interested in learning how to calculate it though, here's a place to get started: you need to solve Schrödinger's equation for a particle of a given energy incident on a potential barrier. Solving the equation gives you a wave function. The probability of seeing tunneling is the integral of the squared magnitude of this complex-valued wavefunction over the region of space you want to find the particle in (the other side of the barrier), divided by the integral of the squared magnitude of the wavefunction over all space.

The potential barriers come from the electric fields of whatever surrounds the object you want to see tunnel: liquids, vascular walls, other blood cells, other objects in the fluid, etc. The taller and wider the barriers between where the object is now and where you want it to end up, the lower the probability of tunneling across, and the falloff is exponential.
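As a concrete, heavily idealized example: for a one-dimensional rectangular barrier of height V0 and width L, and a particle of energy E < V0, solving Schrödinger's equation gives the closed-form transmission coefficient T = 1 / (1 + V0^2 sinh^2(κL) / (4E(V0 − E))), with κ = sqrt(2m(V0 − E)) / ħ. A minimal sketch, with all parameter values made up for illustration:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
EV = 1.602176634e-19    # joules per electronvolt

def transmission(m_kg, E_eV, V0_eV, L_m):
    """Transmission coefficient for a 1D rectangular barrier, valid for E < V0."""
    E, V0 = E_eV * EV, V0_eV * EV
    kappa = math.sqrt(2 * m_kg * (V0 - E)) / HBAR  # decay constant inside barrier
    s = math.sinh(kappa * L_m)
    return 1.0 / (1.0 + (V0 ** 2 * s ** 2) / (4 * E * (V0 - E)))

m_e = 9.1093837015e-31  # electron mass, kg
# Widening the barrier knocks orders of magnitude off T (exponential falloff):
for L in (0.5e-9, 1.0e-9, 2.0e-9):
    print(f"L = {L:.1e} m -> T = {transmission(m_e, 1.0, 2.0, L):.3e}")
```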

For another reference, see this: https://physics.stackexchange.com/questions/223277/if-quantum-tunneling-is-possible-is-there-a-maximum-thickness-of-material-a-par

An introductory textbook to quantum mechanics like Griffiths' will also help.

1

turnip_burrito t1_iuts85v wrote

In quantum mechanics, the more massive an object is (for simplification, say a single particle), the less likely you are to observe tunneling/"teleportation". This is calculable using quantum mechanics, so it's not so mysterious.

It's not a sudden transition from quantum to classical. Quantum effects fade from noticeability continuously as you move up the mass/size scale. At large object scales, interaction with the environment causes decoherence, which removes our ability to observe superposition, while the growing mass makes tunneling unobservably rare.
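To see the mass dependence concretely, you can reuse the idealized rectangular-barrier model from my other comment, treating the whole object as a single point particle. For anything much heavier than an atom, T underflows any floating-point number, so it's easier to report ln T, which for κL ≫ 1 is dominated by −2κL. All parameter values here are made up for illustration:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
EV = 1.602176634e-19    # joules per electronvolt

def ln_transmission(m_kg, E_eV=1.0, V0_eV=2.0, L_m=1e-9):
    """Approximate ln(T) through a 1 nm rectangular barrier sitting 1 eV above
    the particle's energy; for kappa*L >> 1, ln(T) ~ -2*kappa*L dominates."""
    kappa = math.sqrt(2 * m_kg * (V0_eV - E_eV) * EV) / HBAR
    return -2 * kappa * L_m

for name, m_kg in [("electron", 9.11e-31),
                   ("hydrogen atom", 1.67e-27),
                   ("red blood cell (as one particle)", 2.7e-14)]:
    print(f"{name:>32}: ln T ~ {ln_transmission(m_kg):.2e}")
```

The electron comes out around ln T ≈ −10 (readily observable), while the cell-mass "particle" lands near ln T ≈ −10^(9), i.e. a probability with hundreds of millions of zeros after the decimal point.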

2

turnip_burrito t1_iur5v9m wrote

Cells are pretty warm. It would be difficult to maintain any sort of quantum coherence at those temperatures on the spatial scale of a whole cell: the cell is constantly interacting with its environment and will decohere "instantly". At best, you can hope for quantum characteristics to be maintained in subcellular pieces like chlorophyll, and only for an extremely short time. Networks of neurons are much larger, so they are vastly more likely to operate classically (non-quantum). Maaaaybe some small micronetworks in the brain leverage quantum effects, or consciousness does, somewhere, but I'd need to see strong evidence before entertaining the idea further where human-scale intelligence is concerned. Until then, classical physics seems the more likely bet.

Tl;dr: it's a fun idea, but it seems unlikely that human-level intelligence must rely on quantum physics. You may still find uses for quantum processors in artificial intelligence, however.

12

turnip_burrito t1_iuponc9 wrote

Yeah, having months like this will probably be somewhat common going forward.

We're definitely in full transformer-exploitation territory now. Stable Diffusion and the other alternative methods that have popped up in the last few months are also likely to inspire even more out-of-the-box thinking. And that's what we need.

10

turnip_burrito t1_iuo5k7p wrote

They don't. People really are smart enough to figure this stuff out without needing a smart AI. Remember, you have universities and corporations (with many hired geniuses) across the world all sharing research techniques, plus lots of well-fed supporting technicians. This is what peak human progress looks like.

If it was AGI, we'd be seeing everyone getting fired almost all at the same time, over the span of less than a decade.

19

turnip_burrito t1_iuhipf4 wrote

To remain a functional society, we'll have to trust centralized news outlets more than social media sources, or have computer programs which validate images/videos based on either metadata or statistical noise. Maybe even a suite of these programs, with several centralized news outlets verifying. Or maybe some form of content delivery that encrypts legitimate videos at the time of recording, so that when we receive a video encrypted this way, we know it's unaltered? I'm tired, so I haven't thought this through.
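What that last idea describes is usually done with cryptographic signing rather than encryption: the camera signs the video bytes with a private key at capture time, and anyone holding the matching public key can check that the file hasn't been altered since. A minimal sketch using the third-party cryptography package; the in-camera key and the workflow around it are hypothetical:

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical: this key pair would live in the camera's secure hardware.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()  # published so anyone can verify

video_bytes = b"...raw video data..."     # stand-in for a real recording
signature = camera_key.sign(video_bytes)  # attached at recording time

# Later, a viewer or news outlet checks that the clip is unaltered:
try:
    public_key.verify(signature, video_bytes)
    print("signature valid: bytes match what the camera recorded")
except InvalidSignature:
    print("signature invalid: video was altered after recording")
```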

Anyway, when video synthesis is perfected, we will need to treat video and images exactly like we treat text now. Goodbye to the days of automatically trusting all video as authentic, unfortunately. :(

2