red75prime

red75prime t1_ivj0yb6 wrote

Is it intended to be Vox Populis (probably something like "voice (given?) to the (multiple groups of) people") instead of Vox Populi (voice of the people)?

1

red75prime t1_iverg6i wrote

> the thousands of years I could be a human, I wind up in this time?

It can be continued: of all the decades I could ponder those questions, of all the minutes I could observe this date and time on a calendar, and so on. It's the reference class problem.

Going in the other direction: if you disentangle consciousness from everything that links it to whatever you observe now, it would be equally present in every conscious being, so the question "why is it present in me?" loses its surprise. It is present wherever and whenever, so in me too, no biggie.

1

red75prime t1_ive4zwp wrote

Solar and wind power have a problem with intermittency: you often need to store energy (or set negative prices). With the right incentives, an air-to-synthetic-fuel process could probably be made a viable alternative to storing excess energy as hydrogen or in some other form.

A solar updraft tower, for example, can provide both energy and airflow.

ETA: Ah, I see the problem. You also need to pay for permanent carbon storage, and there's a conflict of interest. Why would you bury all that carbon if you can profit from the fuel? This applies to privately owned facilities as well as to governments.

On the other hand, going carbon negative requires political will in either case, and if you go the air-to-fuel route you'll have carbon-capture-ready infrastructure.

2

red75prime t1_ive3lne wrote

I suspect that there's something wrong with the idea that I'm randomly chosen from a pool of all sentient beings. I can't express the problem clearly, but it looks like the idea requires the existence of a supernatural "essence of me" that could have been instantiated in some other sentient being, while that being has nothing in common with me (besides sentience).

1

red75prime t1_ive1jo3 wrote

The extra CO2 that is already out there is not going away if we stop burning fossil fuels. Well, it goes away by natural means like phytoplankton and forest carbon capture, but too slowly. Anyway, the use of carbon capture as a publicity stunt doesn't contradict its usefulness in combating climate change. People just need to recognize when it's being used as a deception (but, yeah, that may be a bit too high a standard to meet).

3

red75prime t1_ivafssn wrote

Population growth: education is the best contraceptive, and AGI can immensely improve the educational system.

Fossil fuels: if you have a fully automated synthetic-fuel factory that needs only sunlight, water, air, a bit of materials for robot maintenance, and a carbon tax in place, you will outcompete automated fossil fuel extractors. The greens will probably go mad over the prospect of disrupting fragile desert ecosystems and returning brine to the oceans at unprecedented levels, but you win some, you lose some.

Resource extraction: the same thing. Recycling is not profitable, and maybe not even ecologically beneficial, right now (processing all that stuff takes energy, which mostly comes from fossil fuels). AGI can change that by providing carbon-negative energy (and brains) to sort and process it.

Ecology: it will probably suffer for some time. Delays in UBI introduction will push more people into subsistence farming.

Nuclear waste: deep geological storage is not "kicking the can down the road". After 200-300 years the waste will be not much more harmful than natural uranium deposits, and it will be a useful source of radioactive elements.

3

red75prime t1_iuuzbuq wrote

> medical science/research is overwhelmingly biased towards white men

And when it isn't, it creates another kind of political problem, like the ones around isosorbide dinitrate/hydralazine.

2

red75prime t1_iuh1w96 wrote

> It would be like if you got your kid a car and they destroyed it by neglect.

Nah. Humanity isn't a single intelligent agent. A better analogy would be: people are wrecking a shared car and there's no central authority to assign responsibility for fixing it, so a smaller group of people starts building their own car, because they cannot unilaterally fix the shared one.

1

red75prime t1_iu4hadn wrote

Ah, engineering problems. They are certainly a factor. However, AIs seem to be good at coming up with potential solutions (take AlphaFold, for example), and prototyping and testing could be highly parallelized in AI-controlled R&D.

2

red75prime t1_iu4c2ky wrote

I agree that 2 years is an unrealistic expectation. The closest thing to human agility we have is Boston Dynamics' robots, which use hand-tuned dynamic control algorithms. This approach is not scalable by itself, and it's unlikely that it will be integrated with machine learning approaches within 2 years, or that transformer-based robotic control will scale to realtime control of a humanoid (or equally complex) robot.

But at some point AI-controlled robots will start feeding back into the manufacture of AI hardware. At that point an AI-based economy will explode by removing the inefficiencies of the human-based economy (coordination problems, lengthy learning time, wages, and so on).

It will not take much time after that for the operating cost of a universal robot to sink below the minimum wage.

Every year that passes increases the probability of such an explosion, so the 2040s can (and most likely will) be an entirely different era than the 2030s.
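A toy way to see the point (my numbers, purely illustrative, not a claim about actual odds): if the feedback loop has some constant independent chance p of starting in any given year, the cumulative probability that it has started by year N is 1 - (1 - p)^N, which compounds quickly over a couple of decades:

```python
def prob_by_year(p, years):
    """Probability that an event with constant per-year chance p
    has occurred at least once within the given number of years."""
    return 1 - (1 - p) ** years

p = 0.05  # assumed 5% chance per year, for illustration only
print(round(prob_by_year(p, 8), 2))   # horizon of ~8 years -> 0.34
print(round(prob_by_year(p, 18), 2))  # horizon of ~18 years -> 0.6
```

Even a modest per-year chance makes the longer horizon markedly more likely to contain the transition, which is the asymmetry between the 2030s and the 2040s.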

That's why I distrust confident technological predictions on scales of 20 years or more.

2

red75prime t1_itvfq06 wrote

You have to be a god to observe it, though. I guess you aren't, so if you write down something like "I decided to do such and such, because so and so" and you aren't prone to procrastination and impulsivity, you'll find yourself doing that and not some other random thing. And the question is: why do you care?

2

red75prime t1_ittpv0p wrote

We have a working non-artificial superintelligence: humanity as a whole. So "never" is not an option, barring some bizarre and unlikely discoveries (computationally superior "souls" that we cannot replicate technologically, for example). Taking such possibilities seriously with no evidence looks more like superstition than an open mind to me.

2

red75prime t1_itq84xs wrote

Working memory (which probably can be a stepping stone to self-awareness).

Long-term memory of various kinds (episodic, semantic, procedural (which should go hand in hand with lifetime learning)).

Specialized modules for motion planning (which could probably be useful in general planning).

High-level attention management mechanisms (which will most likely be learned implicitly).

1

red75prime t1_itk9f2j wrote

> (Q4 2028) An average to low end computer or cheap subscription service is capable of generating high resolution and frame rate videos spanning several minutes.

If it takes days to render them, then maybe.

AIs don't yet significantly feed back into the design and physical construction of chip fabrication plants, so by 2028 we'll have one or two 2nm fabs, and the majority of new consumer CPUs and GPUs will use 3-5nm technology. Hardware costs will not drop significantly either (fabs are costly), so the 2028 low end will be around today's high end performance-wise (with less RAM and storage).

Anyway, I would shift perfect long-term temporal consistency to 2026-2032, as it depends on integrating working and long-term memory into existing AI architectures, and there's no clear path to that yet.

1

red75prime t1_itk6c0n wrote

I'm sure that any practical AI system able to generate movies will not do it all by itself. It will use external tools to avoid wasting its memory and computational resources on the mundane tasks of tracking the exact 3D positions of objects and remembering all the intricacies of their textures and surface properties.

2