Thatingles

Thatingles t1_ivzpbqk wrote

I wonder when the inflection point will be for wider social acceptance of what is, seemingly, about to happen. I don't think I've seen any mainstream public figure address the issue in an honest way and the general public is blithely unaware. In many ways I hope the transition to AI is fairly slow, because society isn't prepared in the slightest.

92

Thatingles t1_iujqul7 wrote

It was discovered accidentally. The mixing with phosphorus sounds like standard PhD research stuff: take this bit of chemistry no one has examined closely and play around with it to see what happens. Most of the time it doesn't really go anywhere exciting, but in this case they found something unexpected and potentially game-changing.

I really hope it pans out; it would help with a lot of issues if we could replace rare earths with nickel-iron alloys. That would be sweet.

39

Thatingles t1_iujq6rf wrote

Potentially huge, with the proviso that they don't know whether the tetrataenite they have made has the same properties as the tetrataenite that occurs naturally. You would think that would be easy to confirm, but the article doesn't say whether it has been done.

It's somewhat crazy that it was found by accident, but it backs up a point I've made before about battery technology: with the entire spectrum of materials science to play with, it's very hard to know what will turn up.

8

Thatingles t1_irtjmyp wrote

One of the arguments for us being in a simulation is that its purpose is to train AGIs for whoever is running the simulation. After all, if we could, it's what we might do.

The non-existence of grey goo covering the entire galaxy, turning everything into substrate for a runaway ASI, is certainly worth noting, but given the number of possible outcomes it's hardly a decisive piece of information.

The control problem will be solved or not solved only once, and we'll only find out by doing it, which is not a super appealing prospect, to put it mildly. Trial and error won't be available to us (or at least, not for long...).

Personally I think we should veer away from creating an ASI and head for the calmer waters of narrow AIs that we can keep under control by building in some limitation or flaw that lets us switch them off if they become troublesome. I'm hoping there is a big gap before ASI is possible, long enough for people to see it as an unnecessary and dangerous goal. Of course AGI is still dangerous, but it's also kind of inevitable, so it just has to be accepted and managed.

1

Thatingles t1_ir106e5 wrote

I remain convinced AGI will emerge from linking together many modules, and one of those would of course be a world-model module, but I don't think it's the final step. We still seem to be missing the components that would allow an AI to solve complex multi-step problems through a combination of memory and reasoning. I'm sure it will come, but this ain't it.

15