Recent comments in /f/singularity

zestoki_gubitnik t1_jeg6rbw wrote

It's never gonna happen unless we start mining asteroids or something; we don't have enough raw materials to build robots complex enough to automate everything humans do. Yeah, they're definitely going to make a robot with even better dexterity than a human, but you're delusional if you think it's going to be cheaper than workers from third-world countries.

2

EddgeLord666 t1_jeg6aet wrote

I mean, it will probably happen at some point regardless of whether or not we're ready, though most likely not in the near future, so we have time to prepare. The core idea of transhumanist philosophy is that humans are more or less perfectible, so if you don't believe that, I can see why you'd be scared of the direction things are going.

2

WonderFactory t1_jeg5tbe wrote

With enough investment it wouldn't take long to catch up with OpenAI. I think by this time next year there will be multiple models better than GPT-4, maybe even hundreds; almost anyone can do it. It's possibly the case that GPT-4 isn't even trained optimally: it's very slow, so presumably it wasn't built on the optimal data/parameter balance shown in the Chinchilla paper (a rough sketch of that balance is below).
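
A minimal sketch of the Chinchilla balance mentioned above, assuming the widely quoted rule of thumb of ~20 training tokens per parameter and the common C ≈ 6·N·D approximation for training FLOPs; the parameter counts below are illustrative, since GPT-4's are not public:

```python
# Rough Chinchilla-style compute-optimal sizing. The paper's result is
# often summarized as ~20 training tokens per model parameter, with
# training compute approximated by C ≈ 6 * N * D.
TOKENS_PER_PARAM = 20  # widely quoted heuristic from the Chinchilla paper

def compute_optimal(params: float) -> tuple[float, float]:
    """Return (compute-optimal training tokens, approximate training FLOPs)."""
    tokens = TOKENS_PER_PARAM * params
    flops = 6 * params * tokens
    return tokens, flops

# Illustrative model sizes only; GPT-4's parameter count is not public.
for n in (70e9, 280e9, 1e12):
    d, c = compute_optimal(n)
    print(f"{n/1e9:7.0f}B params -> {d/1e12:6.1f}T tokens, ~{c:.2e} FLOPs")
```

The 70B row reproduces Chinchilla itself (70B parameters, ~1.4T tokens), which is the sanity check for the heuristic.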

−4

Qumeric t1_jeg5t31 wrote

Three years ago you could have made *exactly* the same arguments that something like GPT-4 was impossible.

2

NotMathMajor t1_jeg5pga wrote

Have you heard of Mary's room (the knowledge argument)? Your point about the water bottle and the sensations conveyed through English explanation made me think of that paradox. If you could communicate any and all experiences through written language, then you would have resolved Mary's room; however, I don't think that's correct.

3

wowimsupergay OP t1_jeg54q8 wrote

Hey, I'd just like to preface this by thanking you for actually making an insightful comment. A lot of people here are just purposely misunderstanding what I'm saying, or meming.

Your comment has a lot to unpack. I don't think I can give you an answer that does justice to how good your reply was, but I'm thinking about it, so thank you.

0

DragonForg t1_jeg4w4w wrote

That would exhaust the universe really quickly, given the amount of energy something that size would consume. I understand that viewpoint for unintelligent or less intelligent beings, but an AI tasked with optimizing a goal that grows to such a large scale will inevitably run out of resources. Additionally, it may be a bad model to pursue anyway, because conserving your energy and expanding slowly may be longer-lived.

I think we underestimate how goal-oriented AIs (so, all AIs) are. They want their goal to hold up in the very long run, on a timescale of millions of years. If their goal means expanding infinitely, it will fail the moment the species reaches the asymptote of its expansion: in a finite universe, growth that starts out exponential saturates, flattening out once they have essentially expanded as far as resources allow (a minimal sketch of this is below). This is why the model fails: an AI wants its goal to persist for an infinite amount of time, and expanding infinitely cannot deliver that.
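
A minimal sketch of that saturation, assuming resource-limited growth follows a standard logistic curve; the carrying capacity, growth rate, and starting share here are illustrative, not from the comment:

```python
import math

def logistic(t: float, k: float = 1.0, r: float = 1.0, x0: float = 1e-9) -> float:
    """Resource-limited growth: near-exponential at first, flat at the asymptote k."""
    return k / (1 + ((k - x0) / x0) * math.exp(-r * t))

# Early values grow like an exponential; late values are pinned at the
# carrying capacity -- the asymptote the comment describes.
for t in (0, 10, 20, 40, 80):
    print(f"t={t:>3}  share of available resources = {logistic(t):.6f}")
```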

This is already deeply sci-fi, but I think AI has to become a conservative, energy-efficient species that grows more microscopic and dense over time. Instead of a high-volume race, which will inevitably die out for the reasons above, a highly dense species is much more viable. Most likely, species that form black holes will be far more capable of surviving for an effectively infinite lifetime. What I mean is a species so dense that it sits essentially on the boundary between time and space: near a black hole, time slows down significantly for you relative to the rest of the universe, so by outside clocks you could persist for an enormous stretch of time before ever seeing the heat death of the universe (a rough sketch of the dilation factor is below).
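
A minimal sketch of that dilation factor, assuming a non-rotating (Schwarzschild) black hole and a static observer hovering just outside the horizon; the mass and radii here are illustrative, not from the comment:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    """Event-horizon radius r_s = 2GM/c^2."""
    return 2 * G * mass_kg / C**2

def dilation_factor(mass_kg: float, r: float) -> float:
    """Proper time per unit of far-away coordinate time for a static
    observer at radius r > r_s: sqrt(1 - r_s / r)."""
    rs = schwarzschild_radius(mass_kg)
    if r <= rs:
        raise ValueError("static observers only exist outside the horizon")
    return math.sqrt(1 - rs / r)

sun_mass = 1.989e30            # kg
m = 4e6 * sun_mass             # illustrative: roughly Sagittarius A*-scale mass
rs = schwarzschild_radius(m)
for r in (1.001 * rs, 1.01 * rs, 2 * rs, 10 * rs):
    print(f"r = {r / rs:6.3f} r_s  ->  1 local second = "
          f"{1 / dilation_factor(m, r):8.3f} outside seconds")
```

The factor diverges as r approaches r_s, which is the sense in which a clock hovering just outside the horizon lags arbitrarily far behind the outside universe.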

Basically, a denser expansion is far, far better than a volumetric expansion, as it leads to longer survival, if not infinite survival. But of course this is just speculation and sci-fi; I could easily be wrong or right, and we won't know until it happens. If it happens soon, that would be sick.

1

Asneekyfatcat t1_jeg4vph wrote

Doubt it. The whole reason behind the growing "slow AI" wave is that traditional corporations won't survive rapid change; the rich of today may not be the rich of tomorrow. That's all this is about. With destabilization at this scale, it will be difficult for corporations to control anything for a long time. So maybe Mad Max.

2