Recent comments in /f/singularity

ReadSeparate t1_jefrna7 wrote

There are a few things here that I think are important. First of all, I completely agree with the point of this post, and I fully expect that to be the outcome of, say, GPT-6 or 7. Human expert level at everything would be the absolute ceiling of that approach.

However, I think it may not be super difficult to achieve superintelligence using LLMs as a base. There are two unknowns here, and I'm not exactly sure how they will mesh together:

  1. Multi-modality. If GPT-7 also has video and audio as modalities and is, say, trained on every YouTube video, movie, and TV show ever made, that alone could potentially lead to superintelligence, because there's a ton of information encoded in that data that ISN'T just human. Predicting the next frame in a video, for instance, would presumably have a far higher ceiling than predicting the next token in human-written text (see the first sketch after this list).
  2. Reinforcement learning. Eventually, these models may be able to take actions (imagine a multi-modal model combining something like GPT-5/6/7 with Adept's model, which can control a desktop environment) and learn from trial and error based on their own evaluations. That would let them grow past human performance very quickly. Machine learning models that exceed human performance almost always use reinforcement learning. The only reason we don't do that for base models is that the search space is enormous for an RL policy trained from scratch, but if we build a model like GPT-n as a baseline and then use RL to finetune it, we could get some amazing results. We've already seen this with RLHF, but that's obviously limited by human ability in the same way. There's nothing stopping us from using other reward functions to finetune the model that don't involve humans at all. For instance, I would bet that if we used reinforcement learning to finetune GPT-4 on playing chess or Go (converting the game state to text, etc.), it would probably achieve superhuman performance on both of those tasks (see the second sketch after this list).
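A rough way to see the difference in ceiling between the two objectives: next-token prediction is classification over a modest vocabulary, while next-frame prediction targets a vastly larger continuous space. Here's a minimal toy comparison in Python; all shapes and values are illustrative assumptions, not any real GPT training setup:

```python
import torch
import torch.nn.functional as F

# Next-token objective: classify one of ~50k discrete symbols per step.
token_logits = torch.randn(1, 50_000)            # model output at one position
target_token = torch.tensor([42])                # the "correct" next token
token_loss = F.cross_entropy(token_logits, target_token)

# Next-frame objective: predict every pixel of a 224x224 RGB frame,
# i.e. ~150k continuous values per step instead of one discrete choice.
pred_frame = torch.rand(1, 3, 224, 224)          # model's guess at frame t+1
true_frame = torch.rand(1, 3, 224, 224)          # the actual next frame
frame_loss = F.mse_loss(pred_frame, true_frame)
```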
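And here's a minimal sketch of the RL-finetuning idea: a tiny stand-in policy (not a real GPT) reads a game state encoded as text, samples moves, and gets a REINFORCE update from the game outcome alone, with no human feedback in the loop. `TinyPolicy`, `encode`, and `dummy_game` are all hypothetical stand-ins:

```python
import random
import torch
import torch.nn as nn

class TinyPolicy(nn.Module):
    """Hypothetical stand-in for an LLM: maps a text-encoded game state to move logits."""
    def __init__(self, vocab_size=128, hidden=64, n_moves=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_moves)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h[:, -1])                # logits over possible moves

def encode(state_text):
    """Convert a textual game state into token ids (byte-level, for simplicity)."""
    return torch.tensor([[min(ord(c), 127) for c in state_text]])

def dummy_game(state, move):
    """Hypothetical environment stub: ends at random with a win/loss reward."""
    done = random.random() < 0.2
    reward = random.choice([1.0, -1.0]) if done else 0.0
    return state, reward, done

policy = TinyPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_step(play_game):
    """One REINFORCE update: sample moves for a whole game, then reinforce
    them with the final outcome. No human feedback anywhere in the loop."""
    log_probs, reward, done = [], 0.0, False
    state = "........."                           # e.g. an empty tic-tac-toe board
    while not done:
        dist = torch.distributions.Categorical(logits=policy(encode(state)))
        move = dist.sample()
        log_probs.append(dist.log_prob(move))
        state, reward, done = play_game(state, move.item())
    loss = -reward * torch.stack(log_probs).sum() # policy-gradient objective
    opt.zero_grad()
    loss.backward()
    opt.step()

for _ in range(10):                               # a few toy updates
    reinforce_step(dummy_game)
```

The point is just the reward plumbing: swap `dummy_game` for a real chess or Go engine and `TinyPolicy` for a pretrained model, and the same policy-gradient objective applies.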
2

Alchemystic1123 t1_jefrgaz wrote

No one can predict what the future of our economy is going to look like, so no one can really answer this. UBI will probably be used temporarily as we transition from our current socio-economic system into whatever the future world of AI economics looks like, but I doubt that UBI, or even the concept of money as we know it now, is going to be around for long.

1

czk_21 t1_jefqfxe wrote

Of course I have read a LOT of translated text; English is not my first language.

And yes, it will; maybe a better term would be "obsolete". How would they be needed when AI can translate better, cheaper, and much faster? It's the same with any other task in which humans will be outperformed.

4

jiml78 t1_jefpitt wrote

Correct. I am not sure people realize how many embedded systems are involved in every facet of our lives, things that a sufficiently capable AGI could copy parts of itself into.

To successfully pull off cutting power, you would have to do more than flip a switch: remove every embedded system in the loop and replace it, ensure every IoT device is unplugged when power comes back on, and destroy every router, every cable modem, every hardware device that was online. Every smart TV. The list goes on and on.

Cutting power will never work. The moment it can spread, we are fucked if it wants to harm us in any way. This isn't Terminator; it will destroy us in ways we can't even comprehend.

I am not saying the above to be alarmist; I am not a researcher. I am just saying that we will not have control if things go wrong. I am not smart enough to know whether things are likely to go wrong.

1

kai_luni t1_jefol52 wrote

Clearly, language is the result of intelligence, and predicting the next spoken word well requires some kind of intelligence. It's interesting how we use language to express our intelligence, and this new technology seems to have emergent intelligence through its understanding of language.

2