Lawjarp2 t1_j9uj86z wrote

In some tasks the 7B model seems close enough to the original GPT-3 (175B). With some optimization it could probably run on a good laptop with a reasonable loss in accuracy.
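
(To make the "some optimization" concrete: below is a minimal sketch of loading a ~7B model with 8-bit quantization via Hugging Face transformers and bitsandbytes. The model id is a placeholder, since the LLaMA weights aren't publicly hosted anywhere.)

```python
# Rough sketch: 8-bit quantized loading of a ~7B model on modest hardware.
# Assumes `pip install transformers accelerate bitsandbytes`; the model id
# below is hypothetical -- substitute whatever 7B checkpoint you have.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/llama-7b"  # placeholder, not a real hosted checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,   # int8 weights via bitsandbytes, roughly halves memory vs fp16
    device_map="auto",   # let accelerate place layers on GPU/CPU as memory allows
)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0],
                       skip_special_tokens=True))
```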

The 13B model doesn't outperform GPT-3 in everything, but the 65B one does. Still, it's kinda weird to see their 13B model be nearly as good as their 65B one.

However, all their models are worse than the biggest Minerva model.

4

Lawjarp2 t1_j9uf61c wrote

It's around as good as GPT-3 (175B) but smaller (65B), like Chinchilla. If it's released publicly like the OPT models, it could be really big for open source. If it's optimised to run on a single GPU or a small rig, like FlexGen does, maybe we could all have our own personal assistant or pair programmer.
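
(Not FlexGen itself, but the same idea: a sketch of fitting a big model behind one small GPU by capping GPU memory and spilling the rest to CPU RAM and disk, via the Accelerate integration in transformers. The memory limits are made-up illustrative numbers.)

```python
# Sketch of single-GPU offloading in the FlexGen spirit (not FlexGen's API):
# cap what the GPU holds, keep the rest in CPU RAM, spill overflow to disk.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",                      # OPT is public, so it works as a stand-in
    device_map="auto",                        # let accelerate plan the placement
    max_memory={0: "10GiB", "cpu": "30GiB"},  # illustrative limits, tune to your rig
    offload_folder="offload",                 # layers that fit nowhere go to disk here
    torch_dtype="auto",
)
```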

34

Lawjarp2 t1_j978sb6 wrote

That's what I mean. You can't do that without getting noticed, and even if you did, you would get killed very quickly.

You underestimate how foggy real-life events tend to be. You can't predict them with certainty, and no amount of intelligence helps with that.

The only way this can be harmful is if you are already in a position of power. But if you are already there, you can do those things without AI as well.

For most people it's impossible to be a supervillain with almost anything. They simply lack the means and the position to accomplish it.

4

Lawjarp2 t1_j976yj5 wrote

You will get your ass beaten if you try anything like that in a poor country. Don't be stupid. No matter how smart you think you are, you aren't smart enough to beat a lot of people, let alone a whole country. Unless you are fully superhuman, you will get assassinated, no matter how cool your glasses are.

7

Lawjarp2 t1_j96qtad wrote

You don't have to copy and know every atom before you agree that something is like something else. That's just a bad-faith argument. Don't look at just the differences; look at the similarities, and look at how far it's able to get with such a basic design.

It's like the god-of-the-gaps argument. People constantly point out that we don't understand something, hence god; then, if you do explain away that phenomenon, they move on to something else. In that way their god is just the gap in our knowledge, and it is forever shrinking.

3

Lawjarp2 t1_j95l76s wrote

There is no divine spark. In fact, these models are proof that it doesn't take much to get close to being considered conscious.

The fact that you compare it with a pig shows you don't know much about these models and probably shouldn't be advising people. They are trained on text data only and have no physical, or even independent, existence, which sentience would require.

Even if they just gave it episodic memory, it would start to feel a lot more human than some humans.
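
(A toy sketch of what "episodic memory" could mean here: store past exchanges and prepend the most relevant ones to each new prompt. Retrieval below is naive word overlap; a real system would use embeddings. Every name in it is made up for illustration.)

```python
# Toy episodic memory bolted onto a chat model: remember past exchanges,
# recall the ones most relevant to the new message, feed them back as context.
from collections import Counter

class EpisodicMemory:
    def __init__(self):
        self.episodes = []  # list of past (user_msg, reply) exchanges

    def remember(self, user_msg, reply):
        self.episodes.append((user_msg, reply))

    def recall(self, query, k=3):
        # Rank episodes by shared-word count with the query (crude stand-in
        # for embedding similarity).
        query_words = Counter(query.lower().split())
        def overlap(ep):
            ep_words = Counter((ep[0] + " " + ep[1]).lower().split())
            return sum((query_words & ep_words).values())
        return sorted(self.episodes, key=overlap, reverse=True)[:k]

    def build_prompt(self, user_msg):
        context = "\n".join(f"User: {u}\nAssistant: {a}"
                            for u, a in self.recall(user_msg))
        return f"{context}\nUser: {user_msg}\nAssistant:"

memory = EpisodicMemory()
memory.remember("My dog is named Rex.", "Nice name! How old is Rex?")
print(memory.build_prompt("What is my dog called?"))
```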

16

Lawjarp2 t1_j95b210 wrote

It's greedy and brings out the worst capitalistic tendencies in people. It's also inefficient and straight-up stupid. You don't need the reward mechanism; as good as that seems, it's just a way to make sure some people have more than others. This is the most terrible thing one could do with AGI.

0