Recent comments in /f/MachineLearning

daugaard47 t1_jdkkyds wrote

Reply to comment by signed7 in [N] ChatGPT plugins by Singularian2501

Wish they had stayed open source, but I can understand why they would sell out. There would have been no way they could handle that amount of traffic/demand if they had remained a non-profit. But as someone who works for a non-profit, I don't understand how they legally changed to a for-profit over a week's time. 😐

2

kromem t1_jdkfj5w wrote

> The model underlying Dolly only has 6 billion parameters, compared to 175 billion in GPT-3, and is two years old, making it particularly surprising that it works so well. This suggests that much of the qualitative gains in state-of-the-art models like ChatGPT may owe to focused corpuses of instruction-following training data, rather than larger or better-tuned base models.

The exciting thing here is the idea that progress in language models is partially contagious backwards to earlier ones: newer models can generate the data used to update older ones, not in pre-training but in fine-tuning (and I expect, based on recent research into in-context learning, this would extend to additional few-shot prompting as well).
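To make that concrete, here's a minimal sketch of that fine-tuning step, assuming a HuggingFace-style setup (the base model name matches Dolly's GPT-J 6B; the toy instruction pairs and training settings are my own placeholders, standing in for data generated by a newer model):

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import Dataset

# Toy instruction/response pairs; in the scenario above these would be
# generated by prompting a newer, stronger model (hypothetical examples).
pairs = [
    {"text": "### Instruction:\nSummarize: The cat sat on the mat.\n### Response:\nA cat sat on a mat."},
    {"text": "### Instruction:\nTranslate to French: Hello.\n### Response:\nBonjour."},
]

model_name = "EleutherAI/gpt-j-6B"  # the 6B base model Dolly starts from
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = Dataset.from_list(pairs).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="instruction-finetune", num_train_epochs=1),
    train_dataset=dataset,
    # mlm=False gives standard causal-LM labels (next-token prediction).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The base model and its pre-training never change here; only a thin fine-tuning pass on instruction-following data is needed, which is what makes the backwards transfer cheap.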

I'm increasingly wondering if we'll see LLMs develop into rolling releases, particularly in the public sector, possibly with an emphasis on curating the fine-tuning dataset while taking a platform-agnostic stance toward the underlying pre-trained model powering it.

In any case, it looks more and more like the AI war between large firms will trickle down into open alternatives whether they'd like it to or not.

38

__Maximum__ t1_jdkepie wrote

Also, it's very shady for a company called OpenAI. They claimed they became for-profit because they needed the money to grow, but these restrictions just show that they are filthy liars who only care about keeping the power and making profit. I'm sure they already have a strategy for getting around that 30B cap, just like they planned to steal money and talent by calling themselves a non-profit first.

17

nicku_a OP t1_jdkdxy8 wrote

Good question! So what we're doing here is not specifically applying evolutionary algorithms instead of RL. We're applying evolutionary algorithms as a method of HPO, while still using RL to learn and keeping its advantages. Take a look at my other comments explaining how this works, and check out the docs for more information.
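Roughly, the loop looks like this (my own illustrative sketch, not the library's actual API; the fitness function here is a stand-in for a full RL training-and-evaluation run):

```python
import random

def train_and_evaluate(hyperparams):
    """Placeholder: train an RL agent with these hyperparams, return mean reward.

    In practice this would run e.g. DQN/PPO for some number of env steps
    and evaluate the resulting policy.
    """
    return -abs(hyperparams["lr"] - 3e-4) - abs(hyperparams["gamma"] - 0.99)

def mutate(hp):
    # Perturb hyperparameters slightly to explore the search space.
    return {
        "lr": hp["lr"] * random.uniform(0.5, 2.0),
        "gamma": min(0.999, max(0.9, hp["gamma"] + random.uniform(-0.01, 0.01))),
    }

# Initial population of candidate hyperparameter settings.
population = [{"lr": 10 ** random.uniform(-5, -2), "gamma": random.uniform(0.9, 0.999)}
              for _ in range(8)]

for generation in range(10):
    # Each agent still learns with plain RL; evolution only scores the results.
    scored = sorted(population, key=train_and_evaluate, reverse=True)
    elite = scored[: len(scored) // 2]                                   # keep the best half
    population = elite + [mutate(random.choice(elite)) for _ in elite]   # refill via mutation

print("best hyperparams:", max(population, key=train_and_evaluate))
```

The key point is that the inner loop is ordinary RL training; evolution only selects and mutates hyperparameters between generations, so you keep RL's sample-efficient credit assignment while automating the tuning.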

1

DragonForg t1_jdkb8w9 wrote

This is fundamentally false. Here is why.

In order to prove something and then prove it incorrect, you need distinct guidelines. Take gravity: there are plenty of equations, plenty of experiments, etc. We know what it looks like, what it is mathematically, and so on. So if we take a computational version of gravity, we have a reliable method of comparison. Someone can say this game's gravity doesn't match ours because we have distinct proofs for why it doesn't.

However, what we are trying to prove/disprove is something we have ZERO BASIS for. We barely understand the brain, or consciousness, or why things emerge the way they do; we are nowhere near close enough to make strict definitions of theory of mind or creativity. The only comparison is whether it mimics ours the most.

Stating it doesn't follow my version of theory of mind is ridiculous. It's the same as saying my God is real and yours isn't: your basis for why we have creativity is not a distinct, proven definition, but rather an interpretation of your experiences studying/learning it.

Basically, our mind is a black box too; we only know what comes out, not what happens inside. If both machine and human get the same input and produce the same output, it legitimately doesn't matter what happens inside, at least until we can PROVE how the brain works to exact definitions. Until then, input and output data is sufficient as proof; otherwise AI will literally kill us because we keep obsessing over these definitive answers.

It's like arguing over whether nukes can do this or that instead of focusing on the fact that nuclear weapons can destroy all of humanity. The power of these tools, just like nuclear weapons, shouldn't be understated because of semantics.

3