Lawjarp2
Lawjarp2 t1_j9v0zzv wrote
Reply to comment by kindred_asura in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
Yes, I know that. That's exactly what I said above. Did you imagine something else?
FYI, the smallest model could probably be trained for under $50k.
Lawjarp2 t1_j9uj86z wrote
Reply to comment by TeamPupNSudz in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
In some tasks the 7B model seems close enough to the original GPT-3 175B. With some optimization it could probably be run on a good laptop with a reasonable loss in accuracy.
13B doesn't outperform GPT-3 in everything, but the 65B one does. It's kind of weird, though, to see their 13B model be nearly as good as their 65B one.
However, all their models are worse than the biggest Minerva model.
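A back-of-the-envelope sketch of why a quantized 7B model could fit on a laptop. These numbers are my own rough arithmetic, not from the LLaMA paper: they count weights only and ignore activation and KV-cache overhead.

```python
# Rough memory estimate for a 7B-parameter model's weights at
# different numeric precisions. Weights only; real inference needs
# extra memory for activations and the KV cache.

def model_size_gb(params: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB."""
    return params * bytes_per_param / 2**30

PARAMS_7B = 7e9

for label, nbytes in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{label}: ~{model_size_gb(PARAMS_7B, nbytes):.1f} GiB")
```

At fp16 the weights alone are ~13 GiB, out of reach for most laptop GPUs, but 4-bit quantization brings that down to roughly 3-4 GiB, which is where the "good laptop" claim starts to look plausible.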
Lawjarp2 t1_j9ui7bd wrote
GitHub link : https://github.com/facebookresearch/llama
Not really free to use right away: you have to fill out a Google form, and they may or may not approve your request to download the trained model. Training the model yourself is expensive anyway.
Lawjarp2 t1_j9uf61c wrote
It's around as good as GPT-3 (175B) but smaller (65B), like Chinchilla. If released publicly like the OPT models, it could be really big for open source. If optimized like FlexGen to run on a single GPU or a small rig, maybe we could all have our own personal assistant or pair programmer.
Lawjarp2 t1_j9tkgeg wrote
Reply to What are the big flaws with LLMs right now? by fangfried
(1) Expensive to run
(2) No temporal/episodic memory
(3) Limited context
(4) Makes stuff up/hallucinates
(5) Only surface-level intelligence or understanding
Lawjarp2 t1_j9py18b wrote
If it becomes very easy to do something, all copyrights on it should lapse. In a few years copyright itself will become meaningless.
Lawjarp2 t1_j9og5au wrote
Reply to How long do you estimate it's going to be until we can blindly trust answers from chatbots? by ChipsAhoiMcCoy
You mean equivalent to being as trustworthy as a search engine. Probably 1-3 years.
Lawjarp2 t1_j9no2cq wrote
Reply to comment by darthdiablo in Bernie Sanders proposes taxes on robots that take jobs by Scarlet_pot2
That's why you tax profits, not labour itself. If you remove all the loopholes and exemptions used to hide profit, it would be enough. If deflation sets in, we can always print money.
Lawjarp2 t1_j9nnhjr wrote
Stupid idea. We will all be working forever, not because we couldn't create a fully automated utopia, but because an old idiot couldn't come up with anything creative to solve UBI.
Lawjarp2 t1_j9liaa5 wrote
Reply to comment by VeganPizzaPie in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
No. Once an LLM gets a keyword, a lot of related material comes up in its probabilities. You can also work backwards from the answer to the reasoning. This makes it easier for an LLM to answer if it's trained for this exact scenario.
Lawjarp2 t1_j9lhw2e wrote
Reply to comment by Borrowedshorts in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
Because you are not an LLM.
Lawjarp2 t1_j9jdxur wrote
Reply to What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
It's multiple choice: choosing among 4 options is easier because you only have to consider those 4 possibilities, and the answer is guaranteed to be among them. But most conversations are open-ended, with possibilities branching out to insane levels.
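For scale, here's a toy simulation (purely illustrative, not tied to the benchmark in the post) of how constrained the 4-option format is: even blind guessing lands at 25%, whereas open-ended generation has no such floor.

```python
# Simulate blind guessing on an n-option multiple-choice exam.
# With 4 options, random guessing alone scores ~25% per question.

import random

def random_guess_accuracy(n_questions: int, n_options: int = 4,
                          seed: int = 0) -> float:
    """Fraction of questions answered correctly by pure guessing.

    Without loss of generality, option 0 is treated as the correct one."""
    rng = random.Random(seed)
    correct = sum(rng.randrange(n_options) == 0 for _ in range(n_questions))
    return correct / n_questions

print(random_guess_accuracy(100_000))  # close to 1/4
```

Any model that can rank 4 given candidates starts from that 25% floor; an open-ended answer has to be produced from an effectively unbounded space, which is a much harder problem.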
Lawjarp2 t1_j9ivxdr wrote
So the last one with a 32k token context could be GPT-4, or GPT-4 will at least have a 32k token context.
Lawjarp2 t1_j9g9ojl wrote
Reply to Pardon my curiosity, but why doesn’t Google utilize its sister company DeepMind to rival Bing’s ChatGPT? by Berke80
DeepMind is better at reinforcement learning. They probably didn't expect LLMs to get this good. Still, it's likely DeepMind will get to AGI sooner than OpenAI. They have the breadth of knowledge required to build something like that; unlike OpenAI, they don't go all in on just one thing.
Lawjarp2 t1_j978sb6 wrote
Reply to comment by helpskinissues in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
That's what I mean. You can't do that without getting noticed, and even if you did, you would get killed very quickly.
You underestimate how foggy real-life events tend to be. You can't predict them with certainty, and no amount of intelligence can help you there.
The only way this can be harmful is if you are already in a position of power. But if you are already there, you can do these things without AI as well.
For most people it's impossible to become a supervillain with almost anything. They simply lack the moves and the position to accomplish it.
Lawjarp2 t1_j977s5y wrote
Reply to comment by helpskinissues in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
Scamming people is not becoming a supervillain.
Lawjarp2 t1_j976yj5 wrote
Reply to comment by helpskinissues in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
You will get your ass beaten if you try anything like that in a poor country. Don't be stupid. No matter how smart you think you are, you aren't smart enough to beat a lot of people, especially not a country. Unless you are fully superhuman, you will get assassinated no matter how cool your glasses are.
Lawjarp2 t1_j96qtad wrote
Reply to comment by Difficult_Review9741 in Stop ascribing personhood to complex calculators like Bing/Sydney/ChatGPT by [deleted]
You don't have to copy and know every atom before you agree that something is like something else. That's just a bad-faith argument. Don't look at just the differences; look at the similarities, and look at how far it's able to get with such a basic design.
It's like the god-of-the-gaps argument. People constantly point out that we don't know this, hence god; then if you do explain away the phenomenon, it moves to something else. In that way their god is just the gap in our knowledge, and it is forever shrinking.
Lawjarp2 t1_j96aqxf wrote
It's much harder, though. It's likely we will have AGI before an augmented supervillain.
Lawjarp2 t1_j95l76s wrote
There is no divine spark. In fact, these models are proof that it doesn't take much to get close to being considered conscious.
The fact that you compare it with a pig shows you don't know much about these models and probably shouldn't be advising people. They are trained on text data only and don't have a physical, or even an independent, existence that could support sentience.
Even if they just gave it episodic memory, it would start to feel a lot more human than some humans.
Lawjarp2 t1_j95b210 wrote
It's greedy and brings out the worst capitalistic tendencies in people. It's also inefficient and straight-up stupid. You don't need the reward mechanism; as good as that seems, it's a way to make sure some people have more than others. This is the most terrible thing one could do with AGI.
Lawjarp2 t1_j94llfd wrote
Reply to comment by turnip_burrito in Brain implant startup backed by Bezos and Gates is testing mind-controlled computing on humans by Tom_Lilja
You won't have ads. Nobody wants to sell you anything when you don't produce anything and money is irrelevant. They can make everything they want with AI.
Lawjarp2 t1_j90lk7j wrote
Reply to comment by Sharp_Soup_2353 in I’m gradually becoming a doomer. by Sharp_Soup_2353
That is partially true. They want to control the singularity, or rather control the events leading up to it.
Lawjarp2 t1_j90jek3 wrote
Reply to I’m gradually becoming a doomer. by Sharp_Soup_2353
They are making money in order to make AGI. What do you want them to do? Make no money, and let someone who is truly in it for the profit make AGI?
Lawjarp2 t1_j9x66jp wrote
Reply to People lack imagination and it’s really bothering me by thecoffeejesus
People lack imagination because, at their core, they are just next-word predictors.