polawiaczperel t1_jed1e9h wrote on March 31, 2023 at 3:00 AM Reply to [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679 I was playing with LLaMA 7B, 13B, 30B, 65B, and Alpaca 30B (native and LoRA), but this seems much better, and it is only 13B. Nice! Will they share the weights? Permalink 17
polawiaczperel t1_jb98qce wrote on March 7, 2023 at 11:33 AM Reply to comment by wywywywy in [R] Created a Discord server with LLaMA 13B by ortegaalfredo Even with one rtx 3090 https://github.com/oobabooga/text-generation-webui/issues/147#issuecomment-1456626387 Permalink Parent 7
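The claim that a 13B model fits on a single RTX 3090 can be checked with back-of-the-envelope arithmetic (this sketch is mine, not from the linked issue): at 8-bit precision, 13 billion parameters take roughly 12 GiB of the 3090's 24 GiB of VRAM, while full fp16 weights alone are already about 24 GiB and leave no headroom.

```python
def model_weight_memory_gib(n_params_billion: float, bytes_per_param: float) -> float:
    """Rough VRAM needed just for model weights, in GiB (ignores KV cache and activations)."""
    return n_params_billion * 1e9 * bytes_per_param / 2**30

# LLaMA 13B at different precisions
fp16_gib = model_weight_memory_gib(13, 2)  # fp16: 2 bytes per parameter
int8_gib = model_weight_memory_gib(13, 1)  # int8 quantization: 1 byte per parameter

print(f"fp16: {fp16_gib:.1f} GiB, int8: {int8_gib:.1f} GiB")
# int8 fits comfortably under a 24 GiB RTX 3090; fp16 weights alone already
# saturate it, which is why 8-bit (or lower) quantization is the usual route.
```

This ignores the KV cache and activation memory, so the real budget is tighter, but it shows why quantized inference is what makes single-3090 setups practical.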