Recent comments in /f/MachineLearning
Disastrous_Elk_6375 t1_jdlj4rn wrote
Reply to comment by dreamingleo12 in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
> and uses a different base model and claims it’s a big innovation
Huh? My read of their blog was that they wanted to highlight that you can fine-tune a ~2-year-old LLM and still get decent results. I don't think they claimed this is innovative, or that the innovation is theirs to boast about...
I played with GPT-Neo (non-X) and GPT-J when they were released, and the results were rough. You had to do a ton of prompt-engineering work and exploration to find useful cases. This shows that even smaller, older models can be fine-tuned with the method proposed in Alpaca.
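For context, a minimal sketch of what that kind of Alpaca-style fine-tuning looks like with Hugging Face `transformers` (the base model, dataset, and hyperparameters here are illustrative placeholders, not the actual Dolly recipe):

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

base = "EleutherAI/gpt-j-6b"  # an "older" base model, as in Dolly
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J has no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Alpaca-style instruction/response pairs
data = load_dataset("tatsu-lab/alpaca")["train"]

def tokenize(ex):
    # Fold instruction and response into one next-word-prediction
    # example (ignoring the optional input/context field for brevity)
    return tokenizer(ex["instruction"] + "\n" + ex["output"],
                     truncation=True, max_length=512)

tokenized = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dolly-sketch",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           learning_rate=1e-5),
    train_dataset=tokenized,
    # mlm=False -> plain causal (next-word) language modeling
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point is just that it's ordinary next-word-prediction training on instruction/response pairs; nothing about it requires a brand-new base model.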
Disastrous_Elk_6375 t1_jdlix6j wrote
Reply to comment by Esquyvren in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
The demo was up for a couple of days. The first hours of it being online were rough (80-200 people in the queue). It got better the following day, and better still on the third day. I believe they removed the demo about a week later. IMO they've proven a point: the demo was extremely impressive for a 7B model.
Cherubin0 t1_jdlif7r wrote
Reply to [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
Wow, so we can hook it up to `cargo check` and it will generate perfect Rust code.
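Only half joking, but that loop is easy to sketch. Here `generate_rust` is a hypothetical stand-in for whatever LLM call you'd use; `cargo check` is the real verifier:

```python
import subprocess

def generate_rust(prompt: str) -> str:
    """Hypothetical LLM call; returns Rust source for the prompt."""
    raise NotImplementedError

def reflexion_loop(prompt: str, max_tries: int = 5) -> str:
    # Assumes we're inside an existing cargo project directory
    feedback = ""
    code = ""
    for _ in range(max_tries):
        code = generate_rust(prompt + feedback)
        with open("src/main.rs", "w") as f:
            f.write(code)
        # `cargo check` compiles without producing a binary
        result = subprocess.run(["cargo", "check"],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return code  # compiles cleanly
        # feed the compiler errors back into the next attempt
        feedback = "\nPrevious attempt failed to compile:\n" + result.stderr
    return code
```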
skaag t1_jdli80o wrote
Reply to [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
I haven't seen a way to submit an image in OpenAI's GPT-4 UI. How do you do it?
master3243 t1_jdlhj77 wrote
Reply to comment by __Maximum__ in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
Knowing how much text from Reddit comments ends up in these huge training datasets, only for the result to be made completely closed source, rubs me the wrong way.
sweatierorc t1_jdlhgay wrote
Reply to comment by SmLnine in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
IMHO, cancer and aging are necessary for complex organisms. It's more likely that we solve cloning or build the first in vitro womb than that we defeat cancer or aging.
t0slink t1_jdlhf3s wrote
Reply to comment by meregizzardavowal in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
I wish you were right, but people are calling for investment in AGI to cease altogether:
> There is no way for humans to adapt for alien intelligence. The idea of developing general AI is insanely horrifying from the beginning.
One of the parent comments.
Such absolutist comments leave no room whatsoever for venturing into AGI.
master3243 t1_jdlhb8c wrote
I have a theory that the main reason OpenAI decided to start keeping its training and architectural details private is that, through minor modifications to the training data and data augmentation, they were able to gain significant improvements in the qualitative output of GPT.
Thus any competitor could replicate the pipeline with ease and reproduce the improvements, so they decided to keep it as a trade secret.
Glad more research like this is being done and shared with the rest of the community.
elbiot t1_jdlgxnz wrote
Reply to comment by light24bulbs in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
In my understanding, if you have text, it's not a challenge to train on next-word prediction. Just keep the learning rate low. The reason there's a focus on instruction-based fine-tuning is that that data is harder to come by.
My only experience: I've done this with a sentence-embedding model (using SBERT). I trained on a 50/50 mix of my new text and the original training data, and the model both got better at embedding my text and didn't forget what it was originally trained on.
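Roughly like this (a sketch using `sentence-transformers`; the data and loss here are placeholders, not my actual setup):

```python
import random
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-MiniLM-L6-v2")

# Placeholder (anchor, positive) pairs standing in for real data
new_domain = [InputExample(texts=["my domain text", "a paraphrase of it"])]
original = [InputExample(texts=["generic text", "a paraphrase of it"])]

# 50/50 mix of new-domain and original-style data to avoid forgetting
mixed = new_domain + random.sample(original,
                                   k=min(len(original), len(new_domain)))
random.shuffle(mixed)

loader = DataLoader(mixed, batch_size=16, shuffle=True)
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
```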
AI-Pon3 t1_jdlgw1x wrote
Reply to [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
Interesting methodology/technology. I realize it's GPT-4 plus a refining process, but even so, going from 67% to 88% accuracy eliminates roughly 64% of the errors, which shows it's a powerful technique even when the underlying model is already fairly capable.
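For anyone checking the arithmetic: 67% accuracy means a 33% error rate, and 88% means 12%, so the fraction of errors eliminated is

```latex
\frac{0.33 - 0.12}{0.33} = \frac{0.21}{0.33} \approx 0.64
```

i.e. roughly 64% fewer errors.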
SmLnine t1_jdlgtl8 wrote
Reply to comment by sweatierorc in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
If an intelligence explosion happens, there's really no telling what's possible. Maybe these problems are trivial to a million-IQ machine, maybe not. The only real question is whether the explosion will happen. Two years ago I would have said 1% in the next ten years; now I'm up to 10%. Maybe in two more years it'll look like 30%.
agent_zoso t1_jdlgre2 wrote
Reply to comment by cyborgsnowflake in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
The neural nets (ReLU + LayerNorm) layered between each attention step count as a brain, no? I know the attention mechanism is what gets the most ... attention, but there are still traditional neural nets sandwiched in between, and in some cases the transformer is just a neck feeding into more traditional modules. ReLU networks are Turing complete, so I can always tune a neural net to have the same response pattern of electrical outputs as any neuron in your brain.
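To be concrete about the "sandwich", here's a minimal standard pre-LN transformer block in PyTorch (GPT-4's actual internals aren't public, so this is the generic pattern, not its architecture):

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """Minimal pre-LN transformer block: attention, then a ReLU MLP."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        # the "traditional neural net" sandwiched between attention steps
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))

    def forward(self, x):
        h = self.ln1(x)
        x = x + self.attn(h, h, h)[0]   # attention step
        x = x + self.mlp(self.ln2(x))   # plain feed-forward net
        return x

x = torch.randn(1, 16, 512)            # (batch, sequence, features)
y = TransformerBlock()(x)              # same shape out
```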
The million-dollar question, according to David Chalmers, is: would you agree that slowly replacing each neuron with a perfect copy, one at a time, will never cause you to go from completely fine to instantly blacked out? If you answered yes, then it can be shown (sections 3&4) that you must accept that neural nets can be conscious: by contradiction, if there were a gradual fading of conscious experience rather than a sudden disappearance, the artificial neurons would at some point have to begin behaving differently than the original neurons would (we would be aware of the dulling of our sensations).
Considering we lose brain cells all the time and don't get immediately knocked out, I think you can at least agree that most people would find these assumptions reasonable. It would be pretty weird for such a small disturbance to have such a drastic effect.
michaelthwan_ai OP t1_jdlf8g8 wrote
Because recent LLM releases have been coming so thick and fast, I organized the notable recent models from the news into a diagram. Some may find it useful, so please allow me to share it.
Please let me know if there is anything I should change or add so that I can learn. Thank you very much.
If you want to edit or create an issue, please use this repo.
---------EDIT 20230326
Thank you for your responses, I've learnt a lot. I have updated the chart:
- https://github.com/michaelthwan/llm_family_chart/blob/master/LLMfamily2023Mar.drawio.png
- (Looks like I cannot edit the post)
Changes 20230326:
- Added: OpenChatKit, Dolly and their predecessors
- Higher-resolution image
To learn:
- RWKV/ChatRWKV related, PaLM-rlhf-pytorch
Models not considered (yet):
- Models from 2022 or earlier (e.g. T5, May 2022). This post was created to help people quickly gather information about new models.
- Models not yet fully released (e.g. Bard, under limited preview)
Necessary-Meringue-1 t1_jdlezco wrote
Reply to comment by cyborgsnowflake in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
Yeah, I think you're on the money there. It's very hard for us to not anthropomorphize this behavior, especially because we literally used RLHF in order to make it more human-like.
nekize t1_jdldodi wrote
Reply to comment by learn-deeply in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
Sadly, that's what academia has come to. I'm doing my PhD, and 80% of my papers is just padding. And if you don't follow the "template" you can't publish anything.
impossiblefork t1_jdlddlt wrote
Reply to comment by big_ol_tender in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
Model weights, though, are, I assume, not copyrightable.
Is there actually a law giving Stanford any special rights to the weights?
beautifoolstupid t1_jdld3fo wrote
Reply to comment by machineko in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
This is what I love about this community.
sweatierorc t1_jdlcwkm wrote
Reply to comment by t0slink in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
True, but with AI, more computing power/data means better models. With medicine, things move slower. If we get a cure for one or two cancers this decade, it would be a massive achievement.
3y3zW1ld0p3n t1_jdlciea wrote
Reply to [N] ChatGPT plugins by Singularian2501
Oh my gosh. Wolfram plugin!
greenskinmarch t1_jdlc952 wrote
Reply to comment by t0slink in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
I just want humans to stop dying of cancer!
Monkey's paw curls. The humans all die of being shot by drones instead.
brucebay t1_jdlc3ix wrote
Reply to comment by Nyanraltotlapun in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
This is not an alien intelligence yet. We understand how it works and how it thinks. But eventually this version can generate an AI that is harder for us to understand, and that version can generate another AI. At some point it will become alien to us, because we may not understand the math behind it.
_Arsenie_Boca_ t1_jdlc2ah wrote
Reply to comment by learn-deeply in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
Thanks! If that is really the TL;DR, I have never seen an abstract that beats about the bush so much
ThirdMover t1_jdlabwm wrote
Reply to comment by MassiveIndependence8 in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
What do you mean by "frame"? How many images do you think GPT-4 would need to get a cursor where it needs to go? I'd estimate four or five should be plenty.
plottwist1 t1_jdlj5r8 wrote
Reply to comment by __Maximum__ in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
How open are they? I mean, having open models is an improvement, but the training methods should be open too. And if we crowdsource data, that should be accessible too.