Recent comments in /f/MachineLearning
Silent-Spirit4827 t1_jdkyuf1 wrote
Reply to comment by zy415 in [D] ICML 2023 Reviewer-Author Discussion by zy415
My experience is the same. The ICLR reviewers were really active in responding.
Puzzleheaded_Acadia1 t1_jdkvut4 wrote
Reply to comment by liyanjia92 in [P] ChatGPT with GPT-2: A minimum example of aligning language models with RLHF similar to ChatGPT by liyanjia92
Thx
ertgbnm t1_jdkv8rw wrote
Reply to [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
Umm wow! I recommend backing up this GitHub before it gets taken down for "safety"
t0slink t1_jdkufvf wrote
Reply to comment by sweatierorc in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
> AI has gotten really good, but let’s not get carried away.
People were saying the same thing five years ago about the generative AI developments we've seen this year.
mxby7e t1_jdktvqr wrote
Reply to comment by big_ol_tender in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
The license won’t change. The dataset was collected in a way that violates OpenAI's terms of service, since OpenAI's models were used to generate the data. If they allowed commercial use, it would open them up to a lawsuit.
ILOVETOCONBANDITS t1_jdktauv wrote
Reply to [D] ICML 2023 Reviewer-Author Discussion by zy415
If you don’t get any responses to the rebuttal, does that mean the scores will remain the same? I had a borderline paper (scores 6-6-5-3, confidence 4-3-3-3), the rebuttal questions were all easy to address (theory questions with well-known answers), and I ran the experiments they wanted.
Rant: they should at least have a button to acknowledge that they don’t need to send a message. For those of us applying for faculty positions or PhD admissions, acceptance to these conferences makes a huge difference in our lives. Such apathy is super stressful and really sucks for those who would have benefited greatly from an acceptance.
sweatierorc t1_jdkt9uq wrote
Reply to comment by t0slink in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
A cure for cancer and aging in this decade. AI has gotten really good, but let's not get carried away.
addition t1_jdkssmg wrote
Reply to [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
Wow! I was just thinking the other day that now that we have very advanced statistical models of the world, the next step is some search algorithm plus a feedback loop. In other words, a way for the model to use its statistical understanding of the world to guide a search toward a solution while also updating itself along the way. This feels like an important step, or at least a first step in this direction.
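The search-plus-feedback loop described above can be sketched generically. This is a hypothetical, purely illustrative toy (not the Reflexion paper's actual implementation): a `generate` step proposes a candidate, an external `evaluate` step scores it, and a `reflect` step writes a critique back into memory that conditions the next attempt. Here the "model" is a stub that narrows a numeric guess from its accumulated critiques.

```python
def reflexion_loop(generate, evaluate, reflect, max_trials=5):
    """Generic self-reflection loop: propose a solution, score it with
    external feedback, and feed a critique back into the next attempt."""
    memory = []  # accumulated self-reflections
    best = None
    for _ in range(max_trials):
        candidate = generate(memory)      # propose, conditioned on past critiques
        score = evaluate(candidate)       # external signal (e.g. unit tests)
        if best is None or score > best[1]:
            best = (candidate, score)
        if score == 1.0:                  # full success: stop early
            break
        memory.append(reflect(candidate, score))  # critique for next trial
    return best

# Toy usage: the "model" searches for a hidden number; critiques steer it.
target = 7

def generate(memory):
    lo, hi = 0, 10
    for guess, hint in memory:            # shrink the range using past critiques
        if hint == "too low":
            lo = max(lo, guess + 1)
        else:
            hi = min(hi, guess - 1)
    return (lo + hi) // 2

def evaluate(guess):
    return 1.0 if guess == target else 0.0

def reflect(guess, score):
    return (guess, "too low" if guess < target else "too high")

print(reflexion_loop(generate, evaluate, reflect))  # -> (7, 1.0)
```

The point is that the pretrained model's weights never change; the "updating" happens in the text memory that gets prepended to each new attempt.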
meregizzardavowal t1_jdksro1 wrote
Reply to comment by t0slink in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
I don’t know if people are as much saying we should cut off the pathway because they are scared. What I’m hearing is they think we ought to spend more effort on ensuring it’s safe, because a Pandora’s box moment may come up quickly.
light24bulbs t1_jdks13d wrote
Reply to comment by machineko in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
Question: I notice there's a focus here on fine-tuning for instruction following, which is clearly different from the main training, where the LLM just reads text and tries to predict the next word.
Is there any easy way to continue that bulk part of the training with some additional data? Everyone seems to be trying to get there by injecting embedded chunks of text into prompts (my team included), but that approach just stinks for a lot of uses.
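For intuition on the question above: continued pretraining is just more of the same next-token training on new text. A toy character-free bigram "LM" (purely illustrative, nothing like a real fine-tuning API) shows how feeding in additional data shifts predictions without any separate mechanism:

```python
from collections import defaultdict, Counter

class BigramLM:
    """Toy next-token model: counts bigrams, predicts most frequent successor."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, text):
        # "Pretraining" and "continued pretraining" are the same operation:
        # keep accumulating next-token statistics on whatever corpus you feed in.
        tokens = text.split()
        for a, b in zip(tokens, tokens[1:]):
            self.counts[a][b] += 1

    def predict(self, token):
        succ = self.counts.get(token)
        return succ.most_common(1)[0][0] if succ else None

lm = BigramLM()
lm.train("the cat sat on the mat the cat sat the cat sat")
print(lm.predict("the"))   # 'cat', from the original corpus

# Continue the bulk training with additional domain data: same update rule.
lm.train("the dog ran the dog ran the dog ran the dog ran")
print(lm.predict("the"))   # 'dog' now outweighs 'cat'
```

With a real LLM the analogue is resuming the causal language-modeling objective on the new corpus, which is heavier operationally but conceptually the same accumulation of next-token statistics.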
cyborgsnowflake t1_jdkruku wrote
Reply to comment by stimulatedecho in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
We know the nuts and bolts of what is happening, since it's been built from the ground up by humans. GPT-x is essentially a fancy statistical machine: just rules for shuffling data around to pick word x+1 on magnetic platters. There's no infrastructure for anything else, let alone a brain. Unless you think adding enough if statements creates a soul. I'm baffled why people think GPT is sentient just because it can calculate solutions based on the hyperparameters of the knowledge corpus as well as or better than people. Your Casio calculator or a linear regression can also calculate solutions better than people. Does that mean your Casio calculator, or the x/y grid in your high school notebook, is sentient?
ExcidianGuard t1_jdkrsnj wrote
Reply to comment by rePAN6517 in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
Apocalyptic cults have been around for a long time, this one just has more basis in reality than usual
hangtime79 t1_jdkrpft wrote
The Alpaca dataset Databricks used to train this model absolutely cannot be used for commercial purposes. It uses the Creative Commons Attribution-NonCommercial 4.0 International Public License.
https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE
addition t1_jdkrd3s wrote
Reply to comment by RealSonZoo in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
You need ChatGPT Plus to use GPT-4 at the moment.
metalman123 t1_jdkqv8i wrote
Reply to comment by RealSonZoo in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
What rock have you been under?
The paid version has GPT-4 access, and people have access to the GPT-4 API.
This is old information.
tysam_and_co t1_jdkqv3e wrote
Reply to comment by RealSonZoo in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
I would presume that it's a bolt-on external method that utilizes a pretrained model with its own inputs as a dynamically-generated information sieve of sorts. Of course, the inductive prior is encoded in the Reflexion algorithm itself so we are bringing some new information to the table here (not that GPT4+ couldn't somehow do this itself someday, either).
RealSonZoo t1_jdkqjld wrote
Reply to comment by metalman123 in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
Oh, so if I go to the ChatGPT website and start talking with it, that's GPT-4?
metalman123 t1_jdkqd75 wrote
t0slink t1_jdkq5c1 wrote
Reply to comment by 3deal in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
Nah, full speed ahead please. With enough development, a cure for cancer, aging, and all manner of devastating human ailments could happen in this decade.
It is senseless to cut off a pathway that could literally save and improve tens of billions of lives over the next few decades because you're scared it can't be done correctly.
RealSonZoo t1_jdkoq5c wrote
Reply to [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
Question, maybe dumb - how are they comparing results to GPT-4, which isn't released yet, and I think is mostly closed source?
[deleted] t1_jdknyfe wrote
Reply to comment by rePAN6517 in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
[removed]
Runthescript t1_jdknxkl wrote
Reply to comment by BinarySplit in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
Are you trying to break captcha? Cause this is definitely how we break captcha
Always1Max t1_jdkn18v wrote
could there be something like this, but for code?
modcowboy t1_jdkz6of wrote
Reply to comment by MjrK in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
It would probably be easier for the LLM to interact with the website directly through the inspect tool than via machine vision.