Recent comments in /f/MachineLearning

ILOVETOCONBANDITS t1_jdktauv wrote

If you don’t get any responses to the rebuttal, does that mean the scores will remain the same? I had a borderline paper (6-6-5-3 with conf 4-3-3-3) and the rebuttal questions were all easy to address (theory questions with well known answers) and I ran the experiments they wanted.

Rant: they should at least have a button to acknowledge that they don’t need to send a message. For some of us applying for faculty positions or PhD programs, acceptance at these conferences makes a huge difference in our lives. This limbo is super stressful and really sucks for those who would have benefited greatly from an acceptance.

4

addition t1_jdkssmg wrote

Wow! I was just thinking the other day that, now that we have very advanced statistical models of the world, the next step is some search algorithm + feedback loop. In other words, a way for the model to use its statistical understanding of the world to guide a search towards a solution while also updating itself along the way. This feels like an important step. Or at least the idea is a first step in this direction.

20

meregizzardavowal t1_jdksro1 wrote

I don’t know that people are saying we should cut off the pathway because they’re scared. What I’m hearing is that they think we ought to spend more effort on ensuring it’s safe, because a Pandora’s box moment may arrive quickly.

13

light24bulbs t1_jdks13d wrote

Question: I notice there’s a focus here on fine-tuning for instruction following, which is clearly different from the main training, where the LLM just reads text and tries to predict the next word.

Is there an easy way to continue that bulk part of the training with additional data? Everyone seems to be trying to get there by injecting embedding-retrieved text chunks into prompts (my team included), but that approach just stinks for a lot of uses.
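(Conceptually, continued pretraining is exactly the same next-token objective run on new data, as opposed to stuffing that data into every prompt. Here's a toy stdlib-only sketch of that distinction using a bigram counter as a stand-in for an LLM; `BigramLM` and its corpora are made up for illustration. In practice you'd run the same causal-LM loss on your new corpus, e.g. with Hugging Face's `AutoModelForCausalLM` and `Trainer`.)

```python
from collections import defaultdict

class BigramLM:
    """Toy next-token model: counts bigram frequencies and predicts the
    most frequent successor. Stands in for an LLM's next-token objective;
    'continued pretraining' is just more of the same counting."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        # The pretraining objective: observe each (word, next_word) pair.
        words = text.split()
        for w, nxt in zip(words, words[1:]):
            self.counts[w][nxt] += 1

    def predict(self, word):
        successors = self.counts.get(word)
        if not successors:
            return None
        return max(successors, key=successors.get)

# Initial bulk pretraining on a generic corpus.
lm = BigramLM()
lm.train("the cat sat on the mat the cat ran")

# Continued pretraining: same objective, new domain data --
# the model absorbs it instead of seeing it pasted into prompts.
lm.train("a model reads domain docs a model learns")
```

The point of the toy: nothing about the training procedure changes for the second call; only the data does.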

8

cyborgsnowflake t1_jdkruku wrote

We know the nuts and bolts of what is happening, since it was built from the ground up by humans. GPT-x is essentially a fancy statistical machine: just rules for shuffling data around to pick word x+1 on magnetic platters. No infrastructure for anything else, let alone a brain. Unless you think adding enough if statements creates a soul. I'm baffled why people think GPT is sentient just because it can calculate solutions from its knowledge corpus as well as or better than people. Your Casio calculator or a linear regression can calculate solutions better than people. Does that mean your Casio calculator, or the x/y grid in your high school notebook, is sentient?

0

tysam_and_co t1_jdkqv3e wrote

I would presume that it's a bolt-on external method that runs a pretrained model on its own outputs as a dynamically generated information sieve of sorts. Of course, the inductive prior is encoded in the Reflexion algorithm itself, so we are bringing some new information to the table here (not that GPT-4+ couldn't somehow do this itself someday).
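(The "bolt-on external method" reading can be sketched as an outer loop around a frozen model: generate, get external feedback, ask the model to verbalize a reflection, and retry with that reflection in context. This is only a rough sketch of the Reflexion idea, not the authors' implementation; `llm` and `evaluate` are hypothetical stand-ins you'd supply.)

```python
def reflexion_loop(llm, evaluate, task, max_trials=3):
    """Reflexion-style outer loop: the model is never retrained;
    feedback re-enters only as text in its context window."""
    memory = []  # verbal self-reflections accumulated across trials
    attempt = None
    for _ in range(max_trials):
        attempt = llm(task, memory)           # generate with reflections in context
        ok, feedback = evaluate(attempt)      # external check: tests, env reward, ...
        if ok:
            return attempt
        # Ask the model to critique its own failure; store the critique.
        memory.append(llm(f"Reflect on this failure: {feedback}", memory))
    return attempt

# Toy stubs: an "LLM" that succeeds once it has at least one reflection.
def toy_llm(prompt, memory):
    return "good answer" if memory else "bad answer"

def toy_eval(attempt):
    return (attempt == "good answer", "answer was bad")

result = reflexion_loop(toy_llm, toy_eval, "solve the task")
```

Note the new information enters only through `evaluate` and the reflection step; the model weights stay fixed, which is what makes it a bolt-on sieve rather than training.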

2

t0slink t1_jdkq5c1 wrote

Nah, full speed ahead please. With enough development, a cure for cancer, aging, and all manner of devastating human ailments could happen in this decade.

It is senseless to cut off a pathway that could literally save and improve tens of billions of lives over the next few decades because you're scared it can't be done correctly.

18