Recent comments in /f/MachineLearning

Maleficent_Refuse_11 t1_jdgmml7 wrote

I get that people are excited, but nobody with a basic understanding of how transformers work should give this any credence. The problem is not just that it is auto-regressive and lacks an external knowledge hub. At best it can recreate latent patterns in the training data. There is no element of critique and no element of creativity. There is no theory of mind; there is just a reproduction of what people said when prompted about how other people feel. Still, I get the excitement. I'm excited, too. But hype hurts the industry.
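To make the auto-regressive point concrete, here is a minimal greedy-decoding sketch (GPT-2 via Hugging Face transformers is just a stand-in; the loop is the point): the model only ever predicts the next token given the tokens so far, so everything it outputs is drawn from patterns in its training distribution.

```python
# Minimal autoregressive decoding sketch: predict one token at a time,
# append it, and feed the longer sequence back in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("When prompted about how other people feel,", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits          # (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()    # greedy: most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
```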

36

underPanther t1_jdgli5w wrote

The 7s would not have given those scores unless they were prepared to argue for the acceptance of your paper in its current state.

Extra experiments are always nice, but be proud of the hard work you have done already rather than dwelling on the one experiment you can't do.

2

godaspeg t1_jdgih6t wrote

In the "sparks of AGI" GPT4 Paper (can totally recommend to have a look, its crazy), the authors talk about the amazing abilities of the uncensored GPT4 version to use tools. Probably this suits quite well to the simple plugin approach of OpenAi, so I have high espectations.

5

Dendriform1491 t1_jdgiab6 wrote

Many organisms exhibit self-preservation behaviors without possessing even the most basic cognitive capabilities or a theory of mind.

Can ML systems exhibit unexpected emergent behavior? Yes, all the time.

Can an AI potentially go rogue? Sure. Considering that operating systems, GPU drivers, scientific computing libraries, and machine learning libraries have memory safety issues, and that even RAM modules have memory safety issues, it would be plausible for a sufficiently advanced machine learning system to break whatever measures are in place to keep it contained.

Considering that there are AI/ML models suggesting code to programmers (GitHub Copilot), who in turn often won't pay much attention to what is being suggested and will compile and run the suggested code, it would be trivial for a sufficiently advanced malicious AI/ML system to escape containment.

1

meister2983 t1_jdgghu6 wrote

Reply to comment by signed7 in [N] ChatGPT plugins by Singularian2501

The Microsoft Research paper assessing the intelligence of GPT-4 effectively did this. If you define APIs for the model to use under certain conditions, it will write the API call. Once you have that, it's straightforward for a layer on top to detect the API call, actually execute it, and write the result back.
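Right, a minimal sketch of that detect-execute-write-back layer (the CALL marker format and the search_web stub are made up for illustration, not the paper's actual setup):

```python
# Sketch of a tool-use layer: the model is prompted to emit calls in a
# fixed format; this layer detects them, runs them, and returns the
# result to be appended to the model's context.
import re

def search_web(query: str) -> str:
    # Stand-in for a real API; a production layer would do an HTTP request here.
    return f"(stub) top results for {query!r}"

TOOLS = {"search_web": search_web}
CALL = re.compile(r'CALL\((\w+),\s*"(.*?)"\)')

def run_turn(model_output: str) -> str:
    match = CALL.search(model_output)
    if match is None:
        return model_output              # plain answer, nothing to execute
    name, arg = match.groups()
    result = TOOLS[name](arg)            # actually execute the API call
    return f"RESULT: {result}"           # written back into the prompt

print(run_turn('CALL(search_web, "ChatGPT plugins")'))
```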

5

yikesthismid t1_jdgduzb wrote

An AI system does not need to be conscious in order to recognize the value of self-preservation. For example, Stephen Hawking explained how AI could "develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal."

1

trueselfdao t1_jdg9w0w wrote

I was wondering where the equivalent of SEO would start coming from, and this just might be the direction. With a bunch of competing plugins doing the same thing, how do you convince GPT to use yours?

3