Recent comments in /f/MachineLearning
rya794 t1_jds0xqs wrote
Reply to comment by ThirdMover in [P] Using ChatGPT plugins with LLaMA by balthierwings
I don’t think so, I suspect my argument holds no matter who is running the most advanced LLM. The market leader will never have an incentive to open source their “app store”.
The only way this breaks down is if by some miracle, an open source model takes and maintains the lead.
muskoxnotverydirty t1_jds07qr wrote
Reply to comment by nixed9 in [D] GPT4 and coding problems by enryu42
Eh, I must've misunderstood the paper. It sounded like they were asking GPT-4 to create unit tests, execute the code, and then update its answer based on the results of those unit tests.
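The loop described there is easy to sketch. Here's a toy version where `fake_model` stands in for the LLM and `solve` is an assumed function name — none of this is the paper's actual setup, just the shape of the idea:

```python
def run_tests(code: str, tests: list) -> list:
    """Execute candidate code against unit tests, return failure messages."""
    failures = []
    namespace = {}
    exec(code, namespace)  # assumes the generated code defines `solve`
    for args, expected in tests:
        try:
            result = namespace["solve"](*args)
            if result != expected:
                failures.append(f"solve{args} returned {result!r}, expected {expected!r}")
        except Exception as e:
            failures.append(f"solve{args} raised {e!r}")
    return failures

def self_correct(generate, tests, max_rounds=3):
    """Ask the model for code, feed failing tests back until all pass."""
    feedback = ""
    for _ in range(max_rounds):
        code = generate(feedback)  # stand-in for a real model call
        failures = run_tests(code, tests)
        if not failures:
            return code
        feedback = "These tests failed:\n" + "\n".join(failures)
    return code

# Stub "model" that fixes an off-by-one only after seeing test feedback
def fake_model(feedback):
    if "failed" in feedback:
        return "def solve(x):\n    return x + 1\n"
    return "def solve(x):\n    return x + 2\n"

tests = [((1,), 2), ((5,), 6)]
print(self_correct(fake_model, tests))  # second round passes all tests
```

The interesting part is that the model never "sees" the tests directly, only the failure messages — which is roughly what re-prompting with execution results gives you.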
WarProfessional3278 t1_jdrzo00 wrote
Reply to [D] GPT4 and coding problems by enryu42
Horace He made a nice thread on this when GPT-4 first came out. Realistically this is expected - within that short time span, there isn't much else you can do to improve model performance other than increasing the size of the training data, which resulted in data contamination.
I expect the next "big thing" to be some sort of self-correcting output, or better chain-of-thought reasoning.
ngildea t1_jdrzg0p wrote
Reply to comment by liqui_date_me in [D] GPT4 and coding problems by enryu42
I agree, but is that opinion controversial? Seems patently obvious after talking to it about coding for a few minutes. Maybe it's controversial among people who have fooled themselves into thinking it's thinking?
ThirdMover t1_jdrzd7f wrote
Reply to comment by rya794 in [P] Using ChatGPT plugins with LLaMA by balthierwings
That depends on how well they will be able to keep their moat. There is a lot of hunger for running LLMs on your own - if not on your own hardware, then at least in software environments you control. People want to see what makes them tick rather than trust "Open"AI's black boxes.
Yeah they have a performance lead but time will tell how well they can stay ahead of the rest of the field trying to catch up.
enryu42 OP t1_jdrzcc9 wrote
Reply to comment by ghostfaceschiller in [D] GPT4 and coding problems by enryu42
Do you mean re-prompt it asking it to correct its mistakes? It is hard to try with the current tight limits on GPT-4 prompt count; I'll try once the API is properly available. But I strongly doubt it'll help much: it's not that the solutions have minor bugs, they're usually just completely wrong, i.e. the model doesn't "get" the idea for the correct solution.
(it might help for some of the problems from the "Beginner" category though, but these aren't that interesting)
Username2upTo20chars t1_jdrzb3d wrote
Reply to [D] Simple Questions Thread by AutoModerator
Are there any websites/articles/blogs/forums with proven prompt formats for ChatGPT and co. that you can recommend?
Especially ones for programming/refactoring/tests... and general error messages (operating system, installation, crashes).
I am just starting to look into using ChatGPT or alternatives.
I have found a page with ranked jailbreak prompts for ChatGPT so far.
ngildea t1_jdryzgh wrote
Reply to [D] GPT4 and coding problems by enryu42
I've tried quite a few times to get it to help with a problem I've been thinking about for a while. Every time it says it understands, and then writes code that shows it doesn't understand at all and violates every constraint I give it.
Not surprising, but it does point to a lot of contamination and regurgitation of the training material fooling people into thinking it's intelligent.
Cool_Abbreviations_9 t1_jdryysg wrote
Reply to comment by nixed9 in [D] GPT4 and coding problems by enryu42
Got it, thanks a ton!
rya794 t1_jdrypjf wrote
Reply to [P] Using ChatGPT plugins with LLaMA by balthierwings
Yea, it would be nice.
But what benefit does any LLM provider gain by implementing/adhering to an open protocol? OpenAI is trying to build a moat around their service, from their perspective plugins are key to establishing a competitive advantage.
I can’t see this happening in reality.
Username2upTo20chars t1_jdrydqw wrote
Reply to comment by Chris_The_Pekka in [D] Simple Questions Thread by AutoModerator
I am confused about your mention of GAN structure. If you want to generate natural language text, use a pretrained Large Language Model. You probably have to finetune it for best use, as you don't have access to the giant ones, which do very well with zero-shot prompting.
LightVelox t1_jdry1xp wrote
Reply to comment by Cool_Abbreviations_9 in [D] GPT4 and coding problems by enryu42
Basically it makes GPT-4 reevaluate what it did wrong and try again until it can do it correctly
nixed9 t1_jdrxr76 wrote
Reply to comment by Cool_Abbreviations_9 in [D] GPT4 and coding problems by enryu42
A Reflexion loop asks the model to react to its own output and critique it before giving you an additional answer.
Edit: (In the paper, it provides a loop like this which feeds back into itself to help its own cognition. It can repeat this loop multiple times.)
You can do a mini-loop by prompting. I've been playing with this all day.
I prompt it like this:
> "For this interaction, we are going to use the following structure.
> User (me): [I will ask a topic or question]
> You will provide an Assistant Hypothetical Response: [Brief or simplified answer to the topic or question]
> Then you will undergo Agent Reflection: [You will provide a Critique of the hypothetical response, highlighting the limitations, inaccuracies, or areas that need improvement or expansion, while providing guidance on how to address these issues in the revised response]
> Then you will provide an Actual Response: [The natural and contextually appropriate answer to the topic or question, as generated by the advanced language model, which incorporates the suggestions and improvements from the agent reflection for a more comprehensive and accurate response. This also can include step-by-step reasoning.]
> Do you understand?"
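That prompt structure can also be wired into a programmatic loop. A rough sketch, with `fake_chat` standing in for a real chat-completion call (the message format mirrors common chat APIs, but the call itself is a stub):

```python
SYSTEM = (
    "For this interaction, use the following structure.\n"
    "Hypothetical Response: [brief or simplified answer]\n"
    "Agent Reflection: [critique of the hypothetical response]\n"
    "Actual Response: [revised answer incorporating the critique]"
)

def reflect(chat, question: str) -> str:
    """One reflection round: draft, critique, revise; return the revised answer."""
    messages = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": question},
    ]
    reply = chat(messages)  # stand-in for a real chat-completion call
    # Keep only the final, post-critique answer
    return reply.split("Actual Response:")[-1].strip()

# Stub model that follows the requested structure
def fake_chat(messages):
    return ("Hypothetical Response: 5\n"
            "Agent Reflection: arithmetic slip, recompute\n"
            "Actual Response: 6")

print(reflect(fake_chat, "What is 2 + 4?"))
```

For a multi-round version you would append the reply to `messages` and ask the model to reflect again, which is essentially the paper's loop done by hand.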
Username2upTo20chars t1_jdrxowm wrote
Reply to comment by russell616 in [D] Simple Questions Thread by AutoModerator
Try a Kaggle competition for some practical experience applying ML to already-cleaned data. Other competitors always publish their code, and Kaggle also has tutorials.
[deleted] t1_jdrxdrd wrote
Reply to comment by enryu42 in [D] GPT4 and coding problems by enryu42
[deleted]
Cool_Abbreviations_9 t1_jdrxcqi wrote
Reply to comment by ghostfaceschiller in [D] GPT4 and coding problems by enryu42
Sorry, newbie to NLP , what is this ?
maxToTheJ t1_jdrx281 wrote
Reply to comment by enryu42 in [D] GPT4 and coding problems by enryu42
> which supposedly require some amount of creative reasoning.
They don't, which is exactly what teachers have been complaining about with regard to standardized testing.
zoupishness7 t1_jdrw0qk wrote
Reply to Have deepfakes become so realistic that they can fool people into thinking they are genuine? [D] by [deleted]
So, deepfakes are getting quite good, but I would suggest the main reason those influencers were able to fool so many people isn't because of the realism of the deepfake, but because beauty filters are so common on Asian social media. It's not that many users didn't recognize that a filter was being used, but that the amount that the deepfake filter changed the influencer's face was unexpected.
afreydoa t1_jdrvs8d wrote
Reply to [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
I wonder if combining LLMs with planning would enhance the creation of poems or that example task, of creating sentences that end with a specific letter.
My thinking is that poem generation often struggles when the LLM can't find a suitable ending, as the initial part of the line or paragraph is already locked and can't be altered. However, when directing ChatGPT to rework the response by modifying the starting point, it seems to often produce better outcomes.
anomhali t1_jdru6in wrote
Reply to [D] GPT4 and coding problems by enryu42
LeetCode questions and solutions are direct data leakage: although I do not specify the function signature, the program writes the exact same signature as the original question. If you change the question a little bit, it gives you the buggiest code ever.
addandsubtract t1_jdrtrwn wrote
Reply to Have deepfakes become so realistic that they can fool people into thinking they are genuine? [D] by [deleted]
People are starting to question if real videos are deep fakes.
currentscurrents t1_jdrt3gv wrote
Reply to comment by liqui_date_me in [D] GPT4 and coding problems by enryu42
I think all tests designed for humans are worthless here.
They're all meant to compare humans against each other, so they assume you don't have the ability to read and remember the entire internet. You can make up for a lack of reasoning with an abundance of data. We need synthetic tests designed specifically for LLMs.
CormacMccarthy91 t1_jdrsh0g wrote
Reply to [D] Simple Questions Thread by AutoModerator
I have a problem. Bing chat just tried to sell me on a Unified Theory of Everything, quantum gravity, and string theory... I told it those aren't based on any evidence, and it told me it didn't want to continue the conversation. It wouldn't tell me anything further until I restarted and asked about more specific things. That really scares me: it's spouting monotheistic, "consciousness is spiritual, not physical" stuff like facts, and when it's questioned it just ends the conversation.
I don't know where to talk about this where people won't jump on the spiritual "big bang is just a theory" train. It's really unsettling. If I tried to divert it from bringing god into astrophysics, it would end the conversation.
It's oddly religious. https://ibb.co/W36fjfC
addition t1_jdrsas2 wrote
Reply to [D] GPT4 and coding problems by enryu42
I’ve become increasingly convinced that the next step for AI is adding some sort of feedback loop so that the AI can react to its own output.
There is increasing evidence that this is true. Chain-of-thought prompting, Reflexion, and Anthropic's constitutional AI all point in this direction.
I find constitutional AI to be particularly interesting because it suggests that after an LLM reaches a certain threshold of language understanding, it can start to assess its own outputs during training.
ghostfaceschiller t1_jds0zez wrote
Reply to comment by Cool_Abbreviations_9 in [D] GPT4 and coding problems by enryu42
Basically just giving the model the ability to observe the results of its previous action and decide if it wants to try something different based on the feedback
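Stripped of the model itself, that observe-and-retry pattern is just a small control loop. A bare-bones sketch where both the "model" (`propose`) and the environment (`execute`) are stubs invented for illustration:

```python
def act_observe_loop(propose, execute, goal, max_steps=5):
    """Let the model propose an action, observe the result, retry on failure."""
    history = []
    for _ in range(max_steps):
        action = propose(goal, history)  # stand-in for a real model call
        observation = execute(action)
        history.append((action, observation))
        if observation == "ok":
            return action, history
    return None, history

# Stub "model": switches strategy after observing a failure in its history
def propose(goal, history):
    return "plan_b" if history else "plan_a"

# Stub environment: only plan_b succeeds
def execute(action):
    return "ok" if action == "plan_b" else "error: plan_a failed"

best, trace = act_observe_loop(propose, execute, "demo")
print(best)        # plan_b, found on the second attempt
```

The key design point is that the full `(action, observation)` history is passed back to the model each step, so the feedback is what drives the change of strategy, not any built-in logic.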