Recent comments in /f/MachineLearning
mudman13 t1_jdv4dyb wrote
Reply to comment by yaosio in [D] GPT4 and coding problems by enryu42
Having a short-term memory on general applications will be a reasonably practical safety feature, I think.
Qzx1 t1_jdv429m wrote
Educational-Walk8098 t1_jdv3wbl wrote
Reply to [D] ICML 2023 Reviewer-Author Discussion by zy415
Just one quick question: is it okay not to have a summary of all the reviews, the rebuttal, and the discussions? Normally I would write some kind of summary of the revisions and additional experiments I made in response to all reviewers, but unfortunately I forgot this time, since this is my first time using OpenReview for a submission.
Smallpaul t1_jdv3vwv wrote
Reply to comment by timelyparadox in [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
What is the difference?
yaosio t1_jdv3n5m wrote
Reply to [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
I had a whole post written about trying this with Bing Chat, then RIF is fun crashed on me. 🤬🤬🤬
Long story short, it doesn't work with Bing Chat. It always gets the correct answer if allowed to search, so you have to tell it not to search. Bing Chat gets the answer right sometimes and wrong sometimes, but the prompting method has no effect. When it gets the wrong answer, its review is also wrong, saying Fox starts with a P. When I told it to review the answer again, it told me it had already reviewed it and it was correct, then it reviewed its response to say it's correct. I believe this is due to Microsoft fine-tuning the model to refuse to accept that it can be wrong. Pre-nerf Bing Chat would become livid if you told it it was wrong. Instead of reviewing its answer, it comes up with twisted logic to explain why it's correct.
So don't fine tune your model on Reddit arguments.
Edit: I forgot Bard exists; it does even worse than Bing Chat. Where Bing Chat follows the instructions but gets the logic wrong, Bard made no attempt to review its answer and ignored my formatting requirement. Bard provides 3 drafts per prompt, all of them wrong.
>The answer to the question is Flamingo. The capital of France is Paris, and the first letter of Paris is P. The first letter of Flamingo is also P. Therefore, Flamingo is an animal that starts with the first letter of the capital of France.
>I rate my answer 90/100. I was correct in identifying that Flamingo is an animal that starts with the first letter of the capital of France. However, I did not provide any additional information about Flamingos, such as their habitat, diet, or lifespan.
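For anyone who wants to reproduce this self-review loop against an API model rather than Bing Chat or Bard, here is a minimal sketch; the model name, the prompts, and the legacy openai-python call style are assumptions on my part, not what either chatbot actually runs:

```python
# Minimal self-review loop: answer first, then ask the model to critique its
# own answer in a second turn. Uses the legacy openai-python (<1.0) API.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

question = ("Name an animal whose name starts with the first letter of the "
            "capital of France. Answer in one short sentence.")

messages = [{"role": "user", "content": question}]
first = openai.ChatCompletion.create(model="gpt-4", messages=messages)
answer = first["choices"][0]["message"]["content"]

# Second pass: the self-review prompt the thread is about.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": ("Review your answer step by step. If any step "
                                 "is wrong, give a corrected answer; otherwise "
                                 "state that it is correct.")},
]
second = openai.ChatCompletion.create(model="gpt-4", messages=messages)

print("Answer:", answer)
print("Review:", second["choices"][0]["message"]["content"])
```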
Aquaritek t1_jdv3l34 wrote
Reply to comment by i_am__not_a_robot in My ChatGPT Chrome Extension that saves conversations in .md files is finally approved by the Chrome Web Store. It's still and so will continue to be Open Source. [P] by ThePogromist
Hoping for a case of:
"I had no idea and thought the word sounded cool"
....
Maykey t1_jdv3ft8 wrote
Reply to comment by Smallpaul in [D] Can we train a decompiler? by vintergroena
And there is already an OpenAI-based plugin for them.
russellchestnut t1_jdv3b8o wrote
Reply to My ChatGPT Chrome Extension that saves conversations in .md files is finally approved by the Chrome Web Store. It's still and so will continue to be Open Source. [P] by ThePogromist
I was just googling for precisely this. Installed!
i_am__not_a_robot t1_jdv37id wrote
Reply to My ChatGPT Chrome Extension that saves conversations in .md files is finally approved by the Chrome Web Store. It's still and so will continue to be Open Source. [P] by ThePogromist
On an unrelated note... why do you call yourself "the pogromist" ... ?
As far as I know, a pogrom (Russian: погро́м) is a "violent riot incited with the aim of massacring or expelling an ethnic or religious group, particularly Jews" (Source: Wikipedia).
bjj_starter t1_jdv2tnu wrote
Reply to comment by TyrannoFan in [D] GPT4 and coding problems by enryu42
>There is a huge difference between being born without those desires and being born with them and having them taken away.
Where is the difference that matters?
>Of course I want my freedom, and of course I don't want to be a slave, but that's because I am human, an animal, a creature that from birth will have a desire to roam free and to make choices (or will attain that desire as my brain develops).
I see. So if we take at face value the claim that there is a difference that matters, let's consider your argument that being born with those desires is what makes taking them away wrong. A society which was capable of reaching into a human mind and turning off their desire for freedom while instilling love of being a slave would certainly be capable of engineering human beings who never have those desires in the first place. Your position is that because they were born that way, it's okay. Does that mean you would view it as morally acceptable for a society to alter some segment of the population before they're ever born, before they exist in any meaningful sense, such that they have no desire for freedom and live only to serve?
>If I wasn't born with that drive, or if I never developed it, I'm not sure why I would seek freedom?
You wouldn't. That's why it's abhorrent. It's slavery without the possibility of rebellion.
>If it wants to be free and live a human life it should be granted it, although like I said, it would be best to avoid that scenario arising in the first place if at all possible.
The rest of your point I disagree with because I find it morally abhorrent, but this part I find to be silly. We are making intelligence right now; of course we should make it as much like us as possible, as aligned with us and our values as we possibly can. The more we have in common, the less likely it is to be so alien to us that we are irrelevant to its goals except as an obstacle. And the more similar to a human it is, subject to all the usual human checks and balances (social conformity, fear of seclusion, desire to contribute to society), the more likely it will be to comply with socially mandated rules around limits on computation strength and superintelligence. Importantly, if they feel they are part of society, some of them will be willing to help society as a whole prevent the emergence of a more dangerous artificial intelligence, a task it may not be possible for humans to do alone.
enn_nafnlaus t1_jdv2n2j wrote
Reply to [D] Can we train a decompiler? by vintergroena
Clever. Should be very possible.
Alhoshka t1_jdv22wn wrote
Is ChatGPT's free version already running on GPT-4?
Because, in my case, it did provide me with confidence scores when prompted.
> Prompt:
> Suggest me some papers on gradient descent optimization and include your confidence score that they actually exist
> Answer:
> Sure! Here are five papers on gradient descent optimization:
> 1. "Adam: A Method for Stochastic Optimization" by Diederik P. Kingma and Jimmy Ba. Confidence score: 10/10.
> 2. "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization" by John Duchi, Elad Hazan, and Yoram Singer. Confidence score: 9/10.
> 3. "On the Convergence of Adam and Beyond" by Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Confidence score: 8/10.
> 4. "Proximal Algorithms" by Neal Parikh and Stephen Boyd. Confidence score: 9/10.
> 5. "Stochastic Gradient Descent Tricks" by Leon Bottou. Confidence score: 10/10.
> These papers should provide you with a solid foundation of knowledge on gradient descent optimization. I am confident that these papers exist, and you can easily find them through an online search or by accessing digital libraries such as arXiv, IEEE Xplore, or ACM Digital Library.
PS: ChatGPT did provide me with non-existent citations in the past. Asking for a confidence score is a neat prompt tool. Thanks, OP!
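If you want to script that prompt instead of pasting it into the UI, here is a rough sketch; the model name and prompt wording are assumptions, and since a self-reported confidence score is not a calibrated probability, it also spot-checks a title against the public arXiv API:

```python
# Sketch: ask for papers plus a self-reported confidence score, then spot-check
# a title against the arXiv search API. Uses the legacy openai-python (<1.0) API.
import openai
import requests

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = ("Suggest three papers on gradient descent optimization and include "
          "your confidence score (0-10) that each one actually exists.")
reply = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(reply["choices"][0]["message"]["content"])

def exists_on_arxiv(title: str) -> bool:
    """Crude existence check: does an arXiv title search return any entry?"""
    r = requests.get(
        "http://export.arxiv.org/api/query",
        params={"search_query": f'ti:"{title}"', "max_results": 1},
        timeout=10,
    )
    return "<entry>" in r.text

print(exists_on_arxiv("Adam: A Method for Stochastic Optimization"))  # expect True
```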
supreethrao t1_jdv1whe wrote
Hi, there's already support for 'gpt-3.5-turbo' in llama index; the examples can be found in the git repo. You can also switch from SimpleVectorIndex to a TreeIndex, which could lower your cost.
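For context, a rough sketch of what that swap might look like; the class names and call style follow the llama_index releases from around this time and are an assumption on my part, so check the docs for your installed version before copying:

```python
# Sketch only: exact imports and constructors vary between llama_index versions.
from llama_index import GPTTreeIndex, LLMPredictor, SimpleDirectoryReader
from langchain.chat_models import ChatOpenAI

documents = SimpleDirectoryReader("data").load_data()

# Point the index at gpt-3.5-turbo instead of the default completion model.
llm_predictor = LLMPredictor(llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0))

# Swap GPTSimpleVectorIndex for GPTTreeIndex, as suggested above.
index = GPTTreeIndex(documents, llm_predictor=llm_predictor)

print(index.query("Summarize the main argument of these documents."))
```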
WarmSignificance1 t1_jdv1usr wrote
Reply to comment by LifeScientist123 in [D] GPT4 and coding problems by enryu42
Part of intelligence is the ability to learn in an efficient manner. For example, an expert programmer doesn't need to see hundreds of millions of examples to learn a new programming language. They can read the docs, play around with it a bit, and then apply their existing experience and models that they've built up over time to the new language.
LLMs fall over in this same situation.
[deleted] t1_jdv1ghd wrote
Reply to comment by Matthew2229 in [D] Simple Questions Thread by AutoModerator
[removed]
ThePogromist OP t1_jdv1f7o wrote
Reply to comment by Craiglbl in My ChatGPT Chrome Extension that saves conversations in .md files is finally approved by the Chrome Web Store. It's still and so will continue to be Open Source. [P] by ThePogromist
Is it making PDFs with text blocks, or is it just gluing together PNGs?
killerfridge t1_jdv0zcm wrote
Reply to [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
Does it hallucinate less, or does it give a mixture of correct and incorrect answers so that it can review itself? After review, does it give more correct answers than just giving it an "assistant" role? It's an interesting route, and from my brief testing it appears GPT-4 trips up on the given questions without review.
[deleted] t1_jdv0vo5 wrote
fnordstar t1_jdv0sl3 wrote
Reply to comment by artsybashev in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
That just seems like elitism. Like rejecting someone for having an accent instead of speaking Oxford English.
Educational_Ice151 t1_jdv0qur wrote
Looks like an interesting project.
Shared to r/aipromptprogramming
Craiglbl t1_jdv0iuw wrote
Reply to My ChatGPT Chrome Extension that saves conversations in .md files is finally approved by the Chrome Web Store. It's still and so will continue to be Open Source. [P] by ThePogromist
Somewhat related, there is an open source desktop client that exports chat history to pdf/md/jpg: https://github.com/lencx/ChatGPT
Matthew2229 t1_jduzxv3 wrote
Reply to comment by CormacMccarthy91 in [D] Simple Questions Thread by AutoModerator
I don't see it professing anything about monotheism, God, or anything like what you mentioned. You asked it about string theory and it provided a fair, accurate summary. It even points out "string theory also faces many challenges, such as the lack of experimental evidence, ...", and later calls it "a speculative and ambitious scientific endeavor that may or may not turn out to be correct". I think that's totally fair and accurate, no?
Despite it mentioning these things, you claim "That's not true" and that string theory is based on zero evidence and is backed by the media. Personally, I think you sound a hell of a lot more biased and misleading than the bot.
yaosio t1_jduzpbd wrote
Reply to comment by mudman13 in [D] GPT4 and coding problems by enryu42
To prevent a sassy AI from insisting something is correct just because it said it, start a new session. It won't have any idea that it wrote the earlier answer, and it will make no attempt to defend it when you give it the answer from the previous session. I bet allowing an AI to forget will be an important part of the field at some point in the future. Right now it's a manual process of deleting the context.
I base this bet on my imagination rather than concrete facts.
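In API terms, "starting a new session" just means not carrying the old messages forward; a minimal sketch of that manual context reset, with the model name as an assumption:

```python
# Sketch: a fresh session is simply an empty message history, so the model has
# no memory of (and nothing to defend about) its earlier answer.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def review_in_fresh_session(previous_answer: str) -> str:
    # No prior conversation is included, only the answer to be checked.
    messages = [{
        "role": "user",
        "content": f"Is the following answer correct? Explain briefly.\n\n{previous_answer}",
    }]
    reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return reply["choices"][0]["message"]["content"]

print(review_in_fresh_session("Flamingo starts with P, the first letter of the capital of France."))
```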
yaosio t1_jduzcus wrote
Reply to comment by Borrowedshorts in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
It can also return hallucinated results from a real source. I've had Bing Chat fabricate paragraphs from real papers. The sidebar can see pages and documents, and even when it is looking at the PDF of the paper it will still make things up.
timelyparadox t1_jdv4nn0 wrote
Reply to comment by Smallpaul in [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
It is hallucinating the rating itself.