[D] Will prompting the LLM to review its own answer help reduce the chances of hallucinations? I tested a couple of tricky questions and it seems it might work. Submitted by tamilupk t3_123b4f0 on March 27, 2023 at 4:19 AM in MachineLearning 30 comments 47
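For readers who want to try this, here is a minimal sketch of the two-pass "answer, then self-review" prompting idea, assuming the OpenAI Python SDK; the model name, prompts, and helper function are illustrative, not the OP's exact setup.

```python
# Minimal sketch of self-review prompting, assuming the OpenAI Python SDK (>=1.0).
# The prompts, model name, and ask_with_self_review helper are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-3.5-turbo"

def ask_with_self_review(question: str) -> str:
    # Pass 1: get an initial answer.
    first = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # Pass 2: ask the model to review its own answer and correct any errors.
    review_prompt = (
        f"Question: {question}\n"
        f"Your previous answer: {first}\n\n"
        "Review the answer above for factual errors or unsupported claims. "
        "If anything is wrong or uncertain, provide a corrected answer; "
        "otherwise restate the answer."
    )
    reviewed = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": review_prompt}],
    ).choices[0].message.content
    return reviewed

# Example usage:
# print(ask_with_self_review("Who was the first person to walk on Mars?"))
```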
jms4607 t1_jdxd3hv wrote on March 27, 2023 at 9:49 PM Makes me wonder if you could fine-tune by just incentivizing the first answer to be that, with a general accuracy/review rq Permalink 1