Researchers show two words can reduce AI hallucinations


Researchers from Johns Hopkins University have found a simple technique to reduce hallucinations in large language models (LLMs) and improve the accuracy of their answers. When "according to" is added to a query, the model is more likely to quote text it has observed and provide factual information instead of fabricating an answer.
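A minimal sketch of the idea, assuming a generic LLM client: the same question is asked once plainly and once prefixed with an "according to" phrase pointing at a trusted source. The `query_llm` callable here is a hypothetical stand-in for whatever chat or completion API you use; it is not part of the researchers' method.

```python
def with_grounding(question: str, source: str = "Wikipedia") -> str:
    """Prepend a grounding phrase so the model leans on quoted source text."""
    return f"According to {source}, {question}"


def compare_prompts(question: str, query_llm) -> dict:
    """Return the plain and grounded answers side by side for inspection."""
    return {
        "plain": query_llm(question),
        "grounded": query_llm(with_grounding(question)),
    }


if __name__ == "__main__":
    # Dummy client that just echoes the prompt, so the sketch runs standalone.
    echo = lambda prompt: f"<model answer to: {prompt}>"
    answers = compare_prompts(
        "what year was the Hubble Space Telescope launched?", echo
    )
    for label, answer in answers.items():
        print(f"{label}: {answer}")
```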

A review of LLM responses using the QUIP score, a metric that measures how much of a generated answer can be found verbatim in a reference text corpus, shows a 5 to 15 percent increase in grounded, quotable information when grounding prompts such as "According to Wikipedia…" are used. While the technique works across different LLMs, it is most effective with larger instruction-tuned models.
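The snippet below is a toy illustration of the overlap idea behind such a metric, not the published QUIP implementation: it computes the fraction of an answer's word n-grams that appear verbatim in a small reference corpus, whereas the real score is computed against the model's pretraining data at far larger scale.

```python
def ngrams(text: str, n: int = 4) -> set[tuple[str, ...]]:
    """All word n-grams in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def overlap_precision(answer: str, corpus: str, n: int = 4) -> float:
    """Fraction of the answer's n-grams found verbatim in the corpus."""
    answer_grams = ngrams(answer, n)
    if not answer_grams:
        return 0.0
    return len(answer_grams & ngrams(corpus, n)) / len(answer_grams)


if __name__ == "__main__":
    corpus = "The Hubble Space Telescope was launched into low Earth orbit in 1990."
    grounded = "The Hubble Space Telescope was launched into low Earth orbit in 1990."
    ungrounded = "Hubble went up sometime in the late eighties, I believe."
    print(overlap_precision(grounded, corpus))    # high overlap with the corpus
    print(overlap_precision(ungrounded, corpus))  # little or no overlap
```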

Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.
