This work explores methods for automatically detecting hallucinations in large language model outputs through optimized prompting strategies, model selection, and contextual understanding.
Presented at the BayLearn 2025 conference in Santa Clara, California.
Authors: Sicong Huang, Jincheng He, Shiyuan Huang, Karthik Raja Anandan, Arkajyoti Chakraborty, and Ian Lane