Context, Models and Prompt Optimization for Automated Hallucination Detection in LLM Output

Abstract

This work explores methods for automatically detecting hallucinations in large language model outputs through optimized prompting strategies, model selection, and contextual understanding.
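One common way to frame prompt-based hallucination detection is as an LLM-as-judge task: the judge model receives the source context and a candidate output, and is asked whether the output is grounded in that context. The sketch below shows only the prompt-construction step; the function name and prompt wording are illustrative assumptions, not the authors' actual method.

```python
def build_detection_prompt(context: str, answer: str) -> str:
    """Compose a verification prompt for a judge model.

    Hypothetical sketch: asks whether `answer` is supported by
    `context`, expecting a SUPPORTED/HALLUCINATED verdict.
    """
    return (
        "You are a fact-checking assistant.\n\n"
        f"Context:\n{context}\n\n"
        f"Claim:\n{answer}\n\n"
        "Does the context support the claim? "
        "Reply with exactly one word: SUPPORTED or HALLUCINATED."
    )

# Example usage: the resulting string would be sent to a judge LLM.
prompt = build_detection_prompt(
    context="The Eiffel Tower is in Paris.",
    answer="The Eiffel Tower is located in Berlin.",
)
```

In practice, the judge's one-word verdict is parsed from its response; optimizing the prompt wording and choosing the judge model are the kinds of design decisions the talk's title refers to.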

Date
Oct 16, 2025
Location
Santa Clara University
Santa Clara, CA

Presented at the BayLearn 2025 conference in Santa Clara, California, this talk covers automated hallucination detection in large language model outputs.

Authors: Sicong Huang, Jincheng He, Shiyuan Huang, Karthik Raja Anandan, Arkajyoti Chakraborty, and Ian Lane

Shiyuan Huang
Ph.D. Student

I am a Ph.D. student in the Department of Computer Science and Engineering at UC Santa Cruz, advised by Dr. Leilani Gilpin and Dr. Ian Lane. My research focuses on the explainability of NLP models.