This research investigates whether large language models can effectively explain their own decision-making processes and outputs, analyzing the quality and reliability of LLM-generated self-explanations.
Presented at the BayLearn 2024 conference, held at Apple headquarters in Cupertino, California.
Authors: Shiyuan Huang, Siddarth Mamidanna, Shreedhar Jangam, Yilun Zhou, Leilani H. Gilpin