The Pitfalls of Defining Hallucination

Authors

  • Kees van Deemter Utrecht University

Abstract

Despite impressive advances in Natural Language Generation (NLG) and Large Language Models (LLMs), researchers are still unclear about important aspects of NLG evaluation. To substantiate this claim, I examine current classifications of hallucination and omission in data-to-text NLG, and I propose a logic-based synthesis of these classifications. I conclude by highlighting some remaining limitations of all current thinking about hallucination and by discussing implications for LLMs.

Author Biography

  • Kees van Deemter, Utrecht University
I am a full professor at Utrecht University, Department of Information and Computing Sciences.

Published

2024-11-10