Measuring Attribution in Natural Language Generation Models

Authors

  • Hannah Rashkin Google Research
  • Vitaly Nikolaev Google Research
  • Matthew Lamm Google Research
  • Lora Aroyo Google Research
  • Michael Collins Google Research
  • Dipanjan Das Google Research
  • Slav Petrov Google Research
  • Gaurav Singh Tomar Google Research
  • Iulia Turc
  • David Reitter Google Research

Abstract

With recent improvements in natural language generation (NLG) models for various applications, it has become imperative to be able to identify and evaluate whether NLG output states only verifiable information about the external world. In this work, we present a new evaluation framework entitled Attributable to Identified Sources (AIS) for assessing the output of natural language generation models, when such output pertains to the external world. We first define AIS and introduce a two-stage annotation pipeline that allows annotators to evaluate model output appropriately according to AIS guidelines. We empirically validate this approach on generation datasets spanning three tasks (two conversational QA datasets, a summarization dataset, and a table-to-text dataset) via human evaluation studies, which suggest that AIS can serve as a common framework for measuring whether model-generated statements are supported by underlying sources. We release the guidelines for these human evaluation studies.

Published

2024-11-14