The (Un)Suitability of Automatic Evaluation Metrics for Text Simplification

Authors

  • Fernando Alva-Manchego University of Sheffield
  • Carolina Scarton University of Sheffield
  • Lucia Specia Imperial College London

Abstract

In order to simplify sentences, several rewriting operations can be performed, such as replacing complex words with simpler synonyms, deleting unnecessary information, and splitting long sentences. Despite this multi-operation nature, evaluation of automatic simplification systems relies on metrics that moderately correlate with human judgements on the simplicity achieved by executing specific operations (e.g. simplicity gain based on lexical replacements). In this article, we investigate how well existing metrics can assess simplifications where multiple operations may have been applied and which, therefore, require more general simplicity judgements. To that end, we first collect a new and more reliable dataset for evaluating the correlation of metrics and human judgements of overall simplicity. Second, we conduct the first meta-evaluation of automatic metrics in Sentence Simplification, using our new dataset (and other existing data) to analyse how the correlation between metrics' scores and human judgements varies across three dimensions: the perceived simplicity level, the system type, and the set of references used for computation. We show that these three aspects affect the correlations and, in particular, highlight the limitations of commonly used operation-specific metrics. Finally, based on our findings, we propose a set of recommendations for automatic evaluation of multi-operation simplification, suggesting which metrics to compute and how to interpret their scores.
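As a rough illustration of the kind of meta-evaluation described above, the sketch below correlates automatic metric scores with human simplicity judgements for a set of system outputs. The toy data and variable names are assumptions for illustration only, not the paper's actual setup or code.

    # Minimal sketch (not the authors' code): correlating an automatic
    # metric's per-sentence scores with human ratings of overall simplicity.
    from scipy.stats import pearsonr, spearmanr

    # Hypothetical values: one automatic metric score per system output
    # (e.g. SARI) and the corresponding average human simplicity rating.
    metric_scores = [38.2, 41.5, 29.7, 45.0, 33.1, 40.8]
    human_ratings = [3.0, 3.5, 2.0, 4.0, 2.5, 3.5]

    pearson_r, _ = pearsonr(metric_scores, human_ratings)
    spearman_rho, _ = spearmanr(metric_scores, human_ratings)

    print(f"Pearson r = {pearson_r:.3f}")
    print(f"Spearman rho = {spearman_rho:.3f}")

In a meta-evaluation such as the one described in the abstract, correlations like these would be computed separately for each simplicity level, system type, and reference set to see how the metric's reliability varies.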

Author Biographies

  • Fernando Alva-Manchego, University of Sheffield
    Research Associate
  • Carolina Scarton, University of Sheffield
    Academic Fellow
  • Lucia Specia, Imperial College London
    Professor

Published

2024-11-22