Selectional Preferences for Semantic Role Classification

Authors

  • Beñat Zapirain, University of the Basque Country
  • Eneko Agirre, University of the Basque Country
  • Lluís Màrquez, Universitat Politècnica de Catalunya
  • Mihai Surdeanu, Stanford University

Abstract

This paper focuses on a well-known open issue in Semantic Role Classification (SRC) research: the limited influence and sparseness of lexical features. We mitigate this problem using models that integrate automatically learned selectional preferences (SP). We explore a range of SP models based on WordNet and on distributional similarity. Furthermore, we demonstrate that the SRC task is better modeled by SP models centered on both verbs and prepositions, rather than on verbs alone. Our experiments with SP-based models in isolation indicate that they outperform a lexical baseline by 20 F1 points in domain and by almost 40 F1 points out of domain. Finally, we show that a state-of-the-art SRC system extended with features based on selectional preferences performs significantly better, both in domain (17% error reduction) and out of domain (13% error reduction). Our post-hoc error analysis indicates that the SP-based features help mostly in situations where syntactic information is either incorrect or insufficient to disambiguate the correct role.
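
The abstract refers to SP models learned from distributional similarity. As a rough, hypothetical illustration only (not the authors' implementation; the vectors, predicates, and role labels below are made up), the following Python sketch scores how well a candidate argument head fits a predicate-role pair by its average cosine similarity to the heads observed with that pair in training. Other instantiations weight the seen heads by frequency or take the maximum similarity instead of the average.

    # Hypothetical sketch of a distributional-similarity selectional preference score.
    from collections import defaultdict
    import math

    def cosine(u, v):
        """Cosine similarity between two sparse vectors (dicts: context -> weight)."""
        dot = sum(w * v.get(c, 0.0) for c, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def sp_score(candidate_head, predicate, role, seen_heads, vectors):
        """SP of (predicate, role) for candidate_head: average similarity to the
        heads seen filling that predicate-role pair in the training data."""
        heads = seen_heads.get((predicate, role), [])
        if not heads or candidate_head not in vectors:
            return 0.0
        sims = [cosine(vectors[candidate_head], vectors[h]) for h in heads if h in vectors]
        return sum(sims) / len(sims) if sims else 0.0

    # Toy usage with invented co-occurrence vectors and training heads.
    vectors = {
        "pasta":  {"eat": 2.0, "cook": 1.5, "fork": 1.0},
        "soup":   {"eat": 1.5, "cook": 2.0, "spoon": 1.0},
        "hammer": {"hit": 2.0, "nail": 1.5, "tool": 1.0},
    }
    seen_heads = defaultdict(list, {("eat", "Arg1"): ["pasta"]})

    print(sp_score("soup", "eat", "Arg1", seen_heads, vectors))    # relatively high
    print(sp_score("hammer", "eat", "Arg1", seen_heads, vectors))  # near zero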

Author Biography

  • Mihai Surdeanu, Stanford University
    Computer Science, Senior Research Associate

Published

2024-12-05

Section

Short paper