Representing Meaning with a Combination of Logical and Distributional Models

Authors

  • I. Beltagy, The University of Texas at Austin
  • Stephen Roller
  • Pengxiang Cheng
  • Katrin Erk
  • Raymond J. Mooney

Abstract

NLP tasks differ in the semantic information they require, and at this time no single semantic representation fulfills all requirements. Logic-based representations characterize sentence structure, but do not capture the graded aspect of meaning. Distributional models give graded similarity ratings for words and phrases, but do not capture sentence structure in the same detail as logic-based approaches. So it has been argued that the two are complementary.

We adopt a hybrid approach that combines logical and distributional semantics using probabilistic logic, specifically Markov Logic Networks (MLNs). In this paper, we focus on the three components of a practical system: 1) Parsing and task representation focuses on representing the input problems in probabilistic logic. 2) Knowledge base construction creates weighted inference rules by integrating distributional information with other sources. 3) Probabilistic inference involves solving the resulting MLN inference problems efficiently. To evaluate our approach, we use the task of textual entailment (RTE), which can utilize the strengths of both logic-based and distributional representations. In particular, we focus on the SICK dataset, where we achieve state-of-the-art results. We also release a lexical entailment dataset of 10,213 rules extracted from the SICK dataset, which is a valuable resource for evaluating lexical entailment systems.
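To make the MLN-based formulation concrete, the sketch below is a toy illustration (not the authors' actual system or its inference engine): a single RTE pair is encoded as hard evidence for the premise, weighted implication rules standing in for distributional knowledge, and a conjunctive hypothesis query. The predicate names, constants, weights, and the premise/hypothesis pair are invented for the example, and the probability is computed by brute-force enumeration of worlds, which only works at this tiny scale.

```python
# Toy MLN-style inference for one RTE pair, solved by enumerating possible worlds.
# All predicates, constants, and weights here are illustrative assumptions.
from itertools import product
from math import exp

# Ground atoms for the premise "A man is cooking" (constants: A = agent, F = food).
atoms = ["man(A)", "person(A)", "cook(A,F)", "prepare(A,F)", "food(F)"]

# Hard evidence: the logical form of the premise must hold in every world.
evidence = {"man(A)": True, "cook(A,F)": True}

# Soft, weighted inference rules (the kind a distributional knowledge base might supply).
# Each entry is (antecedent atom, consequent atom, weight): worlds violating a rule
# lose exp(weight) relative mass.
rules = [
    ("man(A)", "person(A)", 2.1),
    ("cook(A,F)", "prepare(A,F)", 1.5),
    ("cook(A,F)", "food(F)", 1.0),
]

# Hypothesis "A person is preparing food" as a conjunction of ground atoms.
hypothesis = ["person(A)", "prepare(A,F)", "food(F)"]

def world_weight(world):
    """MLN log-linear score: exp(sum of weights of satisfied rule groundings)."""
    score = 0.0
    for antecedent, consequent, w in rules:
        if (not world[antecedent]) or world[consequent]:  # implication satisfied
            score += w
    return exp(score)

numerator = denominator = 0.0
for values in product([True, False], repeat=len(atoms)):
    world = dict(zip(atoms, values))
    if any(world[a] != v for a, v in evidence.items()):
        continue  # world contradicts the premise (hard constraint)
    weight = world_weight(world)
    denominator += weight
    if all(world[a] for a in hypothesis):
        numerator += weight

print(f"P(hypothesis | premise) = {numerator / denominator:.3f}")
```

With these invented weights the query comes out around 0.53; raising the rule weights (i.e., stronger lexical-entailment evidence) pushes the entailment probability toward 1. A real system replaces the brute-force loop with an approximate MLN inference engine, since the number of worlds grows exponentially with the number of ground atoms.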

Published

2024-12-05

Section

Special Issue: Formal Distributional Semantics