Half-context language models

Authors

  • Hinrich Schuetze
  • Michael Walsh

Abstract

This article investigates the effects of different degrees of contextual granularity on language model performance. It presents a new language model that combines clustering and half-contextualisation, a novel representation of contexts. Half-contextualisation is based on the half-context hypothesis, which states that the distributional characteristics of a word or bigram are best represented by treating its left and right context distributions separately, and that only directionally relevant distributional information should be employed. Clustering is achieved using a new clustering algorithm for class-based language models that compares favourably to the exchange algorithm. When interpolated with a Kneser-Ney model, half-context models are shown to achieve better perplexity than commonly used interpolated n-gram models and traditional class-based approaches. A novel, fine-grained, context-specific analysis highlights the contexts in which the model performs well and those that are better treated by existing non-class-based models.
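
To make the half-context idea concrete, the sketch below builds separate left-context and right-context count distributions for each word in a token sequence. This is only an illustration of the representation described in the abstract, not the authors' implementation; the function name half_context_profiles and the toy corpus are assumptions for the example.

    # Minimal sketch (assumption, not the paper's code): represent each
    # word by two separate distributions, one over the words appearing
    # immediately to its left and one over the words to its right.
    from collections import Counter, defaultdict

    def half_context_profiles(tokens):
        """Return per-word left- and right-context count distributions."""
        left = defaultdict(Counter)   # left[w][c]: count of c directly left of w
        right = defaultdict(Counter)  # right[w][c]: count of c directly right of w
        for i, w in enumerate(tokens):
            if i > 0:
                left[w][tokens[i - 1]] += 1
            if i < len(tokens) - 1:
                right[w][tokens[i + 1]] += 1
        return left, right

    tokens = "the cat sat on the mat".split()
    left, right = half_context_profiles(tokens)
    print(left["the"])   # Counter({'on': 1})             -- left neighbours of "the"
    print(right["the"])  # Counter({'cat': 1, 'mat': 1})  -- right neighbours of "the"

Keeping the two distributions separate, rather than pooling them into a single context vector, is what allows a model to use only the directionally relevant half of the context, per the half-context hypothesis.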

Published

2024-12-05

Section

Short paper