Evaluating Computational Language Models with Scaling Properties of Natural Language

Authors

  • Shuntaro Takahashi The University of Tokyo
  • Kumiko Tanaka-Ishii The University of Tokyo

Abstract

In this article, we evaluate computational models of natural language with respect to the universal statistical behaviors of natural language. Statistical mechanical analyses have revealed that natural language text is characterized by scaling properties, which quantify the global structure of the vocabulary population and the long memory of a text. We study whether five scaling properties (given by Zipf's law, Heaps' law, Ebeling's method, Taylor's law, and long-range correlation analysis) can serve to evaluate computational models. Specifically, we test $n$-gram language models, a probabilistic context-free grammar (PCFG), language models based on Simon/Pitman-Yor processes, neural language models, and generative adversarial networks (GANs) for text generation. Our analysis reveals that language models based on recurrent neural networks (RNNs) with a gating mechanism (i.e., long short-term memory, LSTM; gated recurrent units, GRUs; and quasi-recurrent neural networks, QRNNs) are the only computational models that can reproduce the long-memory behavior of natural language. Furthermore, through comparison with recently proposed model-based evaluation methods, we find that the exponent of Taylor's law is a good indicator of model quality.
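
As a rough illustration of one of these analyses, the following Python sketch estimates a Taylor's law exponent from a tokenized text. The window size, the plain least-squares fit in log-log space, and the function name taylor_exponent are illustrative assumptions, not the exact procedure used in the paper.

```python
import math
from collections import Counter

def taylor_exponent(words, window=1000):
    """Estimate a Taylor's law exponent for a tokenized text (illustrative sketch).

    Splits the text into non-overlapping windows of `window` tokens,
    computes the mean and standard deviation of each word type's
    per-window count, and fits sigma ~ mu**alpha by ordinary least
    squares in log-log space, returning the slope alpha.
    """
    segments = [Counter(words[i:i + window])
                for i in range(0, len(words) - window + 1, window)]
    if len(segments) < 2:
        raise ValueError("text too short for the chosen window size")
    n = len(segments)

    xs, ys = [], []
    for w in set().union(*segments):
        counts = [seg[w] for seg in segments]
        mu = sum(counts) / n
        sigma = math.sqrt(sum((c - mu) ** 2 for c in counts) / n)
        if sigma > 0:  # skip words whose counts are constant (log(0) undefined)
            xs.append(math.log(mu))
            ys.append(math.log(sigma))

    # least-squares slope of log(sigma) against log(mu)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

For an i.i.d. word sequence the exponent is 0.5 (a word's count variance grows linearly with its mean), so values above 0.5 indicate the clustered, long-memory word occurrences that the abstract attributes to natural language text.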

Author Biographies

  • Shuntaro Takahashi, The University of Tokyo
Ph.D. student at the Department of Advanced Interdisciplinary Studies, Graduate School of Engineering
  • Kumiko Tanaka-Ishii, The University of Tokyo
    Professor at Research Center for Advanced Science and Technology

Published

2024-12-05

Issue

Section

Long paper