Improved Estimation of Entropy for Evaluation of Word Sense Induction
Abstract
Information-theoretic measures are among the most widely used techniques for evaluation of clustering methods, including word sense induction (WSI) systems. Such measures rely on
sample-based estimates of the entropy. However, the standard maximum likelihood estimates of
the entropy are heavily biased with the bias dependent on, among other things, the number of
clusters and the sample size. This makes the measures unreliable and unfair when the number
of clusters produced by different systems varies and the sample size is not exceedingly large. This
corresponds exactly to the setting of WSI evaluation, where a ground-truth number of senses
arguably does not exist and the standard evaluation scenarios use a small number of instances
of each word to compute the score. We describe more accurate entropy estimators and analyze
their performance both in simulations and in the evaluation of WSI systems.
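The bias of the maximum likelihood (plug-in) entropy estimator mentioned above can be illustrated with a small simulation. The sketch below is not taken from the paper: it uses a uniform distribution over K hypothetical senses and compares the plug-in estimate with the classical Miller-Madow correction, one standard bias-reduced estimator; the paper's own improved estimators are not reproduced here.

```python
import math
import random

def plugin_entropy(counts):
    """Maximum-likelihood (plug-in) entropy estimate in nats."""
    n = sum(counts)
    return -sum(c / n * math.log(c / n) for c in counts if c > 0)

def miller_madow_entropy(counts):
    """Miller-Madow bias-corrected estimate: add (m - 1) / (2n),
    where m is the number of symbols observed in the sample."""
    n = sum(counts)
    m = sum(1 for c in counts if c > 0)
    return plugin_entropy(counts) + (m - 1) / (2 * n)

# Hypothetical setup: true distribution uniform over K senses,
# small samples of size n, averaged over many trials.
random.seed(0)
K, n, trials = 20, 50, 2000
true_H = math.log(K)

ml_avg = mm_avg = 0.0
for _ in range(trials):
    counts = [0] * K
    for _ in range(n):
        counts[random.randrange(K)] += 1
    ml_avg += plugin_entropy(counts)
    mm_avg += miller_madow_entropy(counts)
ml_avg /= trials
mm_avg /= trials
```

On average the plug-in estimate falls below the true entropy log K, with the gap growing as K increases or n shrinks, while the Miller-Madow correction recovers much of the deficit; this is the dependence on the number of clusters and the sample size that the abstract describes.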