On Learning Interpreted Languages with Recurrent Models

Authors

Abstract

Can recurrent neural nets, inspired by the analogy with human processing of sequential data, learn to understand language? We construct simplified datasets reflecting core properties of natural language as modeled in formal syntax and semantics: recursive syntactic structure and compositionality. We find that LSTM networks generalise to compositional interpretation, but only in the most favourable learning settings, with a well-paced curriculum, extensive training data, and left-to-right (but not right-to-left) composition.
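
As a concrete illustration of the kind of setup the abstract describes, the sketch below constructs a toy interpreted language with recursive structure and a compositional meaning function, and defines an LSTM that maps token sequences to their denotations. It is a minimal sketch, not the authors' dataset or model: the entities, relations, vocabulary, and hyperparameters are all assumptions made for the example, and it requires PyTorch.

```python
# Minimal sketch (not the paper's actual setup): a toy interpreted language with
# recursive structure and compositional meaning, plus an LSTM learner.
# All names (relations, entities, dimensions) are illustrative assumptions.
import random
import torch
import torch.nn as nn

ENTITIES = list(range(5))                       # domain of individuals 0..4
RELATIONS = {                                    # each relation word denotes a function on entities
    "friend": lambda x: (x + 1) % 5,
    "enemy":  lambda x: (x + 2) % 5,
    "boss":   lambda x: (x + 3) % 5,
}
NAMES = {f"e{i}": i for i in ENTITIES}           # proper names denote entities

def sample_expression(depth):
    """Build a recursive expression such as ['friend', 'enemy', 'e3'] and
    compute its meaning by composing the relation functions over the name."""
    name = random.choice(list(NAMES))
    words = [random.choice(list(RELATIONS)) for _ in range(depth)] + [name]
    meaning = NAMES[name]
    for w in reversed(words[:-1]):               # apply relations innermost-first (nearest the name)
        meaning = RELATIONS[w](meaning)
    return words, meaning

VOCAB = {w: i for i, w in enumerate(list(RELATIONS) + list(NAMES))}

class LSTMInterpreter(nn.Module):
    """Reads the token sequence and predicts the entity it denotes."""
    def __init__(self, vocab_size, n_entities, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_entities)

    def forward(self, tokens):                   # tokens: (batch, seq_len) LongTensor
        hidden, _ = self.lstm(self.emb(tokens))
        return self.out(hidden[:, -1])           # classify from the final hidden state

# Example: one training pair at recursion depth 2
words, meaning = sample_expression(depth=2)
x = torch.tensor([[VOCAB[w] for w in words]])
model = LSTMInterpreter(len(VOCAB), len(ENTITIES))
logits = model(x)                                # shape (1, 5); train with cross-entropy against `meaning`
```

A curriculum of the kind the abstract mentions could be approximated by sampling shallow expressions first and gradually increasing `depth` as training progresses; that scheduling choice is likewise an assumption of this sketch.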

Published

2024-11-20

Issue

Section

Squibs and Discussions