Rethinking the Exploitation of Monolingual Data for Low-Resource Neural Machine Translation

Authors

  • Jianhui Pang University of Macau http://orcid.org/0000-0001-8093-867X
  • Baosong Yang Alibaba Group
  • Derek Fai Wong University of Macau
  • Yu Wan University of Macau
  • Dayiheng Liu Alibaba Group
  • Lidia Sam Chao University of Macau
  • Jun Xie Alibaba Group

Abstract

The utilization of monolingual data has been shown to be a promising strategy for addressing low-resource machine translation problems. Previous studies have demonstrated the effectiveness of techniques such as Back-Translation and self-supervised objectives, including Masked Language Modeling, Causal Language Modeling, and Denoising Autoencoding, in improving the performance of machine translation models. However, how these methods contribute to the success of machine translation tasks, and how they can be effectively combined, remains under-researched. In this study, we carry out a systematic investigation of the effects of these techniques on linguistic properties through probing tasks covering source language comprehension, bilingual word alignment, and translation fluency. We further evaluate the impact of Pre-Training, Back-Translation, and Multi-Task Learning on bitexts of varying sizes. Our findings inform the design of more effective pipelines for leveraging monolingual data in extremely low-resource and low-resource machine translation tasks. Our experiments show consistent performance gains across seven translation directions, lending further support to our conclusions about the role of monolingual data in machine translation.
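As a minimal illustration of the Back-Translation technique named above, the sketch below pairs monolingual target-side sentences with synthetic sources produced by a reverse (target-to-source) model, yielding synthetic bitext for forward training. The `reverse_translate` function here is a hypothetical placeholder, not the paper's actual model; a real pipeline would decode with a trained reverse NMT system.

```python
def reverse_translate(target_sentence: str) -> str:
    # Hypothetical stand-in for a trained target->source NMT model;
    # a real system would run beam-search decoding here.
    return " ".join(reversed(target_sentence.split()))

def back_translate(monolingual_target: list[str]) -> list[tuple[str, str]]:
    """Pair each monolingual target sentence with a synthetic source,
    producing (synthetic_source, target) pairs for forward training."""
    return [(reverse_translate(t), t) for t in monolingual_target]

# Monolingual target-side data becomes synthetic parallel data.
synthetic_bitext = back_translate(["o gato dorme", "eu gosto de ler"])
```

The forward model is then trained on the union of genuine bitext and `synthetic_bitext`, which is how Back-Translation converts monolingual data into a supervised training signal.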


Published

2024-09-02