The Role of Transfer Learning in Multilingual Neural Machine Translation
Abstract
Multilingual Neural Machine Translation (MNMT) benefits significantly from transfer learning, which leverages models pre-trained on high-resource languages to improve translation quality for low-resource languages. This technique enables cross-lingual knowledge sharing, improving translation accuracy and fluency while addressing data scarcity. By reusing pre-trained linguistic representations, transfer learning reduces the amount of training data and computation required. This paper surveys transfer learning approaches, such as zero-shot and few-shot translation, and examines their impact on translation quality using metrics such as BLEU. Despite challenges like negative transfer, transfer learning shows immense potential for optimizing MNMT systems and fostering more inclusive multilingual communication. The findings underscore the transformative role of transfer learning in MNMT, suggesting pathways for future research on model architectures, training strategies, and language pairings.
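The abstract evaluates translation quality with BLEU. As background, the sketch below is a simplified, self-contained illustration of how a sentence-level BLEU score is computed (clipped n-gram precision up to 4-grams, geometric mean, brevity penalty, with add-one smoothing); it is a pedagogical sketch, not the official sacreBLEU implementation used in published evaluations.

```python
# Minimal sentence-level BLEU sketch: clipped n-gram precisions (n = 1..4),
# geometric mean, and a brevity penalty. Add-one smoothing keeps a single
# missing n-gram order from zeroing the whole score. Pedagogical only.
import math
from collections import Counter

def ngrams(tokens, n):
    """Count all n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified BLEU for one candidate against one reference."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())      # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append((overlap + 1) / (total + 1))  # add-one smoothing
    # Brevity penalty: punish candidates shorter than the reference
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

hyp = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
print(round(bleu(hyp, ref), 3))
```

A perfect match scores 1.0, and scores fall as n-gram overlap with the reference drops, which is what makes BLEU a convenient proxy for comparing MNMT systems across language pairs.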
Copyright (c) 2024 MZ Computing Journal
This work is licensed under a Creative Commons Attribution 4.0 International License.