The Role of Contextual Embeddings in Improving Zero-Shot Translation Quality Estimation

Authors

  • Anita Mishra, Department of Artificial Intelligence, Tribhuvan University, Nepal

Abstract

Zero-shot translation quality estimation (QE) aims to evaluate the quality of translations without reference translations, a critical task for machine translation (MT) systems, particularly in low-resource settings. Contextual embeddings, generated by advanced language models such as BERT, GPT, and their variants, have shown remarkable performance in various natural language processing (NLP) tasks. This paper explores the role of contextual embeddings in enhancing zero-shot QE by leveraging the rich semantic information encapsulated in these embeddings. We present a comprehensive analysis of different contextual embedding models, their integration into QE frameworks, and their impact on QE performance. Our findings indicate that contextual embeddings significantly improve zero-shot QE accuracy, providing a robust foundation for future research in this domain.
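One common way contextual embeddings feed into reference-free QE is to encode the source sentence and its machine translation with a multilingual encoder and score their cross-lingual semantic similarity. The sketch below illustrates that idea with toy pooled-embedding vectors and cosine similarity; the encoder choice (e.g., XLM-R or LaBSE), the pooling step, and the similarity metric are illustrative assumptions, not the paper's specific method.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_qe_score(src_emb: np.ndarray, mt_emb: np.ndarray) -> float:
    """Reference-free QE proxy: cross-lingual semantic similarity
    between the source sentence and its machine translation.
    In practice the vectors would come from a multilingual contextual
    encoder (assumption for illustration)."""
    return cosine_similarity(src_emb, mt_emb)

# Toy vectors standing in for pooled contextual embeddings.
src = np.array([0.2, 0.7, 0.1])
good_mt = np.array([0.25, 0.65, 0.12])  # close to the source meaning
bad_mt = np.array([0.9, -0.3, 0.4])     # semantically divergent

print(zero_shot_qe_score(src, good_mt) > zero_shot_qe_score(src, bad_mt))  # True
```

A higher score suggests the translation preserves the source semantics, which is exactly the signal a zero-shot QE system needs when no reference translation is available.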

Published

2024-08-02