
Microsoft Unveils NTREX, a New Dataset for Machine Translation

NTREX aims to bridge the language divide with 128 languages, each with nearly 2,000 sentences.

Microsoft Research announced the launch of NTREX, the second-largest human-translated parallel test set, covering 128 languages with nearly 2,000 sentences each, translated in document context and without post-editing.

NTREX, short for “News Text References of English into X Languages”, expands multilingual test coverage with 123 documents (1,997 sentences, 42k words) translated from English into 128 target languages. The test data is based on the WMT19 test set and is compatible with SacreBLEU.
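Since the test data is SacreBLEU-compatible, here is a minimal sketch of how one might score a system’s English-to-German output against a line-aligned NTREX-style reference file with SacreBLEU’s Python API. The file names are hypothetical placeholders; only the BLEU and chrF calls reflect the actual library.

```python
# Minimal sketch: scoring English->German system output against a line-aligned
# NTREX-style reference file with SacreBLEU. The file names below are
# hypothetical placeholders, not the dataset's actual file names.
from sacrebleu.metrics import BLEU, CHRF

with open("hypotheses.deu.txt", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]

with open("references.deu.txt", encoding="utf-8") as f:
    references = [line.strip() for line in f]

bleu = BLEU()
chrf = CHRF()

# corpus_score takes the hypotheses and a list of reference streams.
print(bleu.corpus_score(hypotheses, [references]))
print(chrf.corpus_score(hypotheses, [references]))
```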

Read the full paper here

It can be used to evaluate English-sourced translation models but not in the reverse direction. The test set release also introduces another benchmark for evaluating massively multilingual machine translation research.

To produce this data set, the team sent the original English WMT19 test set to professional human translators. This work started after the release of the WMT19 test data and has continued in parallel with the work on new translation models since then. Translators could access the full document context. 

The team scored the NTREX-128 data set with COMET-src, a source-based, reference-free quality-estimation variant of the COMET neural framework for MT evaluation, comparing scores in the authentic translation direction (out of English) against scores obtained in the reverse direction. They also investigated how COMET-src behaves for languages it has not been trained on.
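For readers who want to try this kind of source-based scoring themselves, here is a minimal sketch using the open-source unbabel-comet package. The checkpoint name, the toy sentence pairs, and the CPU-only setting are assumptions for illustration, not details taken from the paper; some checkpoints are gated and may require Hugging Face Hub access.

```python
# Minimal sketch: reference-free (source-based) quality estimation with the
# unbabel-comet package, in the spirit of the COMET-src comparison above.
# The checkpoint name and the toy sentence pairs are assumptions, not taken
# from the NTREX paper; some checkpoints are gated on the Hugging Face Hub.
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt22-cometkiwi-da")  # assumed QE checkpoint
model = load_from_checkpoint(model_path)

# Reference-free models score (source, machine translation) pairs directly.
data = [
    {"src": "Markets closed lower on Friday.",
     "mt": "Die Maerkte schlossen am Freitag niedriger."},
    {"src": "The weather was unusually warm this week.",
     "mt": "Das Wetter war diese Woche ungewoehnlich warm."},
]

output = model.predict(data, batch_size=8, gpus=0)  # gpus=0 runs on CPU
print(output.scores)        # per-segment scores
print(output.system_score)  # corpus-level average
```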

Microsoft Research revealed the following results: 

  • Using COMET-src for test quality estimation is feasible but constrained due to the non-comparability of score ranges across language pairings. 
  • For a significant subset of languages, COMET-src scores on translationese input are better than those on the corresponding authentic source data. 
  • Although COMET-src relative comparisons are valid across all language pairings, there is a subset of languages for which the scores seem faulty.  

The data set consists of the following set of 128 languages: Afrikaans, Albanian, Amharic, Arabic, Azerbaijani, Bangla, Bashkir, Bosnian, Bulgarian, Burmese, Cantonese, Catalan, Central Kurdish, Chinese, Chuvash, Croatian, Czech, Danish, Dari, Divehi, Dutch, English, Estonian, Faroese, Fijian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Indonesian, Inuinnaqtun, Inuktitut, Irish, isiZulu, Italian, Japanese, Kannada, Kazakh, Khmer, Kiswahili, Korean, Kurdish, Kyrgyz, Lao, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Maya (Yucatán), Mongolian, Nepali, Norwegian, Odia, Otomi (Querétaro), Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Serbian, Slovak, Slovenian, Somali, Spanish, Swedish, Tahitian, Tajik, Tajiki, Tamil, Tatar, Telugu, Thai, Tibetan, Tigrinya, Tongan, Turkish, Turkmen, Ukrainian, Upper Sorbian, Urdu, Uyghur, Uzbek, Vietnamese, Welsh. 

The total count of language names is less than 128, as there are some languages for which multiple scripts or variants are supported.

The number of supported languages in three other multilingual test data sets, TICO-19, FLORES-101, and FLORES-200, is 37, 101, and 200, respectively. 

The “Translation Initiative for COVID-19” released the TICO-19 data set, a collaborative endeavour between several academic and industrial partners. The benchmark consists of 30 documents (3,071 sentences, 69.7k words) translated from English into 37 target languages.

Meta also unveiled its open-source AI model, ‘No Language Left Behind’ (NLLB-200), capable of providing high-quality translations across 200 languages, validated through extensive evaluations. Meta developed the FLORES-101 data set, with 3,001 sentences in 842 documents translated from English into 101 target languages. FLORES-200 expands FLORES-101 to 200 target languages and can be used to assess NLLB-200’s performance. The same English source data used for FLORES-101 was used to create FLORES-200.
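As a point of comparison with NTREX’s line-aligned format, here is a minimal sketch of lining up FLORES-200 devtest sentences for an English-French pair, assuming the public FLORES-200 archive has already been downloaded and extracted locally. The directory and file names below follow the release’s one-sentence-per-line layout but are assumptions for illustration, not details from the NTREX paper.

```python
# Minimal sketch: reading line-aligned FLORES-200 devtest files for an
# English-French pair. The local path and file names are assumptions based on
# the public release layout (one sentence per line, same order in every file).
from pathlib import Path

flores_dir = Path("flores200_dataset/devtest")  # hypothetical local path

eng = (flores_dir / "eng_Latn.devtest").read_text(encoding="utf-8").splitlines()
fra = (flores_dir / "fra_Latn.devtest").read_text(encoding="utf-8").splitlines()

assert len(eng) == len(fra)  # parallel files are line-aligned
for src, tgt in list(zip(eng, fra))[:3]:
    print(src)
    print(tgt)
    print("---")
```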

PS: The story was written using a keyboard.

Shritama Saha

Shritama (she/her) is a technology journalist at AIM who is passionate about exploring the influence of AI on different domains, including fashion, healthcare and banking.