Shared Task: Lifelong Learning for Machine Translation

This task aims to develop autonomous, lifelong-learning machine translation systems. The initial training process is standard: the system is free to learn its model parameters from all provided bitexts, which correspond to the WMT News Task data along with their year tags. The system may then update its models using the incoming stream of source text (unsupervised adaptation), producing a better model for the next document. In addition, a simulated active-learning protocol allows the model to obtain help from a (simulated) human domain expert; this is simulated by providing the reference translation for a training example. See the simulated active learning section for details. The evaluation differs from standard evaluations in that it is performed across time and takes into account the cost of the simulated active learning. For this year the language pairs are:
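The translate/adapt/query loop described above can be sketched as follows. This is only an illustration: all names here (LifelongMT, process_stream, the oracle callback, the confidence threshold) are hypothetical, and the real interface is the one defined by the BEAT platform.

```python
class LifelongMT:
    """Toy lifelong MT system: translates a stream, adapts on the fly,
    and may query a simulated expert when its confidence is low."""

    def __init__(self, confidence_threshold=0.5):
        self.threshold = confidence_threshold
        self.al_cost = 0  # cumulative active-learning cost (source words)

    def confidence(self, source):
        # Placeholder: a real system would score its own hypothesis.
        return 1.0

    def translate(self, source):
        # Placeholder: a real system would decode with its current model.
        return source

    def adapt(self, source, reference=None):
        # Unsupervised update (reference=None) or supervised update.
        pass

    def process_stream(self, documents, oracle):
        """Process documents in order; `oracle` returns a reference
        translation for a source sentence (simulated expert)."""
        outputs = []
        for doc in documents:
            for sentence in doc:
                if self.confidence(sentence) < self.threshold:
                    # Simulated active learning: ask for the reference,
                    # paying a cost proportional to the source length.
                    reference = oracle(sentence)
                    self.al_cost += len(sentence.split())
                    self.adapt(sentence, reference)
                    outputs.append(reference)
                else:
                    outputs.append(self.translate(sentence))
                    self.adapt(sentence)  # unsupervised adaptation
        return outputs
```

The key point of the protocol is that adaptation happens during the test period itself, document by document, and every expert query adds to a cost that the evaluation later charges against the system.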

We provide parallel corpora for all languages as training data. To ensure reproducibility and a fair comparison of the proposed methods, systems must run on the BEAT platform. To ease the integration of participants' systems, a Python environment with a baseline system is provided; see details in the BEAT section. The lifelong learning machine translation evaluation is based on the series of WMT News translation tasks, on top of which additional data have been produced.

CHECK THE GITHUB REPOSITORY

JOIN THE SLACK CHANNEL ALLIES_LLMT_WMT

GOALS

The goals of the shared task on lifelong learning for MT are:

We hope that both beginners and established research groups will participate in this task.

IMPORTANT DATES

Evaluation period: July 19-29, 2021
Deadline for final system submission: July 29, 2021 at 23:59 (Anywhere on Earth)
System description: August 5, 2021
Camera-ready version due: September 15, 2021
Conference in Punta Cana: November 10-11, 2021

SIMULATED ACTIVE LEARNING

The autonomous system will be able to ask questions of a domain expert when its confidence in the proposed output is low. Only one type of question can be asked, namely: "What is the translation of sentence S?". Each active-learning request has a cost proportional to the number of source words in the query. The amount of active-learning data required to reach a given performance level will be taken into account in the lifelong evaluation metric (see the evaluation section).
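The cost rule above is simple enough to state in a few lines. The sketch below assumes a cost of one unit per source word; the actual weighting is defined by the task organisers, and the proportionality constant here is illustrative only.

```python
def active_learning_cost(queried_sentences, cost_per_word=1.0):
    """Cost of simulated active learning: proportional to the total
    number of source words across all queried sentences.

    `cost_per_word` is an illustrative constant, not the official one.
    """
    return cost_per_word * sum(len(src.split()) for src in queried_sentences)
```

For example, querying the references of "the cat sat" and "on the mat today" would cost 7 units at one unit per word, so a system is rewarded for asking only when a query is likely to pay off in later documents.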

DATA

LICENSING OF DATA

The data released for the WMT20 translation task can be freely used for research purposes. We ask only that you cite the WMT20 shared task overview paper and respect any additional citation requirements on the individual data sets. For other uses of the data, you should consult the original owners of the data sets.

TRAINING DATA

We aim to use publicly available sources of data wherever possible. Our main sources of training data are the Europarl corpus, the UN corpus, the news-commentary corpus and the ParaCrawl corpus.

DOWNLOAD

SUBMISSION

What to submit? A file containing the translations for the test period, along with the log file of the process (the output of BEAT).

EVALUATION

Evaluation will be done automatically.

The metric will be the BLEU score measured across time, including a penalisation for active learning.
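As a rough illustration of such a metric, the sketch below averages per-document BLEU scores over the test period and subtracts a penalty proportional to the accumulated active-learning cost. The penalty weight and the exact formula are assumptions for illustration; the official evaluation script defines the real metric.

```python
def lifelong_score(bleu_per_document, al_cost, penalty_weight=0.001):
    """Illustrative lifelong metric (NOT the official definition):
    mean BLEU over all documents in the test period, reduced by a
    penalty proportional to the total active-learning cost."""
    avg_bleu = sum(bleu_per_document) / len(bleu_per_document)
    return avg_bleu - penalty_weight * al_cost
```

Under this toy formula, a system scoring 30 and 40 BLEU on two documents while spending 1000 cost units of active learning would obtain 35 - 1 = 34. The intent is the same as the official metric: quality gained through expert queries is only worthwhile if it outweighs their cost.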

SYSTEM DESCRIPTION

You will be asked to describe your submission in a system description paper presenting the main aspects of your approach.


ACKNOWLEDGEMENTS

This task would not have been possible without funding from the ALLIES project, funded by the EU CHIST-ERA programme.