Shared Task: Machine Translation of News

The recurring translation task of the WMT workshops focuses on news text and (mainly) European language pairs. For this year the language pairs are:

  • Chinese-English
  • Czech-English
  • Estonian-English
  • Finnish-English
  • German-English
  • Kazakh-English
  • Russian-English
  • Turkish-English

We provide parallel corpora for all languages as training data, and additional resources for download.

NB: The Kazakh task did not take place this year.

GOALS

The goals of the shared translation task are:

  • to investigate the applicability of current MT techniques when translating into languages other than English,
  • to examine special challenges in translating between European languages, including word order differences and morphology,
  • to create publicly available corpora for machine translation training and evaluation,
  • to generate up-to-date performance numbers in order to provide a basis of comparison in future research,
  • to offer newcomers a smooth start with hands-on experience in state-of-the-art MT methods.

We hope that both beginners and established research groups will participate in this task.

IMPORTANT DATES

Release of training data for shared tasks (by)   January 31, 2018
Test data released                               May 14, 2018
Translation submission deadline                  May 21, 2018
Start of manual evaluation                       June 11, 2018
End of manual evaluation                         July 2, 2018

TASK DESCRIPTION

We provide training data for all eight language pairs, and a common framework. The task is to improve upon current translation methods. We encourage broad participation -- if you feel that your method is interesting but not state-of-the-art, then please participate in order to disseminate it and measure progress. Participants will use their systems to translate a test set of unseen sentences in the source language. Translation quality is measured by manual evaluation and various automatic evaluation metrics. Participants agree to contribute about eight hours of work per system submission to the manual evaluation.

You may participate in any or all of the eight language pairs. For all language pairs we will test translation in both directions. To have a common framework that allows for comparable results, and also to lower the barrier to entry, we provide a common training set, and a pre-processed version. You are not limited to this training set, and you are not limited to the training set provided for your target language pair.

If you use additional training data (not provided by the WMT18 organisers) or existing translation systems, you must flag that your system uses additional data. We will distinguish system submissions that used the provided training data (constrained) from submissions that used significant additional data resources. Note that basic linguistic tools such as taggers, parsers, or morphological analyzers are allowed in the constrained condition.

Your submission report should make clear in which ways your own methods and data differ from the standard task, which tools you used, and which training sets you used.

The following two aspects of the task are new for 2018.

Multilinguality

For 2018, we are particularly interested in systems which use data other than the parallel data provided for that pair. For example, you could consider approaches based on pivoting, multi-task learning, continued training, multilingual models or other techniques; in general, any use of a third language in any way places you in the multilingual sub-track. You are free to use these techniques in any language pair, but in particular we draw your attention to the following scenarios:

Unsupervised learning

Another sub-track is learning translation models from monolingual data only. For some existing approaches, have a look at the recent literature on unsupervised machine translation; unsupervised cross-lingual embedding mapping and lexicon induction can also be a good start (ready-made code is available), but don't let that limit your imagination.
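
To make the embedding-mapping idea concrete, here is a minimal sketch (names, dimensions and the toy data are purely illustrative, not part of any released baseline): source embeddings are mapped into the target space with an orthogonal Procrustes solution over a seed dictionary, and a lexicon is then induced by nearest neighbours. Fully unsupervised variants replace the seed dictionary with one induced, for example, adversarially or from identical strings.

    # Minimal sketch: cross-lingual embedding mapping + lexicon induction.
    # The seed dictionary, dimensions and data are purely illustrative.
    import numpy as np

    def procrustes(X, Y):
        # Orthogonal W minimising ||XW - Y||_F: W = U V^T, where U S V^T = SVD(X^T Y).
        U, _, Vt = np.linalg.svd(X.T @ Y)
        return U @ Vt

    def induce_lexicon(src_emb, tgt_emb, W, k=1):
        # Map source vectors, then take cosine nearest neighbours as translations.
        mapped = src_emb @ W
        mapped /= np.linalg.norm(mapped, axis=1, keepdims=True)
        tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
        sims = mapped @ tgt.T
        return np.argsort(-sims, axis=1)[:, :k]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 4))   # toy source-side seed embeddings
    Y = rng.normal(size=(5, 4))   # toy target-side seed embeddings
    W = procrustes(X, Y)
    print(induce_lexicon(X, Y, W))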

In 2018 the unsupervised task is limited to the constrained condition, so only the provided monolingual data is allowed; monolingual Europarl and News Commentary are also excluded, since they are largely parallel. The parallel data may be used neither directly to train the systems, nor indirectly to extract or bootstrap lexicons or other parameters. Again, you are free to test this on any language pair (or to go wild and do unsupervised multilingual translation), with the special highlights being:

  • Turkish-English -- since this language pair has very little allowed parallel training data,
  • Estonian-English -- since this pair has relatively little in-domain parallel data by NMT standards,
  • German-English -- for contrast, since this language pair has abundant parallel resources.
Additional Test Suites in News Translation Task

At no additional burden to News Translation Task participants (aside from having to translate much larger input data), we will collectively provide a deeper analysis of various qualities of the translations.

See the WMT18 Test Suites Google Document for more details.

Authors of additional test suites will be invited to report on their evaluation method and its results in a separate paper.

DATA

LICENSING OF DATA

The data released for the WMT18 news translation task can be freely used for research purposes. We ask only that you cite the WMT18 shared task overview paper and respect any additional citation requirements on the individual data sets. For other uses of the data, please consult the original owners of the data sets.

TRAINING DATA

The provided data is mainly taken from public data sources such as the Europarl corpus and the UN corpus. Additional training data is taken from the News Commentary corpus, which we re-extract every year.

New for 2018: the first release of the ParaCrawl corpus. This is a new crawled corpus for English to Czech, Estonian, Finnish, German and Russian. As this is the first release, it is potentially noisy, but we have observed BLEU score increases on older WMT test sets (over a shallow NMT baseline) when using the Czech (+0.6), Finnish (+2.5), Latvian (+0.9) and Romanian (+3.2) versions of ParaCrawl. For German, the BLEU score dropped by 1.0 (with the WMT data over-sampled 7 times). Your mileage may vary. You may also want to have a look at the corpus filtering task.
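
If you do use ParaCrawl, even simple heuristic filtering can help. The sketch below is illustrative only (the file names and thresholds are assumptions, not the official filtering baselines): it drops pairs with an empty side, overly long sentences, and implausible length ratios.

    # Minimal sketch: heuristic filtering of a noisy crawled parallel corpus.
    # File names and thresholds are illustrative, not official baselines.
    def keep_pair(src, tgt, max_len=100, max_ratio=2.0):
        s, t = src.split(), tgt.split()
        if not s or not t:
            return False                       # empty side
        if len(s) > max_len or len(t) > max_len:
            return False                       # overly long sentence
        ratio = max(len(s), len(t)) / min(len(s), len(t))
        return ratio <= max_ratio              # implausible length ratio

    with open("paracrawl.de") as f_src, open("paracrawl.en") as f_tgt, \
         open("filtered.de", "w") as o_src, open("filtered.en", "w") as o_tgt:
        for src, tgt in zip(f_src, f_tgt):
            if keep_pair(src, tgt):
                o_src.write(src)
                o_tgt.write(tgt)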

We have added suitable additional training data to some of the language pairs.

You may also use the following monolingual corpora released by the LDC:

Note that the released data is not tokenized and includes sentences of any length (including empty sentences). All data is in Unicode (UTF-8) format. The following Moses tools allow the processing of the training data into tokenized format:

  • tokenizer.perl
  • detokenizer.perl
  • lowercase.perl
  • wrap-xml.perl

These tools are available in the Moses git repository.
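
As a concrete example, running the tokenizer over one side of the training data might look like this (a minimal sketch; the mosesdecoder checkout path and the file names are assumptions):

    # Minimal sketch: tokenize the German side of the training data with the
    # Moses tokenizer. Paths are illustrative; adjust to your own checkout.
    import subprocess

    with open("train.de", "rb") as raw, open("train.tok.de", "wb") as tok:
        subprocess.run(
            ["perl", "mosesdecoder/scripts/tokenizer/tokenizer.perl", "-l", "de"],
            stdin=raw, stdout=tok, check=True)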

DEVELOPMENT DATA

To evaluate your system during development, we suggest using the 2017 test set. The data is provided both in raw text format and in an SGML format suited to the NIST scoring tool. We also release the dev and test sets from previous years.

(✓ = test set available for the given language pair)

Year   CS-EN  DE-EN  ET-EN  FI-EN  KK-EN  RU-EN  TR-EN  ZH-EN
2008     ✓      ✓
2009     ✓      ✓
2010     ✓      ✓
2011     ✓      ✓
2012     ✓      ✓                           ✓
2013     ✓      ✓                           ✓
2014     ✓      ✓                           ✓
2015     ✓      ✓             ✓             ✓
2016     ✓      ✓             ✓             ✓      ✓
2017     ✓      ✓             ✓             ✓      ✓      ✓

The 2018 test sets will be created from a sample of online newspapers from August 2017. The English-X sets are created using equal-sized samples of English and of language X, with each sample professionally translated into the other language.

We have released development data for the tasks that are new this year, i.e. Estonian-English and Kazakh-English. It is created in the same way as the test sets and included in the development tarball.

The news-test2011 set has three additional Czech translations that you may want to use. You can download them from Charles University.

DOWNLOAD

PREPROCESSED DATA

We provide preprocessed versions of all training and development data. These are preprocessed with standard Moses tools and ready for use in MT training. This preprocessed data is distributed with the intention that it will be useful as a standard data set for future research. The preprocessed data can be obtained here.

TEST SET SUBMISSION

Punctuation in the official test sets will be encoded with ASCII characters (not complex Unicode characters) as much as possible. You may want to normalize your system's output accordingly before submission.
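
For example, a minimal normalization pass might look like this (the replacement table below is illustrative, not an official standard; the Moses scripts also include normalize-punctuation.perl for this purpose):

    # Minimal sketch: map common Unicode punctuation to ASCII equivalents
    # before submission. The table is illustrative, not exhaustive.
    PUNCT = {
        "\u201c": '"', "\u201d": '"',   # curly double quotes
        "\u2018": "'", "\u2019": "'",   # curly single quotes
        "\u2013": "-", "\u2014": "-",   # en and em dashes
        "\u2026": "...",                # ellipsis
        "\u00a0": " ",                  # non-breaking space
    }

    def normalize(line):
        for uni, repl in PUNCT.items():
            line = line.replace(uni, repl)
        return line

    print(normalize("\u201cHello\u201d \u2013 it\u2019s a test\u2026"))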

To submit your results, please first convert them into the SGML format required by the NIST BLEU scorer, and then upload them to the website matrix.statmt.org.

For Chinese output, you should submit unsegmented text, since our primary measure is human evaluation. For automatic scoring (in the matrix) we use BLEU4 computed on characters, scoring with v1.3 of the NIST scorer only. A script to convert a Chinese SGM file to characters can be found here.
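
The character-level conversion amounts to putting spaces around each CJK character. The sketch below illustrates the idea on plain text (the official script operates on SGM files; the simplified CJK range check is an assumption):

    # Minimal sketch: split Chinese text into space-separated characters for
    # character-level BLEU. Only the basic CJK Unified Ideographs block is
    # handled here; the official SGM conversion script is more complete.
    def to_chars(line):
        out = []
        for ch in line.strip():
            if "\u4e00" <= ch <= "\u9fff":   # basic CJK Unified Ideographs
                out.append(" " + ch + " ")
            else:
                out.append(ch)
        return " ".join("".join(out).split())  # collapse repeated spaces

    print(to_chars("今天天气很好 good"))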

SGML Format

Each submitted file has to be in a format that is used by standard scoring scripts such as NIST BLEU or TER.

This format is similar to the one used in the released source test set files, except for the enclosing tags: submissions are wrapped in a <tstset> element rather than <srcset>, and each document carries a sysid attribute identifying your system.

The script wrap-xml.perl makes the conversion of an output file in one-segment-per-line format into the required SGML file very easy:

Format:  wrap-xml.perl LANGUAGE SRC_SGML_FILE SYSTEM_NAME < IN > OUT
Example: wrap-xml.perl en newstest2018-src.de.sgm Google < decoder-output > decoder-output.sgm

Upload to Website

Upload happens in three easy steps:

1. Go to the website matrix.statmt.org.
2. Create an account under the menu item Account -> Create Account.
3. Go to Account -> upload/edit content, and follow the link "Submit a system run".

If you are submitting contrastive runs, please submit your primary system first and mark it clearly as the primary submission.

EVALUATION

Evaluation will be done both automatically and by human judgement.

ACKNOWLEDGEMENTS

This task would not have been possible without the sponsorship of test sets from Microsoft, Yandex, the University of Tartu and the University of Helsinki, and funding from the European Union's Horizon 2020 research and innovation programme under grant agreements 645452 (QT21) and 645357 (CRACKER).