The recurring translation task of the WMT workshops focuses on news text and (mainly) European language pairs. This year the language pairs are Czech-English, German-English, Finnish-English, Latvian-English, Russian-English, Turkish-English, and Chinese-English, each evaluated in both translation directions.
The goals of the shared translation task are:
Release of training data for shared tasks | January, 2017 |
Test data released (except zh-en/en-zh) | May 2, 2017 |
Translation submission deadline | May 8, 2017 |
Test week for en-zh/zh-en | May 15-22, 2017 |
Start of manual evaluation | May 15, 2017 |
End of manual evaluation (provisional) | June 4, 2017 |
We provide training data for seven language pairs and a common framework. The task is to improve upon current methods. This can be done in many ways. For instance, participants could try to:
You may participate in any or all of the seven language pairs. For all language pairs we will test translation in both directions. To have a common framework that allows for comparable results, and also to lower the barrier to entry, we provide a common training set.
We also strongly encourage your participation if you use your own training corpus, your own sentence alignment, your own language model, or your own decoder.
If you use additional training data or existing translation systems, you must flag that your system uses additional data. We will distinguish system submissions that used the provided training data (constrained) from submissions that used significant additional data resources. Note that basic linguistic tools such as taggers, parsers, or morphological analyzers are allowed in the constrained condition.
Your submission report should highlight in which ways your own methods and data differ from the standard task. We may break down submitted results into different tracks, based on what resources were used. We are mostly interested in submissions that are constrained to the provided training data, so that the comparison is focused on the methods, not on the data used. You may submit contrastive runs to demonstrate the benefit of additional training data.
The data released for the WMT17 news translation task can be freely used for research purposes; we just ask that you cite the WMT17 shared task overview paper and respect any additional citation requirements on the individual data sets. For other uses of the data, you should consult with the original owners of the data sets.
The provided data is mainly taken from public data sources such as the Europarl corpus and the UN corpus. Additional training data is taken from the News Commentary corpus, which we re-extract every year.
We have added suitable additional training data to some of the language pairs.
You may also use the following monolingual corpora released by the LDC:
Note that the released data is not tokenized and includes sentences of any length (including empty sentences). All data is in Unicode (UTF-8) format. The following Moses tools allow the processing of the training data into tokenized format (see the example after the list):
tokenizer.perl
detokenizer.perl
lowercase.perl
wrap-xml.perl
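A typical preprocessing run with these scripts might look like the following sketch; the ~/mosesdecoder path and the corpus file names are only placeholders for your own setup:
~/mosesdecoder/scripts/tokenizer/tokenizer.perl -l en < corpus.en > corpus.tok.en   # tokenize (reads stdin, writes stdout)
~/mosesdecoder/scripts/tokenizer/lowercase.perl < corpus.tok.en > corpus.tok.lc.en   # lowercase the tokenized text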
To evaluate your system during development, we suggest using the 2016 test set. The data is provided in raw text format and in an SGML format that suits the NIST scoring tool (see the scoring example after the table below). We also release other dev and test sets from previous years.
Year | CS-EN | DE-EN | FI-EN | LV-EN | RU-EN | TR-EN | ZH-EN |
---|---|---|---|---|---|---|---|
2008 | ✓ | ✓ | | | | | |
2009 | ✓ | ✓ | | | | | |
2010 | ✓ | ✓ | | | | | |
2011 | ✓ | ✓ | | | | | |
2012 | ✓ | ✓ | | | ✓ | | |
2013 | ✓ | ✓ | | | ✓ | | |
2014 | ✓ | ✓ | | | ✓ | | |
2015 | ✓ | ✓ | ✓ | | ✓ | | |
2016 | ✓ | ✓ | ✓ | | ✓ | ✓ | |
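For example, once your dev output has been wrapped into SGML (see the submission format below), it can be scored against the 2016 reference with the NIST mteval script; the reference file name here is a guess at the naming used in the dev package:
# -s, -r and -t name the source, reference and system-output SGML files; -c scores case-sensitively
mteval-v13a.pl -s newstest2016-src.de.sgm -r newstest2016-ref.en.sgm -t my-system.sgm -c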
The 2017 test sets will be created from a sample of online newspapers from August 2016. The English-X sets are created using equal-sized samples of English and language X, with each sample professionally translated into the other language.
We have released development data for the tasks that are new this year, i.e. Chinese-English and Latvian-English. It is created in the same way as the test set.
The news-test2011 set has three additional Czech translations that you may want to use. You can download them from Charles University.
File | Size | CS-EN | DE-EN | FI-EN | LV-EN | RU-EN | TR-EN | ZH-EN | Notes |
---|---|---|---|---|---|---|---|---|---|
Europarl v7 | 628MB | ✓ | ✓ | same as previous year, corpus home page | |||||
Europarl v8 | 238MB | ✓ | ✓ | lv-en is new for this year, corpus home page | |||||
876MB | ✓ | ✓ | ✓ | Same as last year | |||||
News Commentary v12 | 162MB | ✓ | ✓ | ✓ | ✓ | updated | |||
CzEng 1.6 | 3.1GB | ✓ | New for 2017. Register and download CzEng 1.6. | ||||||
121MB | ✓ | ru-en | |||||||
9.1MB | ✓ | ✓ | Provided by CMU.. | ||||||
44 MB | ✓ | Distributed by OPUS | |||||||
3.6 GB | ✓ | ✓ | New for 2017. Register and download | ||||||
156 MB | ✓ | ✓ | ✓ | New for 2017. Prepared by Tilde | |||||
2 MB | ✓ | New for 2017. Prepared by the University of Latvia | |||||||
123 MB | ✓ | New for 2017. Prepared by the University of Latvia | |||||||
309 kB | ✓ | New for 2017. Prepared by the University of Latvia | |||||||
✓ | New for 2017. |
Corpus | CS | DE | EN | FI | LV | RU | TR | ZH | All languages combined | Notes |
---|---|---|---|---|---|---|---|---|---|---|
Europarl v7/v8 | 32MB | 107MB | 99MB | 95MB | 29MB | |||||
News Commentary | 14MB | 19MB | 22MB | 19MB | 129MB | Updated | ||||
Common Crawl | 10.5GB | 102GB | 103 GB | 5.3GB | 800MB | 42GB | 18GB | 33GB | Deduplicated with development and evaluation sentences removed. English was updated 31 January 2016 to remove bad UTF-8. Latvian and Chinese are new this year, others are as last year. Downloads can be verified with SHA512 checksums. More English is available for unconstrained participants. | |
News Crawl: articles from 2007 | 3.7MB | 92MB | 198MB | | | | | | 302MB | Extracted article text from various online news publications. The data sets from 2007-2015 (except Latvian) are the same as last year's. |
News Crawl: articles from 2008 | 191MB | 313MB | 672MB | 2.3MB | 1.5GB | |||||
News Crawl: articles from 2009 | 194MB | 296MB | 757MB | 5.1MB | 1.6GB | |||||
News Crawl: articles from 2010 | 107MB | 135MB | 345MB | 2.5MB | 727MB | |||||
News Crawl: articles from 2011 | 389MB | 746MB | 784MB | 564MB | 3.1GB | |||||
News Crawl: articles from 2012 | 337MB | 946MB | 751MB | 568MB | 3.1GB | |||||
News Crawl: articles from 2013 | 395MB | 1.6GB | 1.1GB | 730MB | 4.3GB | |||||
News Crawl: articles from 2014 | 380MB | 2.1GB | 1.4GB | 52MB | 52MB | 801MB | 5.3GB (excludes lv) | |||
News Crawl: articles from 2015 | 360MB | 2.2GB | 1.3GB | 203MB | 154MB | 608MB | 4.8G (excludes lv) | |||
News Crawl: articles from 2016 | 252MB | 1.6GB | 1GB | 163MB | 123MB | 418MB | 77MB | 3.7G | ||
News Discussions. Version 1 from 2014/15 | | | 1.7GB | | | | | | | Extracted from comment sections of online newspapers. Version 2 is new for this year. |
News Discussions. Version 2 from 2015/16 | | | 6.3GB | | | | | | | |
The Common Crawl monolingual data is hosted by Amazon Web Services as a public data set. The underlying S3 URL is s3://web-language-models/wmt16/deduped.
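If you use the AWS command-line tools, the public data set can be listed and a download checked against its SHA512 checksum roughly as follows (the file names below are placeholders, not the actual object names):
aws s3 ls --no-sign-request s3://web-language-models/wmt16/deduped/   # list the available files
sha512sum -c en.deduped.xz.sha512   # verify a downloaded file against its published checksum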
To submit your results, please first convert them into the SGML format required by the NIST BLEU scorer, and then upload them to the website matrix.statmt.org.
For Chinese output, you should submit unsegmented text, since our primary measure is human evaluation. For automatic scoring (in the matrix) we use BLEU4 computed on characters, scoring with v1.3 of the NIST scorer only. A script to convert a Chinese SGM file to characters can be found here.
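To reproduce the character-level score locally, one possible pipeline is sketched below; convert-zh-to-chars.pl stands in for the conversion script linked above, and the file names are placeholders:
convert-zh-to-chars.pl < my-system.zh.sgm > my-system.zh.char.sgm   # split system output into characters (script name is a placeholder)
convert-zh-to-chars.pl < newstest2017-ref.zh.sgm > newstest2017-ref.zh.char.sgm   # split the reference the same way
mteval-v13a.pl -s newstest2017-src.en.sgm -r newstest2017-ref.zh.char.sgm -t my-system.zh.char.sgm   # BLEU4 on characters with the v13a NIST scorer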
Each submitted file has to be in a format that is used by standard scoring scripts such as NIST BLEU or TER.
This format is similar to the one used in the source test set files that were released, except for:
first line is <tstset trglang="en" setid="newstest2017" srclang="any">, with trglang set to the target language of your submission: en, cs, de, fi, lv, ru, tr or zh. Important: srclang is always any.
each doc tag also contains the system name, e.g. sysid="uedin".
last line is </tstset>
The script wrap-xml.perl makes it easy to convert an output file in one-segment-per-line format into the required SGML file:
Format: wrap-xml.perl LANGUAGE SRC_SGML_FILE SYSTEM_NAME < IN > OUT
Example: wrap-xml.perl en newstest2016-src.de.sgm Google < decoder-output > decoder-output.sgm
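Put together with the detokenizer listed above, a complete post-processing chain might look like this sketch (the Moses path, file names and system name are placeholders):
~/mosesdecoder/scripts/tokenizer/detokenizer.perl -l en < decoder-output.tok > decoder-output   # undo tokenization before wrapping
wrap-xml.perl en newstest2016-src.de.sgm MySystem < decoder-output > decoder-output.sgm   # wrap into the required SGML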
Upload happens in three easy steps:
If you are submitting contrastive runs, please submit your primary system first and mark it clearly as the primary submission.
Evaluation will be done both automatically and by human judgement.