More Training Data: The parallel corpora Europarl and News Commentary and the monolingual News corpora are extended, in addition to the large French-English corpus which was already released last year. We release large French-English and Spanish-English parallel corpora crawled from United Nations sources by DFKI. We include the relevant LDC Gigaword corpora in the constrained data condition.
Task Description
We provide training data for four European language pairs, and a common framework (including a baseline system). The task is to improve current methods. This can be done in many ways. For instance, participants could try to
- improve word alignment quality, phrase extraction, phrase scoring
- add new components to the open source software of the baseline system
- augment the system otherwise (e.g. by preprocessing, reranking, etc.)
- build an entirely new translation system
Participants will use their systems to translate a test set of unseen sentences in the source language. The translation quality is measured by a manual evaluation and various automatic evaluation metrics. Participants agree to contribute about eight hours of work to the manual evaluation.
You may participate in any or all of the following language pairs:
- French-English
- Spanish-English
- German-English
- Czech-English
For all language pairs we will test translation in both directions.
To have a common framework that allows for comparable results, and also to lower the barrier to entry, we provide a common training set and baseline system.
We also strongly encourage your participation if you use your own training corpus, your own sentence alignment, your own language model, or your own decoder.
If you use additional training data or existing translation systems, you must flag that your system uses additional data. We will distinguish system submissions that used the provided training data (constrained) from submissions that used significant additional data resources. Note that basic linguistic tools such as taggers, parsers, or morphological analyzers are allowed in the constrained condition.
Your submission report should highlight in which ways your own methods and data differ from the standard task. We may break down submitted results into different tracks, based on which resources were used. We are mostly interested in submissions that are constrained to the provided training data, so that the comparison focuses on the methods, not on the data used. You may submit contrastive runs to demonstrate the benefit of additional training data.
The provided data is mainly taken from a new release (version 5) of the Europarl corpus, which is freely available. Please click on the links below to download the sentence-aligned data, or go to the Europarl website for the source release.
Additional training data is taken from the new News Commentary corpus. There are about 45 million words of training data per language from the Europarl corpus and 2 million words from the News Commentary corpus.
Europarl
- French-English
- Spanish-English
- German-English
- French monolingual
- Spanish monolingual
- German monolingual
- English monolingual
News Commentary
- French-English
- Spanish-English
- German-English
- Czech-English
- French monolingual
- Spanish monolingual
- German monolingual
- Czech monolingual
- English monolingual
News
- French monolingual
- Spanish monolingual
- German monolingual
- English monolingual
- Czech monolingual
United Nations
- French-English
- Spanish-English
French-English 10^9 corpus
Crawled from Canadian and European Union sources.
CzEng
The current version of the CzEng corpus (version 0.9) is available from the CzEng web site.
You may also use the following monolingual corpora released by the LDC:
- LDC2009T13 English Gigaword Fourth Edition
- LDC2007T07 English Gigaword Third Edition
- LDC2006T17 French Gigaword First Edition
- LDC2009T28 French Gigaword Second Edition
- LDC2006T12 Spanish Gigaword First Edition
- LDC2009T21 Spanish Gigaword Second Edition
Note that the released data is not tokenized and includes sentences of any length (including empty sentences). All data is in Unicode (UTF-8) format. The following tools allow you to process the training data into tokenized format:
- Tokenizer
tokenizer.perl
- Detokenizer
detokenizer.perl
- Lowercaser
lowercase.perl
- SGML Wrapper
wrap-xml.perl
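The perl scripts above are the official preprocessing tools. As a rough illustration of the kind of normalization they perform, here is a minimal Python sketch (an assumption-laden approximation, not a replacement: the real tokenizer.perl handles abbreviations, language-specific rules, and many more cases) that tokenizes punctuation, lowercases, and drops empty sentences:

```python
import re

def tokenize(line):
    # Crude approximation: split common punctuation off words.
    # The real tokenizer.perl covers far more cases (abbreviations, etc.).
    return re.sub(r"([.,!?;:\"()])", r" \1 ", line).split()

def preprocess(lines):
    out = []
    for line in lines:
        toks = tokenize(line.strip())
        if not toks:  # skip empty sentences, as noted above
            continue
        out.append(" ".join(t.lower() for t in toks))
    return out

print(preprocess(["Hello, world!", "", "A test."]))
```

For actual submissions, use the released perl tools so that tokenization matches the baseline system.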
To tune your system during development, we suggest using the 2008 test set of 2051 sentences. The data is provided in raw text format and in an SGML format that suits the NIST scoring tool. We also release the 2009 test set of 2525 sentences and a system combination tuning set of 502 sentences.
News news-test2008
- English
- French
- Spanish
- German
- Czech
- Hungarian
This data is a cleaned version of the 2008 test set.
News news-test2009
- English
- French
- Spanish
- German
- Czech
- Hungarian
- Italian
News news-syscomb2009
- English
- French
- Spanish
- German
- Czech
- Hungarian
- Italian
Test Data
The test set is similar in nature to the prior test sets.
News news-test2010
- English
- French
- Spanish
- German
- Czech
Download
Test Set Submission
To submit your results, please first convert them into the SGML format required by the NIST BLEU scorer, and then upload the file to the website matrix.statmt.org.
SGML Format
Each submitted file has to be in a format that is used by standard scoring scripts such as NIST BLEU or TER.
This format is similar to the one used in the source test set files that were released, except for:
- The first line is <tstset trglang="en" setid="newstest2010" srclang="any">, with trglang set to either en, de, fr, es, or cz. Important: srclang is always any.
- Each document tag also has to include the system name, e.g. sysid="uedin".
- The closing tag (last line) is </tstset>
The script wrap-xml.perl makes it easy to convert an output file in one-segment-per-line format into the required SGML file:
Format: wrap-xml.perl LANGUAGE SRC_SGML_FILE SYSTEM_NAME < IN > OUT
Example: wrap-xml.perl en newstest2010-src.de.sgm Google < decoder-output > decoder-output.sgm
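To make the target structure concrete, here is a minimal Python sketch of the SGML wrapping that wrap-xml.perl produces. It is an assumption-laden illustration: the docid value and the exact set of doc attributes here are hypothetical, and the real script copies document boundaries and attributes from the source SGML file rather than inventing them.

```python
def wrap_sgml(segments, trglang="en", setid="newstest2010",
              sysid="uedin", docid="doc1"):
    # Build the SGML skeleton described above:
    # tstset header, one doc with a sysid, numbered seg elements.
    lines = ['<tstset trglang="%s" setid="%s" srclang="any">' % (trglang, setid)]
    lines.append('<doc sysid="%s" docid="%s">' % (sysid, docid))
    for i, seg in enumerate(segments, 1):
        lines.append('<seg id="%d">%s</seg>' % (i, seg))
    lines.append('</doc>')
    lines.append('</tstset>')
    return "\n".join(lines)

sgml = wrap_sgml(["First translated sentence .", "Second one ."])
print(sgml)
```

In practice you should use wrap-xml.perl with the released source SGML file, so that document boundaries and attributes match the official test set.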
Upload to Website
Upload happens in three easy steps:
- Go to the website matrix.statmt.org.
- Create an account under the menu item Account -> Create Account.
- Go to Account -> upload/edit content, and follow the link "Submit a system run"
- select as test set "newstest2010" and the language pair you are submitting
- select "create new system"
- click "continue"
- on the next page, upload your file and add some description
If you are submitting contrastive runs, please submit your primary system first and mark it clearly as the primary submission.
Evaluation
Evaluation will be done both automatically and by human judgment.
- Manual Scoring: We will collect subjective judgments about translation quality from human annotators. If you participate in the shared task, we ask you to commit about 8 hours of time to do the manual evaluation. The evaluation will be done with an online tool.
- As in previous years, we expect the translated submissions to be in recased, detokenized, XML format, just as in most other translation campaigns (NIST, TC-Star).
Dates
- December 4: Training data released
- March 1: Test data released (available on this web site)
- March 5: Results submissions
Note: If your system can easily generate n-best lists, the groups participating in the system combination task would appreciate receiving them with your submissions.
- April 23: Short paper submissions (online, 4-6 pages)
Results
The results of the evaluation are reported in the workshop overview paper.
supported by the EuroMatrixPlus project
P7-IST-231720-STP
funded by the European Commission
under Framework Programme 7