Shared Task: Machine Translation for European Languages
March 30-31, in conjunction with EACL 2009 in Athens, Greece
The translation task of this workshop focuses on European language pairs. Translation quality will be evaluated on a shared, unseen test set of news stories. We provide a parallel corpus as training data, a baseline system, and additional resources for download. Participants may augment the baseline system or use their own system.
Goals
The goals of the shared translation task are:
- To investigate the applicability of current MT techniques when translating into languages other than English
- To examine special challenges in translating between European languages, including word order differences and morphology
- To create publicly available corpora for machine translation and machine translation evaluation
- To generate up-to-date performance numbers for European languages in order to provide a basis of comparison in future research
- To offer newcomers a smooth start with hands-on experience in state-of-the-art statistical machine translation methods
We hope that both beginners and established research groups will participate in this task.
Changes This Year
After a number of extensions in recent years, this year's translation task will be more focused. The motivation is to have a clearly defined task and to gather sufficient human judgement data to make as many statistically significant distinctions as possible.
- News Translation Only: We will only evaluate performance on a set of news stories prepared for this evaluation. As in the previous year, the news stories are taken from major news outlets such as the BBC, Der Spiegel, and Le Monde during the period September-October 2008. Last year's test set will serve as the development set, which we re-release in a slightly cleaned form.
- Official Metric is Manual Sentence Ranking: While we continue to experiment with different forms of manual and automatic evaluation, the official metric that we will use is human preference judgments on a sentence-by-sentence basis.
- More Training Data: The parallel corpora Europarl and News Commentary have been extended, a large French-English corpus (currently 400 million words) has been added, and we also provide 50-500 million words of monolingual training data from the test domain of news stories.
- Constrained vs. Unconstrained: You may use any additional resources that you wish to (including training data, knowledge sources such as existing translation systems), but you should flag that your system uses additional data. We will distinguish system submissions that used the provided training data (constrained) from submissions that used significant additional data resources. Note that basic linguistic tools such as taggers, parsers, or morphological analyzers are allowed in the constrained condition.
Task Description
We provide training data for five European language pairs, and a common framework (including a language model and a baseline system). The task is to improve upon current methods. This can be done in many ways. For instance, participants could try to:
- improve word alignment quality, phrase extraction, phrase scoring
- add new components to the open source software of the baseline system
- augment the system otherwise (e.g. by preprocessing, reranking, etc.)
- build an entirely new translation system
Participants will use their systems to translate a test set of unseen sentences in the source language. Translation quality will be measured by manual evaluation and various automatic evaluation metrics. Participants agree to contribute about eight hours of work to the manual evaluation.
You may participate in any or all of the following language pairs:
- French-English
- Spanish-English
- German-English
- Czech-English
- Hungarian-English
For all language pairs we will test translation in both directions.
To have a common framework that allows for comparable results, and also to lower the barrier to entry, we provide a common training set, language model, and baseline system.
We also strongly encourage participation if you use your own training corpus, your own sentence alignment, your own language model, or your own decoder.
Your submission report should highlight the ways in which your methods and data differ from the standard task. We may break down submitted results into different tracks, based on the resources used. We are mostly interested in submissions that are constrained to the provided training data, so that the comparison focuses on the methods, not on the data used. You may submit contrastive runs to demonstrate the benefit of additional training data.
The provided data is mainly taken from a new release (version 4) of the Europarl corpus, which is freely available. Please click on the links below to download the sentence-aligned data, or go to the Europarl website for the source release. If you prepare training data from the Europarl corpus directly, please do not take data from Q4/2000 (October-December), since it is reserved for development and test data.
Additional training data is taken from the new News Commentary corpus. There are about 40-45 million words of training data per language from the Europarl corpus and 2 million words from the News Commentary corpus.
Europarl
- French-English
- Spanish-English
- German-English
- French monolingual
- Spanish monolingual
- German monolingual
- English monolingual
News Commentary
- French-English
- Spanish-English
- German-English
- Czech-English
News
- French monolingual
- Spanish monolingual
- German monolingual
- English monolingual
- Czech monolingual
- Hungarian monolingual
Giga-FrEn
Crawled from Canadian and European Union sources.
Hunglish
The corpus was created as part of the Hunglish project through the joint work of the Media Research and Education Center at the Budapest University of Technology and Economics and the Corpus Linguistics Department at the Hungarian Academy of Sciences Institute of Linguistics.
CzEng
You will need to download the current version of the CzEng corpus (version 0.7) from the CzEng web site.
Note that, unlike in previous years, the released data is not tokenized and includes sentences of any length (including empty sentences). Also, this year all data is in Unicode (UTF-8) format. The following tools allow processing of the training data into tokenized format:
- Tokenizer
tokenizer.perl
- Detokenizer
detokenizer.perl
- Lowercaser
lowercase.perl
- SGML Wrapper
wrap-xml.perl
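As a rough illustration, a typical preprocessing pipeline with these scripts might look as follows. This is a sketch only: the `-l` language flag and stdin/stdout usage follow the usual Moses script conventions, and the file names are hypothetical examples.

```
# Tokenize and lowercase French training data (example file names).
tokenizer.perl -l fr < corpus.fr > corpus.tok.fr
lowercase.perl < corpus.tok.fr > corpus.lc.fr

# After decoding: detokenize English output before submission.
detokenizer.perl -l en < output.tok.en > output.en
```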
Development Data
To tune your system during development, we provide a development set of 2051 sentences. The data is provided both in raw text format and in an SGML format compatible with the NIST scoring tool.
Since most statistical systems use both a tuning set and a test set during system development, we also provide a version of the development set split into a tuning set (news-dev2009a) and a test set (news-dev2009b), consisting of alternating sentences from the original set.
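The alternating split described above is easy to reproduce. A minimal sketch in Python (the assignment of even-indexed sentences to the "a" half is an assumption; the toy sentence list stands in for the actual 2051-sentence dev set):

```python
# Split a development set into a tuning half (news-dev2009a) and a
# test half (news-dev2009b) by taking alternating sentences.
def alternating_split(sentences):
    """Return (tuning, test): even-indexed and odd-indexed sentences."""
    return sentences[0::2], sentences[1::2]

# Toy example standing in for the 2051-sentence development set;
# with 2051 sentences, the halves have 1026 and 1025 sentences.
dev = ["sent 1", "sent 2", "sent 3", "sent 4", "sent 5"]
dev_a, dev_b = alternating_split(dev)
```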
News news-dev2009
- English
- French
- Spanish
- German
- Czech
- Hungarian
This data is a cleaned version of the 2008 test set.
Additional Development Data
The following sets have been used in previous translation tasks and may be useful.
Europarl dev2006
- English
- French
- Spanish
- German
This data is identical with the 2005 development test data and
the 2006 development data.
News Commentary nc-dev2007
- English
- French
- Spanish
- German
- Czech
This is identical with the 2007 development data.
Hunglish hung-dev2008
Note that this dev set is extracted from the official Hunglish corpus.
The training corpus we provide does not overlap with the dev set.
Europarl devtest2006
- English
- French
- Spanish
- German
This data is identical with the 2005 test data and
the 2006 development test data.
News Commentary nc-devtest2007
- English
- French
- Spanish
- German
- Czech
This data is identical with the 2006 test data (out-of-domain part).
Hunglish hung-devtest2008
Note that this devtest set is extracted from the official Hunglish corpus.
The training corpus we provide does not overlap with the devtest set.
Europarl test2006
- English
- French
- Spanish
- German
This data is identical with the 2006 test data (in-domain part).
News Commentary nc-test2007
- English
- French
- Spanish
- German
- Czech
This data is identical with the 2007 test data (out-of-domain part).
Europarl test2007
- English
- French
- Spanish
- German
This data is identical with the 2007 test data (in-domain part).
News Commentary nc-test2008
- English
- French
- Spanish
- German
- Czech
This data is identical with the 2008 test data.
Europarl test2008
- English
- French
- Spanish
- German
This data is identical with the 2008 test data.
Test Data
The test set is similar in nature to the news-dev2009 development set and is taken from identical sources.
News news-test2009
- English
- French
- Spanish
- German
- Czech
- Hungarian
Download
Evaluation
Evaluation will be done both automatically and by human judgement.
- Manual Scoring: We will collect subjective judgments about translation quality from human annotators. If you participate in the shared task, we ask you to commit about 8 hours of time to do the manual evaluation. The evaluation will be done with an online tool.
- As in previous years, we expect the translated submissions to be in recased, detokenized, XML format, just as in most other translation campaigns (NIST, TC-Star).
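For illustration, a submission wrapped in the NIST-style SGML format looks roughly like the following fragment. The attribute values (set id, system id, document id) are hypothetical examples, and the exact attributes expected by the scoring tool may differ slightly; the wrap-xml.perl script above produces this format from plain text output.

```
<tstset setid="news-test2009" srclang="fr" trglang="en">
<doc sysid="my-system" docid="example-doc">
<seg id="1"> First translated sentence , recased and detokenized . </seg>
<seg id="2"> Second translated sentence . </seg>
</doc>
</tstset>
```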
Dates
December 8: Test data released (available on this web site)
December 12: Results submissions (by email to pkoehn@inf.ed.ac.uk)
Note: If your system can easily generate n-best lists for your submissions, the groups participating in the system combination task would appreciate using them. Please email n-best results to jschroe1@inf.ed.ac.uk and indicate which submission they are from. If your n-best lists are too big to email, use this upload form and email to let us know you've uploaded them. Thanks!
January 9: Short paper submissions (4 pages)
Supported by the EuroMatrix project, P6-IST-5-034291-STP, funded by the European Commission under Framework Programme 6.