This workshop builds on five previous workshops on statistical machine translation:
This year's workshop will feature three shared tasks: a shared translation task, a system combination shared task, and a shared evaluation task to test automatic evaluation metrics. The shared translation task will include a featured task this year: translating disaster-response SMS messages from Haitian Creole into English. The goal is to delve into the scientific challenges of producing machine translation systems useful enough to help first responders translate messages sent in the aftermath of disasters like the earthquake that struck Haiti in January 2010. Low-resource languages and noisy, informal input texts are major challenges for statistical machine translation.
In addition to the shared tasks, the workshop will also feature scientific papers on topics related to MT. Topics of interest include, but are not limited to:
The first shared task will examine translation between the following language pairs:
All participants who submit entries will have their translations evaluated. We will evaluate translation performance by human judgment. To facilitate the human evaluation we will require participants in the shared tasks to manually judge some of the submitted translations.
We also provide baseline machine translation systems, with performance comparable to the best systems from last year's shared task.
Participants in the system combination task will be provided with the 1-best translations from each of the systems entered in the shared translation task. We will endeavor to provide a held-out development set for system combination, which will include translations from each of the systems and a reference translation. Any system combination strategy is acceptable, whether it selects the best translation on a per-sentence basis or creates novel translations by combining the systems' translations. The quality of the system combinations will be judged alongside the individual systems during the manual evaluation, as well as scored with automatic evaluation metrics.
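To make the selection end of that spectrum concrete, here is a minimal, unofficial Python sketch (the data layout, tokenization, and scoring function are our own assumptions, not part of the task infrastructure) that, for each source sentence, keeps the hypothesis agreeing most with the other systems' outputs under a simple unigram-overlap score.

# Minimal sketch of per-sentence hypothesis selection for system combination.
# Assumption (not part of the official task): each system's output is a list of
# tokenized sentences, and all systems' outputs are sentence-aligned.

from collections import Counter

def overlap(hyp, ref):
    """Unigram F1 between two tokenized sentences."""
    h, r = Counter(hyp), Counter(ref)
    common = sum((h & r).values())
    if not common:
        return 0.0
    precision = common / len(hyp)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

def select_per_sentence(system_outputs):
    """system_outputs: one list of tokenized sentences per system.
    For each sentence index, return the hypothesis that agrees most
    with the other systems (selection-only combination)."""
    combined = []
    for hyps in zip(*system_outputs):
        best = max(hyps,
                   key=lambda h: sum(overlap(h, o) for o in hyps if o is not h))
        combined.append(best)
    return combined

if __name__ == "__main__":
    systems = [
        [["the", "cat", "sat", "on", "the", "mat"]],
        [["a", "cat", "sat", "on", "a", "mat"]],
        [["the", "cat", "is", "sitting", "on", "the", "mat"]],
    ]
    for tokens in select_per_sentence(systems):
        print(" ".join(tokens))

Real combination systems typically go further, for example by aligning the hypotheses into confusion networks and generating novel translations rather than merely selecting one of the inputs.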
Participants in the shared evaluation task will use their automatic evaluation metrics to score the output from the translation task and the system combination task. They will be provided with the output from the other two shared tasks along with reference translations. We will measure the correlation of automatic evaluation metrics with the human judgments.
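As a rough illustration of that last step, system-level correlation with human judgments has in past editions been reported with Spearman's rank correlation coefficient; the sketch below (the system names and all scores are made-up placeholders, not task data) shows one way such a correlation could be computed for a single language pair.

# Sketch of a system-level correlation computation for the evaluation task.
# The scores below are illustrative placeholders; in the real task they would
# come from the submitted metric outputs and from the manual evaluation.

from scipy.stats import spearmanr

# Hypothetical per-system scores for one language pair.
metric_scores = {"systemA": 0.231, "systemB": 0.275, "systemC": 0.214}
human_scores  = {"systemA": 0.58,  "systemB": 0.66,  "systemC": 0.49}

systems = sorted(metric_scores)
rho, _ = spearmanr([metric_scores[s] for s in systems],
                   [human_scores[s] for s in systems])
print("Spearman's rho between metric and human judgments: %.3f" % rho)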
This year we are also featuring a new, invitation-only tunable metrics task.
Submissions will consist of regular full papers of 6-10 pages, plus additional pages for references, formatted following the EMNLP 2011 guidelines. In addition, shared task participants will be invited to submit short papers (4-6 pages) describing their systems or their evaluation metrics. Both submission and review processes will be handled electronically.
We encourage individuals who are submitting research papers to evaluate their approaches using the training resources provided by this workshop and past workshops, so that their experiments can be repeated by others using these publicly available corpora.
Release of training data | January 24, 2011
Test set distributed for translation task | March 14, 2011
Submission deadline for translation task | March 20, 2011
Translations released for system combination | March 25, 2011
System combination deadline | April 1, 2011
Start of manual evaluation period | April 1, 2011
End of manual evaluation | May 31, 2011
Paper submission deadline | May 19, 2011
Notification of acceptance | June 17, 2011
Camera-ready deadline | July 1, 2011
Papers available online | July 23, 2011
Workshop in Edinburgh following EMNLP | July 30-31, 2011
Subscribe to the announcement list for WMT11 by entering your e-mail address below. This list will be used to announce when the test sets are released, to indicate any corrections to the training sets, and to amend the deadlines as needed.
You can read past announcements on the Google Groups page for WMT11. These also include an archive of announcements from WMT10.
For questions, comments, etc. please send email to pkoehn@inf.ed.ac.uk.
Supported by the EuroMatrixPlus project (P7-IST-231720-STP), funded by the European Commission under Framework Programme 7.