Shared Task: Biomedical Translation Task

Task description

This task aims to evaluate systems on the translation of documents from the biomedical domain. The test data will consist of biomedical abstracts and summaries of proposals for animal experiments. This year, the biomedical translation task will address the following language pairs:


ATTENTION!! We ask participants not to download the Medline database themselves in order to retrieve training data. Submissions derived from a model trained on the whole of PubMed will not be considered in the evaluation.

Participants can rely on training (and development) data from various sources, for instance:

Participants are also free to use out-of-domain data.


Evaluation will be carried out both automatically and manually. Automatic evaluation will make use of standard machine translation metrics, such as BLEU.

Native speakers of each of the languages will manually check the quality of the translations for a small sample of the submissions. If necessary, we also expect participants to support us in the manual evaluation (according to the number of submissions).

We plan to release test sets for the following language pairs and sources:

Test Sets and Submission formats

The formats for the clinical case reports, ontology concepts, and clinical terminology sub-tasks are available on the ClinSpEn website.

Scientific abstracts:

The test sets of Medline abstracts will be released as plain text files in the following format:

Each line contains three values separated by a TAB character: the document identifier, the number of the sentence within the document, and the sentence itself:
doc1	1	sentence_1
doc2	2	sentence_2
doc2	3	sentence_3
doc2	4	sentence_4
doc2	5	sentence_5
doc2	n	sentence_n
doc4	1	sentence_1
doc4	2	sentence_2
The format for the submission will be the same, as in the example below. Participants should keep the same order of sentences as in the original test set file.
doc1	1	translated_sentence_1
doc2	2	translated_sentence_2
doc2	3	translated_sentence_3
doc2	4	translated_sentence_4
doc2	5	translated_sentence_5
doc2	n	translated_sentence_n
doc4	1	translated_sentence_1
doc4	2	translated_sentence_2
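Before submitting, it can be worth checking that a submission file mirrors the test set line by line. The following is a minimal sketch (not an official task tool); the file names and the `validate` helper are illustrative assumptions.

```python
# Minimal sketch: check that a submission file matches the document IDs and
# sentence numbers of the test set file, in the same order.

def read_tsv(path):
    """Yield (doc_id, sent_num, text) triples from a TAB-separated file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            doc_id, sent_num, text = line.split("\t", 2)
            yield doc_id, sent_num, text

def validate(test_path, submission_path):
    """Return a list of error messages; an empty list means the files align."""
    errors = []
    test = list(read_tsv(test_path))
    sub = list(read_tsv(submission_path))
    if len(test) != len(sub):
        errors.append(f"line count differs: {len(test)} vs {len(sub)}")
    for i, ((t_doc, t_num, _), (s_doc, s_num, _)) in enumerate(zip(test, sub), 1):
        if (t_doc, t_num) != (s_doc, s_num):
            errors.append(f"line {i}: expected {t_doc}/{t_num}, got {s_doc}/{s_num}")
    return errors
```

Running `validate("testset.txt", "submission.txt")` before uploading catches reordered or missing sentences early.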

Submission Requirements

Please note that, following the general WMT policy explicitly enforced in other tasks, we will release all participants' submissions after this year's edition of the task to promote further studies.

For all participants, before 4th August: prepare an abstract of your system (it may be a brief half-page or one-page description, or already the full system description paper) and upload it to the paper submission system. Only the primary systems of teams that submit an abstract will be included in the human evaluation.

Test sets can be downloaded, and translations submitted, either via OCELoT or via our submission system. Our test sets are also included in the General MT Task. Please check the instructions below.


  1. Register your team at OCELoT.
  2. Send an email with your name, affiliation, and OCELoT username to get your team activated (it is not possible to submit translations before team validation).
  3. Download the test sets for the Biomedical task and sub-tasks.
  4. You may want to use the XML wrapping and unwrapping scripts.
  5. Translate the test sets.
  6. Upload your submissions to OCELoT. Each team is allowed at most 3 submissions per language pair. Scores in the system do not reflect actual system performance; they are mainly for validation purposes.

WMT Biomedical Submission System

Please register your team using this form. You will receive an email confirming your registration; the link to the submission site will be provided in that email. Please register your team as soon as possible.

The Medline test files are available in the WMT'22 biomedical task Google Drive folder. For the test sets of the "Clinical case reports", "Ontology Concepts", and "Clinical Terminology" sub-tasks, please refer to OCELoT above.

The submission file names should consist of the original test file name preceded by the team identifier (as registered in the form above) and the run number.

Each team will be allowed to submit up to 3 runs per test set.

Important dates

Release of training data for shared tasks: February/March, 2022
Release of test data: 21st July, 2022
Results submission deadline: 29th July, 2022
Paper submission deadline: 7th September, 2022
Paper notification: 9th October, 2022
Camera-ready version due: 16th October, 2022
Conference (EMNLP): 7th - 8th December, 2022

All deadlines are in AoE (Anywhere on Earth).


Organizers

Rachel Bawden (University of Edinburgh, UK)
Giorgio Maria Di Nunzio (University of Padua, Italy)
Darryl Johan Estrada (Barcelona Supercomputing Center, Spain)
Eulàlia Farré-Maduell (Barcelona Supercomputing Center, Spain)
Cristian Grozea (Fraunhofer Institute, Germany)
Antonio Jimeno Yepes (University of Melbourne, Australia)
Salvador Lima-López (Barcelona Supercomputing Center, Spain)
Martin Krallinger (Barcelona Supercomputing Center, Spain)
Aurélie Névéol (Université Paris Saclay, CNRS, LISN, France)
Mariana Neves (German Federal Institute for Risk Assessment, Germany)
Roland Roller (DFKI, Germany)
Amy Siu (Beuth University of Applied Sciences, Germany)
Philippe Thomas (DFKI, Germany)
Federica Vezzani (University of Padua, Italy)
Maika Vicente Navarro (Maika Spanish Translator, Australia)
Dina Wiemann (Novartis, Switzerland)
Lana Yeganova (NCBI/NLM/NIH, USA)


The ClinSpEn sub-track was supported by the Encargo of Plan TL (SEAD) to BSC, the BIOMATDB project of the European Union's Horizon Europe Coordination & Support Action under Grant Agreement No 101058779, and the AI4PROFHEALTH project (PID2020-119266RA-I00).

Please contact us by email, and please join our discussion forum.