How to Prepare a French-to-English Dataset for Machine Translation

Machine translation is the challenging task of converting text from a source language into coherent and matching text in a target language.

Neural machine translation systems, such as encoder-decoder recurrent neural networks, are achieving state-of-the-art results for machine translation with a single end-to-end system trained directly on paired source and target language text.

Standard datasets are needed to develop, explore, and become familiar with neural machine translation systems.

In this tutorial, you will discover the Europarl standard machine translation dataset and how to prepare the data for modeling.

After completing this tutorial, you will know:

  • The Europarl dataset consists of the proceedings of the European Parliament in 11 languages.
  • How to load and clean the parallel French and English transcripts so that they are ready for modeling in a neural machine translation system.
  • How to reduce the vocabulary size of both the French and English data in order to reduce the complexity of the translation task.

Kick-start your project with my new book Deep Learning for Natural Language Processing, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

How to Prepare a French-to-English Dataset for Machine Translation
Photo by Giuseppe Milo, some rights reserved.

Tutorial Overview

This tutorial is divided into 5 parts; they are:

  1. Europarl Machine Translation Dataset
  2. Download French-English Dataset
  3. Load Dataset
  4. Clean Dataset
  5. Reduce Vocabulary

Python Environment

This tutorial assumes you have a Python 3 SciPy environment installed.

The tutorial also assumes you have scikit-learn, Pandas, NumPy, and Matplotlib installed.

If you need help with your environment, see this post:

Need help with Deep Learning for Text Data?

Take my free 7-day email crash course now (with code).

Click to sign-up and also get a free PDF Ebook version of the course.

Europarl Machine Translation Dataset

Europarl is a standard dataset used for statistical machine translation and, more recently, neural machine translation.

It consists of the proceedings of the European Parliament, hence the name Europarl, a contraction of European Parliament.

The proceedings are the transcriptions of speakers at the European Parliament, which are translated into 11 different languages.

“It is a collection of the proceedings of the European Parliament, dating back to 1996. Altogether, the corpus comprises of about 30 million words for each of the 11 official languages of the European Union”

(Europarl: A Parallel Corpus for Statistical Machine Translation, 2005)

The raw data is available on the European Parliament website in HTML format.

The creation of the dataset was led by Philipp Koehn, author of the book “Statistical Machine Translation.”

The dataset was made available for free to researchers on the website “European Parliament Proceedings Parallel Corpus 1996-2011,” and often appears as a part of machine translation challenges, such as the Machine Translation task in the 2014 Workshop on Statistical Machine Translation.

The most recent version of the dataset is version 7, released in 2012, which comprises data from 1996 to 2011.

Download French-English Dataset

We will focus on the parallel French-English dataset.

This is a prepared corpus of aligned French and English sentences recorded between 1996 and 2011.

The dataset has the following statistics:

  • Sentences: 2,007,723
  • French words: 51,388,643
  • English words: 50,196,035

You can download the dataset from here:

Once downloaded, you should have the file “fr-en.tgz” in your current working directory.

You can unzip this archive file using the tar command, as follows:
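
For example, assuming the archive sits in the current working directory, something like the following command should extract it (the exact flags may vary slightly by platform):

    tar xzvf fr-en.tgz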

You will now have two files, as follows:

  • English: europarl-v7.fr-en.en (288M)
  • French: europarl-v7.fr-en.fr (331M)

Below is a sample of the English file.

Below is a sample of the French file.

Load Dataset

Let’s start off by loading the data files.

We can load each file as a string. Because the files contain unicode characters, we must specify an encoding when loading them as text. In this case, we will use UTF-8, which easily handles the unicode characters in both files.

The function below, named load_doc(), will load a given file and return it as a blob of text.
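
A minimal sketch of load_doc(), using the UTF-8 encoding discussed above, might look like this:

    # load doc into memory as a single blob of text
    def load_doc(filename):
        # open the file as read-only text with utf-8 encoding
        file = open(filename, mode='rt', encoding='utf-8')
        # read all text
        text = file.read()
        # close the file
        file.close()
        return text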

Next, we can split the file into sentences.

Generally, one utterance is stored on each line. We can treat these as sentences and split the file by new line characters. The function to_sentences() below will split a loaded document.
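
One possible implementation simply strips surrounding whitespace and splits on newline characters:

    # split a loaded document into sentences, one per line
    def to_sentences(doc):
        return doc.strip().split('\n')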

When preparing our model later, we will need to know the length of sentences in the dataset. We can write a short function to calculate the shortest and longest sentences.
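
For example, a small helper along these lines (the name sentence_lengths() is chosen here for illustration) returns the minimum and maximum sentence length in words:

    # shortest and longest sentence lengths, measured in words
    def sentence_lengths(sentences):
        lengths = [len(s.split()) for s in sentences]
        return min(lengths), max(lengths)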

We can tie all of this together to load and summarize the English and French data files. The complete example is listed below.
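
A sketch of such a complete example, assembled from the pieces above, might look like the following:

    # load doc into memory as a single blob of text
    def load_doc(filename):
        file = open(filename, mode='rt', encoding='utf-8')
        text = file.read()
        file.close()
        return text

    # split a loaded document into sentences, one per line
    def to_sentences(doc):
        return doc.strip().split('\n')

    # shortest and longest sentence lengths, measured in words
    def sentence_lengths(sentences):
        lengths = [len(s.split()) for s in sentences]
        return min(lengths), max(lengths)

    # load and summarize the English data
    doc = load_doc('europarl-v7.fr-en.en')
    sentences = to_sentences(doc)
    minlen, maxlen = sentence_lengths(sentences)
    print('English data: sentences=%d, min=%d, max=%d' % (len(sentences), minlen, maxlen))

    # load and summarize the French data
    doc = load_doc('europarl-v7.fr-en.fr')
    sentences = to_sentences(doc)
    minlen, maxlen = sentence_lengths(sentences)
    print('French data: sentences=%d, min=%d, max=%d' % (len(sentences), minlen, maxlen))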

Running the example summarizes the number of lines or sentences in each file and the length of the longest and shortest lines in each file.

Importantly, we can see that the number of lines, 2,007,723, matches our expectation.

Clean Dataset

The data needs some minimal cleaning before being used to train a neural translation model.

Looking at some samples of text, some minimal text cleaning may include:

  • Tokenizing text by white space.
  • Normalizing case to lowercase.
  • Removing punctuation from each word.
  • Removing non-printable characters.
  • Normalizing accented French characters to their Latin (ASCII) equivalents.
  • Removing words that contain non-alphabetic characters.

These are just some basic operations as a starting point; you may know of or require more elaborate data cleaning operations.

The function clean_lines() below implements these cleaning operations. Some notes:

  • We use the unicodedata API to normalize unicode characters, which converts accented French characters to their Latin equivalents.
  • We use an inverse regex match to retain only the printable characters in each word.
  • We use a translation table that maps characters to themselves but removes all punctuation characters.
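
A sketch of clean_lines() implementing these operations (you may wish to adapt it to your own cleaning needs) could look like this:

    import re
    import string
    from unicodedata import normalize

    # clean a list of lines
    def clean_lines(lines):
        cleaned = list()
        # regex for keeping only printable characters
        re_print = re.compile('[^%s]' % re.escape(string.printable))
        # translation table that removes all punctuation characters
        table = str.maketrans('', '', string.punctuation)
        for line in lines:
            # normalize unicode characters, converting accented chars to latin equivalents
            line = normalize('NFD', line).encode('ascii', 'ignore')
            line = line.decode('UTF-8')
            # tokenize on white space
            line = line.split()
            # normalize case to lowercase
            line = [word.lower() for word in line]
            # remove punctuation from each token
            line = [word.translate(table) for word in line]
            # remove non-printable characters from each token
            line = [re_print.sub('', w) for w in line]
            # remove tokens that contain non-alphabetic characters
            line = [word for word in line if word.isalpha()]
            # store the cleaned line as a single string
            cleaned.append(' '.join(line))
        return cleaned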

Once normalized, we save the lists of clean lines directly in binary format using the pickle API. This will speed up loading for further operations later.
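
A small helper for this (the same save_clean_sentences() signature is referenced in the comments below) might be:

    from pickle import dump

    # save a list of clean sentences to file in binary format
    def save_clean_sentences(sentences, filename):
        dump(sentences, open(filename, 'wb'))
        print('Saved: %s' % filename)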

Reusing the loading and splitting functions developed in the previous sections, the complete example is listed below.
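
Reusing load_doc(), to_sentences(), clean_lines(), and save_clean_sentences() from above, the driver portion of such an example might look like this:

    # clean and save the English data
    doc = load_doc('europarl-v7.fr-en.en')
    sentences = to_sentences(doc)
    sentences = clean_lines(sentences)
    save_clean_sentences(sentences, 'english.pkl')
    # spot check the first few clean sentences
    for i in range(10):
        print(sentences[i])

    # clean and save the French data
    doc = load_doc('europarl-v7.fr-en.fr')
    sentences = to_sentences(doc)
    sentences = clean_lines(sentences)
    save_clean_sentences(sentences, 'french.pkl')
    # spot check the first few clean sentences
    for i in range(10):
        print(sentences[i])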

After running, the clean sentences are saved to the files english.pkl and french.pkl respectively.

As part of the run, we also print the first few lines of each list of clean sentences, reproduced below.

English:

French:

My reading of French is very limited, but at least as far as the English is concerned, further improvements could be made, such as dropping or concatenating hanging ‘s‘ characters for plurals.

Reduce Vocabulary

As part of the data cleaning, it is important to constrain the vocabulary of both the source and target languages.

The difficulty of the translation task grows with the size of the vocabularies, which in turn impacts model training time and the size of the dataset required to make the model viable.

In this section, we will reduce the vocabulary of both the English and French text and mark all out of vocabulary (OOV) words with a special token.

We can start by loading the pickled clean lines saved from the previous section. The load_clean_sentences() function below will load and return a list for a given filename.
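
A minimal version of this function might be:

    from pickle import load

    # load a pickled list of clean sentences
    def load_clean_sentences(filename):
        return load(open(filename, 'rb'))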

Next, we can count the occurrence of each word in the dataset. For this, we can use a Counter object, a Python dictionary subclass keyed on words that updates a count each time a new occurrence of a word is added.

The to_vocab() function below creates a vocabulary for a given list of sentences.
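
A sketch of to_vocab() built on a Counter might look like this:

    from collections import Counter

    # create a word-frequency table for a list of sentences
    def to_vocab(lines):
        vocab = Counter()
        for line in lines:
            tokens = line.split()
            vocab.update(tokens)
        return vocab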

We can then process the created vocabulary and remove all words from the Counter that have an occurrence below a specific threshold.

The trim_vocab() function below does this and accepts a minimum occurrence count as a parameter and returns an updated vocabulary.
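
One way to write trim_vocab() (the parameter name min_occurrence is chosen here for illustration) is to keep only the words at or above the threshold and return them as a set, which makes the later membership tests fast:

    # keep only words that occur at least min_occurrence times
    def trim_vocab(vocab, min_occurrence):
        tokens = [k for k, c in vocab.items() if c >= min_occurrence]
        return set(tokens)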

Finally, we can update the sentences by removing all words not in the trimmed vocabulary and marking their removal with a special token, in this case the string “unk“.

The update_dataset() function below performs this operation and returns a list of updated lines that can then be saved to a new file.
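
A sketch of update_dataset() might look like this:

    # replace all out-of-vocabulary words with the token 'unk'
    def update_dataset(lines, vocab):
        new_lines = list()
        for line in lines:
            new_tokens = [token if token in vocab else 'unk' for token in line.split()]
            new_lines.append(' '.join(new_tokens))
        return new_lines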

We can tie all of this together and reduce the vocabulary for both the English and French dataset and save the results to new data files.

We will use a minimum occurrence count of 5, but you are free to explore other thresholds suitable for your application.

The complete code example is listed below.
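
Reusing load_clean_sentences(), to_vocab(), trim_vocab(), and update_dataset() from above, together with save_clean_sentences() from the previous section, the driver portion of such an example might look like this:

    # load, reduce, and save the English dataset
    lines = load_clean_sentences('english.pkl')
    vocab = to_vocab(lines)
    print('English Vocabulary: %d' % len(vocab))
    vocab = trim_vocab(vocab, 5)
    print('New English Vocabulary: %d' % len(vocab))
    lines = update_dataset(lines, vocab)
    save_clean_sentences(lines, 'english_vocab.pkl')
    # spot check some updated examples
    for i in range(10):
        print(lines[i])

    # repeat the procedure for the French dataset
    lines = load_clean_sentences('french.pkl')
    vocab = to_vocab(lines)
    print('French Vocabulary: %d' % len(vocab))
    vocab = trim_vocab(vocab, 5)
    print('New French Vocabulary: %d' % len(vocab))
    lines = update_dataset(lines, vocab)
    save_clean_sentences(lines, 'french_vocab.pkl')
    # spot check some updated examples
    for i in range(10):
        print(lines[i])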

First, the size of the English vocabulary is reported, followed by the updated size. The updated dataset is saved to the file ‘english_vocab.pkl‘, and a spot check of some updated examples with out-of-vocabulary words replaced by “unk” is printed.

We can see that the size of the vocabulary was shrunk by about half to a little over 40,000 words.

The same procedure is then performed on the French dataset, saving the result to the file ‘french_vocab.pkl‘.

We see a similar shrinking of the size of the French vocabulary.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Summary

In this tutorial, you discovered the Europarl machine translation dataset and how to prepare the data for modeling.

Specifically, you learned:

  • The Europarl dataset consists of the proceedings of the European Parliament in 11 languages.
  • How to load and clean the parallel French and English transcripts so that they are ready for modeling in a neural machine translation system.
  • How to reduce the vocabulary size of both the French and English data in order to reduce the complexity of the translation task.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.

Develop Deep Learning models for Text Data Today!

Deep Learning for Natural Language Processing

Develop Your Own Text models in Minutes

...with just a few lines of Python code

Discover how in my new Ebook:
Deep Learning for Natural Language Processing

It provides self-study tutorials on topics like:
Bag-of-Words, Word Embedding, Language Models, Caption Generation, Text Translation and much more...

Finally Bring Deep Learning to your Natural Language Processing Projects

Skip the Academics. Just Results.

See What's Inside

56 Responses to How to Prepare a French-to-English Dataset for Machine Translation

  1. Gerrit Govaerts January 8, 2018 at 6:54 pm #

    A bit off topic , but very sharp observations about the remarkable success of recurrent and convolutional neural nets and why basic multi layer perceptrons are probably not worth the effort : http://www.stochasticlifestyle.com/algorithm-efficiency-comes-problem-information/

  2. Klaas January 9, 2018 at 6:20 am #

    This is again outstanding work Jason. Thanks so much for sharing. This is really really helpful. Especially for people like me who lack the scientific / mathematic background but are very interested in learning this none the less.
    Highly appreciate your work!
    Best regards

    • Jason Brownlee January 9, 2018 at 3:17 pm #

      Thanks, I’m glad it helped.

      • riyaj atar December 10, 2020 at 3:27 am #

        thanks Jason . once again its great tutorial.
        i have one question regarding creating dataset in tfds format for parallel corpus for machine translation task.
        can you give few steps how to create dataset in that format for our own dataset ?
        i appreciate your time and efforts.
        thanks .stay healthy.

        • Jason Brownlee December 10, 2020 at 6:30 am #

          Thanks.

          Sorry, I don’t understand your question. Can you please elaborate on the problem you’re having?

  3. Vidyush Bakshi January 9, 2018 at 9:38 pm #

    Great work again , really good explaination!!

  4. Canbey Bilgili January 13, 2018 at 1:25 am #

    Great article. It is a good source for preparing data. Thank you!

  5. LeeX January 22, 2018 at 2:28 am #

    China Researchers very appreciate your tutorials!

  6. Nixon February 7, 2018 at 4:13 am #

    Hi brother i am new learner how study machine learning to easy way please help me

  7. mzeid February 26, 2018 at 11:48 am #

    Hi Jason,

    This is a wonderful article indeed. I am trying to follow your guide on English>Arabic data, but the function ‘clean_lines(lines)’ when used with Arabic text, doesn’t yield any results. Any idea how to fix this in Arabic?

    Thanks in advance!

    • Jason Brownlee February 26, 2018 at 2:55 pm #

      Sorry, I have not worked with Arabic. Perhaps the function needs to be updated to support unicode chars?

  8. machine_translator April 6, 2018 at 9:19 pm #

    Thanks a lot for this very clear tutorial on how to prepare data for machine translation. What would be the next steps? Are similar tutorials available for those steps?

  9. Zayed April 11, 2018 at 8:45 am #

    Great and useful tutorial.

    I would like to save the files to plain text files ‘.txt’ in UTF-8 format and I don’t need pickle files.

    What do I need to change in the code above to make it output text files?

    • Jason Brownlee April 11, 2018 at 4:15 pm #

      Perhaps you could save the vocab with one line per word.

      You could save the translations one per line.

      To do this, you could write a function to save a list to an ASCII file using the standard Python API and call it instead of the pickle function.

      • Zayed April 12, 2018 at 2:53 am #

        Thanks Jason for your reply. I don’t need even the vocab. I am stuck at this function.

        # save a list of clean sentences to file
        def save_clean_sentences(sentences, filename):
        dump(sentences, open(filename, ‘wb’))
        print(‘Saved: %s’ % filename)

        I tried this and I can see the spot check result (10 lines of each language) in PyCharm, but nothing is written to the files.

        def save_clean_sentences(sentences, filename):
        f = open(filename, ‘r+’)
        for line in f:
        f.write(line[i], ‘r+’)
        f.write(‘\n’)

        What am I missing?

        Thanks again for taking the time to support me.

  10. Zayed April 14, 2018 at 3:38 am #

    I have a question about removing punctuation from data. In your example above, you see sentences like this:

    “please rise then for this minute s silence”
    “the house rose and observed a minute s silence”

    As you can see, apostrophe is removed from sentences. So, does this mean that I try to translate the same sentence, but with the apostrophe “please rise then for this minute’s silence”, the neural decoder won’t be able to pick the correct French translation or the translation would be different since now the source is a bit different.

    Would the translation be different, if the same source sentence has a period at the end or starts with a capital letter? For example:

    Please rise then for this minute’s silence
    Please rise then for this minute’s silence.

    Is removing punctuation from training data the standard? Does it improve the overall quality? Any pointers please!

    • Jason Brownlee April 14, 2018 at 6:50 am #

      I removed it (or it was removed from the training data prior, I don’t recall) to simplify the problem.

      I would recommend adding it back in (not stripping it from the training data, if it is present or get data with punctuation) to learn the translation with punctuation.

      It is standard when focusing on the translation part, but not for a working real-world model.

      Alternately, you could develop a model to add punctuation back in.

      • Zayed April 14, 2018 at 7:17 am #

        Thanks Jason! This makes sense.

        I have another question about lower-casing. If you lower-case all training data, would the neural decoder be able to capitalize the beginning of the target sentence or keep unknown words that were not seen in the training data as is? Or is it going to lower-case all words during decoding/translation? Say for example we have this sentence:

        IBM is providing AI services.

        Would the neural decoder be able to render IBM and AI as is if it was trained on lower-cased data only?

        Thanks again for your support and I hope you don’t mind my frequent questions. Please also let me know which one of your books cover neural machine translation in detail? I am mainly interested in creating neural machine translation systems and neural spell-checker. Do you cover neural-based spell-checking in any of your books?

        Thanks again!

        • Jason Brownlee April 15, 2018 at 6:17 am #

          If all training data is lower case, then the model only knows about lower case.

          If case is important, you can train with case preserved or train a model to add case to lower case strings, or other clever ideas…

  11. Dominique Lahaix September 20, 2018 at 7:50 am #

    Hi Jason – got a question maybe you can help?

    we’ve build a system using ML that automatically categorize short documents. we’ve done it in English and we need to do the same for French. We did supervised learning using a very large corpus of manually annotated documents.

    Unfortunately our French training set is way smaller … so I was wondering whether we could:

    – translate the training set and use (complement) this a sa training set for French
    – translate the model itself (I even don’t know whether this is an option)

    Have you heard about people using translated training set to build model? does it work OK?
    Thanks

    • Jason Brownlee September 20, 2018 at 8:11 am #

      Sound like great ideas!

      Maybe generate new data to train with, as augmented versions of your existing documents.

      Also, when using smaller datasets, consider regularization methods to ensure you do not overfit the training data.

  12. simran December 26, 2018 at 3:44 pm #

    greetings sir,
    i am doing my Ph.D in Corpus linguistics using machine learning. i need help for developing a preprocessing algorithm for corpus before translation.

    • Jason Brownlee December 27, 2018 at 5:39 am #

      It is not an algorithm; instead, it is a sequence of preprocessing steps that are most appropriate for your specific dataset.

  13. Prashant Kumar Singh March 14, 2019 at 9:45 pm #

    Hi Jason, Is it possible to translate language from English to other language i.e. French.

    My project example is as below;

    I’m working in existdb and generating the PDF files which is published globally on the webpage. BUT I want to change that PDF content language country wise. So, is it possible to do with your blog.

    The changeling task is how to club python (currently in your blog) within existdb (Open source database). OR any other way to do this. Please help me to understand.

    Thanks,
    Prashant

    • Jason Brownlee March 15, 2019 at 5:30 am #

      Thanks for the suggestion.

      • Prashant Kumar Singh March 20, 2019 at 4:07 pm #

        Hi Jason,

        Can you please answer me on below point which will be helpful for me;

        Question: Is it possible to connect ML model to my webpage, which is based on exist (XML) database content? And please suggest me the steps to follow.

        Thanks,
        Prashant

        • Jason Brownlee March 21, 2019 at 7:58 am #

          I don’t see why not.

          It sounds like an engineering question and will depend on the specifics of your production environment. I don’t have a worked example, sorry.

  14. anvesh June 10, 2019 at 7:27 pm #

    can we use a english to french pretained model to train on my small different dataset then translate english to any other language

    • Jason Brownlee June 11, 2019 at 7:52 am #

      It might help as a starting point, but further training will be required.

  15. Dani Gross June 15, 2019 at 6:44 pm #

    Hi Jason,
    thank you for your tutorials!

    How do you handle sentence alignment with this corpus, given that it contains empty strings?

    • Jason Brownlee June 16, 2019 at 7:12 am #

      Sorry, I don’t have a tutorial on sentence alignment, so I cannot give you good off-the-cuff advice.

  16. Rishai August 27, 2019 at 9:07 am #

    Hi Jason, thanks for all these tutorials. Would you have a tutorial on how to go the next step, of converting tokens/words to integer vectors, so that they can be passed into an Embedding layer?

  17. Sreenivas Kashyap October 26, 2019 at 3:45 am #

    Hi Jason,
    Good Work , I’m Developing Translation Model From Kannada to English But The tokenizer doesn’t work while split the text.
    OUTPUT IS LIKE THIS:

    Saved: english-kannada.pkl
    [tom woke up] => []
    [give me half] => []
    [we needed it] => []
    [tom liked you] => []
    [just go inside] => []
    [do you remember] => []
    [i just got back] => []
    [see you at] => []
    Can you suggest me in solving the above problem..

  18. Nithin December 5, 2019 at 5:58 am #

    Hello Jason,

    Thanks for the tutorial and a good explanation.
    I would greatly appreciate if you can clarify the following doubts:
    1) How will tokenizing the less frequent words with ‘unk’ affect the model accuracy, because now the token ‘unk’ might be a significant proportion of the data.
    2) Do you have any comments on using character level encoding vs work level encodings.
    3) With a larger vocabulary size (around 0.1M), are there techniques and softwares to use the sparse representation of words, instead of one-hot encoding, to reduce the memory requirements while training.

    Thanks

    • Jason Brownlee December 5, 2019 at 6:44 am #

      Remove those words from the vocab, then mark words not in the vocab as unk when pre-processing text for modeling.

      Modeling at the word level is more efficient for now, as far as I know.

      100K vocab is modest. Don’t worry about it.

      • Nithin December 7, 2019 at 4:55 am #

        Hello Jason,

        Thanks for the answers.
        We are working on using encoder decoder model on europarl. We are using a 2 GRU layers with 128 cells and a time distributed layer. The vocabulary for French after following the tutorial is around 50K (with 5 as threshold). We want to train this on a single GPU with 12 GB of GPU memory, but with batch sizes of 16 or 32, the GPU memory fills up and gives the memory full error.
        The most probable reason that we suspect for this is because of the one hot representation of each word as a vector of 50K dimension.
        Also the time distributed layer has shape (None, 528 (max size of French sentence), 50K (output vector size = vocabulary of French).
        We wanted to know if there is any way by which we can avoid this (for ex. like libSVM) or more efficient representations for RNN with large vocabulary size to help in training with large batch sizes.

        Thank you

        • Jason Brownlee December 7, 2019 at 5:41 am #

          Perhaps use a generator to achieve progressive loading of batches?

          • Nithin December 7, 2019 at 7:59 am #

            Yes, we are using generator. The module fails saying that it faced out of memory while creating a tensor of size [19136,58802] ([(max French sentence length)*batch_size, French vocabulary size]), for max_french_sentence_size around 600 and batch size 1024. Which seems correct, as this matrix would be around 8GB for 8byte float representation. So we were wondering if there is any way to solve this issue and what are the state-of-the-art methods to solve these issues.

            Thank you

          • Jason Brownlee December 8, 2019 at 6:03 am #

            Perhaps try using a small vocab?
            Perhaps try using a smaller batch size?
            Perhaps try using a smaller sentence length?
            Perhaps try training on a machine with more memory?

  19. Jane January 16, 2020 at 1:28 pm #

    Is this considered as a seq2seq model?

  20. S.Gowri pooja May 17, 2020 at 11:09 am #

    Hi,jason . Thanks for sharing this article. How this pre process model vary for different languages?

  21. ARABA AMAN December 24, 2021 at 6:12 pm #

    I working on machine translation between Amharic and Afaan Oromo
    I preprare dataset on diffrent sheet with its corresponding target languages

    so how can I train from two file name for neural machine translation?
    that means how to feed this cleaned sentence to train model
