A Gentle Introduction to Neural Machine Translation

One of the earliest goals for computers was the automatic translation of text from one language to another.

Automatic or machine translation is perhaps one of the most challenging artificial intelligence tasks given the fluidity of human language. Classically, rule-based systems were used for this task, which were replaced in the 1990s with statistical methods. More recently, deep neural network models achieve state-of-the-art results in a field that is aptly named neural machine translation.

In this post, you will discover the challenge of machine translation and the effectiveness of neural machine translation models.

After reading this post, you will know:

  • Machine translation is challenging given the inherent ambiguity and flexibility of human language.
  • Statistical machine translation replaces classical rule-based systems with models that learn to translate from examples.
  • Neural machine translation models fit a single model rather than a pipeline of fine-tuned models and currently achieve state-of-the-art results.

Kick-start your project with my new book Deep Learning for Natural Language Processing, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

A Gentle Introduction to Neural Machine Translation
Photo by Fabio Achilli, some rights reserved.

What is Machine Translation?

Machine translation is the task of automatically converting source text in one language to text in another language.

In a machine translation task, the input already consists of a sequence of symbols in some language, and the computer program must convert this into a sequence of symbols in another language.

— Page 98, Deep Learning, 2016.

Given a sequence of text in a source language, there is no single best translation of that text into another language. This is because of the natural ambiguity and flexibility of human language, which makes automatic machine translation difficult, perhaps one of the most difficult problems in artificial intelligence:

The fact is that accurate translation requires background knowledge in order to resolve ambiguity and establish the content of the sentence.

— Page 21, Artificial Intelligence: A Modern Approach, 3rd Edition, 2009.

Classical machine translation methods often involve rules for converting text in the source language to the target language. The rules are often developed by linguists and may operate at the lexical, syntactic, or semantic level. This focus on rules gives the name to this area of study: Rule-based Machine Translation, or RBMT.

RBMT is characterized with the explicit use and manual creation of linguistically informed rules and representations.

— Page 133, Handbook of Natural Language Processing and Machine Translation, 2011.

The key limitations of the classical machine translation approaches are both the expertise required to develop the rules, and the vast number of rules and exceptions required.

What is Statistical Machine Translation?

Statistical machine translation, or SMT for short, is the use of statistical models that learn to translate text from a source language to a target language given a large corpus of examples.

The use of a statistical model for translation can be stated formally as follows:

Given a sentence T in the target language, we seek the sentence S from which the translator produced T. We know that our chance of error is minimized by choosing that sentence S that is most probable given T. Thus, we wish to choose S so as to maximize Pr(S|T).

— A Statistical Approach to Machine Translation, 1990.

This formal specification makes explicit the goal of maximizing the probability of the output sequence given the input sequence of text. It also makes explicit the notion of a suite of candidate translations and the need for a search process, or decoder, to select the single most likely translation from the model’s output probability distribution.
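
To make the decoder idea concrete: by Bayes’ rule, maximizing Pr(S|T) is equivalent to maximizing Pr(T|S) * Pr(S), since Pr(T) is fixed. This splits the problem into a translation model, Pr(T|S), and a language model, Pr(S), whose scores the decoder combines while searching over candidate translations. Below is a toy Python sketch of that scoring step; the candidate sentences and all probabilities are invented purely for illustration.

    # Toy sketch of noisy-channel scoring: pick the source sentence S that
    # maximizes Pr(T|S) * Pr(S) for one fixed target sentence T. All values
    # below are invented for illustration only.

    # Hypothetical translation-model scores Pr(T|S) for one fixed T
    translation_model = {
        "the house": 0.7,
        "the home": 0.6,
        "a house": 0.3,
    }

    # Hypothetical language-model scores Pr(S) for each candidate S
    language_model = {
        "the house": 0.5,
        "the home": 0.2,
        "a house": 0.3,
    }

    def decode(candidates):
        """Return the candidate S maximizing Pr(T|S) * Pr(S), which is
        proportional to Pr(S|T) by Bayes' rule (Pr(T) is constant)."""
        return max(candidates, key=lambda s: translation_model[s] * language_model[s])

    print(decode(translation_model))  # 'the house': 0.7 * 0.5 = 0.35 is the best score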

Given a text in the source language, what is the most probable translation in the target language? […] how should one construct a statistical model that assigns high probabilities to “good” translations and low probabilities to “bad” translations?

— Page xiii, Syntax-based Statistical Machine Translation, 2017.

The approach is data-driven, requiring only a corpus of examples with both source and target language text. This means linguists are no longer required to specify the rules of translation.

This approach does not need a complex ontology of interlingua concepts, nor does it need handcrafted grammars of the source and target languages, nor a hand-labeled treebank. All it needs is data—sample translations from which a translation model can be learned.

— Page 909, Artificial Intelligence: A Modern Approach, 3rd Edition, 2009.

The statistical approach to machine translation quickly outperformed the classical rule-based methods to become the de facto standard set of techniques.

Since the inception of the field at the end of the 1980s, the most popular models for statistical machine translation […] have been sequence-based. In these models, the basic units of translation are words or sequences of words […] These kinds of models are simple and effective, and they work well for many language pairs

— Syntax-based Statistical Machine Translation, 2017.

The most widely used techniques were phrase-based and focused on translating sub-sequences of the source text piecewise.

Statistical Machine Translation (SMT) has been the dominant translation paradigm for decades. Practical implementations of SMT are generally phrase-based systems (PBMT) which translate sequences of words or phrases where the lengths may differ

— Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016.

Although effective, statistical machine translation methods suffered from a narrow focus on the phrases being translated, losing the broader nature of the target text. The hard focus on data-driven approaches also meant that methods may have ignored important syntax distinctions known by linguists. Finally, the statistical approaches required careful tuning of each module in the translation pipeline.

What is Neural Machine Translation?

Neural machine translation, or NMT for short, is the use of neural network models to learn a statistical model for machine translation.

The key benefit of the approach is that a single system can be trained directly on source and target text, no longer requiring the pipeline of specialized systems used in statistical machine translation.

Unlike the traditional phrase-based translation system which consists of many small sub-components that are tuned separately, neural machine translation attempts to build and train a single, large neural network that reads a sentence and outputs a correct translation.

— Neural Machine Translation by Jointly Learning to Align and Translate, 2014.

As such, neural machine translation systems are said to be end-to-end systems as only one model is required for the translation.

The strength of NMT lies in its ability to learn directly, in an end-to-end fashion, the mapping from input text to associated output text.

— Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016.

Encoder-Decoder Model

Multilayer Perceptron neural network models can be used for machine translation, although these models are limited to a fixed-length input sequence, where the output must be of the same length.

These early models have been greatly improved upon recently through the use of recurrent neural networks organized into an encoder-decoder architecture that allows for variable-length input and output sequences.

An encoder neural network reads and encodes a source sentence into a fixed-length vector. A decoder then outputs a translation from the encoded vector. The whole encoder–decoder system, which consists of the encoder and the decoder for a language pair, is jointly trained to maximize the probability of a correct translation given a source sentence.

— Neural Machine Translation by Jointly Learning to Align and Translate, 2014.

Key to the encoder-decoder architecture is the ability of the model to encode the source text into an internal fixed-length representation called the context vector. Interestingly, once encoded, different decoding systems could be used, in principle, to translate the context into different languages.

… one model first reads the input sequence and emits a data structure that summarizes the input sequence. We call this summary the “context” C. […] A second model, usually an RNN, then reads the context C and generates a sentence in the target language.

— Page 461, Deep Learning, 2016.
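
As a concrete illustration of the architecture in the quote above, here is a minimal sketch in Python with Keras (assuming TensorFlow 2.x is installed). The vocabulary sizes and dimensions are arbitrary placeholders, not values from any published system, and data preparation and the inference loop are omitted.

    # A minimal sketch of an encoder-decoder model in Keras. All sizes
    # below are arbitrary placeholders.
    from tensorflow.keras.layers import Dense, Embedding, Input, LSTM
    from tensorflow.keras.models import Model

    src_vocab, tgt_vocab, latent_dim = 5000, 5000, 256

    # Encoder: reads the source sentence and summarizes it into a
    # fixed-length internal state (the "context" C).
    encoder_inputs = Input(shape=(None,))
    encoder_embed = Embedding(src_vocab, latent_dim)(encoder_inputs)
    _, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_embed)
    context = [state_h, state_c]

    # Decoder: initialized with the context, outputs the translation one
    # token at a time (teacher forcing supplies the previous target word).
    decoder_inputs = Input(shape=(None,))
    decoder_embed = Embedding(tgt_vocab, latent_dim)(decoder_inputs)
    decoder_hidden = LSTM(latent_dim, return_sequences=True)(decoder_embed, initial_state=context)
    decoder_outputs = Dense(tgt_vocab, activation="softmax")(decoder_hidden)

    # The encoder and decoder are trained jointly to maximize the
    # probability of a correct translation given a source sentence.
    model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

At training time, the decoder input is the target sentence shifted right by one word (teacher forcing); at inference time, words are generated one at a time and fed back into the decoder.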

For more on the Encoder-Decoder recurrent neural network architecture, see the dedicated post on this blog.

Encoder-Decoders with Attention

Although effective, the Encoder-Decoder architecture has problems with long sequences of text to be translated.

The problem stems from the fixed-length internal representation that must be used to decode each word in the output sequence.

The solution is the use of an attention mechanism that allows the model to learn where to place attention on the input sequence as each word of the output sequence is decoded.

Using a fixed-sized representation to capture all the semantic details of a very long sentence […] is very difficult. […] A more efficient approach, however, is to read the whole sentence or paragraph […], then to produce the translated words one at a time, each time focusing on a different part of the input sentence to gather the semantic details required to produce the next output word.

— Page 462, Deep Learning, 2016.
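
To make the mechanism concrete, here is a minimal NumPy sketch of dot-product attention, one common scoring variant (the 2014 paper scores alignments with a small learned network instead); all shapes and values are illustrative only.

    # A minimal NumPy sketch of dot-product attention; shapes and values
    # are illustrative placeholders.
    import numpy as np

    def attend(decoder_state, encoder_states):
        """Return a context vector: encoder states weighted by their
        relevance to the current decoder state."""
        # Alignment scores: how well each source position matches the
        # current decoding step (here, a simple dot product).
        scores = encoder_states @ decoder_state   # shape: (src_len,)
        # Softmax turns scores into attention weights that sum to 1.
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        # The context vector is recomputed like this for every output word.
        return weights @ encoder_states           # shape: (hidden,)

    encoder_states = np.random.randn(6, 256)  # one hidden state per source word
    decoder_state = np.random.randn(256)      # decoder state at the current step
    print(attend(decoder_state, encoder_states).shape)  # (256,)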

The encoder-decoder recurrent neural network architecture with attention is currently the state-of-the-art on some benchmark problems for machine translation. This architecture is used at the heart of the Google Neural Machine Translation system, or GNMT, which powers their Google Translate service (https://translate.google.com).

… current state-of-the-art machine translation systems are powered by models that employ attention.

— Page 209, Neural Network Methods in Natural Language Processing, 2017.

For more on attention, see the dedicated post on this blog.

Although effective, neural machine translation systems still suffer from some issues, such as scaling to larger vocabularies of words and the slow speed of training the models. These are the current areas of focus for large production neural translation systems, such as the Google system.

Three inherent weaknesses of Neural Machine Translation […]: its slower training and inference speed, ineffectiveness in dealing with rare words, and sometimes failure to translate all words in the source sentence.

— Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Books

  • Deep Learning, 2016.
  • Artificial Intelligence: A Modern Approach, 3rd Edition, 2009.
  • Handbook of Natural Language Processing and Machine Translation, 2011.
  • Syntax-based Statistical Machine Translation, 2017.
  • Neural Network Methods in Natural Language Processing, 2017.

Papers

  • A Statistical Approach to Machine Translation, 1990.
  • Neural Machine Translation by Jointly Learning to Align and Translate, 2014.
  • Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016.

Summary

In this post, you discovered the challenge of machine translation and the effectiveness of neural machine translation models.

Specifically, you learned:

  • Machine translation is challenging given the inherent ambiguity and flexibility of human language.
  • Statistical machine translation replaces classical rule-based systems with models that learn to translate from examples.
  • Neural machine translation models fit a single model rather than a pipeline of fine-tuned models and currently achieve state-of-the-art results.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


31 Responses to A Gentle Introduction to Neural Machine Translation

  1. Roberto Mariani January 1, 2018 at 3:17 am

    Given a database of hundreds of millions of lines of short sentences with a limited vocabulary of 20,000 words, do you think it is better to investigate a character-level RNN or a word-based RNN? What does your intuition tell you?

    • Jason Brownlee January 1, 2018 at 5:29 am

      Start with words and go to char to see if it can lift skill or simplify the model.

  2. Rodolfo Maslias January 2, 2018 at 6:03 pm

    I shared your interesting article on my Fb page European Terminology.

  3. Dan Baez January 10, 2018 at 4:08 pm

    Great post Jason, machinelearningmastery.com has become my new home for practical learning as I am starting to get a hold of some ML techniques. A suggestion from me that may help others…

    Could you look at putting together a simple tutorial on developing ‘production’-ready models? For example, once a model has been developed, how does one go about updating it with new data and using the model for ongoing classification and prediction with new data? Some methods I have stumbled across are manually updating new inputs into the code, manually updating new inputs into a .CSV file and, for bigger datasets, updating new data into a .H5 file that the model recognises. This would help take the enormous learnings you offer to a level where the models become an ongoing tool for work or research… definitely something I have not yet mastered!

  4. Ali July 25, 2018 at 6:03 pm

    Sir, your post is very informative, and it gives me novel intuitions into this area.
    Thank you very much for sharing your knowledge.
    I’m completely new in this field.
    Actually, I used to translate research papers and articles as my freelance job, so I have no academic background in the computer science field.
    However, I have been interested in machine learning for the past two years. I have worked with Python and attended some online courses. The whole field is full of joy, and challenges, of course.
    I’m not a native English speaker, as can be inferred from my English writing skills; sorry for that.
    My first language is Persian (Farsi), and Persian has no ASCII representation. We use the Unicode charset, just like Arabic.
    I was wondering if the aforementioned issue (lack of ASCII support) and the special properties of the Persian language (e.g., its syntax, which is very different from that of English, Spanish, French, or even Arabic) would affect the NLP techniques and algorithms used in translation services like Google Translate?
    I think the Google service translates the English-Arabic pair much better than the English-Persian pair, and I feel like it has nothing to do with the volume of data (Persian texts in particular) provided for the engine.
    Also, I would really like to develop a minimal machine translation project (for my research purposes), but I have no idea in terms of the best algorithms, platforms, or techniques.
    It would be useful if you share your opinion with us on this particular matter, and I would really appreciate that.
    Again, thank you for the intuitive information you post here.
    Best Wishes,
    Ali from Persia

    • Jason Brownlee July 26, 2018 at 7:39 am

      It is an interesting question and not something I know much about. Off the cuff, I would try to model the problem using Unicode instead of chars, but I’d encourage you to read up in the literature on how it is addressed generally.

  5. Bob Hodgson September 6, 2018 at 5:23 am

    Dear Jason

    Do you have any thoughts on the usefulness of NMT for the task of Bible translation? I consult for a Bible translation agency and am eager to show the application of NMT to the production of first-draft translations in small and threatened languages of the world. FYI: the Hebrew Bible has only about 6,000+ discrete words, the Christian New Testament about the same number. Many of the small and endangered languages have about the same number of discrete words.

    • Jason Brownlee September 6, 2018 at 5:42 am

      I don’t know. Perhaps prototype some models and see how well they perform.

  6. Dario September 7, 2018 at 11:37 pm

    Hi Jason, would NMT be a good method for code translation from one language to another, let’s say from R to Python? Thanks

  7. Buli Diriba January 19, 2019 at 7:25 am

    Hello Jason,

    Thanks for the post; it’s very constructive and interesting, and it gives me a good understanding, but I have some questions on Neural Machine Translation.

    1. As I understand it, in NMT we don’t need a separate language model, so how does a decoder learn the grammar of the target language while predicting the next word? Or does a seq2seq model not need to learn the grammar of a language?

    • Jason Brownlee January 19, 2019 at 8:18 am

      It learns a conditional probabilistic model, e.g. output the next word conditioned on the input and on the words generated so far.

  8. Ben Johnson August 11, 2019 at 12:23 pm

    Hello Jason:

    As luck would have it, I’m glad I came across your informative post. It is a good introduction–thanks to your good analysis and gentle approach (your headline got me here).

    I have been translating from Japanese to English for about 40 years now, and since the beginning of MT, I do see surprising progress, but it still seems the “attention” or equivalent level of improvement in the Western languages is greater than for the Asian languages, as nuanced in some of the earlier posts to you in this blog. Goofy Google translations (Google Maps) made headlines recently in Japan, in addition to the continued cry for help with Chinese to English translations.

    I perceive this is still simply a “cultural issue” and in time this too will improve; sorry for being in the wrong forum. It seems to me NMT providers should at least use qualified human checks before publishing (sometimes perverse) translations. “Well, this too will get better sooner or later.”

    • Jason Brownlee August 12, 2019 at 6:33 am

      Great comment Ben, thanks for sharing.

      Regarding Chinese translation, I would expect that systems by Baidu may be more effective than those by Google.

    • Stefano August 20, 2021 at 2:40 pm

      Probably one of the issues is that historically most of the investment came from the west (IBM, Google), and it’s easier to collaborate in a language that many understand (English).

      However, I think you might want to have a look at DeepL, I have clients in China and Japan that are really happy with it (https://www.deepl.com/en/blog/20200319)

  9. Maysoon December 11, 2019 at 11:27 pm

    Thank you so much for the comprehensive explanation of how neural machine translation works. I have a question regarding probability learning for commonly used words, pronouns, helping verbs, etc. Are they treated differently from domain-specific terms?

    • Jason Brownlee December 12, 2019 at 6:25 am

      Thanks!

      You can handle them differently if you want, or remove them completely if needed.

  10. Andrew Lambourne February 27, 2020 at 7:46 pm

    A valuable and well-structured overview of this fascinating field, for which many thanks.

    I have great respect for the quantum leaps which neural nets have brought to Speech and Language Technology in general – my own specific interest has been real-time transcription. Nevertheless, I still believe that another very significant quantum leap is still required. That will involve bridging the huge capability gap between the neural net approach and the approach taken by a human being: the human approach is explicitly informed by “meaning”.

    Whilst neural nets encode the “meaning-driven” human skill which has created example target texts from given source texts, they have no explicit concept of that meaning. Hence they may still “lose the thread”. And even if working at a sentence level rather than by word or by phrase, even a sentence is not normally an independent entity: sentences are usually part of a self-consistent text which has been created for a purpose – to convey meaning from one human to another.

    I’d be interested in your comments on this, and on how the next quantum leap to address the challenge of respecting context and meaning in SLT in general might be taken?

    • Jason Brownlee February 28, 2020 at 6:04 am

      Thanks.

      I think things have come a long way even since I wrote this article. No leaps required I think, just incremental improvement.

      Newer methods don’t seem to lose the thread anymore even after long input sequences.

  11. salma March 31, 2020 at 12:50 am

    Thank you. I am asked to write a paper, and I will mainly be discussing modern translation tools and their impact on the process and product of translation, as well as the challenges of machine translation. So, for those two ideas, which translation tools fit the topics to be examined?

  12. Dominique August 24, 2020 at 7:11 pm

    Dear Jason,

    In your book “Deep Learning for Natural Language Processing”, chapter 15, the predictions seemed not to be influenced by the number of epochs. Increasing the number of epochs to 40 still gave me a wrong prediction.

    However, increasing the level of detail of the movie review examples gave me a good prediction.

    This is a confirmation of your remark that “this may be because the two contrived reviews are very short and the model is expecting sequences of 1,000 or more words.”

    Kind regards,
    Dominique

    • Jason Brownlee August 25, 2020 at 6:40 am

      Great finding, thank you!

      • Sam Wallace October 4, 2021 at 6:26 am

        Hello,

        I am looking for a way to set up machine translation of older versions of languages to their modern equivalents. For example, I would like to be able to translate an Old French text into Modern French. Any help or suggestions would be appreciated.

        Thanks!

        • Adrian Tam October 6, 2021 at 8:18 am

          Try a seq2seq model first. That should be the easiest.

  13. Vijay Singh October 8, 2021 at 6:14 pm

    Your post was really very informative.
    Where can I get the actual code for:
      • sequence-to-sequence models
      • the encoder-decoder model
      • the encoder-decoder model with attention

    Also, any suggestion for a hands-on neural machine translation course?

    I will be highly grateful to you for your help.

    Thanks a lot!
