How to Develop a Neural Machine Translation System from Scratch

Develop a Deep Learning Model to Automatically
Translate from German to English in Python with Keras, Step-by-Step.

Machine translation is a challenging task that traditionally involves large statistical models developed using highly sophisticated linguistic knowledge.

Neural machine translation is the use of deep neural networks for the problem of machine translation.

In this tutorial, you will discover how to develop a neural machine translation system for translating German phrases to English.

After completing this tutorial, you will know:

  • How to clean and prepare data ready to train a neural machine translation system.
  • How to develop an encoder-decoder model for machine translation.
  • How to use a trained model for inference on new input phrases and evaluate the model skill.

Let’s get started.

How to Develop a Neural Machine Translation System in Keras.
Photo by Björn Groß, some rights reserved.

Tutorial Overview

This tutorial is divided into 4 parts; they are:

  1. German to English Translation Dataset
  2. Preparing the Text Data
  3. Train Neural Translation Model
  4. Evaluate Neural Translation Model

Python Environment

This tutorial assumes you have a Python 3 SciPy environment installed.

You must have Keras (2.0 or higher) installed with either the TensorFlow or Theano backend.

The tutorial also assumes you have NumPy and Matplotlib installed.

If you need help with your environment, see this post:


German to English Translation Dataset

In this tutorial, we will use a dataset of German to English terms used as the basis for flashcards for language learning.

The dataset is available from the ManyThings.org website, with examples drawn from the Tatoeba Project. The dataset is comprised of German phrases and their English counterparts and is intended to be used with the Anki flashcard software.

The page provides a list of many language pairs, and I encourage you to explore other languages:

The dataset we will use in this tutorial is available for download here:

Download the dataset to your current working directory and decompress; for example:
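As a minimal sketch, the archive can be extracted from Python itself (this assumes the downloaded file is named deu-eng.zip; your file name may differ):

import zipfile

# extract the downloaded archive into the current working directory
with zipfile.ZipFile('deu-eng.zip', 'r') as archive:
    archive.extractall('.')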

You will have a file called deu.txt that contains 152,820 pairs of English to German phrases, one pair per line with a tab separating the two languages.

For example, the first 5 lines of the file look as follows:

We will frame the prediction problem as follows: given a sequence of words in German as input, predict the corresponding sequence of words in English.

The model we will develop will be suitable for some beginner German phrases.

Preparing the Text Data

The next step is to prepare the text data ready for modeling.

Take a look at the raw data and note what you see that we might need to handle in a data cleaning operation.

For example, here are some observations I note from reviewing the raw data:

  • There is punctuation.
  • The text contains uppercase and lowercase.
  • There are special characters in the German.
  • There are duplicate phrases in English with different translations in German.
  • The file is ordered by sentence length with very long sentences toward the end of the file.

Did you note anything else that could be important?
Let me know in the comments below.

A good text cleaning procedure may handle some or all of these observations.

Data preparation is divided into two subsections:

  1. Clean Text
  2. Split Text

1. Clean Text

First, we must load the data in a way that preserves the Unicode German characters. The function below called load_doc() will load the file as a blob of text.
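A minimal sketch of this function, assuming the file is UTF-8 encoded, might look as follows:

# load doc into memory
def load_doc(filename):
    # open the file in read-only text mode, preserving unicode characters
    file = open(filename, mode='rt', encoding='utf-8')
    # read all text as a single string
    text = file.read()
    # close the file
    file.close()
    return text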

Each line contains a single pair of phrases, first English and then German, separated by a tab character.

We must split the loaded text by line and then by phrase. The function to_pairs() below will split the loaded text.
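A sketch of to_pairs(), splitting on newlines and then on the tab character:

# split a loaded document into phrase pairs
def to_pairs(doc):
    # one pair per line
    lines = doc.strip().split('\n')
    # each line holds English then German, separated by a tab
    pairs = [line.split('\t') for line in lines]
    return pairs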

We are now ready to clean each sentence. The specific cleaning operations we will perform are as follows:

  • Remove all non-printable characters.
  • Remove all punctuation characters.
  • Normalize all Unicode characters to ASCII (e.g. Latin characters).
  • Normalize the case to lowercase.
  • Remove any remaining tokens that are not alphabetic.

We will perform these operations on each phrase for each pair in the loaded dataset.

The clean_pairs() function below implements these operations.
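A sketch of clean_pairs() covering the operations listed above (the exact normalization choices, such as NFD Unicode decomposition, are assumptions):

import re
import string
from unicodedata import normalize
from numpy import array

# clean a list of phrase pairs
def clean_pairs(lines):
    cleaned = list()
    # prepare regex for filtering out non-printable characters
    re_print = re.compile('[^%s]' % re.escape(string.printable))
    # prepare a translation table for removing punctuation
    table = str.maketrans('', '', string.punctuation)
    for pair in lines:
        clean_pair = list()
        for line in pair:
            # normalize unicode characters to their closest ASCII equivalents
            line = normalize('NFD', line).encode('ascii', 'ignore')
            line = line.decode('UTF-8')
            # tokenize on white space
            line = line.split()
            # convert to lowercase
            line = [word.lower() for word in line]
            # remove punctuation from each token
            line = [word.translate(table) for word in line]
            # remove non-printable characters from each token
            line = [re_print.sub('', word) for word in line]
            # remove any remaining tokens that are not alphabetic
            line = [word for word in line if word.isalpha()]
            # store the cleaned phrase as a space-separated string
            clean_pair.append(' '.join(line))
        cleaned.append(clean_pair)
    return array(cleaned)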

Finally, now that the data has been cleaned, we can save the list of phrase pairs to a file ready for use.

The function save_clean_data() uses the pickle API to save the list of clean text to file.
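A sketch of save_clean_data():

from pickle import dump

# save a list of clean phrase pairs to file using pickle
def save_clean_data(sentences, filename):
    dump(sentences, open(filename, 'wb'))
    print('Saved: %s' % filename)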

Pulling all of this together, the complete example is listed below.
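As a sketch, the driving code below ties the functions above together, assuming they are defined in the same script:

# load, clean and save the English-German pairs
filename = 'deu.txt'
doc = load_doc(filename)
pairs = to_pairs(doc)
clean_data = clean_pairs(pairs)
save_clean_data(clean_data, 'english-german.pkl')
# spot check a few cleaned pairs
for i in range(10):
    print('[%s] => [%s]' % (clean_data[i, 0], clean_data[i, 1]))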

Running the example creates a new file in the current working directory with the cleaned text called english-german.pkl.

Some examples of the clean text are printed for us to evaluate at the end of the run to confirm that the clean operations were performed as expected.

2. Split Text

The clean data contains a little over 150,000 phrase pairs and some of the pairs toward the end of the file are very long.

This is a good number of examples for developing a small translation model. The complexity of the model increases with the number of examples, length of phrases, and size of the vocabulary.

Although we have a good dataset for modeling translation, we will simplify the problem slightly to dramatically reduce the size of the model required, and in turn the training time required to fit the model.

You can explore developing a model on the fuller dataset as an extension; I would love to hear how you do.

We will simplify the problem by reducing the dataset to the first 10,000 examples in the file; these will be the shortest phrases in the dataset.

Further, we will then take the first 9,000 of those as examples for training and the remaining 1,000 examples to test the fit model.

Below is the complete example of loading the clean data, splitting it, and saving the split portions of data to new files.
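A sketch of this step, assuming the cleaned pairs were saved as a NumPy array as in the cleaning sketch above:

from pickle import load, dump
from numpy.random import shuffle

# load a clean dataset from file
def load_clean_sentences(filename):
    return load(open(filename, 'rb'))

# save a list of clean phrase pairs to file
def save_clean_data(sentences, filename):
    dump(sentences, open(filename, 'wb'))
    print('Saved: %s' % filename)

# load the full cleaned dataset
raw_dataset = load_clean_sentences('english-german.pkl')

# reduce the dataset to the first 10,000 examples
n_sentences = 10000
dataset = raw_dataset[:n_sentences, :]
# shuffle before splitting into train and test
shuffle(dataset)
# split into 9,000 training and 1,000 test examples
train, test = dataset[:9000], dataset[9000:]
# save all three datasets
save_clean_data(dataset, 'english-german-both.pkl')
save_clean_data(train, 'english-german-train.pkl')
save_clean_data(test, 'english-german-test.pkl')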

Running the example creates three new files: english-german-both.pkl, which contains all of the train and test examples and is used to define the parameters of the problem (such as the maximum phrase lengths and the vocabulary), and the english-german-train.pkl and english-german-test.pkl files for the train and test datasets.

We are now ready to start developing our translation model.

Train Neural Translation Model

In this section, we will develop the translation model.

This involves both loading and preparing the clean text data ready for modeling and defining and training the model on the prepared data.

Let’s start off by loading the datasets so that we can prepare the data. The function below named load_clean_sentences() can be used to load the train, test, and both datasets in turn.
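A sketch of load_clean_sentences() and its use:

from pickle import load

# load a clean dataset from file
def load_clean_sentences(filename):
    return load(open(filename, 'rb'))

# load the combined, train and test datasets
dataset = load_clean_sentences('english-german-both.pkl')
train = load_clean_sentences('english-german-train.pkl')
test = load_clean_sentences('english-german-test.pkl')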

We will use the “both” or combination of the train and test datasets to define the maximum length and vocabulary of the problem.

This is for simplicity. Alternatively, we could define these properties from the training dataset alone and truncate examples in the test set that are too long or contain words that are out of the vocabulary.

We can use the Keras Tokenizer class to map words to integers, as needed for modeling. We will use separate tokenizers for the English sequences and the German sequences. The create_tokenizer() function below will train a tokenizer on a list of phrases.
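A sketch of create_tokenizer():

from keras.preprocessing.text import Tokenizer

# fit a tokenizer on a list of phrases
def create_tokenizer(lines):
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(lines)
    return tokenizer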

Similarly, the function named max_length() below will find the length of the longest sequence in a list of phrases.
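A sketch of max_length():

# find the length (in words) of the longest phrase in a list
def max_length(lines):
    return max(len(line.split()) for line in lines)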

We can call these functions with the combined dataset to prepare tokenizers, vocabulary sizes, and maximum lengths for both the English and German phrases.
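A sketch of this preparation; the variable names such as eng_tokenizer and ger_length are illustrative, and column 0 holds English while column 1 holds German:

# prepare the english tokenizer
eng_tokenizer = create_tokenizer(dataset[:, 0])
eng_vocab_size = len(eng_tokenizer.word_index) + 1
eng_length = max_length(dataset[:, 0])
print('English Vocabulary Size: %d' % eng_vocab_size)
print('English Max Length: %d' % eng_length)
# prepare the german tokenizer
ger_tokenizer = create_tokenizer(dataset[:, 1])
ger_vocab_size = len(ger_tokenizer.word_index) + 1
ger_length = max_length(dataset[:, 1])
print('German Vocabulary Size: %d' % ger_vocab_size)
print('German Max Length: %d' % ger_length)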

We are now ready to prepare the training dataset.

Each input and output sequence must be encoded to integers and padded to the maximum phrase length. This is because we will use a word embedding for the input sequences and one-hot encode the output sequences. The encode_sequences() function below will perform these operations and return the result.
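A sketch of encode_sequences():

from keras.preprocessing.sequence import pad_sequences

# integer encode and zero-pad a list of phrases
def encode_sequences(tokenizer, length, lines):
    # map words to integers
    X = tokenizer.texts_to_sequences(lines)
    # pad each sequence with zeros up to the maximum length
    X = pad_sequences(X, maxlen=length, padding='post')
    return X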

The output sequence needs to be one-hot encoded. This is because the model will predict the probability of each word in the vocabulary as output.

The function encode_output() below will one-hot encode English output sequences.
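A sketch of encode_output():

from numpy import array
from keras.utils import to_categorical

# one-hot encode integer-encoded target sequences
def encode_output(sequences, vocab_size):
    ylist = list()
    for sequence in sequences:
        # one-hot encode each integer in the sequence
        encoded = to_categorical(sequence, num_classes=vocab_size)
        ylist.append(encoded)
    y = array(ylist)
    # reshape to [samples, timesteps, vocab_size]
    y = y.reshape(sequences.shape[0], sequences.shape[1], vocab_size)
    return y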

We can make use of these two functions and prepare both the train and test dataset ready for training the model.
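A sketch of this data preparation, remembering that German is the input (X) and English is the output (y):

# prepare training data
trainX = encode_sequences(ger_tokenizer, ger_length, train[:, 1])
trainY = encode_sequences(eng_tokenizer, eng_length, train[:, 0])
trainY = encode_output(trainY, eng_vocab_size)
# prepare test data
testX = encode_sequences(ger_tokenizer, ger_length, test[:, 1])
testY = encode_sequences(eng_tokenizer, eng_length, test[:, 0])
testY = encode_output(testY, eng_vocab_size)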

We are now ready to define the model.

We will use an encoder-decoder LSTM model on this problem. In this architecture, the input sequence is encoded by a front-end model called the encoder, then decoded word by word by a back-end model called the decoder.

The function define_model() below defines the model and takes a number of arguments used to configure the model, such as the size of the input and output vocabularies, the maximum length of input and output phrases, and the number of memory units used to configure the model.
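A sketch of define_model() as a simple Sequential encoder-decoder; the exact layer stack is an assumption consistent with the description above:

from keras.models import Sequential
from keras.layers import LSTM, Dense, Embedding, RepeatVector, TimeDistributed

# define an encoder-decoder NMT model
def define_model(src_vocab, tar_vocab, src_timesteps, tar_timesteps, n_units):
    model = Sequential()
    # encoder: embed the source sequence and encode it into a fixed-length vector
    model.add(Embedding(src_vocab, n_units, input_length=src_timesteps, mask_zero=True))
    model.add(LSTM(n_units))
    # repeat the encoding once per output time step
    model.add(RepeatVector(tar_timesteps))
    # decoder: generate the output sequence word by word
    model.add(LSTM(n_units, return_sequences=True))
    model.add(TimeDistributed(Dense(tar_vocab, activation='softmax')))
    return model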

The model is trained using the efficient Adam approach to stochastic gradient descent and minimizes the categorical cross-entropy loss function, because we have framed the prediction problem as multi-class classification.

The model configuration was not optimized for this problem, meaning that there is plenty of opportunity for you to tune it and lift the skill of the translations. I would love to see what you can come up with.

Finally, we can train the model.

We train the model for 30 epochs with a batch size of 64 examples.

We use checkpointing to ensure that each time the model skill on the test set improves, the model is saved to file.

We can tie all of this together and fit the neural translation model.

The complete working example is listed below.
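The snippet below is only a sketch of the final compile, checkpoint, and fit steps; it assumes the data preparation above has already produced trainX, trainY, testX, and testY, and it uses 256 memory units as an illustrative choice:

from keras.callbacks import ModelCheckpoint
from keras.utils import plot_model

# define and compile the model
model = define_model(ger_vocab_size, eng_vocab_size, ger_length, eng_length, 256)
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()
# plot the model graph (requires pydot and graphviz)
plot_model(model, to_file='model.png', show_shapes=True)

# save the model each time skill on the held-out set improves
filename = 'model.h5'
checkpoint = ModelCheckpoint(filename, monitor='val_loss', verbose=1,
                             save_best_only=True, mode='min')

# fit the model for 30 epochs with a batch size of 64
model.fit(trainX, trainY, epochs=30, batch_size=64,
          validation_data=(testX, testY), callbacks=[checkpoint], verbose=2)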

Running the example first prints a summary of the parameters of the dataset such as vocabulary size and maximum phrase lengths.

Next, a summary of the defined model is printed, allowing us to confirm the model configuration.

A plot of the model is also created providing another perspective on the model configuration.

Plot of the Model Graph for NMT

Next, the model is trained.

Each epoch takes about 30 seconds on modern CPU hardware; no GPU is required.

During the run, the model will be saved to the file model.h5, ready for inference in the next step.

Evaluate Neural Translation Model

We will evaluate the model on the train and the test dataset.

The model should perform very well on the train dataset and ideally generalize to perform well on the test dataset.

Ideally, we would use a separate validation dataset to help with model selection during training instead of the test set. You can try this as an extension.

The clean datasets must be loaded and prepared as before.

Next, the best model saved during training must be loaded.
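For example, as a minimal sketch:

from keras.models import load_model

# load the best model saved during training
model = load_model('model.h5')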

Evaluation involves two steps: first generating a translated output sequence, and then repeating this process for many input examples and summarizing the skill of the model across multiple cases.

Starting with inference, the model can predict the entire output sequence in a one-shot manner.

This will be a sequence of integers that we can enumerate and look up in the tokenizer to map back to words.

The function below, named word_for_id(), will perform this reverse mapping.
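A sketch of word_for_id():

# map an integer back to a word using the tokenizer's vocabulary
def word_for_id(integer, tokenizer):
    for word, index in tokenizer.word_index.items():
        if index == integer:
            return word
    return None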

We can perform this mapping for each integer in the translation and return the result as a string of words.

The function predict_sequence() below performs this operation for a single encoded source phrase.
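A sketch of predict_sequence():

from numpy import argmax

# generate an English phrase for a single encoded German source phrase
def predict_sequence(model, tokenizer, source):
    # predict a distribution over the vocabulary for each output time step
    prediction = model.predict(source, verbose=0)[0]
    # take the most likely word at each time step
    integers = [argmax(vector) for vector in prediction]
    target = list()
    for i in integers:
        word = word_for_id(i, tokenizer)
        if word is None:
            break
        target.append(word)
    return ' '.join(target)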

Next, we can repeat this for each source phrase in a dataset and compare the predicted result to the expected target phrase in English.

We can print some of these comparisons to screen to get an idea of how the model performs in practice.

We will also calculate the BLEU scores to get a quantitative idea of how well the model has performed.

The evaluate_model() function below implements this, calling the above predict_sequence() function for each phrase in a provided dataset.
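A sketch of evaluate_model(), using the corpus_bleu() function from NLTK to score the predictions:

from nltk.translate.bleu_score import corpus_bleu

# evaluate the skill of the model on a dataset
def evaluate_model(model, tokenizer, sources, raw_dataset):
    actual, predicted = list(), list()
    for i, source in enumerate(sources):
        # translate one encoded source phrase at a time
        source = source.reshape((1, source.shape[0]))
        translation = predict_sequence(model, tokenizer, source)
        raw_target, raw_src = raw_dataset[i][0], raw_dataset[i][1]
        # print the first few comparisons to inspect by eye
        if i < 10:
            print('src=[%s], target=[%s], predicted=[%s]' % (raw_src, raw_target, translation))
        actual.append([raw_target.split()])
        predicted.append(translation.split())
    # report BLEU-1 through BLEU-4
    print('BLEU-1: %f' % corpus_bleu(actual, predicted, weights=(1.0, 0, 0, 0)))
    print('BLEU-2: %f' % corpus_bleu(actual, predicted, weights=(0.5, 0.5, 0, 0)))
    print('BLEU-3: %f' % corpus_bleu(actual, predicted, weights=(0.3, 0.3, 0.3, 0)))
    print('BLEU-4: %f' % corpus_bleu(actual, predicted, weights=(0.25, 0.25, 0.25, 0.25)))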

We can tie all of this together and evaluate the loaded model on both the training and test datasets.

The complete code listing is provided below.
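The snippet below sketches only the final evaluation calls, assuming the datasets were prepared and the best model loaded as described above:

# evaluate skill on the training data
print('train')
evaluate_model(model, eng_tokenizer, trainX, train)
# evaluate skill on the test data
print('test')
evaluate_model(model, eng_tokenizer, testX, test)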

Running the example first prints examples of source text, expected and predicted translations, as well as scores for the training dataset, followed by the test dataset.

Your specific results will differ given the random shuffling of the dataset and the stochastic nature of neural networks.

Looking at the results for the training dataset first, we can see that the translations are readable and mostly correct.

For example: “ich liebe dich” was correctly translated to “i love you“.

We can also see that the translations were not perfect, with “ich konnte nicht gehen” translated to “i cant go” instead of the expected “i couldnt walk“.

We can also see the BLEU-4 score of 0.51, which provides an upper bound on what we might expect from this model.

Looking at the results on the test set, we do see readable translations, which is not an easy task.

For example, we see “ich mag dich nicht” correctly translated to “i dont like you“.

We also see some poor translations and a good case where the model could benefit from further tuning, such as “ich bin etwas beschwipst” translated as “i a bit bit” instead of the expected “im a bit tipsy“.

A BLEU-4 score of 0.076238 was achieved, providing a baseline skill to improve upon with further improvements to the model.

Extensions

This section lists some ideas for extending the tutorial that you may wish to explore.

  • Data Cleaning. Different data cleaning operations could be performed on the data, such as not removing punctuation or normalizing case, or perhaps removing duplicate English phrases.
  • Vocabulary. The vocabulary could be refined, perhaps removing words used fewer than 5 or 10 times in the dataset and replacing them with “unk“.
  • More Data. The dataset used to fit the model could be expanded to 50,000, 100,000 phrases, or more.
  • Input Order. The order of input phrases could be reversed, which has been reported to lift skill, or a Bidirectional input layer could be used.
  • Layers. The encoder and/or the decoder models could be expanded with additional layers and trained for more epochs, providing more representational capacity for the model.
  • Units. The number of memory units in the encoder and decoder could be increased, providing more representational capacity for the model.
  • Regularization. The model could use regularization, such as weight or activation regularization, or the use of dropout on the LSTM layers.
  • Pre-Trained Word Vectors. Pre-trained word vectors could be used in the model.
  • Recursive Model. A recursive formulation of the model could be used where the next word in the output sequence could be conditional on the input sequence and the output sequence generated so far.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Summary

In this tutorial, you discovered how to develop a neural machine translation system for translating German phrases to English.

Specifically, you learned:

  • How to clean and prepare data ready to train a neural machine translation system.
  • How to develop an encoder-decoder model for machine translation.
  • How to use a trained model for inference on new input phrases and evaluate the model skill.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.




114 Responses to How to Develop a Neural Machine Translation System from Scratch

  1. Klaas January 10, 2018 at 7:53 am #

    amazing work again. One question: do you have a separate tutorial where you explain the LSTM layers (TimeDistributed, RepeatVector, …)?

  2. Mohamed January 10, 2018 at 1:51 pm #

    Your tutorials are amazing indeed. Thank you!
    Hope you will have the time to work on the Extensions lists above. This will complete this amazing tutorial.

    Thanks again!

  3. Richard January 12, 2018 at 5:52 am #

    Brilliant, thanks Jason. I’m looking forward to giving this a try.

  4. Parul January 14, 2018 at 7:47 am #

    hey, I want to know one thing: if we are giving English to German translations to the model, 9000 for training and 1000 for testing, then what is the encoder-decoder model actually doing, as we are giving everything to the model at the time of testing?

    • Jason Brownlee January 15, 2018 at 6:54 am #

      The model is not given the answer, it must translate new examples.

      Perhaps I don’t follow your question?

  5. abkul orto January 15, 2018 at 5:38 pm #

    Hi Jason,

    I am a regular reader of your articles and have purchased your books. I want to work on translation of a local language to English. Kindly advise on the steps.

    thanks you

  6. kannu January 20, 2018 at 4:50 am #

    # prepare regex for char filtering
    re_print = re.compile('[^%s]' % re.escape(string.printable))

    can you please explain the meaning of this code? For example, what is string.printable actually doing, and what is the meaning of '[^%s]'?

    • Jason Brownlee January 20, 2018 at 8:24 am #

      I am selecting “not the printable characters”.

      You can learn more about regex from a good book on Python.

  7. Harish Yadav January 20, 2018 at 9:22 pm #

    Excellent explanation i would say!!!! damn good !!!looking to develop text-phonemes with your model !!!

  8. Drishty January 23, 2018 at 8:28 pm #

    Hi Jason, your work is amazing. While I was doing this code I found this and I want to know if it's required to reshape the sequence, and what sequences.shape[0], sequences.shape[1] is doing.
    And why do we need the vocab size?
    y = y.reshape(sequences.shape[0], sequences.shape[1], vocab_size)

  9. Drishty January 23, 2018 at 8:29 pm #

    *want to know why it’s required to reshape the sequence ? and what

    • Jason Brownlee January 24, 2018 at 9:55 am #

      We must ensure that the data is the correct shape that is expected by the model, e.g. 2d for MLPs, 3D for LSTMs, etc.

  10. firoz January 24, 2018 at 4:41 am #

    hi ,

    i wanted to ask tyou why we have not done one-hot encoding for text in german.?

    • Jason Brownlee January 24, 2018 at 9:58 am #

      The input data is integer encoded and passed through a word embedding. No need to one hot encode in this case.

  11. ravi January 25, 2018 at 4:59 am #

    hello sir,

    over here the load_model is not defined .

    thank you .

    • Jason Brownlee January 25, 2018 at 5:58 am #

      from keras.models import load_model

    • ravi January 25, 2018 at 6:17 am #

      can please tell me where the

      translation = model.predict(source, verbose=0)

      error: source is not deifined

      • Jason Brownlee January 25, 2018 at 9:07 am #

        Sorry, I have not seen that error. Perhaps try copying the entire example at the end of the post?

  12. asheesh January 25, 2018 at 6:36 am #

    while running above code i am facing memory error in to_categorical function. I am doing translation for english to hindi. Pls give any suggestion.

    • Jason Brownlee January 25, 2018 at 9:09 am #

      Perhaps try updating Keras?
      Perhaps try modifying the code to use progressive loading?
      Perhaps try running on AWS with an instance that has more RAM?

  13. Harish Yadav January 25, 2018 at 11:20 pm #

    please do a model on attention with gru and beam search

  14. Harish Yadav January 30, 2018 at 4:13 pm #

    i have used bidirectional lstm,got a better result…i want to improve more …but i dont know how to implement attention layer in keras…could you please help me out…

  15. hayet January 31, 2018 at 9:48 pm #

    Hi, I want know why you use model.add(RepeatVector(tar_timesteps))?

  16. hayet February 2, 2018 at 12:11 am #

    is it possible to calculate the NMT model score with this method

    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

    scores = model.evaluate(testX,testY)

    • Jason Brownlee February 2, 2018 at 8:20 am #

      It will estimate accuracy and loss, but it will not give you any insight into the skill of the NMT on text data.

  17. Darren February 20, 2018 at 5:03 am #

    Hi Jason, brilliant article!

    Just a quick question, when you configure the encoder-decoder model, there seems no inference model as you mentioned in your previous articles? If this model has achieved what inference model did, in which layer? If not, how does it compare to the suite of train model, inference-encoder model and inference-decoder model? Thank you so much!

  18. Jakobe February 25, 2018 at 4:45 am #

    Does text_to_sequences encode data ?
    according to the documentation it just transform texts to a list of sequences

    • Jason Brownlee February 25, 2018 at 7:45 am #

      Yes, it encodes words in text to integers.

      • Jakobe March 6, 2018 at 9:38 am #

        Could you verify This documentation. It is mentionned that text_to_sequences return STR.
        I am confusing right now.
        https://keras.io/preprocessing/text/

        • Jason Brownlee March 6, 2018 at 2:55 pm #

          For “texts_to_sequences” on Tokenizer it says:

          “Return: list of sequences (one per text input).”

  19. Emil March 6, 2018 at 10:41 am #

    ImportError: cannot import name ‘corpus_bleu’
    Did anyone have an idea about this error.

  20. Dirck March 10, 2018 at 8:54 pm #

    By following your tutorial, I was able to find BLEU scores on test dataset as follow :
    BLEU-1: 0.069345
    BLEU-2: 0.255634
    BLEU-3: 0.430785
    BLEU-4: 0.490818

    So we can notice that they are very close to the scores on train dataset.
    Is it about overfitting or it is a normal behavior ?

    • Jason Brownlee March 11, 2018 at 6:25 am #

      Nice work!

      Similar scores on train and test is a sign of a stable model. If the skill is poor, it might be a stable but underfit model.

  21. vikas dixit March 10, 2018 at 11:12 pm #

    Hello sir, you are using test data as validation data. This means model has seen test data during training phase only. I think test data is kept separated. Am I right?? If yes please explain logic behind it.

  22. sindhu reddy March 20, 2018 at 2:32 am #

    Hello sir, great explanation. everything works well with the given corpus.when i am using the own corpus it says .pkl file is not encoded in utf-8.

    can you please share the the encoding of the text files used for the above project?

    It is giving following error
    —————————————————————————
    IndexError Traceback (most recent call last)
    in ()
    65 # spot check
    66 for i in range(100):
    --> 67 print('[%s] => [%s]' % (clean_pairs[i,0], clean_pairs[i,1]))

    IndexError: too many indices for array

    Kindly help

    • Jason Brownlee March 20, 2018 at 6:26 am #

      Perhaps double check you are using Python 3?

      • sindhu reddy March 20, 2018 at 6:30 pm #

        yes i am using python 3.5

        • Jason Brownlee March 21, 2018 at 6:31 am #

          Are you able to confirm that all other libs are up to date and that you copied all of the code from the example?

  23. sindhu reddy March 21, 2018 at 5:06 pm #

    Yes Jason, I have updated all the libraries. It is working completely fine for the deu.txt file.
    Whenever I use my own text file it is giving the following error.

    can you kindly tell what formatting is used in text file.

    Thanks

    • Jason Brownlee March 22, 2018 at 6:19 am #

      As stated in the post, the format is “Tab-delimited Bilingual Sentence Pairs”.

  24. Jigyasa Sakhuja March 24, 2018 at 3:47 am #

    hi Jason i am a fan of yours and i have implemented this machine translation and it was awesome i got all the results perfectly .. now i wanted to generate code using natural language by using RNN.. and when i am reading my file which is of declartaion and docstrings it is not showing as it is the ouput .. like it should show the declarations but it is showing something like x00/x00/x00/x00/x00/x00/x00/x00/x00/x00/x00/x00/x00/x00/x00/x00/x00/x00/x00/x00/

    but it should show
    if cint(frappe.db.get_single_value(u'System DCSP Settings', u'setup_complete')):

  25. sasi March 28, 2018 at 5:59 pm #

    In your data x is English and y is german… but in the code x is German, and y is english… why that difference????????????

    • Jason Brownlee March 29, 2018 at 6:31 am #

      We are translating from German (X) to English (Y).

      You can learn the reverse if you prefer. I chose not to because my english is better than my german.

  26. Kam March 29, 2018 at 8:48 pm #

    Hi,
    I am trying to use pre trained word embeddings to make translation.
    But, after making some research I found that pre-trained word embeddings are only used to initialize the encoder and decoder, and also that we need only the src embeddings.
    So, for the moment I am confused.
    Normally, must we provide source and target embeddings to the algorithm?
    Please if they are some documentation or links about this topic.

    • Jason Brownlee March 30, 2018 at 6:37 am #

      Not sure I follow, what do you mean exactly?

      You can use a pre-trained embedding. This is separate from needing to have input and output data pairs to train the model.

  27. Sindhura April 4, 2018 at 3:57 am #

    Regarding recursive model in extensions, isn’t it already implemented in the current code? Because the decoder part is lstm and is lstm output of one unit is fed to the next unit.

  28. Max b April 17, 2018 at 3:55 am #

    “be stolen returned” is my system's translation of “vielen dank jason”, which is supposed to mean: Thank you so much Jason!

    This post helped me a lot and I’ll now continue to tune it. Keep up the awesome work!

  29. suraj April 17, 2018 at 7:38 pm #

    In machine translation why we need vocabulary with the english text and german text …?

    • Jason Brownlee April 18, 2018 at 8:02 am #

      We need to limit the number of words that we model, it cannot be unbounded, at least in the way I’m choosing to model the problem.

      • michael April 20, 2018 at 12:24 am #

        That suggests that it can be unbounded if you model it in a different way.

  30. AlgoP April 24, 2018 at 11:42 pm #

    Hi Jason,
    I have just tested the clean_pairs method against the ENG-PL set provided on the same website. One of the characters does not print on the screen (all the other non-ASCII chars are converted correctly); it is ignored as per this line I guess:

    I did an experiment replacing the above with line = normalize('NFD', line).encode('utf-8', 'ignore'), but there is no difference between the two in results. I am not sure why this is happening as it is only one letter. Also, I assume your choice was ascii as you built a German to English translator, am I correct? Could you please share your thoughts, if possible?

    • Jason Brownlee April 25, 2018 at 6:33 am #

      Perhaps you’re able to inspect the text or search the text for non-ascii chars to see what the offending characters are?

      This might give you insight into what is going on.

    • AlgoP April 25, 2018 at 6:44 am #

      I am working on it -it looks like it may be the issue with re.escape method rather than with encoding itself.

  31. Johny May 1, 2018 at 9:49 pm #

    Does removing punctuation not prevent the model from being used to predict a paragraph? How can you evaluate it with a sentence or paragraph that is not in the test set?

    • Jason Brownlee May 2, 2018 at 5:39 am #

      You can provide data to the model and make a prediction.

      call the predict_sequence() function we wrote above.

  32. Umesh May 1, 2018 at 10:53 pm #

    from keras.preprocessing.text import Tokenizer
    does not work after installing Keras. It says there is no module named tensorflow. I have a Windows 32-bit machine.
    Your article is very good, but I can't proceed due to this problem!

  33. Jundong May 4, 2018 at 9:53 am #

    Thank you for your article, Jason!

    I have one question about the difference between your implementation and the Keras tutorial “https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html”. It seems to me that there is a ‘teacher forcing’ element in the Keras tutorial, using the target (offset by one step) as the decoder input data. This element is not present in your model. My question is: is it necessary? Or do you just use RepeatVector and TimeDistributed to implement a similar function?

    Thank you!

  34. Beay May 5, 2018 at 9:08 pm #

    Great help Jason, thank you one more time, i want to ask you:

    How can I implement a bidirectional LSTM for further improvements? Below is what I did in code; please fix it with your knowledge.

    def define_model(src_vocab, tar_vocab, src_timesteps, tar_timesteps, n_units):
        model = Sequential()
        model.add(Embedding(src_vocab, n_units, input_length=src_timesteps, mask_zero=True))
        model.add(Bidirectional(LSTM(n_units)))
        model.add(RepeatVector(tar_timesteps))
        model.add(Bidirectional(LSTM(n_units, return_sequences=True)))
        model.add(TimeDistributed(Dense(tar_vocab, activation='softmax')))
        return model

  35. Beay May 6, 2018 at 1:05 am #

    In this below code

    # remove non-printable chars from each token
    line = [re_print.sub('', w) for w in line]

    in Turkish words i got this sample errors for example

    “kaç” -> “kac” , “koş”->”kos”

    how can i fix it ?

    thank you

    • Jason Brownlee May 6, 2018 at 6:31 am #

      I don’t follow sorry. What is the problem exactly?

  36. Beay May 6, 2018 at 7:25 am #

    i have used these codes on a Turkish-English corpus file and some Turkish characters are

    missing (ç,ğ,ü,ğ,Ö,Ğ,Ü,İ,ı)

    thank you.

    • Jason Brownlee May 7, 2018 at 6:45 am #

      Missing after the conversion?

      Perhaps normalizing to Latin characters is not the best approach for your specific problem?

  37. Sai May 18, 2018 at 4:55 am #

    Thank you very much. Could you please help where can I get good dataset for Thai to English. The dataset for Thai language is available from the ManyThings.org website is with lesser data.I am trying to use this approach to build similar for Thai.

    • Jason Brownlee May 18, 2018 at 6:27 am #

      Sorry, I don’t know off hand.

    • Sai May 18, 2018 at 10:39 pm #

      Please ignore my query, i have searched and got the dataset. Thank you for these articles

  38. pep May 18, 2018 at 7:35 pm #

    Once the model is trained, could the model be used to predict in both directions, I mean English-German and German-English?

  39. Meghna May 23, 2018 at 9:10 pm #

    Hi Jason, thank you for the amazing tutorial. It really helped me. I implemented the above code and understood each function. Further, I want to implement Neural conversation model as given in https://arxiv.org/pdf/1506.05869.pdf on dialogue data. So, I have 2 questions, first is how to make pairing in dialogue data and second is how to feed previous conversations as input to the decoder model.

    • Jason Brownlee May 24, 2018 at 8:11 am #

      Sorry, I don’t have an example of a dialog system. I hope to cover it in the future.

  40. Ahmad Ahmad May 24, 2018 at 6:30 pm #

    G.M Mr Jason …

    In my model , I find BLEU scores on train dataset as follow :

    BLEU-1: 0.736022
    BLEU-2: 0.717377
    BLEU-3: 0.710192
    BLEU-4: 0.692681

    So we can notice that they are higher from the scores on train dataset.
    Is it normal behavior or is it bad ?

  41. maitha May 28, 2018 at 1:07 pm #

    Hi Jason,
    Great and helpful work, I am trying the code to translate Arabic to English but in first step (Clean Text) and it give me an empty [ ]?! how can I solve this one.
    [hi] => []
    [run] => []
    [help] => []

  42. Sastry May 28, 2018 at 11:24 pm #

    Hi Jason,

    Thanks for sharing a easy and simple approach for translations.

    I tried your code to work with Indian languages and found Hindi data set in the same location from where you shared the German dataset.

    The following normalize code for Hindi removes the character from line. I have tried with NFC, still facing the same problem. If I skip this line then, the non-printable character line is skipping the hindi text.

    print('Before: ', line)
    # normalize unicode characters
    line = normalize('NFD', line).encode('ascii', 'ignore')
    print('After: ', line)

    Before: Go.
    After: b'Go.'
    Before: जा.
    After: b'.'

    Does skipping these two lines of code affect the training in any way?

    Thanks,
    Sastry

    • Jason Brownlee May 29, 2018 at 6:26 am #

      Yes, the code example expects to work with Latin characters.

  43. kamal deep garg May 29, 2018 at 3:43 pm #

    Hello sir

    what is minimum Hardware requirement to train nmt using keras?

  44. Srijan Verma May 31, 2018 at 6:31 pm #

    Hi Jason,

    This post is really helpful. Thanks for this.

    I am working on building a translator which translates from English to Hindi (or any other Indian language). But I am facing a problem while cleaning the data.
    The normalize code does not work for Indian languages, and if I skip that line of code then I am not getting any output after training my data.

    Is there a way to use the same code on your post and some other way to clean the data for Indian languages to get the desired output..? Like are there any python modules/Libraries that i should install so as to use them for Indian Languages.?

    Thanks!

    • Jason Brownlee June 1, 2018 at 8:17 am #

      You may have to research how to prepare hindi data for NLP.

      Perhaps converting to latin chars in not the best approach.

  45. lakshm June 1, 2018 at 3:02 pm #

    Hello,

    Aren’t we supposed to pass the English data along with the encoded data to the decoder? As per my understanding, only the encoded German data has been passed to the decoder, right?

  46. Sai June 5, 2018 at 6:57 pm #

    Hi Jason,

    I have now progressed upto Training the model. Cleaning & tokenizing the data set took time as i used a different language, but was a good learning.

    Wanted to know whats the significance of “30 epochs and a batch size of 64 examples” in your example. Are these anyways related to Total vocabulary (or) total trainable parameters ?

    Also, could you please guide me to any article of yours where i can learn more around what is epochs, what is BLEU score , what is loss etc.

    Thank you

  47. Sai June 7, 2018 at 9:43 pm #

    Hi Jason,

    I have a silly question, but wanted to seek clarification.

    In step “Train Neural Translation Model” :- have used 10,000 rows from the dataset, and established the model in file model.h5 for xxx number of vocabularies.
    If I extract next 10,000 rows from data and continue to train the model using the same lines of code above, would it use the previously established model from model.h5 or would it be overwritten and start as fresh data being used to train ?

    Thank you,

    • Jason Brownlee June 8, 2018 at 6:11 am #

      Yes, the model will be trained using the existing model as a starting point.

  48. Sai June 8, 2018 at 3:02 pm #

    Hi Jason,

    ok, understood.

    Referred to your article https://machinelearningmastery.com/check-point-deep-learning-models-keras/ and understood that, before compiling the model using model.compile(), i have to load the model from file, to use existing model as starting point in training.

    Thank you very much.

  49. Paul June 8, 2018 at 3:19 pm #

    Hi Jason,
    Can Word2Vec be used as the input embedding to boost the LSTM model ? Or say that pre-trained word vector by Word2Vec as input of the model can get better?

    Thanks!

  50. Raghavendra June 12, 2018 at 11:06 am #

    Hello Jason,
    Excellently written article with intricate concepts explained in such a simple manner. However, it would be great if you could add an attention layer for handling longer sentences.

    I tried to add a attention layer to the code above by referring the below one.
    https://github.com/keras-team/keras/issues/4962

    I am unable to add the attention layer..I have read your previous blog on adding attention

    https://machinelearningmastery.com/encoder-decoder-attention-sequence-to-sequence-prediction-keras/

    But the vocabulary at the output end is too large to be processed and this is not solving the problem

    It would be great if you add attention ( bahdanu’s or luong’s ) to your above code and solve the problem of larger sentences

    Thanking you !

    • Jason Brownlee June 12, 2018 at 2:27 pm #

      Thanks, I hope to develop some attention tutorials once it is officially supported by Keras.

      • Raghavendra June 12, 2018 at 3:23 pm #

        How about including the attention snippet as you did in the latter case? This code is working fine for me, except that attention can handle longer sentences and this is where I am facing issues. I was actually asking about adding attention to the above code as you did in the latter case.

        • Jason Brownlee June 13, 2018 at 6:15 am #

          Sorry, I cannot create a custom example for you.

          I hope to give more examples of attention when Keras officially supports attention.

Leave a Reply