7 Applications of Deep Learning for Natural Language Processing

The field of natural language processing is shifting from statistical methods to neural network methods.

There are still many challenging problems to solve in natural language. Nevertheless, deep learning methods are achieving state-of-the-art results on some specific language problems.

It is not just the performance of deep learning models on benchmark problems that is most interesting; it is the fact that a single model can learn word meaning and perform language tasks, obviating the need for a pipeline of specialized and hand-crafted methods.

In this post, you will discover 7 interesting natural language processing tasks where deep learning methods are making some headway.

Let’s get started.

Photo by Tim Gorman, some rights reserved.

Overview

In this post, we will look at the following 7 natural language processing problems.

  1. Text Classification
  2. Language Modeling
  3. Speech Recognition
  4. Caption Generation
  5. Machine Translation
  6. Document Summarization
  7. Question Answering

I have tried to focus on the types of end-user problems that you may be interested in, as opposed to more academic or linguistic sub-problems where deep learning does well, such as part-of-speech tagging, chunking, named entity recognition, and so on.

Each example provides a description of the problem, an example, and references to papers that demonstrate the methods and results. Most references are drawn from Goldberg’s excellent 2015 primer on deep learning for NLP researchers.

Do you have a favorite NLP application for deep learning that is not listed?
Let me know in the comments below.


1. Text Classification

Given an example of text, predict a predefined class label.

The goal of text categorization is to classify the topic or theme of a document.

— Page 575, Foundations of Statistical Natural Language Processing, 1999.

A popular classification example is sentiment analysis where class labels represent the emotional tone of the source text such as “positive” or “negative“.

Below are 3 more examples:

  • Spam filtering, classifying email text as spam or not.
  • Language identification, classifying the language of the source text.
  • Genre classification, classifying the genre of a fictional story.

Further, the problem may be framed so that multiple classes are assigned to a text, so-called multi-label classification, such as predicting multiple hashtags for a source tweet.
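
As a toy illustration of the text-in, label-out framing (not a deep learning model), a lexicon-based sentiment classifier might look like the following. The word lists are invented for the example; a neural network would instead learn such word weights from labeled examples.

```python
# Toy lexicon-based sentiment classifier: illustrative only.
# A deep learning model would learn word weights from labeled data
# rather than rely on hand-picked word lists like these.
POSITIVE = {"great", "excellent", "good", "love", "wonderful"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "poor"}

def classify_sentiment(text):
    """Return 'positive' or 'negative' for a piece of text."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score >= 0 else "negative"

print(classify_sentiment("what a great movie, I love it"))   # positive
print(classify_sentiment("terrible plot and awful acting"))  # negative
```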

For more on the general topic, see:

Below are 3 examples of deep learning papers for text classification:

2. Language Modeling

Language modeling is really a subtask of more interesting natural language problems, specifically those that condition the language model on some other input.

… the problem is to predict the next word given the previous words. The task is fundamental to speech or optical character recognition, and is also used for spelling correction, handwriting recognition, and statistical machine translation.

— Page 191, Foundations of Statistical Natural Language Processing, 1999.

In addition to the academic interest in language modeling, it is a key component of many deep learning natural language processing architectures.

A language model learns the probabilistic relationship between words such that new sequences of words can be generated that are statistically consistent with the source text.

Alone, language models can be used for text or speech generation; for example:

  • Generating new article headlines.
  • Generating new sentences, paragraphs, or documents.
  • Generating suggested continuation of a sentence.
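
The core idea can be sketched with a tiny count-based bigram model. This is a classical (non-neural) stand-in on a made-up corpus, shown only to make "predict the next word given the previous words" concrete; a neural language model replaces the raw counts with learned probability estimates that generalize to unseen contexts.

```python
from collections import Counter, defaultdict

# Count bigram transitions in a toy corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word after `word`."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (seen twice after 'the')
```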

For more on language modeling, see:

Below is an example of deep learning for language modeling (only):

3. Speech Recognition

Speech recognition is the problem of understanding what was said.

The task of speech recognition is to map an acoustic signal containing a spoken natural language utterance into the corresponding sequence of words intended by the speaker.

— Page 458, Deep Learning, 2016.

Given an utterance as audio data, the model must produce human-readable text.

Given the automatic nature of the process, the problem may also be called Automatic Speech Recognition (ASR).

A language model is used to create the text output that is conditioned on the audio data.

Some examples include:

  • Transcribing a speech.
  • Creating text captions for a movie or TV show.
  • Issuing commands to the radio while driving.

For more on speech recognition, see:

Below are 3 examples of deep learning for speech recognition.

4. Caption Generation

Caption generation is the problem of describing the contents of an image.

Given a digital image, such as a photo, generate a textual description of the contents of the image.

A language model is used to create the caption that is conditioned on the image.

Some examples include:

  • Describing the contents of a scene.
  • Creating a caption for a photograph.
  • Describing a video.

This is useful not only for the visually impaired, but also for generating human-readable text for image and video data so that it can be searched, such as on the web.

Below are 3 examples of deep learning for caption generation:

5. Machine Translation

Machine translation is the problem of converting a source text in one language to another language.

Machine translation, the automatic translation of text or speech from one language to another, is one [of] the most important applications of NLP.

— Page 463, Foundations of Statistical Natural Language Processing, 1999.

Given that deep neural networks are used, the field is referred to as neural machine translation.

In a machine translation task, the input already consists of a sequence of symbols in some language, and the computer program must convert this into a sequence of symbols in another language. This is commonly applied to natural languages, such as translating from English to French. Deep learning has recently begun to have an important impact on this kind of task.

— Page 98, Deep Learning, 2016.

A language model is used to output the destination text in the second language, conditioned on the source text.

Some examples include:

  • Translating a text document from French to English.
  • Translating Spanish audio to German text.
  • Translating English text to Italian audio.
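
A naive word-for-word dictionary lookup (with an invented four-word dictionary) shows the sequence-in, sequence-out framing. Neural machine translation instead encodes the whole source sentence and generates fluent target text, handling word order and ambiguity that this sketch ignores.

```python
# Toy word-for-word "translation": illustrative only.
# A hypothetical four-entry English-to-French dictionary; unknown
# words are passed through unchanged.
EN_TO_FR = {"the": "le", "cat": "chat", "eats": "mange", "fish": "poisson"}

def translate(sentence):
    """Replace each word with its dictionary entry, if any."""
    return " ".join(EN_TO_FR.get(w, w) for w in sentence.lower().split())

print(translate("the cat eats fish"))  # le chat mange poisson
```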

For more on neural machine translation, see:

Below are 3 examples of deep learning for machine translation:

6. Document Summarization

Document summarization is the task of creating a short description of a longer text document.

As above, a language model is used to output the summary conditioned on the full document.

Some examples of document summarization include:

  • Creating a heading for a document.
  • Creating an abstract of a document.
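
A classical frequency-based extractive baseline can frame the task; the scoring scheme here is a simplified invention. Deep learning approaches instead generate (abstract) the summary with a language model conditioned on the document, rather than copying out existing sentences.

```python
from collections import Counter

# Toy extractive summarizer: keep the sentence(s) whose words are
# most frequent across the whole document. Illustrative only.
def summarize(document, num_sentences=1):
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    freq = Counter(document.lower().replace(".", " ").split())
    def score(sentence):
        words = sentence.lower().split()
        return sum(freq[w] for w in words) / len(words)
    ranked = sorted(sentences, key=score, reverse=True)
    return ". ".join(ranked[:num_sentences]) + "."

doc = ("Deep learning uses neural networks. Neural networks learn from data. "
       "The weather was pleasant today.")
print(summarize(doc))
```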

For more on the topic, see:

Below are 3 examples of deep learning for document summarization:

7. Question Answering

Question answering is the problem of answering a specific question about a subject, such as a document of text.

… question answering systems which try to answer a user query that is formulated in the form of a question by returning the appropriate noun phrase such as a location, a person, or a date. For example, the question Who killed President Kennedy? might be answered with the noun phrase Oswald.

— Page 377, Foundations of Statistical Natural Language Processing, 1999.
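
To make the framing concrete, here is a toy retrieval-style sketch (not a deep learning model): score each sentence of the document by word overlap with the question and return the best match. Deep QA models learn far richer question-document matching than this.

```python
# Toy retrieval-based question answering: illustrative only.
def answer(question, document):
    """Return the document sentence sharing the most words with the question."""
    q_words = set(question.lower().replace("?", "").split())
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

doc = ("Oswald shot President Kennedy in Dallas. "
       "The weather in Dallas was sunny that day.")
print(answer("Who shot President Kennedy?", doc))
# Oswald shot President Kennedy in Dallas
```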

Some examples include:

  • Answering questions about Wikipedia articles.
  • Answering questions about news articles.
  • Answering questions about medical records.

For more information on question answering, see:

Below are 3 examples of deep learning for question answering:

Further Reading

This section provides more resources on deep learning applications for NLP if you are looking to go deeper.

Summary

In this post, you discovered 7 applications of deep learning to natural language processing tasks.

Was your favorite example of deep learning for NLP missed?
Let me know in the comments.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.




34 Responses to 7 Applications of Deep Learning for Natural Language Processing

  1. Baran September 20, 2017 at 6:05 am #

    Hi Jason, I enjoyed reading it, thank you. Are you planning to create a coding example/tutorial for Q&A model with Keras? I need an implementation with variable question and answer size

  2. Emeka Farrier September 20, 2017 at 10:18 pm #

    Good read! I’m starting a little project as well… very unique in natural language processing

  3. Chiedu September 21, 2017 at 3:15 pm #

    Hi Jason,
    I see you have begun your series on ML with text.
    Nice one

  4. Farooq Zaman September 22, 2017 at 5:23 am #

    Respected sir thanks for this nice and knowledgeable post on NLP tasks
    I read some paper on part of speech tagging where deep learning also perform well
    Can you please have a post on that as well ? And also some sort of implementation with keras. I will love to inspire it . thanks once again

  5. Ben Peterson September 23, 2017 at 1:10 am #

    Great post. I would like to leverage this technology in my graduate thesis work, subjecting propaganda to various analyses. Do you know of any low-cost or education-friendly services available for people like me to conduct such research using machine learning?

    Thanks for the great post.

    Respectfully,
    Ben

    • Jason Brownlee September 23, 2017 at 5:42 am #

      What services you mean exactly Ben? Source of data?

  6. Ryan September 23, 2017 at 2:39 am #

    Hey Jason – thanks for this article and list of resources. I’m looking to use NLP to review contracts for determining if key areas of information have been completed; principal names, addresses, signatures, etc. Currently doing this with manual scanning, ugh. “Question Answering” seems close – any suggestions on tools or types of tech to deploy? Thanks

    • Jason Brownlee September 23, 2017 at 5:43 am #

      Interesting. Sounds like engineering (checking each field) might be better than machine learning, but I don’t really know the problem well.

  7. Abhishek Singh September 28, 2017 at 2:58 am #

Awesome work, Sir. Would be better if explained with working examples.
    Thanks

  8. Deepu November 1, 2017 at 4:56 pm #

    You are amazing! It just took 5 mins to read and understand this blog to get an idea about different field in Deep learning. Now I can really narrow down by research for my project. Thank you and appreciate your effort. BTW, I bought your text book last month and I am loving it. Please keep it coming.

  9. Anupam December 18, 2017 at 1:12 pm #

    Hi Jason,
    I have a body of text and I want to derive some inferences from it. For example :
    Input :
    If bit A is set bit B cannot be set.
    If bit A is 1 then B cannot be written.

    Output :
    To set B, A must be set

    How would you classify this problem and what approach do you recommend?
    Thanks

    • Jason Brownlee December 18, 2017 at 3:29 pm #

      Perhaps you can prepare millions of input-output examples in text and train an NLP model?

      Perhaps you can translate the text to a binary format and learn a simple logic program?

  10. Anu January 14, 2018 at 11:37 pm #

    Excellent intro.I would like to know more about how deep learning can be used for named entity recognition

  11. shabir January 20, 2018 at 5:14 pm #

    hay jason

very interesting, will u help to send me the coding phase for text summarization in python

  12. priya March 21, 2018 at 6:34 pm #

    Hi Jason,

    How to provide feature vectors extracted from audio as input to RNN networks in python ?

  13. Harish April 16, 2018 at 2:45 pm #

    Hi Jason,
Found this article interesting. I have an idea of summarizing highlights of a sport from a set of commentaries. Taking cricket, given the whole commentary set of the match I have to pick out commentaries corresponding to any of the highlights such as 4s, 6s or wickets. Please help me by answering what I have to do? What method can i use text classification or text summarization? How to do that method with respect to this context?

  14. Happy May 18, 2018 at 12:34 am #

    Hello Jason,

    Its always inspiring to learn from your blog.

    I am trying to learn about Question Answering. Have you implemented one already?
    Would love to learn from it.

    Thanks

  15. Mamta May 19, 2018 at 10:56 am #

    Great Article .. Feeling confident ..Started 7 day mini-course
    “Deep Learning for NLP Crash Course.”

  16. Balaji Gentela May 23, 2018 at 7:26 pm #

    Hello sir..could you please explain how text classification works.. Is there any algorithm….?

  17. Syed Alam July 8, 2018 at 11:00 pm #

    Hi,

    Are you planning to provide natural language processing concepts and code for speech recognition?
