There is another type of neural network, called the recurrent neural network, that is dominating difficult machine learning problems involving sequences of inputs.
Recurrent neural networks have connections that form loops, adding feedback and memory to the network over time. This memory allows this type of network to learn and generalize across sequences of inputs rather than individual patterns.
A powerful type of recurrent neural network called the Long Short-Term Memory network has been shown to be particularly effective when stacked into a deep configuration, achieving state-of-the-art results on a diverse array of problems, from language translation to automatic captioning of images and videos.
In this post you will get a crash course in recurrent neural networks for deep learning, acquiring just enough understanding to start using LSTM networks in Python with Keras.
After reading this post, you will know:
- The limitations of Multilayer Perceptrons that are addressed by recurrent neural networks.
- The problems that must be addressed to make recurrent neural networks useful.
- The details of the Long Short-Term Memory networks used in applied deep learning.
Let’s get started.
Support For Sequences in Neural Networks
There are some problem types that are best framed with a sequence as the input, the output, or both.
For example, consider a univariate time series problem, like the price of a stock over time. This dataset can be framed as a prediction problem for a classical feedforward multilayer Perceptron network by defining a window size (e.g. 5) and training the network to learn to make short-term predictions from the fixed-size window of inputs.
This would work, but it is very limited. The window of inputs adds memory to the problem, but it is limited to a fixed number of points and must be chosen with sufficient knowledge of the problem. A naive window would not capture the broader trends over minutes, hours and days that might be relevant to making a prediction. From one prediction to the next, the network knows only about the specific inputs it is provided.
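To make the window framing concrete, here is a minimal sketch of how a univariate series could be turned into fixed-size input/output pairs for a multilayer Perceptron. The series values and the `make_windows` helper are illustrative, not taken from the post.

```python
import numpy as np

# Hypothetical univariate series, e.g. a sequence of stock prices.
series = np.array([10.0, 11.0, 12.5, 12.0, 13.0, 14.5, 15.0, 14.0])

def make_windows(values, window_size):
    """Frame a series as (input window, next value) pairs for an MLP."""
    X, y = [], []
    for i in range(len(values) - window_size):
        X.append(values[i:i + window_size])
        y.append(values[i + window_size])
    return np.array(X), np.array(y)

X, y = make_windows(series, window_size=5)
print(X.shape)  # (3, 5): three training patterns of five inputs each
print(y)        # the value that follows each window
```

Note that everything outside each five-value window is invisible to the network, which is exactly the limitation described above.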
Univariate time series prediction is important, but there are even more interesting problems that involve sequences.
Consider the following taxonomy of sequence problems that require a mapping of an input to an output (taken from Andrej Karpathy).
- One-to-Many: sequence output, for image captioning.
- Many-to-One: sequence input, for sentiment classification.
- Many-to-Many: sequence in and out, for machine translation.
- Synced Many-to-Many: synced sequences in and out, for video classification.
We can also see that a one-to-one example of input to output would be an example of a classical feedforward neural network for a prediction task like image classification.
Support for sequences in neural networks is an important class of problem and one where deep learning has recently shown impressive results. State-of-the-art results have been achieved using a type of network specifically designed for sequence problems called recurrent neural networks.
Recurrent Neural Networks
Recurrent Neural Networks or RNNs are a special type of neural network designed for sequence problems.
Given a standard feedforward multilayer Perceptron network, a recurrent neural network can be thought of as the addition of loops to the architecture. For example, in a given layer, each neuron may pass its signal laterally (sideways) in addition to forward to the next layer. The output of the network may feed back as an input to the network along with the next input vector. And so on.
The recurrent connections add state or memory to the network and allow it to learn broader abstractions from the input sequences.
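The idea of a hidden state carried across inputs can be sketched in a few lines of NumPy. The weight matrices and layer sizes below are illustrative assumptions, not values from the post: the key point is that the state `h` at each step depends on the new input and on the previous state.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden = 3, 4  # illustrative sizes

W_x = rng.normal(size=(n_hidden, n_in)) * 0.1      # input-to-hidden weights
W_h = rng.normal(size=(n_hidden, n_hidden)) * 0.1  # recurrent (loop) weights

def rnn_forward(sequence):
    """Carry a hidden state across timesteps: h_t = tanh(W_x x_t + W_h h_(t-1))."""
    h = np.zeros(n_hidden)  # the memory starts empty
    for x_t in sequence:
        h = np.tanh(W_x @ x_t + W_h @ h)  # state depends on all inputs so far
    return h

sequence = [rng.normal(size=n_in) for _ in range(5)]
final_state = rnn_forward(sequence)
print(final_state.shape)  # (4,)
```

Because the state is threaded through every step, reordering the inputs changes the final state, which is how the network becomes sensitive to sequence rather than just to individual patterns.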
The field of recurrent neural networks is well established, with popular methods. For the techniques to be effective on real problems, two major issues had to be resolved:
- How to train the network with Backpropagation.
- How to stop gradients vanishing or exploding during training.
1. How to Train Recurrent Neural Networks
The staple technique for training feedforward neural networks is to backpropagate error and update the network weights.
Backpropagation breaks down in a recurrent neural network because of the recurrent or loop connections.
This was addressed with a modification of the Backpropagation technique called Backpropagation Through Time or BPTT.
Instead of performing backpropagation on the recurrent network as stated, the structure of the network is unrolled: copies are created of any neuron that has a recurrent connection. For example, a single neuron with a connection to itself (A->A) could be represented as two neurons with the same weight values (A->B).
This allows the cyclic graph of a recurrent neural network to be turned into an acyclic graph like a classic feed-forward neural network, and Backpropagation can be applied.
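The equivalence between the looped network and its unrolled copy can be checked numerically. The weights and inputs below are illustrative: a single self-recurrent neuron computed with a loop gives exactly the same activations as three unrolled copies sharing the same weight values.

```python
import math

w_in, w_rec = 0.7, 0.5  # shared weights, illustrative values
inputs = [1.0, -0.5, 0.25]

# Recurrent form: one neuron feeding its activation back to itself (A->A).
a = 0.0
for x in inputs:
    a = math.tanh(w_in * x + w_rec * a)

# Unrolled form: one copy of the neuron per timestep, each copy
# reusing the same weight values (A->B->C).
a0 = math.tanh(w_in * inputs[0] + w_rec * 0.0)
a1 = math.tanh(w_in * inputs[1] + w_rec * a0)
a2 = math.tanh(w_in * inputs[2] + w_rec * a1)

# Identical results: unrolling changes the graph, not the function computed.
assert abs(a - a2) < 1e-12
```

The unrolled form is an acyclic graph, so ordinary backpropagation can be applied to it, with the constraint that the copied weights stay tied together.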
2. How to Have Stable Gradients During Training
When Backpropagation is used in very deep neural networks and in unrolled recurrent neural networks, the gradients that are calculated in order to update the weights can become unstable.
They can become very large numbers (called exploding gradients) or very small numbers (called the vanishing gradient problem). These unstable values are in turn used to update the weights in the network, making training unstable and the network unreliable.
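A back-of-the-envelope calculation shows why. Backpropagating through an unrolled network multiplies the gradient by (roughly) the recurrent weight once per timestep; the weight values and step count below are illustrative numbers, not from the post.

```python
# Gradient scaling over many unrolled steps, illustrative values only.
steps = 50

vanishing = 0.5 ** steps   # |w| < 1: the gradient shrinks toward zero
exploding = 1.5 ** steps   # |w| > 1: the gradient blows up

print(vanishing)  # ~8.9e-16: far too small to drive useful weight updates
print(exploding)  # ~6.4e+08: large enough to destabilize training
```

Fifty steps is a short sequence by the standards of text or time series data, which is why the problem is so severe for unrolled recurrent networks.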
This problem is alleviated in deep multilayer Perceptron networks through the use of the rectifier (ReLU) activation function, and by the more exotic but now less popular approach of unsupervised pre-training of layers.
In recurrent neural network architectures, this problem has been alleviated using a new type of architecture called the Long Short-Term Memory network, which allows deep recurrent networks to be trained.
Long Short-Term Memory Networks
The Long Short-Term Memory or LSTM network is a recurrent neural network that is trained using Backpropagation Through Time and overcomes the vanishing gradient problem.
As such, it can be used to create large (stacked) recurrent networks that in turn can be used to address difficult sequence problems in machine learning and achieve state-of-the-art results.
Instead of neurons, LSTM networks have memory blocks that are connected into layers.
A block has components that make it smarter than a classical neuron, including a memory for recent sequences. A block contains gates that manage the block’s state and output. A unit operates upon an input sequence, and each gate within the unit uses the sigmoid activation function to control whether it is triggered, making the change of state and the addition of information flowing through the unit conditional.
There are three types of gates within a memory unit:
- Forget Gate: conditionally decides what information to discard from the unit.
- Input Gate: conditionally decides which values from the input should update the memory state.
- Output Gate: conditionally decides what to output based on input and the memory of the unit.
Each unit is like a mini state machine where the gates of the units have weights that are learned during the training procedure.
You can see how a layer of LSTMs can achieve sophisticated learning and memory, and it is not hard to imagine how higher-order abstractions may be built by stacking multiple such layers.
We have covered a lot of ground in this post. Below are some resources that you can use to go deeper into the topic of recurrent neural networks for deep learning.
Resources to learn more about Recurrent Neural Networks and LSTMs.
- Recurrent neural network on Wikipedia
- Long Short-Term Memory on Wikipedia
- The Unreasonable Effectiveness of Recurrent Neural Networks by Andrej Karpathy
- Understanding LSTM Networks
- Deep Dive into Recurrent Neural Nets
- A Beginner’s Guide to Recurrent Networks and LSTMs
Popular tutorials for implementing LSTMs.
- LSTMs for language modeling with TensorFlow
- RNN for Spoken Word Understanding in Theano
- LSTM for sentiment analysis in Theano
Primary sources on LSTMs.
- Long short-term memory [pdf], 1997 paper by Hochreiter and Schmidhuber
- Learning to forget: Continual prediction with LSTM, 2000 paper by Gers, Schmidhuber and Cummins that adds the forget gate
- On the difficulty of training Recurrent Neural Networks [pdf], 2013
Summary
In this post you discovered sequence problems and the recurrent neural networks that can be used to address them.
Specifically, you learned:
- The limitations of classical feedforward neural networks and how recurrent neural networks can overcome these problems.
- The practical problems in training recurrent neural networks and how they are overcome.
- The Long Short-Term Memory network used to create deep recurrent neural networks.
Do you have any questions about deep recurrent neural networks, LSTMs or about this post? Ask your question in the comments and I will do my best to answer.