Search results for "Long Short Term Memory Network"

Convolutional Neural Networks for Multi-Step Time Series Forecasting

Given the rise of smart electricity meters and the wide adoption of electricity generation technology like solar panels, there is a wealth of electricity usage data available. This data represents a multivariate time series of power-related variables that in turn could be used to model and even forecast future electricity consumption. Unlike other machine learning […]
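
To make the idea concrete, here is a minimal, self-contained sketch of a multi-step CNN forecaster of the kind this post covers. The 14-day input window, 7-day output horizon, layer sizes, and synthetic data are illustrative assumptions, not the tutorial's exact configuration.

```python
# Sketch: a 1D CNN that maps 14 daily readings to a 7-day forecast.
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

n_input, n_output = 14, 7  # assumed window sizes: days in, days out

# Synthetic univariate consumption series, shaped (samples, timesteps, features)
X = np.random.rand(100, n_input, 1)
y = np.random.rand(100, n_output)

model = Sequential([
    Input(shape=(n_input, 1)),
    Conv1D(filters=16, kernel_size=3, activation="relu"),
    MaxPooling1D(pool_size=2),
    Flatten(),
    Dense(10, activation="relu"),
    Dense(n_output),  # one output unit per forecast day
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, verbose=0)
print(model.predict(X[:1]).shape)  # (1, 7): a 7-day forecast in one shot
```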

1D Convolutional Neural Network Models for Human Activity Recognition

Human activity recognition is the problem of classifying sequences of accelerometer data recorded by specialized harnesses or smartphones into known, well-defined movements. Classical approaches to the problem involve hand-crafting features from the time series data based on fixed-size windows and training machine learning models, such as ensembles of decision trees. The difficulty is […]
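
A minimal sketch of this kind of 1D CNN classifier over raw signal windows follows; the 128-step window, 3 accelerometer axes, and 6 activity classes mirror common HAR setups but are assumptions here, as is the synthetic data.

```python
# Sketch: a 1D CNN that classifies fixed-size accelerometer windows.
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Dropout, MaxPooling1D, Flatten, Dense

n_steps, n_axes, n_classes = 128, 3, 6  # assumed window and label sizes

X = np.random.rand(200, n_steps, n_axes)           # raw signal windows
y = np.random.randint(0, n_classes, size=(200,))   # integer activity labels

model = Sequential([
    Input(shape=(n_steps, n_axes)),
    Conv1D(64, kernel_size=3, activation="relu"),  # learn features from raw signal
    Conv1D(64, kernel_size=3, activation="relu"),
    Dropout(0.5),
    MaxPooling1D(pool_size=2),
    Flatten(),
    Dense(100, activation="relu"),
    Dense(n_classes, activation="softmax"),        # one probability per activity
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, verbose=0)
```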

Encoder-Decoder Recurrent Neural Network Models for Neural Machine Translation

The encoder-decoder architecture for recurrent neural networks is the standard neural machine translation method that rivals, and in some cases outperforms, classical statistical machine translation methods. The architecture is very new, having only been pioneered in 2014, yet it has already been adopted as the core technology inside Google's translate service. In this post, you will discover […]
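
The core wiring can be sketched in a few lines of Keras: the encoder LSTM's final states seed the decoder LSTM. Vocabulary sizes and dimensions below are placeholders, and a real translator would also need tokenization and an inference-time decoding loop.

```python
# Sketch: encoder-decoder wiring with the Keras functional API.
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense

src_vocab, tgt_vocab, latent = 5000, 5000, 256  # assumed sizes

# Encoder: consume the one-hot source sequence, keep only the final states.
enc_in = Input(shape=(None, src_vocab))
_, state_h, state_c = LSTM(latent, return_state=True)(enc_in)

# Decoder: generate the target sequence, initialized with the encoder states.
dec_in = Input(shape=(None, tgt_vocab))
dec_seq, _, _ = LSTM(latent, return_sequences=True,
                     return_state=True)(dec_in, initial_state=[state_h, state_c])
dec_out = Dense(tgt_vocab, activation="softmax")(dec_seq)

model = Model([enc_in, dec_in], dec_out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```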

A Gentle Introduction to Exploding Gradients in Neural Networks

Exploding gradients are a problem in which large error gradients accumulate and result in very large updates to neural network model weights during training. This makes the model unstable and unable to learn from the training data. In this post, you will discover the problem of exploding gradients with deep artificial neural […]
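
The usual remedy is gradient clipping, which in Keras is a one-line change on the optimizer; the thresholds below are typical but assumed values, not universal settings.

```python
from tensorflow.keras.optimizers import SGD

# Option 1: rescale the whole gradient vector whenever its L2 norm exceeds 1.0.
clipped_by_norm = SGD(learning_rate=0.01, clipnorm=1.0)

# Option 2: clip each gradient element into the range [-0.5, 0.5].
clipped_by_value = SGD(learning_rate=0.01, clipvalue=0.5)

# Either optimizer is then passed to model.compile(optimizer=...).
```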

Gentle Introduction to Global Attention for Encoder-Decoder Recurrent Neural Networks

The encoder-decoder model provides a pattern for using recurrent neural networks to address challenging sequence-to-sequence prediction problems such as machine translation. Attention is an extension to the encoder-decoder model that improves the performance of the approach on longer sequences. Global attention is a simplification of attention that may be easier to implement in declarative deep […]
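
A NumPy sketch of the dot-scoring variant of global attention for a single decoder step is shown below: score every encoder output against the current decoder state, normalize with a softmax over all source steps (hence "global"), and take the weighted sum as the context vector. The shapes are illustrative assumptions.

```python
import numpy as np

enc_outputs = np.random.rand(6, 8)   # 6 source steps, hidden size 8
dec_state = np.random.rand(8)        # current decoder hidden state

scores = enc_outputs @ dec_state                    # dot score per source step, (6,)
weights = np.exp(scores - scores.max())             # softmax over ALL source steps
weights /= weights.sum()
context = weights @ enc_outputs                     # (8,) context vector

print(weights.round(3), context.shape)
```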

Deep Convolutional Neural Network for Sentiment Analysis (Text Classification)

Develop a Deep Learning Model to Automatically Classify Movie Reviews as Positive or Negative in Python with Keras, Step-by-Step. Word embeddings are a technique for representing text where different words with similar meaning have a similar real-valued vector representation. They are a key breakthrough that has led to the strong performance of neural network models on […]
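
A minimal sketch of the embedding-plus-CNN pattern follows: an Embedding layer learns the word vectors, a Conv1D layer detects n-gram-like features over them, and a sigmoid unit scores positive versus negative. The vocabulary size, review length, layer sizes, and synthetic data are assumptions.

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense

vocab_size, max_len = 10000, 200  # assumed vocabulary and padded review length

X = np.random.randint(1, vocab_size, size=(500, max_len))  # integer-encoded reviews
y = np.random.randint(0, 2, size=(500,))                   # 1 = positive sentiment

model = Sequential([
    Input(shape=(max_len,), dtype="int32"),
    Embedding(vocab_size, 100),                  # learned 100-d word vectors
    Conv1D(128, kernel_size=5, activation="relu"),  # 5-gram-like feature maps
    GlobalMaxPooling1D(),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, verbose=0)
```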

How Does Attention Work in Encoder-Decoder Recurrent Neural Networks

Attention is a mechanism that was developed to improve the performance of the Encoder-Decoder RNN on machine translation. In this tutorial, you will discover the attention mechanism for the Encoder-Decoder model. After completing this tutorial, you will know: About the Encoder-Decoder model and attention mechanism for machine translation. How to implement the attention mechanism step-by-step. […]
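
The tutorial builds these steps by hand; as a cross-check, Keras ships a dot-product attention layer that performs the same score, softmax, and weighted-sum computation in one call. The shapes below are illustrative assumptions.

```python
import tensorflow as tf

query = tf.random.normal((1, 1, 8))   # one decoder step, hidden size 8
value = tf.random.normal((1, 6, 8))   # six encoder annotations

# Scores each query against every value, softmaxes, and returns the
# weighted sum of the values as the context vector.
context = tf.keras.layers.Attention()([query, value])
print(context.shape)  # (1, 1, 8)
```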
