Search results for "Convolutional Neural Networks"

How to Grid Search Deep Learning Models for Time Series Forecasting

Grid searching is generally not an operation that we can perform with deep learning methods. This is because deep learning methods often require large amounts of data and large models, together resulting in models that take hours, days, or weeks to train. In those cases where the datasets are smaller, such as univariate time series, […]
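
For a sense of what this looks like in practice, here is a minimal sketch of a naive grid search over a small Keras MLP forecaster on a toy univariate series. The noisy sine-wave data, the hyperparameters searched (lag size, hidden nodes, epochs), and the simple train/test split are illustrative assumptions, not the tutorial's own setup.

```python
# Minimal grid-search sketch (illustrative assumptions, not the tutorial's code):
# score each configuration of a small MLP forecaster on a toy univariate series.
from itertools import product
from math import sqrt

import numpy as np
from sklearn.metrics import mean_squared_error
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Sequential


def series_to_supervised(series, n_in):
    # Frame the series as samples of [t-n_in, ..., t-1] -> t.
    X, y = [], []
    for i in range(n_in, len(series)):
        X.append(series[i - n_in:i])
        y.append(series[i])
    return np.array(X), np.array(y)


def evaluate_config(series, n_in, n_nodes, n_epochs):
    # Fit one model for this configuration and return its test RMSE.
    X, y = series_to_supervised(series, n_in)
    split = int(len(X) * 0.8)
    model = Sequential([Input(shape=(n_in,)),
                        Dense(n_nodes, activation='relu'),
                        Dense(1)])
    model.compile(loss='mse', optimizer='adam')
    model.fit(X[:split], y[:split], epochs=n_epochs, verbose=0)
    yhat = model.predict(X[split:], verbose=0).flatten()
    return sqrt(mean_squared_error(y[split:], yhat))


# Toy univariate series: a noisy sine wave.
series = np.sin(np.arange(200) * 0.1) + np.random.normal(0, 0.1, 200)

# Cartesian product of candidate values for (n_in, n_nodes, n_epochs).
grid = product([6, 12], [25, 50], [50, 100])
scores = [(cfg, evaluate_config(series, *cfg)) for cfg in grid]
for cfg, rmse in sorted(scores, key=lambda s: s[1]):
    print(cfg, round(rmse, 4))
```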

LSTM Model Architecture for Rare Event Time Series Forecasting

Directly applying LSTMs to time series forecasting has shown little success. This is surprising, as neural networks are known to be able to learn complex non-linear relationships, and the LSTM is perhaps the most successful type of recurrent neural network, capable of directly supporting multivariate sequence prediction problems. A recent study performed at Uber […]
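
As a point of reference for the model family being discussed, below is a minimal Keras sketch of an LSTM fit on a multivariate sequence prediction problem; the random stand-in data, shapes, and layer sizes are illustrative assumptions and do not reproduce the architecture from the Uber study.

```python
# Minimal LSTM sketch for multivariate sequence prediction (illustrative
# shapes and random stand-in data; not the architecture from the Uber study).
import numpy as np
from tensorflow.keras.layers import Dense, Input, LSTM
from tensorflow.keras.models import Sequential

n_samples, n_timesteps, n_features = 500, 24, 3

# Each sample is 24 time steps of 3 parallel series; the target is a scalar.
X = np.random.rand(n_samples, n_timesteps, n_features)
y = X[:, -1, :].mean(axis=1)

model = Sequential([
    Input(shape=(n_timesteps, n_features)),
    LSTM(32),            # reads the whole input sequence
    Dense(1),            # one-step-ahead forecast
])
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))
```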

When should I use an MLP, CNN and RNN?

A Multilayer Perceptron or MLP can approximate a mapping function from inputs to outputs. They are flexible and can be adapted to most problems; nevertheless, they are perhaps more suited to classification and regression problems. A Convolutional Neural Network or CNN was developed and is […]
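
As a rough illustration of the distinction, the sketch below defines one minimal model of each type in Keras; the input shapes (a 20-feature vector, a 28x28 grayscale image, and a 50-step univariate sequence) are assumptions chosen only to show the typical shape of data each family expects.

```python
# Minimal side-by-side sketch of the three families (input shapes are assumed).
from tensorflow.keras.layers import (Conv2D, Dense, Flatten, Input, LSTM,
                                     MaxPooling2D)
from tensorflow.keras.models import Sequential

# MLP: a flat vector of features in; a natural fit for tabular data.
mlp = Sequential([Input(shape=(20,)),
                  Dense(64, activation='relu'),
                  Dense(1)])

# CNN: 2D image data in; convolution and pooling learn spatial features.
cnn = Sequential([Input(shape=(28, 28, 1)),
                  Conv2D(16, (3, 3), activation='relu'),
                  MaxPooling2D(),
                  Flatten(),
                  Dense(1)])

# RNN (LSTM): an ordered sequence of time steps in; suited to sequence data.
rnn = Sequential([Input(shape=(50, 1)),
                  LSTM(32),
                  Dense(1)])

for m in (mlp, cnn, rnn):
    m.compile(loss='mse', optimizer='adam')
    m.summary()
```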

Caption Generation with the Inject and Merge Encoder-Decoder Models

Caption generation is a challenging artificial intelligence problem that draws on both computer vision and natural language processing. The encoder-decoder recurrent neural network architecture has been shown to be effective at this problem. The implementation of this architecture can be distilled into inject and merge based models, and both make different assumptions about the role […]
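
To make the distinction concrete, here is a minimal sketch of the merge flavour in the Keras functional API, where the photo feature vector and the encoded word sequence are combined only at the decoding step; the vocabulary size, caption length, feature length, and layer sizes are illustrative assumptions.

```python
# Minimal "merge" caption model sketch (sizes are illustrative assumptions).
from tensorflow.keras.layers import Dense, Embedding, Input, LSTM, add
from tensorflow.keras.models import Model

vocab_size, max_length, feature_size = 5000, 30, 4096

# Photo branch: a fixed-length feature vector, e.g. from a pre-trained CNN.
inputs1 = Input(shape=(feature_size,))
fe = Dense(256, activation='relu')(inputs1)

# Language branch: the caption so far, encoded by an embedding and an LSTM.
inputs2 = Input(shape=(max_length,))
se = Embedding(vocab_size, 256, mask_zero=True)(inputs2)
se = LSTM(256)(se)

# Merge: the two representations meet only here, then decode the next word.
decoder = add([fe, se])
decoder = Dense(256, activation='relu')(decoder)
outputs = Dense(vocab_size, activation='softmax')(decoder)

model = Model(inputs=[inputs1, inputs2], outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
```

In the inject variant, by contrast, the image features are fed into the recurrent language model itself rather than being combined after it.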

Encoder-Decoder Models for Text Summarization in Keras

Text summarization is a problem in natural language processing of creating a short, accurate, and fluent summary of a source document. The Encoder-Decoder recurrent neural network architecture developed for machine translation has proven effective when applied to the problem of text summarization. It can be difficult to apply this architecture in the Keras deep learning […]
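
As one rough way to express the idea in Keras, the sketch below encodes the source document to a fixed-length vector, repeats it once per summary word, and decodes it with a second LSTM; the vocabulary sizes and sequence lengths are illustrative assumptions, and this is only one simple realisation of an encoder-decoder summarizer, not necessarily the article's own models.

```python
# Minimal encoder-decoder summarizer sketch (assumed vocabulary sizes and
# lengths; one simple realisation, not necessarily the article's models).
from tensorflow.keras.layers import (Dense, Embedding, Input, LSTM,
                                     RepeatVector, TimeDistributed)
from tensorflow.keras.models import Sequential

src_vocab, sum_vocab = 10000, 10000   # source and summary vocabulary sizes
src_len, sum_len = 200, 30            # source document and summary lengths

model = Sequential([
    Input(shape=(src_len,)),
    # Encoder: read the source document into a fixed-length vector.
    Embedding(src_vocab, 128),
    LSTM(128),
    # Decoder: repeat that encoding for each output step and predict words.
    RepeatVector(sum_len),
    LSTM(128, return_sequences=True),
    TimeDistributed(Dense(sum_vocab, activation='softmax')),
])
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
```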

Encoder-Decoder Deep Learning Models for Text Summarization

Text summarization is the task of creating short, accurate, and fluent summaries from larger text documents. Recently deep learning methods have proven effective at the abstractive approach to text summarization. In this post, you will discover three different models that build on top of the effective Encoder-Decoder architecture developed for sequence-to-sequence prediction in machine translation. […]

How to Use Small Experiments to Develop a Caption Generation Model in Keras

Caption generation is a challenging artificial intelligence problem where a textual description must be generated for a photograph. It requires both methods from computer vision to understand the content of the image and a language model from the field of natural language processing to turn the understanding of the image into words in the right […]

How to Automatically Generate Textual Descriptions for Photographs with Deep Learning

Captioning an image involves generating a human-readable textual description given an image, such as a photograph. It is an easy problem for a human, but very challenging for a machine, as it involves both understanding the content of an image and translating that understanding into natural language. Recently, deep learning methods have […]
