Search results for "Natural Language Processing"

What Are Word Embeddings for Text?
Word embeddings are a type of word representation that allows words with similar meaning to have a similar representation. They are a distributed representation for text that is perhaps one of the key breakthroughs for the impressive performance of deep learning methods on challenging natural language processing problems. In this post, you will discover the […]
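The core intuition is geometric: words with similar meanings end up near one another in vector space. The snippet below makes that concrete with made-up 3-dimensional vectors and cosine similarity; real embeddings are learned from text and typically have 50–300 dimensions.

```python
# Toy illustration: similar words sit close together in vector space.
# These 3-dimensional vectors are invented for demonstration only.
import numpy as np

vectors = {
    "king":  np.array([0.8, 0.3, 0.1]),
    "queen": np.array([0.7, 0.4, 0.1]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["king"], vectors["queen"]))  # high (~0.99)
print(cosine_similarity(vectors["king"], vectors["apple"]))  # low  (~0.29)
```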
A Gentle Introduction to the Bag-of-Words Model
The bag-of-words model is a way of representing text data when modeling text with machine learning algorithms. The bag-of-words model is simple to understand and implement and has seen great success in problems such as language modeling and document classification. In this tutorial, you will discover the bag-of-words model for feature extraction in natural language […]
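The idea fits in a few lines of plain Python. This sketch uses a toy two-document corpus and simple whitespace tokenization: each document becomes a vector of word counts over a shared vocabulary, and word order is discarded.

```python
# A minimal bag-of-words sketch: documents become count vectors.
from collections import Counter

docs = ["the cat sat on the mat", "the dog sat on the log"]

# Build the vocabulary over all documents (sorted for a stable column order).
vocab = sorted({word for doc in docs for word in doc.split()})

for doc in docs:
    counts = Counter(doc.split())
    vector = [counts[word] for word in vocab]  # Counter returns 0 for absent words
    print(vector)
```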
How to Develop Word Embeddings in Python with Gensim
Word embeddings are a modern approach for representing text in natural language processing. Word embedding algorithms like word2vec and GloVe are key to the state-of-the-art results achieved by neural network models on natural language processing problems like machine translation. In this tutorial, you will discover how to train and load word embedding models for natural […]
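As a taste of the Gensim API, here is a minimal sketch that trains word2vec on a toy corpus. The parameters are illustrative rather than tuned, and the `vector_size` argument assumes Gensim 4 or later (earlier releases call it `size`); a real model needs far more text.

```python
# A minimal word2vec training sketch with Gensim on a toy corpus.
from gensim.models import Word2Vec

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "log"],
]

# vector_size is the embedding dimensionality; min_count=1 keeps rare words.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=100)

print(model.wv["cat"].shape)         # (50,)
print(model.wv.most_similar("cat"))  # nearest neighbours in the toy space
model.save("word2vec.model")         # reload later with Word2Vec.load(...)
```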
How to Use Word Embedding Layers for Deep Learning with Keras
Word embeddings provide a dense representation of words and their relative meanings. They are an improvement over the sparse representations used in simpler bag-of-words models. Word embeddings can be learned from text data and reused across projects. They can also be learned as part of fitting a neural network on text data. In this […]
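A minimal sketch of the pattern in Keras might look like the following; the vocabulary size, document length, and embedding dimension are assumed values for illustration.

```python
# A minimal Keras Embedding layer sketch: integer word indices in,
# trainable dense vectors out.
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense

vocab_size = 100   # distinct words in the corpus (assumed for this sketch)
max_length = 4     # words per padded document (assumed)

model = Sequential([
    Input(shape=(max_length,)),
    # Maps each integer word index to a trainable 8-dimensional vector.
    Embedding(input_dim=vocab_size, output_dim=8),
    Flatten(),
    Dense(1, activation="sigmoid"),  # e.g. a binary sentiment label
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```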
How to Encode Text Data for Machine Learning with scikit-learn
Text data requires special preparation before you can start using it for predictive modeling. The text must be parsed to extract words, a process called tokenization. Then the words need to be encoded as integers or floating point values for use as input to a machine learning algorithm, a process called feature extraction (or vectorization). The scikit-learn library offers […]
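For example, a minimal sketch with `CountVectorizer`, one of several vectorizers the library offers, handles both steps at once: it tokenizes each document and encodes it as a vector of word counts.

```python
# A minimal scikit-learn feature-extraction sketch with CountVectorizer.
from sklearn.feature_extraction.text import CountVectorizer

docs = ["The cat sat on the mat.", "The dog sat on the log."]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)  # sparse document-term matrix

# get_feature_names_out() requires scikit-learn 1.0 or later.
print(vectorizer.get_feature_names_out())  # learned vocabulary
print(X.toarray())                         # integer counts per document
```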
Gentle Introduction to Transduction in Machine Learning
Transduction, or transductive learning, is a term you may come across in applied machine learning. It is used in some applications of recurrent neural networks to sequence prediction problems, such as problems in the domain of natural language processing. In this post, you will discover what transduction is in machine learning. After reading this […]
Gentle Introduction to Generative Long Short-Term Memory Networks
The Long Short-Term Memory recurrent neural network was developed for sequence prediction. In addition to sequence prediction problems, LSTMs can also be used as generative models. In this post, you will discover how LSTMs can be used as generative models. After completing this post, you will know: About generative models, with a focus on […]
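As a sketch of the generative setup, the model below predicts the next character from a window of one-hot encoded characters; after fitting, text is generated by repeatedly sampling a character and appending it to the window. Sizes are illustrative.

```python
# A minimal generative LSTM sketch: predict the next character from context.
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

vocab_size = 40   # distinct characters (assumed; taken from the corpus in practice)
seq_length = 20   # characters of context per training sample (assumed)

model = Sequential([
    Input(shape=(seq_length, vocab_size)),    # one-hot encoded characters
    LSTM(128),
    Dense(vocab_size, activation="softmax"),  # distribution over the next character
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
# After fitting on (window, next character) pairs, generate text by repeatedly
# sampling from model.predict(...) and sliding the sampled character into the window.
```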
Encoder-Decoder Long Short-Term Memory Networks
Gentle introduction to the Encoder-Decoder LSTMs for sequence-to-sequence prediction with example Python code. The Encoder-Decoder LSTM is a recurrent neural network designed to address sequence-to-sequence problems, sometimes called seq2seq. Sequence-to-sequence prediction problems are challenging because the number of items in the input and output sequences can vary. For example, text translation and learning to execute […]
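A minimal sketch of the architecture in Keras pairs an encoder LSTM with `RepeatVector` and a decoder LSTM: the encoder compresses the input sequence to a fixed-length vector, which is then presented once per output time step. All sizes below are illustrative.

```python
# A minimal Encoder-Decoder LSTM sketch for sequence-to-sequence prediction.
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, RepeatVector, TimeDistributed

n_in, n_out, n_features = 5, 3, 10  # input steps, output steps, features per step

model = Sequential([
    Input(shape=(n_in, n_features)),
    LSTM(100),                         # encoder: sequence -> fixed-length vector
    RepeatVector(n_out),               # repeat the context once per output step
    LSTM(100, return_sequences=True),  # decoder: context -> output sequence
    TimeDistributed(Dense(n_features, activation="softmax")),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```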
CNN Long Short-Term Memory Networks
Gentle introduction to CNN LSTM recurrent neural networks with example Python code. Input with spatial structure, like images, cannot be modeled easily with the standard Vanilla LSTM. The CNN Long Short-Term Memory Network, or CNN LSTM for short, is an LSTM architecture specifically designed for sequence prediction problems with spatial inputs, like images or videos. […]
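A minimal sketch of the pattern in Keras wraps the CNN layers in `TimeDistributed`, so the same feature extractor is applied to every frame of the sequence before the LSTM interprets the resulting features. The shapes below are illustrative.

```python
# A minimal CNN LSTM sketch: per-frame CNN features, then an LSTM over frames.
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Flatten, LSTM,
                                     Dense, TimeDistributed)

steps, height, width, channels = 10, 64, 64, 1  # e.g. 10 greyscale frames (assumed)

model = Sequential([
    Input(shape=(steps, height, width, channels)),
    TimeDistributed(Conv2D(16, (3, 3), activation="relu")),  # per-frame features
    TimeDistributed(MaxPooling2D((2, 2))),
    TimeDistributed(Flatten()),
    LSTM(50),                        # interpret the sequence of frame features
    Dense(1, activation="sigmoid"),  # e.g. a binary label for the clip
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```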
A Tour of Recurrent Neural Network Algorithms for Deep Learning
Recurrent neural networks, or RNNs, are a type of artificial neural network that adds weights to the network to create cycles in the network graph, in an effort to maintain an internal state. The promise of adding state to neural networks is that they will be able to explicitly learn and exploit context in […]
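The recurrence itself is small enough to write out by hand. This NumPy sketch, with random weights purely for illustration, shows how the hidden state carries context forward: each new state mixes the current input with the previous state through shared weights.

```python
# A minimal sketch of the simple-RNN recurrence: state h carries history forward.
import numpy as np

rng = np.random.default_rng(0)
n_input, n_hidden = 3, 4
W_x = rng.normal(size=(n_hidden, n_input))   # input -> hidden weights
W_h = rng.normal(size=(n_hidden, n_hidden))  # hidden -> hidden weights (the cycle)
b = np.zeros(n_hidden)

h = np.zeros(n_hidden)                       # internal state, initially empty
sequence = rng.normal(size=(5, n_input))     # 5 time steps of toy input

for x in sequence:
    h = np.tanh(W_x @ x + W_h @ h + b)       # new state depends on input AND history
    print(h)
```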