Search results for "Natural Language Processing"

How to Implement Scaled Dot-Product Attention from Scratch in TensorFlow and Keras

Having familiarized ourselves with the theory behind the Transformer model and its attention mechanism, we’ll start our journey of implementing a complete Transformer model by first seeing how to implement the scaled dot-product attention. The scaled dot-product attention is an integral part of the multi-head attention, which, in turn, is an important component of both […]
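
To make the idea concrete, here is a minimal sketch of scaled dot-product attention as a custom Keras layer. It is not the article’s own implementation; the layer name, the mask convention (1 marks positions to hide), and the -1e9 masking constant are illustrative choices.

```python
import tensorflow as tf
from tensorflow import keras


class ScaledDotProductAttention(keras.layers.Layer):
    """Computes softmax(Q K^T / sqrt(d_k)) V, with optional masking."""

    def call(self, queries, keys, values, mask=None):
        d_k = tf.cast(tf.shape(keys)[-1], tf.float32)
        # Similarity of every query with every key, scaled by sqrt(d_k)
        scores = tf.matmul(queries, keys, transpose_b=True) / tf.math.sqrt(d_k)
        if mask is not None:
            # Assumed convention: mask == 1 at positions that should be ignored
            scores += -1e9 * tf.cast(mask, tf.float32)
        # Attention weights sum to 1 over the key dimension
        weights = keras.activations.softmax(scores)
        # Weighted sum of the values
        return tf.matmul(weights, values)


# Example: batch of 1, sequence length 5, key/value dimension 64
q = tf.random.normal((1, 5, 64))
k = tf.random.normal((1, 5, 64))
v = tf.random.normal((1, 5, 64))
print(ScaledDotProductAttention()(q, k, v).shape)  # (1, 5, 64)
```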

A Bird’s Eye View of Research on Attention

Attention is a concept that is scientifically studied across multiple disciplines, including psychology, neuroscience, and, more recently, machine learning. While each discipline has produced its own definition of attention, one core quality they all agree on is that attention is a mechanism for making both biological and artificial neural systems more flexible. In […]

TensorFlow 2 Tutorial: Get Started in Deep Learning with tf.keras

Predictive modeling with deep learning is a skill that modern developers need to know. TensorFlow is the premier open-source deep learning framework developed and maintained by Google. Although using TensorFlow directly can be challenging, the modern tf.keras API brings Keras’s simplicity and ease of use to the TensorFlow project. Using tf.keras allows you to design, […]
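
As a rough illustration of how little code tf.keras needs, the sketch below defines, compiles, and fits a small fully connected network on synthetic data; the architecture and the made-up dataset are assumptions for the example, not part of the tutorial.

```python
import numpy as np
from tensorflow import keras

# Synthetic binary classification data, purely for illustration
X = np.random.rand(200, 8).astype("float32")
y = (X.sum(axis=1) > 4.0).astype("float32")

# Define, compile, and fit a small MLP with the Sequential API
model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

# Make predictions on a few rows of new data
print(model.predict(X[:3], verbose=0))
```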

Discrete Probability Distributions for Machine Learning

The probabilities for a discrete random variable can be summarized with a discrete probability distribution. Discrete probability distributions are used in machine learning, most notably in the modeling of binary and multi-class classification problems, but also in evaluating the performance of binary classification models, such as in the calculation of confidence intervals, and in the modeling […]
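
For a sense of what these distributions look like in code, the snippet below evaluates the Bernoulli, Binomial, and Multinomial probability mass functions; scipy.stats is used here only for illustration, since the excerpt does not name a library, and the parameter values are made up.

```python
from scipy.stats import bernoulli, binom, multinomial

# Bernoulli: a single binary outcome, e.g. the positive class with p = 0.3
print(bernoulli.pmf([0, 1], p=0.3))         # [0.7, 0.3]

# Binomial: number of successes in n independent Bernoulli trials
print(binom.pmf(2, n=10, p=0.3))            # P(exactly 2 successes in 10 trials)

# Multinomial: counts over k classes, the multi-class analogue
print(multinomial.pmf([1, 0, 2], n=3, p=[0.2, 0.3, 0.5]))
```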

9 Books on Generative Adversarial Networks (GANs)

Generative Adversarial Networks, or GANs for short, were first described in the 2014 paper by Ian Goodfellow, et al. titled “Generative Adversarial Networks.” Since then, GANs have received a lot of attention, given that they are perhaps one of the most effective techniques for generating large, high-quality synthetic images. As such, a number of books […]
