Question Answering is a crucial natural language processing task that enables machines to understand and respond to human questions by extracting relevant information from a given context. DistilBERT, a distilled version of BERT, offers an excellent balance between performance and computational efficiency for building Q&A systems. In this tutorial, you will learn how to build […]
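Extractive Q&A of this kind can be sketched in a few lines with the transformers `pipeline` API; this example assumes the SQuAD-fine-tuned checkpoint `distilbert-base-cased-distilled-squad` is available on the Hugging Face Hub:

```python
from transformers import pipeline

# A minimal sketch: extractive question answering with a DistilBERT
# checkpoint fine-tuned on SQuAD (assumed checkpoint name).
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "DistilBERT was released by Hugging Face in 2019 as a smaller, "
    "faster version of BERT that retains most of its accuracy."
)
result = qa(question="Who released DistilBERT?", context=context)

# The pipeline returns the answer span it extracted from the context,
# together with a confidence score between 0 and 1.
print(result["answer"], round(result["score"], 3))
```

The answer is always a literal span of the given context, which is what makes this *extractive* rather than generative question answering.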
Author Archive | Muhammad Asad Iqbal Khan
Understanding the DistilBart Model and ROUGE Metric
DistilBart is a typical encoder-decoder model for NLP tasks. In this tutorial, you will learn how such a model is constructed and how you can check its architecture so that you can compare it with other models. You will also learn how to use the pretrained DistilBart model to generate summaries and how to control […]
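One quick way to check an encoder-decoder model's architecture is to load only its configuration; this sketch assumes the `sshleifer/distilbart-cnn-12-6` checkpoint (12 encoder layers distilled down to 6 decoder layers) on the Hugging Face Hub:

```python
from transformers import AutoConfig

# Load just the model configuration (no weights) to inspect the
# encoder-decoder layout of a DistilBart checkpoint.
config = AutoConfig.from_pretrained("sshleifer/distilbart-cnn-12-6")

# DistilBart reuses the BART architecture, so the config exposes
# separate layer counts for the encoder and decoder stacks.
print(config.model_type)       # underlying architecture family
print(config.encoder_layers)   # encoder depth
print(config.decoder_layers)   # decoder depth
```

Comparing `encoder_layers` and `decoder_layers` across checkpoints is a cheap way to contrast DistilBart variants with the full BART model without downloading any weights.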
Text Summarization with DistilBart Model
Text summarization represents a sophisticated evolution of text generation, requiring a deep understanding of content and context. With encoder-decoder transformer models like DistilBart, you can now create summaries that capture the essence of longer text while maintaining coherence and relevance. In this tutorial, you’ll discover how to implement text summarization using DistilBart. You’ll learn through […]
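A minimal summarization sketch with the `pipeline` API, again assuming the `sshleifer/distilbart-cnn-12-6` checkpoint:

```python
from transformers import pipeline

# Summarization pipeline backed by a DistilBart checkpoint
# fine-tuned on the CNN/DailyMail dataset (assumed checkpoint name).
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

text = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and the tallest structure in Paris. Its base is square, "
    "measuring 125 metres on each side. During its construction, the "
    "Eiffel Tower surpassed the Washington Monument to become the tallest "
    "man-made structure in the world."
)

# min_length / max_length bound the summary in tokens; do_sample=False
# makes the output deterministic (beam/greedy decoding).
summary = summarizer(text, min_length=10, max_length=60, do_sample=False)[0]["summary_text"]
print(summary)
```

The `min_length` and `max_length` arguments are the simplest knobs for controlling summary length; later sections of the tutorial cover decoding strategies in more detail.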

Text Generation using Contrastive Search with GPT-2 Model
Text generation is one of the most fascinating applications of deep learning. With the advent of large language models like GPT-2, we can now generate human-like text that’s coherent, contextually relevant, and surprisingly creative. In this tutorial, you’ll discover how to implement text generation using GPT-2. You’ll learn through hands-on examples that you can run […]
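Contrastive search is triggered in `generate()` by setting `penalty_alpha` to a positive value together with a small `top_k`; a minimal sketch with the stock GPT-2 checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("DeepMind Company is", return_tensors="pt")

# penalty_alpha > 0 plus a small top_k activates contrastive search:
# candidates are scored by model confidence minus a degeneration
# penalty that discourages repeating earlier context.
output = model.generate(
    **inputs,
    penalty_alpha=0.6,
    top_k=4,
    max_new_tokens=40,
)

text = tokenizer.decode(output[0], skip_special_tokens=True)
print(text)
```

With `penalty_alpha=0.0` the same call degenerates to plain greedy search, which makes it easy to compare the two strategies side by side.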

Auto-Completion Style Text Generation with GPT-2 Model
Generating gibberish text is a simple programming exercise for beginners, but completing a sentence meaningfully requires considerably more work. The landscape of auto-completion technology has transformed dramatically with the introduction of neural approaches. With Hugging Face’s transformers library, implementing text completion takes only a few lines of code. In this comprehensive tutorial, you […]
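Those few lines of code can be sketched with the `text-generation` pipeline and the stock GPT-2 checkpoint:

```python
from transformers import pipeline

# A minimal auto-completion sketch: GPT-2 continues the given prefix.
generator = pipeline("text-generation", model="gpt2")

# do_sample=False gives deterministic (greedy) completions;
# max_new_tokens bounds how much text is appended to the prompt.
completion = generator(
    "Machine learning is",
    max_new_tokens=20,
    do_sample=False,
)[0]["generated_text"]

print(completion)  # the prompt followed by GPT-2's continuation
```

The returned string always begins with the original prompt, so an auto-complete UI would simply display the suffix after it.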
How to Do Named Entity Recognition (NER) with a BERT Model
Named Entity Recognition (NER) is one of the fundamental building blocks of natural language understanding. When humans read text, we naturally identify and categorize named entities based on context and world knowledge. For instance, in the sentence “Microsoft’s CEO Satya Nadella spoke at a conference in Seattle,” we effortlessly recognize the organizational, personal, and geographical […]
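The same sentence can be tagged in a few lines with a `ner` pipeline; this sketch assumes the CoNLL-2003-fine-tuned checkpoint `dslim/bert-base-NER` on the Hugging Face Hub:

```python
from transformers import pipeline

# NER pipeline backed by a BERT checkpoint fine-tuned for entity
# recognition (assumed checkpoint name). aggregation_strategy="simple"
# merges word-piece tokens back into whole entity spans.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

entities = ner("Microsoft's CEO Satya Nadella spoke at a conference in Seattle.")

# Each entry carries the merged span, its label (e.g. ORG, PER, LOC),
# and a confidence score.
for ent in entities:
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 2))
```

Without `aggregation_strategy`, the pipeline returns raw per-token predictions with B-/I- prefixes, which is useful when you need the underlying IOB tags.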
A Complete Introduction to Using BERT Models
The BERT model was one of the first applications of the Transformer architecture in natural language processing (NLP). Its architecture is simple, yet it performs remarkably well on the tasks it was designed for. In the following, we’ll explore BERT models from the ground up — understanding what they are, how they work, and most importantly, how to […]
Activation Functions in PyTorch
As neural networks become increasingly popular in the field of machine learning, it is important to understand the role that activation functions play in their implementation. In this article, you’ll explore the concept of activation functions that are applied to the output of each neuron in a neural network to introduce non-linearity into the model. […]
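The effect of introducing non-linearity is easy to see by applying a few of PyTorch's built-in activation modules to the same inputs; a minimal sketch:

```python
import torch
import torch.nn as nn

# Five evenly spaced inputs spanning negative and positive values.
x = torch.linspace(-2.0, 2.0, 5)

relu = nn.ReLU()        # clamps negative inputs to zero
sigmoid = nn.Sigmoid()  # squashes inputs into the open interval (0, 1)
tanh = nn.Tanh()        # squashes inputs into the open interval (-1, 1)

print(relu(x))
print(sigmoid(x))
print(tanh(x))
```

Each module is applied element-wise to the output of a layer's linear transformation; without such a non-linearity, stacked linear layers would collapse into a single linear map.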
Building a Logistic Regression Classifier in PyTorch
Logistic regression is a type of regression that predicts the probability of an event. It is used for classification problems and has many applications in the fields of machine learning, artificial intelligence, and data mining. The logistic regression formula applies a sigmoid function to the output of a linear function. This article […]
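That formula — a sigmoid applied to a linear function — maps directly onto two PyTorch modules; a minimal sketch:

```python
import torch
import torch.nn as nn

# Logistic regression as sigmoid(linear(x)):
# the Linear layer computes w·x + b, and Sigmoid maps
# that score to a probability in (0, 1).
model = nn.Sequential(
    nn.Linear(2, 1),
    nn.Sigmoid(),
)

x = torch.randn(4, 2)   # a batch of 4 samples with 2 features each
probs = model(x)
print(probs.shape)      # one probability per sample
```

Each output can then be thresholded (typically at 0.5) to obtain a binary class prediction.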
Training Logistic Regression with Cross-Entropy Loss in PyTorch
In the previous session of our PyTorch series, we demonstrated how badly initialized weights can impact the accuracy of a classification model when mean square error (MSE) loss is used. We noticed that the model didn’t converge during training and its accuracy was also significantly reduced. In the following, you will see what happens if […]
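The training setup discussed here can be sketched as a short loop pairing the sigmoid-output model with binary cross-entropy loss (`nn.BCELoss`); the data below is randomly generated purely for illustration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Logistic regression model: probability = sigmoid(linear(x)).
model = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())

# Binary cross-entropy loss, applied directly to probabilities.
criterion = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# A small random batch of inputs with random 0/1 labels.
x = torch.randn(8, 2)
y = torch.randint(0, 2, (8, 1)).float()

before = criterion(model(x), y).item()
for _ in range(100):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
after = criterion(model(x), y).item()

print(before, after)  # training loss on the batch should decrease
```

Unlike MSE, cross-entropy penalizes confident wrong predictions very heavily, which is what gives gradient descent a strong training signal for classification.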