Image Feature Extraction in OpenCV: Keypoints and Description Vectors

In the previous post, you learned some basic feature extraction algorithms in OpenCV. Those features were extracted by classifying pixels: they abstract the image because you no longer need to consider the different color channels of each pixel, only a single value. In this post, you will learn some other feature extraction algorithms that can describe the image more concisely.

After completing this tutorial, you will know:

  • What are keypoints in an image
  • What are the common algorithms available in OpenCV for extracting keypoints

Kick-start your project with my book Machine Learning in OpenCV. It provides self-study tutorials with working code.


Let’s get started.

Image Feature Extraction in OpenCV: Keypoints and Description Vectors
Photo by Silas Köhler, some rights reserved.

Overview

This post is divided into two parts; they are:

  • Keypoint Detection with SIFT and SURF in OpenCV
  • Keypoint Detection using ORB in OpenCV

Prerequisites

For this tutorial, we assume that you are already familiar with:

Keypoint Detection with SIFT and SURF in OpenCV

Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) are powerful algorithms for detecting and describing local features in images. They are named scale-invariant and robust because, compared to Harris Corner Detection, for example, their results remain stable even after the image is scaled, rotated, or otherwise changed.

The SIFT algorithm applies Gaussian blur to the image at multiple scales and computes the difference between them (the difference of Gaussians). Intuitively, such a difference will be zero if your entire image is a single flat color. Hence this is called keypoint detection: it identifies places in the image with the most significant change in pixel values, such as corners.
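To get an intuition for this difference-of-Gaussians idea, you can blur the same image at two different scales and subtract. The snippet below is a minimal sketch of that; the filename image.jpg is a placeholder for your own image:

import cv2

# Load the image in grayscale; the filename is a placeholder
img = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)

# Blur the same image at two different scales (sigmas)
blur_small = cv2.GaussianBlur(img, (0, 0), sigmaX=1.6)
blur_large = cv2.GaussianBlur(img, (0, 0), sigmaX=3.2)

# The difference of Gaussians is near zero in flat regions and large
# in magnitude where pixel values change sharply, such as at corners
dog = cv2.absdiff(blur_small, blur_large)
print(dog.min(), dog.max())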

The SIFT algorithm derives certain “orientation” values for each keypoint and outputs a vector representing the histogram of the orientation values.

Running the SIFT algorithm is quite slow. Hence, there is a speeded-up version, SURF. Describing the SIFT and SURF algorithms in detail would be lengthy, but luckily, you do not need to understand much of them to use them with OpenCV.

Let’s look at an example using the following image:

Similar to the previous post, SIFT and SURF algorithms assume a grayscale image. This time, you need to create a detector first and apply it to the image:
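A minimal sketch of how this might look is below; the filename image.jpg is a placeholder, and the SURF lines only work if your OpenCV build includes the non-free contrib modules:

import cv2

# Load the image and convert it to grayscale
img = cv2.imread("image.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Create a SIFT detector and apply it to the grayscale image
sift = cv2.SIFT_create()
keypoints_sift, descriptors_sift = sift.detectAndCompute(gray, None)

# SURF lives in the non-free contrib module; this only works if your
# OpenCV build enables it
surf = cv2.xfeatures2d.SURF_create()
keypoints_surf, descriptors_surf = surf.detectAndCompute(gray, None)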

NOTE: You may find difficulties in running the above code with your OpenCV installation. To make it run, you may need to build OpenCV yourself with the non-free modules enabled. This is because SIFT and SURF were patented, so OpenCV considered them “non-free”. Since the SIFT patent has already expired (the SURF patent is still in effect), you may find that SIFT works out of the box if you install a newer version of OpenCV.
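One way to check what your installation supports is simply to try creating the detectors, for example:

import cv2

print(cv2.__version__)

try:
    cv2.SIFT_create()
    print("SIFT is available")
except (cv2.error, AttributeError):
    print("SIFT is not available in this build")

try:
    cv2.xfeatures2d.SURF_create()
    print("SURF is available")
except (cv2.error, AttributeError):
    print("SURF requires an OpenCV build with the non-free modules enabled")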

The output of the SIFT or SURF algorithm is a list of keypoints and a numpy array of descriptors. For SIFT, the descriptors array is N×128 for N keypoints, i.e., each keypoint is represented by a vector of length 128 (SURF produces 64-element descriptors by default). Each keypoint is an object with several attributes, such as the orientation angle.
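For example, you can inspect what the detector returned (variable names follow the sketch above):

# keypoints is a list of cv2.KeyPoint objects; descriptors is a numpy array
print(len(keypoints_sift))          # number of keypoints N
print(descriptors_sift.shape)       # (N, 128) for SIFT

# Each keypoint carries its own attributes
kp = keypoints_sift[0]
print(kp.pt)        # (x, y) coordinates in the image
print(kp.size)      # diameter of the meaningful neighborhood
print(kp.angle)     # orientation in degrees
print(kp.response)  # strength of the keypoint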

Many keypoints are detected by default. This is helpful for one of the most common uses of keypoints: finding correspondences between distorted or transformed versions of an image.

To reduce the number of detected keypoints in the output, you can set a higher “contrast threshold” and a lower “edge threshold” (the defaults are 0.04 and 10 respectively) in SIFT, or increase the “Hessian threshold” (default 100) in SURF. These can be adjusted on the detector object using sift.setContrastThreshold(0.03), sift.setEdgeThreshold(10), and surf.setHessianThreshold(100) respectively.
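For example, following on from the earlier sketch, you could tighten the SIFT thresholds like this (the particular values are arbitrary):

# Keep only the strongest keypoints by tightening the thresholds;
# the same values can also be passed as arguments to cv2.SIFT_create()
sift = cv2.SIFT_create()
sift.setContrastThreshold(0.1)   # higher than the default of 0.04
sift.setEdgeThreshold(5)         # lower than the default of 10
keypoints, descriptors = sift.detectAndCompute(gray, None)
print(len(keypoints))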

To draw the keypoints on the image, you can use the cv2.drawKeypoints() function and apply the list of all keypoints to it. The complete code, using only the SIFT algorithm and setting a very high threshold to keep only a few keypoints, is as follows:
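A sketch of such a script is below; the filename and the contrast threshold value are placeholders you may need to adjust:

import cv2

# Load the image and convert it to grayscale
img = cv2.imread("image.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Create the SIFT detector with a very high contrast threshold
# so that only the strongest keypoints survive
sift = cv2.SIFT_create(contrastThreshold=0.25)
keypoints, descriptors = sift.detectAndCompute(gray, None)

# Print the attributes of each keypoint; overlapping keypoints show up
# when several orientation angles are found at the same location
for kp in keypoints:
    print("point:", kp.pt, "size:", kp.size, "angle:", kp.angle)

# Draw the keypoints on a copy of the original image, with circle size
# and orientation stroke included
output = cv2.drawKeypoints(img, keypoints, None,
                           flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

cv2.imshow("SIFT keypoints", output)
cv2.waitKey(0)
cv2.destroyAllWindows()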

The image created is as follows:

Keypoints detected by the SIFT algorithm (zoomed in)
Original photo by Gleren Meneghin, some rights reserved.

The function cv2.drawKeypoints() will not modify your original image but returns a new one. In the picture above, the keypoints are drawn as circles proportional to their “size”, with a stroke indicating the orientation. There are keypoints on the number “17” on the door as well as on the mail slots. But there are indeed more: from the for loop above, you can see that some keypoints overlap because multiple orientation angles were found at the same location.

To show the keypoints on the image, you used the keypoint objects returned. However, if you want to further process the keypoints, for example by running a clustering algorithm, you will find the feature vectors stored in descriptors useful. Note that you still need the list of keypoints for information such as the coordinates, in order to match each feature vector back to a location in the image.
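As an illustration, here is a minimal sketch of clustering the SIFT descriptors with OpenCV’s k-means, reusing the keypoints and descriptors from the complete code above; the choice of three clusters is arbitrary:

import cv2
import numpy as np

# k-means in OpenCV expects float32 data; SIFT descriptors already are
data = np.float32(descriptors)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(data, 3, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# The keypoint objects are still needed to know where each descriptor
# came from in the image
for kp, label in zip(keypoints, labels.ravel()):
    print("cluster", label, "at", kp.pt)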

Keypoint Detection using ORB in OpenCV

Since the SIFT and SURF algorithms are patented, there was an incentive to develop a free alternative that does not need to be licensed. ORB is such an alternative, a product of the OpenCV developers themselves.

ORB stands for Oriented FAST and Rotated BRIEF. It is a combination of two other algorithms, FAST and BRIEF, with modifications to match the performance of SIFT and SURF. You do not need to understand the algorithm’s details to use it, and its output is also a list of keypoint objects, as follows:
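A sketch of how the ORB detector might be used is below; again the filename is a placeholder:

import cv2

# Load the image and convert it to grayscale
img = cv2.imread("image.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Create an ORB detector limited to the 30 best keypoints
orb = cv2.ORB_create(nfeatures=30)
keypoints, descriptors = orb.detectAndCompute(gray, None)

# ORB descriptors are binary: an (N, 32) array of 8-bit integers
print(len(keypoints), descriptors.shape, descriptors.dtype)

# Draw the keypoints with their size and orientation
output = cv2.drawKeypoints(img, keypoints, None,
                           flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow("ORB keypoints", output)
cv2.waitKey(0)
cv2.destroyAllWindows()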

In the above, you set the ORB to generate the top 30 keypoints when you created the detector. By default, this number will be 500.

The detector returns a list of keypoints and a numpy array of descriptors (a feature vector for each keypoint), exactly as before. However, each descriptor is now a vector of length 32 instead of 128, and its elements are 8-bit integers rather than floating-point values.

The generated keypoints are as follows:

Keypoints detected by ORB algorithm
Original photo by Gleren Meneghin, some rights reserved.

You can see that keypoints are generated at roughly the same locations. The results are not identical because some keypoints overlap (or are offset by a very small distance), and the ORB algorithm easily reached its maximum count of 30 keypoints. Moreover, the keypoint sizes are not comparable between different algorithms.

Want to Get Started With Machine Learning with OpenCV?

Take my free email crash course now (with sample code).

Click to sign-up and also get a free PDF Ebook version of the course.


Summary

In this tutorial, you learned how to apply OpenCV’s keypoint detection algorithms, SIFT, SURF, and ORB.

Specifically, you learned:

  • What is a keypoint in an image
  • How to find the keypoints and the associated description vectors using OpenCV functions.

If you have any questions, please leave a comment below.

Get Started on Machine Learning in OpenCV!

Machine Learning in OpenCV

Learn how to use machine learning techniques in image processing projects

...using OpenCV in advanced ways and work beyond pixels

Discover how in my new Ebook:
Machine Learning in OpenCV

It provides self-study tutorials with all working code in Python to turn you from a novice to expert. It equips you with
logistic regression, random forest, SVM, k-means clustering, neural networks, and much more...all using the machine learning module in OpenCV

Kick-start your deep learning journey with hands-on exercises


See What's Inside

