
How to Perform Face Recognition With VGGFace2 in Keras

Face recognition is a computer vision task of identifying and verifying a person based on a photograph of their face.

Recently, deep learning convolutional neural networks have surpassed classical methods and are achieving state-of-the-art results on standard face recognition datasets. Examples of state-of-the-art models are the VGGFace and VGGFace2 models developed by researchers at the Visual Geometry Group at Oxford.

Although the model can be challenging to implement and resource intensive to train, it can be easily used in standard deep learning libraries such as Keras through the use of freely available pre-trained models and third-party open source libraries.

In this tutorial, you will discover how to develop face recognition systems for face identification and verification using the VGGFace2 deep learning model.

After completing this tutorial, you will know:

  • About the VGGFace and VGGFace2 models for face recognition and how to install the keras_vggface library to make use of these models in Python with Keras.
  • How to develop a face identification system to predict the name of celebrities in given photographs.
  • How to develop a face verification system to confirm the identity of a person given a photograph of their face.

Kick-start your project with my new book Deep Learning for Computer Vision, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Update Nov/2019: Updated for TensorFlow v2.0, VGGFace v0.6, and MTCNN v0.1.0.
How to Perform Face Recognition With VGGFace2 Convolutional Neural Network in Keras
Photo by Joanna Pędzich-Opioła, some rights reserved.

Tutorial Overview

This tutorial is divided into six parts; they are:

  1. Face Recognition
  2. VGGFace and VGGFace2 Models
  3. How to Install the keras-vggface Library
  4. How to Detect Faces for Face Recognition
  5. How to Perform Face Identification With VGGFace2
  6. How to Perform Face Verification With VGGFace2

Face Recognition

Face recognition is the general task of identifying and verifying people from photographs of their face.

The 2011 book on face recognition titled “Handbook of Face Recognition” describes two main modes for face recognition:

  • Face Verification. A one-to-one mapping of a given face against a known identity (e.g. is this the person?).
  • Face Identification. A one-to-many mapping for a given face against a database of known faces (e.g. who is this person?).

A face recognition system is expected to identify faces present in images and videos automatically. It can operate in either or both of two modes: (1) face verification (or authentication), and (2) face identification (or recognition).

— Page 1, Handbook of Face Recognition. 2011.

We will explore both of these face recognition tasks in this tutorial.


VGGFace and VGGFace2 Models

VGGFace refers to a series of models developed for face recognition and demonstrated on benchmark computer vision datasets by members of the Visual Geometry Group (VGG) at the University of Oxford.

There are two main VGG models for face recognition at the time of writing; they are VGGFace and VGGFace2. Let’s take a closer look at each in turn.

VGGFace Model

The VGGFace model, as it was later named, was described by Omkar Parkhi, et al. in the 2015 paper titled “Deep Face Recognition.”

A contribution of the paper was a description of how to develop a very large training dataset, required to train modern convolutional-neural-network-based face recognition systems, that could compete with the large datasets used to train models at Facebook and Google.

… [we] propose a procedure to create a reasonably large face dataset whilst requiring only a limited amount of person-power for annotation. To this end we propose a method for collecting face data using knowledge sources available on the web (Section 3). We employ this procedure to build a dataset with over two million faces, and will make this freely available to the research community.

Deep Face Recognition, 2015.

This dataset is then used as the basis for developing deep CNNs for face recognition tasks such as face identification and verification. Specifically, models are trained on the very large dataset, then evaluated on benchmark face recognition datasets, demonstrating that the model is effective at generating generalized features from faces.

They describe the process of first training a face classifier that uses a softmax activation function in the output layer to classify faces as people. This layer is then removed so that the output of the network is a vector feature representation of the face, called a face embedding. The model is then further trained, via fine-tuning, so that the Euclidean distance between vectors generated for the same identity is made smaller and the distance between vectors generated for different identities is made larger. This is achieved using a triplet loss function.

Triplet-loss training aims at learning score vectors that perform well in the final application, i.e. identity verification by comparing face descriptors in Euclidean space. […] A triplet (a, p, n) contains an anchor face image as well as a positive p != a and negative n examples of the anchor’s identity. The projection W’ is learned on target datasets

Deep Face Recognition, 2015.

A deep convolutional neural network architecture is used in the VGG style, with blocks of convolutional layers with small kernels and ReLU activations followed by max pooling layers, and the use of fully connected layers in the classifier end of the network.

VGGFace2 Model

Qiong Cao, et al. from the VGG describe a follow-up work in their 2017 paper titled “VGGFace2: A dataset for recognizing faces across pose and age.”

They describe VGGFace2 as a much larger dataset that they have collected for the intent of training and evaluating more effective face recognition models.

In this paper, we introduce a new large-scale face dataset named VGGFace2. The dataset contains 3.31 million images of 9131 subjects, with an average of 362.6 images for each subject. Images are downloaded from Google Image Search and have large variations in pose, age, illumination, ethnicity and profession (e.g. actors, athletes, politicians).

VGGFace2: A dataset for recognising faces across pose and age, 2017.

The paper focuses on how this dataset was collected, curated, and how images were prepared prior to modeling. Nevertheless, VGGFace2 has become the name used to refer to the pre-trained face recognition models that were trained on this dataset.

Models are trained on the dataset, specifically a ResNet-50 and a Squeeze-and-Excitation ResNet-50 model (called SE-ResNet-50 or SENet), and it is variations of these models that have been made available by the authors, along with the associated code. The models are evaluated on standard face recognition datasets, demonstrating then-state-of-the-art performance.

… we demonstrate that deep models (ResNet-50 and SENet) trained on VGGFace2, achieve state-of-the-art performance on […] benchmarks.

VGGFace2: A dataset for recognising faces across pose and age, 2017.

Specifically, the SENet-based model offers better performance in general.

The comparison between ResNet-50 and SENet both learned from scratch reveals that SENet has a consistently superior performance on both verification and identification. […] In addition, the performance of SENet can be further improved by training on the two datasets VGGFace2 and MS1M, exploiting the different advantages that each offer.

VGGFace2: A dataset for recognising faces across pose and age, 2017.

A face embedding is predicted by a given model as a 2,048-length vector. The vector is then normalized, e.g. to a length of 1 or unit norm, using the L2 vector norm (Euclidean distance from the origin). This is referred to as the ‘face descriptor‘. The distance between face descriptors (or groups of face descriptors called a ‘subject template‘) is calculated using the cosine similarity.

The face descriptor is extracted from the layer adjacent to the classifier layer. This leads to a 2048 dimensional descriptor, which is then L2 normalized

VGGFace2: A dataset for recognising faces across pose and age, 2017.

How to Install the keras-vggface Library

The authors of VGGFace2 provide the source code for their models, as well as pre-trained models that can be downloaded with standard deep learning frameworks such as Caffe and PyTorch, although there are no examples for TensorFlow or Keras.

We could convert the provided models to TensorFlow or Keras format and develop a model definition in order to load and use these pre-trained models. Thankfully, this work has already been done and can be used directly by third-party projects and libraries.

Perhaps the best-of-breed third-party library for using the VGGFace2 (and VGGFace) models in Keras is the keras-vggface project and library by Refik Can Malli.

Given that this is a third-party open-source project and subject to change, I have created a fork of the project here.

This library can be installed via pip; for example:
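sudo pip install git+https://github.com/rcmalli/keras-vggface.git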

After successful installation, you should then see a message like the following:
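Successfully installed keras-vggface-0.6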

You can confirm that the library was installed correctly by querying the installed package:
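sudo pip show keras_vggface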

This will summarize the details of the package; the exact details will differ on your system, but the output will look something like the following:
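Name: keras-vggface
Version: 0.6
Home-page: https://github.com/rcmalli/keras-vggface
...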

You can also confirm that the library loads correctly by loading it in a script and printing the current version; for example:
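# check version of keras_vggface
import keras_vggface
# print version
print(keras_vggface.__version__)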

Running the example will load the library and print the current version.
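0.6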

How to Detect Faces for Face Recognition

Before we can perform face recognition, we need to detect faces.

Face detection is the process of automatically locating faces in a photograph and localizing them by drawing a bounding box around their extent.

In this tutorial, we will also use the Multi-Task Cascaded Convolutional Neural Network, or MTCNN, for face detection, e.g. finding and extracting faces from photos. This is a state-of-the-art deep learning model for face detection, described in the 2016 paper titled “Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks.”

We will use the implementation provided by Iván de Paz Centeno in the ipazc/mtcnn project. This can also be installed via pip as follows:
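sudo pip install mtcnn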

We can confirm that the library was installed correctly by importing the library and printing the version; for example.
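# confirm mtcnn was installed correctly
import mtcnn
# print version
print(mtcnn.__version__)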

Running the example prints the current version of the library.
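0.1.0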

We can use the mtcnn library to create a face detector and extract faces for use with the VGGFace face recognition models in subsequent sections.

The first step is to load an image as a NumPy array, which we can achieve using the Matplotlib imread() function.
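# load an image from file as an array of pixels
# (filename is the path to your photograph)
from matplotlib import pyplot
pixels = pyplot.imread(filename)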

Next, we can create an MTCNN face detector class and use it to detect all faces in the loaded photograph.
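from mtcnn.mtcnn import MTCNN
# create the detector, using default weights
detector = MTCNN()
# detect faces in the image
results = detector.detect_faces(pixels)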

The result is a list of bounding boxes, where each bounding box defines the upper-left corner of the box (in image coordinates, where the y-axis points down), as well as the width and height.

If we assume there is only one face in the photo for our experiments, we can determine the pixel coordinates of the bounding box as follows.
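# extract the bounding box from the first face
x1, y1, width, height = results[0]['box']
x2, y2 = x1 + width, y1 + height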

We can use these coordinates to extract the face.
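# extract the face pixels using the bounding box coordinates
face = pixels[y1:y2, x1:x2]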

We can then use the PIL library to resize this small image of the face to the required size; specifically, the model expects square input faces with the shape 224×224.
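# resize the face pixels to the size required by the model
from PIL import Image
from numpy import asarray
image = Image.fromarray(face)
image = image.resize((224, 224))
face_array = asarray(image)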

Tying all of this together, the function extract_face() will load a photograph from the given filename and return the extracted face.

It assumes that the photo contains one face and will return the first face detected.
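# extract a single face from a given photograph
def extract_face(filename, required_size=(224, 224)):
    # load image from file
    pixels = pyplot.imread(filename)
    # create the detector, using default weights
    detector = MTCNN()
    # detect faces in the image
    results = detector.detect_faces(pixels)
    # extract the bounding box from the first face
    x1, y1, width, height = results[0]['box']
    x2, y2 = x1 + width, y1 + height
    # extract the face
    face = pixels[y1:y2, x1:x2]
    # resize pixels to the model size
    image = Image.fromarray(face)
    image = image.resize(required_size)
    face_array = asarray(image)
    return face_array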

We can test this function with a photograph.

Download a photograph of Sharon Stone taken in 2013 from Wikipedia, released under a permissive license.

Download the photograph and place it in your current working directory with the filename ‘sharon_stone1.jpg‘.

Photograph of Sharon Stone, from Wikipedia (sharon_stone1.jpg).

The complete example of loading the photograph of Sharon Stone, extracting the face, and plotting the result is listed below.
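# example of face detection with mtcnn
from matplotlib import pyplot
from PIL import Image
from numpy import asarray
from mtcnn.mtcnn import MTCNN

# extract a single face from a given photograph
def extract_face(filename, required_size=(224, 224)):
    # load image from file
    pixels = pyplot.imread(filename)
    # create the detector, using default weights
    detector = MTCNN()
    # detect faces in the image
    results = detector.detect_faces(pixels)
    # extract the bounding box from the first face
    x1, y1, width, height = results[0]['box']
    x2, y2 = x1 + width, y1 + height
    # extract the face
    face = pixels[y1:y2, x1:x2]
    # resize pixels to the model size
    image = Image.fromarray(face)
    image = image.resize(required_size)
    face_array = asarray(image)
    return face_array

# load the photo and extract the face
pixels = extract_face('sharon_stone1.jpg')
# plot the extracted face
pyplot.imshow(pixels)
# show the plot
pyplot.show()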

Running the example loads the photograph, extracts the face, and plots the result.

We can see that the face was correctly detected and extracted.

The results suggest that we can use the developed extract_face() function as the basis for examples with the VGGFace face recognition model in subsequent sections.

Face Detected From a Photograph of Sharon Stone Using an MTCNN Model

How to Perform Face Identification With VGGFace2

In this section, we will use the VGGFace2 model to perform face recognition with photographs of celebrities from Wikipedia.

A VGGFace model can be created using the VGGFace() constructor and specifying the type of model to create via the ‘model‘ argument.
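# create a vggface model, e.g. the default vgg16 architecture
from keras_vggface.vggface import VGGFace
model = VGGFace(model='vgg16')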

The keras-vggface library provides three pre-trained models: a VGGFace1 model via model=’vgg16′ (the default), and two VGGFace2 models, ‘resnet50‘ and ‘senet50‘.

The example below creates a ‘resnet50‘ VGGFace2 model and summarizes the shape of the inputs and outputs.
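# example of creating a face embedding model
from keras_vggface.vggface import VGGFace
# create a vggface2 model
model = VGGFace(model='resnet50')
# summarize input and output shape
print('Inputs: %s' % model.inputs)
print('Outputs: %s' % model.outputs)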

The first time that a model is created, the library will download the model weights and save them in the .keras/models/vggface/ directory in your home directory. The size of the weights for the resnet50 model is about 158 megabytes, so the download may take a few minutes depending on the speed of your internet connection.

Running the example prints the shape of the input and output tensors of the model; the exact tensor names will vary with your version of TensorFlow, but the shapes should match the following:
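Inputs: [<tf.Tensor 'input_1:0' shape=(?, 224, 224, 3) dtype=float32>]
Outputs: [<tf.Tensor 'classifier/Softmax:0' shape=(?, 8631) dtype=float32>]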

We can see that the model expects input color images of faces with the shape of 224×224, and that the output will be a class prediction across 8,631 people. This makes sense given that the pre-trained models were trained on the 8,631 identities in the training split of the VGGFace2 dataset.

This Keras model can be used directly to predict the probability that a given face belongs to one of more than eight thousand known celebrities; for example:
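# perform prediction
yhat = model.predict(samples)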

Once a prediction is made, the class integers can be mapped to the names of the celebrities, and the top five names with the highest probability can be retrieved.

This behavior is provided by the decode_predictions() function in the keras-vggface library.
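# convert prediction into names
from keras_vggface.utils import decode_predictions
results = decode_predictions(yhat)
# display most likely results
for result in results[0]:
    print('%s: %.3f%%' % (result[0], result[1]*100))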

Before we can make a prediction with a face, the pixel values must be scaled in the same way that data was prepared when the VGGFace model was fit. Specifically, the pixel values must be centered on each channel using the mean from the training dataset.

This can be achieved using the preprocess_input() function provided in the keras-vggface library and specifying the ‘version=2‘ so that the images are scaled using the mean values used to train the VGGFace2 models instead of the VGGFace1 models (the default).
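# convert one face into a sample with pixel values centered per channel
from numpy import expand_dims
from keras_vggface.utils import preprocess_input
pixels = pixels.astype('float32')
samples = expand_dims(pixels, axis=0)
samples = preprocess_input(samples, version=2)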

We can tie all of this together and predict the identity of our Sharon Stone photograph downloaded in the previous section, specifically ‘sharon_stone1.jpg‘.

The complete example is listed below.
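# example of face identification with the vggface2 model
from numpy import expand_dims
from matplotlib import pyplot
from PIL import Image
from numpy import asarray
from mtcnn.mtcnn import MTCNN
from keras_vggface.vggface import VGGFace
from keras_vggface.utils import preprocess_input
from keras_vggface.utils import decode_predictions

# extract a single face from a given photograph
def extract_face(filename, required_size=(224, 224)):
    # load image from file
    pixels = pyplot.imread(filename)
    # create the detector, using default weights
    detector = MTCNN()
    # detect faces in the image
    results = detector.detect_faces(pixels)
    # extract the bounding box from the first face
    x1, y1, width, height = results[0]['box']
    x2, y2 = x1 + width, y1 + height
    # extract the face
    face = pixels[y1:y2, x1:x2]
    # resize pixels to the model size
    image = Image.fromarray(face)
    image = image.resize(required_size)
    face_array = asarray(image)
    return face_array

# load the photo and extract the face
pixels = extract_face('sharon_stone1.jpg')
# convert one face into samples
pixels = pixels.astype('float32')
samples = expand_dims(pixels, axis=0)
# prepare the face for the model, e.g. center pixels
samples = preprocess_input(samples, version=2)
# create a vggface model
model = VGGFace(model='resnet50')
# perform prediction
yhat = model.predict(samples)
# convert prediction into names
results = decode_predictions(yhat)
# display most likely results
for result in results[0]:
    print('%s: %.3f%%' % (result[0], result[1]*100))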

Running the example loads the photograph, extracts the single face that we know was present, and then predicts the identity for the face.

The top five highest probability names are then displayed.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

We can see that the model correctly identifies the face as belonging to Sharon Stone with a likelihood of 99.642%.

We can test the model with another celebrity, in this case, a male, Channing Tatum.

A photograph of Channing Tatum taken in 2017 is available on Wikipedia under a permissive license.

Download the photograph and save it in your current working directory with the filename ‘channing_tatum.jpg‘.

Photograph of Channing Tatum, From Wikipedia (channing_tatum.jpg).

Change the code to load the photograph of Channing Tatum instead; for example:
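# load the photo and extract the face
pixels = extract_face('channing_tatum.jpg')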

Running the example with the new photograph, we can see that the model correctly identifies the face as belonging to Channing Tatum with a likelihood of 94.432%.
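b' Channing_Tatum': 94.432%
b' Eoghan_Quigg': 0.146%
b' Les_Miles': 0.113%
b' Ibrahim_Afellay': 0.072%
b' Tovah_Feldshuh': 0.070%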

You might like to try this example with other photographs of celebrities taken from Wikipedia. Try a diverse set of genders, races, and ages. You will discover that the model is not perfect, but for those celebrities that it does know well, it can be effective.

You might like to try other versions of the model, such as ‘vgg16‘ and ‘senet50‘, then compare results. For example, I found that with a photograph of Oscar Isaac, the ‘vgg16‘ model is effective, but the VGGFace2 models are not.

The model could be used to identify new faces. One approach would be to re-train the model, perhaps just the classifier part of the model, with a new face dataset.

How to Perform Face Verification With VGGFace2

A VGGFace2 model can be used for face verification.

This involves calculating a face embedding for a new given face and comparing the embedding to the embedding for the single example of the face known to the system.

A face embedding is a vector that represents the features extracted from the face. This can then be compared with the vectors generated for other faces. For example, another vector that is close (by some measure) may be the same person, whereas another vector that is far (by some measure) may be a different person.

Typical measures such as Euclidean distance and cosine distance are calculated between two embeddings, and faces are said to match or verify if the distance is below a predefined threshold, often tuned for a specific dataset or application.

First, we can load the VGGFace model without the classifier by setting the ‘include_top‘ argument to ‘False‘, specifying the shape of the input via the ‘input_shape‘ argument, and setting ‘pooling‘ to ‘avg‘ so that the filter maps at the output end of the model are reduced to a vector using global average pooling.
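# create a vggface model that outputs a face embedding
model = VGGFace(model='resnet50', include_top=False, input_shape=(224, 224, 3), pooling='avg')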

This model can then be used to make a prediction, which will return a face embedding for one or more faces provided as input.

We can define a new function that, given a list of filenames for photos containing a face, will extract one face from each photo via the extract_face() function developed in a prior section, prepare the faces as inputs to the VGGFace2 model by calling preprocess_input(), and then predict a face embedding for each.

The get_embeddings() function below implements this, returning an array containing an embedding for one face for each provided photograph filename.
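# extract faces and calculate face embeddings for a list of photo files
def get_embeddings(filenames):
    # extract faces
    faces = [extract_face(f) for f in filenames]
    # convert into an array of samples
    samples = asarray(faces, 'float32')
    # prepare the face for the model, e.g. center pixels
    samples = preprocess_input(samples, version=2)
    # create a vggface model
    model = VGGFace(model='resnet50', include_top=False, input_shape=(224, 224, 3), pooling='avg')
    # perform prediction
    yhat = model.predict(samples)
    return yhat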

We can take our photograph of Sharon Stone used previously (e.g. sharon_stone1.jpg) as our definition of the identity of Sharon Stone by calculating and storing the face embedding for the face in that photograph.

We can then calculate embeddings for faces in other photographs of Sharon Stone and test whether we can effectively verify her identity. We can also use faces from photographs of other people to confirm that they are not verified as Sharon Stone.

Verification can be performed by calculating the Cosine distance between the embedding for the known identity and the embeddings of candidate faces. This can be achieved using the cosine() SciPy function. The maximum distance between two embeddings is a score of 1.0, whereas the minimum distance is 0.0. A common cut-off value used for face identity is between 0.4 and 0.6, such as 0.5, although this should be tuned for an application.

The is_match() function below implements this, calculating the distance between two embeddings and interpreting the result.
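from scipy.spatial.distance import cosine

# determine if a candidate face is a match for a known face
def is_match(known_embedding, candidate_embedding, thresh=0.5):
    # calculate distance between embeddings
    score = cosine(known_embedding, candidate_embedding)
    if score <= thresh:
        print('>face is a Match (%.3f <= %.3f)' % (score, thresh))
    else:
        print('>face is NOT a Match (%.3f > %.3f)' % (score, thresh))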

We can test out some positive examples by downloading more photos of Sharon Stone from Wikipedia.

Specifically, a photograph taken in 2002 (download and save as ‘sharon_stone2.jpg‘) and a photograph taken in 2017 (download and save as ‘sharon_stone3.jpg‘).

We will test these two positive cases and the Channing Tatum photo from the previous section as a negative example.

The complete code example of face verification is listed below.
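# face verification with the vggface2 model
from matplotlib import pyplot
from PIL import Image
from numpy import asarray
from scipy.spatial.distance import cosine
from mtcnn.mtcnn import MTCNN
from keras_vggface.vggface import VGGFace
from keras_vggface.utils import preprocess_input

# extract a single face from a given photograph
def extract_face(filename, required_size=(224, 224)):
    # load image from file
    pixels = pyplot.imread(filename)
    # create the detector, using default weights
    detector = MTCNN()
    # detect faces in the image
    results = detector.detect_faces(pixels)
    # extract the bounding box from the first face
    x1, y1, width, height = results[0]['box']
    x2, y2 = x1 + width, y1 + height
    # extract the face
    face = pixels[y1:y2, x1:x2]
    # resize pixels to the model size
    image = Image.fromarray(face)
    image = image.resize(required_size)
    return asarray(image)

# extract faces and calculate face embeddings for a list of photo files
def get_embeddings(filenames):
    # extract faces
    faces = [extract_face(f) for f in filenames]
    # convert into an array of samples
    samples = asarray(faces, 'float32')
    # prepare the face for the model, e.g. center pixels
    samples = preprocess_input(samples, version=2)
    # create a vggface model
    model = VGGFace(model='resnet50', include_top=False, input_shape=(224, 224, 3), pooling='avg')
    # perform prediction
    yhat = model.predict(samples)
    return yhat

# determine if a candidate face is a match for a known face
def is_match(known_embedding, candidate_embedding, thresh=0.5):
    # calculate distance between embeddings
    score = cosine(known_embedding, candidate_embedding)
    if score <= thresh:
        print('>face is a Match (%.3f <= %.3f)' % (score, thresh))
    else:
        print('>face is NOT a Match (%.3f > %.3f)' % (score, thresh))

# define filenames
filenames = ['sharon_stone1.jpg', 'sharon_stone2.jpg', 'sharon_stone3.jpg', 'channing_tatum.jpg']
# get embeddings for the faces in the files
embeddings = get_embeddings(filenames)
# define sharon stone as the known identity
sharon_id = embeddings[0]
# verify known photos of sharon
print('Positive Tests')
is_match(embeddings[0], embeddings[1])
is_match(embeddings[0], embeddings[2])
# verify known photos of other people
print('Negative Tests')
is_match(embeddings[0], embeddings[3])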

The first photo is taken as the template for Sharon Stone and the remaining photos in the list are positive and negative photos to test for verification.

Running the example, we can see that the system correctly verified the two positive cases given photos of Sharon Stone both earlier and later in time.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

We can also see that the photo of Channing Tatum is correctly not verified as Sharon Stone. It would be an interesting extension to explore the verification of other negative photos, such as photos of other female celebrities.

Note: the embeddings generated from the model are not specific to the photos of celebrities used to train the model. The model is believed to produce useful embeddings for any faces; perhaps try it out with photos of yourself compared to photos of relatives and friends.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Papers

  • Deep Face Recognition, 2015.
  • VGGFace2: A dataset for recognising faces across pose and age, 2017.
  • Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks, 2016.

Books

  • Handbook of Face Recognition, 2011.

API

  • keras-vggface project, GitHub: https://github.com/rcmalli/keras-vggface
  • mtcnn project, GitHub: https://github.com/ipazc/mtcnn

Summary

In this tutorial, you discovered how to develop face recognition systems for face identification and verification using the VGGFace2 deep learning model.

Specifically, you learned:

  • About the VGGFace and VGGFace2 models for face recognition and how to install the keras_vggface library to make use of these models in Python with Keras.
  • How to develop a face identification system to predict the name of celebrities in given photographs.
  • How to develop a face verification system to confirm the identity of a person given a photograph of their face.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


148 Responses to How to Perform Face Recognition With VGGFace2 in Keras

  1. Avatar
    Anthony The Koala June 7, 2019 at 7:12 pm #

    Dear Dr Jason,
    While this tutorial is about recognizing the difference between person A (Channing Tatum) and person B (Sharon Stone), my question is whether the face recognition system can recognize the variations within a person and still identify the correct person.

    By variations, I mean: if the person has facial hair, has a fatter or more emaciated face, has spectacles on or off, or has a scar.

    To illustrate this again: a particular person registers his/her face. Later on, the person may have variations in the face: getting fatter or thinner, gaining or losing facial hair, or wearing spectacles. What additional work is needed to handle these variations?

    Thank you,
    Anthony of Sydney

    • Avatar
      Jason Brownlee June 8, 2019 at 6:50 am #

      Ideally, yes, the embeddings for the same person across time will be closer than the embeddings for different people, in general.

  2. Avatar
    Joe June 11, 2019 at 1:32 am #

    Hi,

    On another topic, are you planning any blogs on analysis of videos from the aspect of perspective work and perspective meshes.
    I am interested in analysing horse racing video and other sports.

    Thank you,
    Joe

    • Avatar
      Jason Brownlee June 11, 2019 at 7:58 am #

      Great suggestion, I hope to cover that topic in the future.

  3. Avatar
    Aravind June 11, 2019 at 2:43 am #

    Hi sir,

    I am getting following error:
    from keras.applications.imagenet_utils import _obtain_input_shape
    ImportError: cannot import name ‘_obtain_input_shape’ from ‘keras.applications.imagenet_utils’

    • Avatar
      Jason Brownlee June 11, 2019 at 8:00 am #

      Perhaps check that you have the latest version of Keras installed, e.g. 2.2.4+

  4. Avatar
    Paolo Ripamonti June 13, 2019 at 1:29 am #

    Hi Jason,

    thank you for this great post! very useful!

    i have a question, in order to recognize people, can i use a classifier like SVM or KNN over the face encodings? if yes, which of these is better?

    i’m working with a lot of people (nearly a thousand) and i’m not sure that working with a classifier is the correct approach.

    thank you

    Paolo

    • Avatar
      Jason Brownlee June 13, 2019 at 6:19 am #

      Yes.

      Test a suite of algorithms in order to discover what works best for your specific dataset.

      SVM works quite well.

  5. Avatar
    Reema June 24, 2019 at 4:49 am #

    I got an error in decode_prediction
    saying
    ValueError: decode_predictions expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (1, 8631)

  6. Avatar
    Esha August 3, 2019 at 10:18 pm #

    Hi sir,
    i am getting the following error:

    cannot import name ‘_obtain_input_shape’ from ‘keras.applications.imagenet_utils’ (C:\Users\user\Anaconda3\lib\site-packages\keras\applications\imagenet_utils.py)

    • Avatar
      Jason Brownlee August 4, 2019 at 6:30 am #

      Sorry to hear that, ensure you are using Keras 2.2.4 or higher and TensorFlow 1.14 or higher.

  7. Avatar
    Benoni August 5, 2019 at 6:46 am #

    > They describe the process of training a face classifier first that uses a softmax activation function in the output layer to classify faces as people. This layer is then removed so that the output of the network is a vector feature representation of the face, called a face embedding. The model is then further trained, via fine-tuning, in order that the Euclidean distance between vectors generated for the same identity are made smaller and the vectors generated for different identities is made larger. This is achieved using a triplet loss function.

    You make it sound so easy & understandable, brilliant tutorial Jason.

    • Avatar
      Jason Brownlee August 5, 2019 at 6:55 am #

      Thanks, I’m happy that it helps.

    • Avatar
      Noah December 20, 2020 at 5:35 pm #

      “… The model is then further trained, via fine-tuning, in order that the Euclidean distance between vectors generated for the same identity are made smaller and the vectors generated for different identities is made larger. …”

      What has been done in this step?

      • Avatar
        Jason Brownlee December 21, 2020 at 6:36 am #

        We don’t train a facenet model in this tutorial, we use a pre-trained model.

  8. Avatar
    Salehin August 16, 2019 at 8:02 am #

    I want to use vgg face2 model in the same way you described the facenet model in the following website:

    https://machinelearningmastery.com/how-to-develop-a-face-recognition-system-using-facenet-in-keras-and-an-svm-classifier/

    How can I use that?

    • Avatar
      Jason Brownlee August 16, 2019 at 8:06 am #

      I believe it is a drop in replacement.

      I cannot write a custom example for you sorry.

  9. Avatar
    Thien August 17, 2019 at 12:48 pm #

    Dear Dr Jason,
    I’m searching for weights of pretrained VGGFaceV2 MobileNet, but Keras just support weights of pretrained VGGFaceV2 for VGGNet16, ResNet50, SeNet50.
    Do you know where to find and download it or if you have ever trained MobileNet on VGGFaceV2 dataset, can you share the weights?
    Thank you.

  10. Avatar
    Ci August 21, 2019 at 3:23 am #

    How can we store the face data in database ?

    • Avatar
      Jason Brownlee August 21, 2019 at 6:50 am #

      Perhaps check the documentation for your database and how to store binary data?

  11. Avatar
    abderrazzak September 29, 2019 at 8:23 am #

    Thank you for this gift Mr Jason Brownlee

    • Avatar
      Jason Brownlee September 29, 2019 at 8:27 am #

      You’re welcome, I’m happy the tutorial is helpful!

  12. Avatar
    Shubham December 3, 2019 at 4:52 pm #

    Hello sir when i run this code
    # example of face detection with mtcnn
    from matplotlib import pyplot
    from PIL import Image
    from numpy import asarray
    from mtcnn.mtcnn import MTCNN

    # extract a single face from a given photograph
    def extract_face(filename, required_size=(224, 224)):
        # load image from file
        pixels = pyplot.imread(filename)
        # create the detector, using default weights
        detector = MTCNN()
        # detect faces in the image
        results = detector.detect_faces(pixels)
        # extract the bounding box from the first face
        x1, y1, width, height = results[0]['box']
        x2, y2 = x1 + width, y1 + height
        # extract the face
        face = pixels[y1:y2, x1:x2]
        # resize pixels to the model size
        image = Image.fromarray(face)
        image = image.resize(required_size)
        face_array = asarray(image)
        return face_array

    # load the photo and extract the face
    pixels = extract_face('sharon_stone1.jpg')
    # plot the extracted face
    pyplot.imshow(pixels)
    # show the plot
    pyplot.show()

    It gives error : –
    Using TensorFlow backend.
    Illegal instruction (core dumped)

  13. Avatar
    rahul December 4, 2019 at 10:44 pm #

    how do i train the model for my own images

  14. Avatar
    Naren Babu December 9, 2019 at 5:11 pm #

    Hi Jason,

    I have been using https://github.com/vudung45/FaceRec (Facenet) for a while, but its accuracy is low.

    Can you please suggest me which is better (Facenet or VGGFace2)

    • Avatar
      Jason Brownlee December 10, 2019 at 7:25 am #

      Perhaps try both on your problem and see what works best.

  15. Avatar
    Naren Babu December 9, 2019 at 5:56 pm #

    Hi Jason,

    I have only one face for each person. In this case, should i go with model based like SVM or should i directly compute the difference with the encoding computed.

    One face for each person, for this case, what would you suggest go with, Facenet or VGGface2.

    • Avatar
      Jason Brownlee December 10, 2019 at 7:27 am #

      Try a few approaches and see what works best for your specific dataset.

  16. Avatar
    Samuel December 25, 2019 at 8:05 am #

    Thanks for this tutorial.

    The question I have however is, how do I calculate the cosine similarity as a percentage of accuracy?

    • Avatar
      Jason Brownlee December 25, 2019 at 10:42 am #

      Good question. I hope to cover this topic in the future.

  17. Avatar
    Shahzaib December 26, 2019 at 5:19 am #

    Hi…I just want to ask can I use the same model for live stream face recognition…?

  18. Avatar
    hassan December 27, 2019 at 10:33 pm #

    Hi Jason. I tried your code and I have an error. 🙂

    For yhat = model.predict(samples);

    ValueError: Error when checking input: expected input_427 to have 4 dimensions, but got array with shape (224, 224, 3)

  19. Avatar
    mimus January 15, 2020 at 2:45 am #

    hey man! thanks for your post, i was wondering if it really works on tensorflow2.0? i just need to install tensorflow-gpu 2.0, keras 2.2.4 cuda toolkit 10.0 and cudnn 7.6?(im using conda) or there is another special considerations to install keras_vggface on tensorflow2.0?

    • Avatar
      Jason Brownlee January 15, 2020 at 8:27 am #

      It works with TensorFlow 2 and Keras 2.3 on Python 3.6.

      This will help you with your environment:
      https://machinelearningmastery.com/setup-python-environment-machine-learning-deep-learning-anaconda/

      • Avatar
        mimus January 16, 2020 at 4:39 pm #

        dear jason, i am trying to figure out how to make it run, i guess it is something on my environment set up, but i get this error:

        TypeError: The added layer must be an instance of class Layer. Found:

        i asked about this problem on stackoverflow:

        https://stackoverflow.com/questions/59763562/canot-use-vggface-keras-on-tensorflow-2-0

        i was wondering if you can help me on this, thanks in advance

        • Avatar
          Jason Brownlee January 17, 2020 at 5:55 am #

          I believe the comment on that stackoverflow post is a good start.

          Also, update to TF 2.1 and Keras 2.3.1.

          • Avatar
            mimus January 18, 2020 at 1:31 am #

            working on that, thanks a lot. also i want to ask you if i should use the rcmalli library or yours?

          • Avatar
            Jason Brownlee January 18, 2020 at 8:50 am #

            What is rcmalli?

          • Avatar
            mimus January 18, 2020 at 9:56 am #

            is the user from github who owns the project that you mentioned up there in the post, let me remind you the link,

            sudo pip install git+https://github.com/rcmalli/keras-vggface.git

            now i should use that or these:

            https://github.com/jbrownlee/keras-vggface

            thanks for the help fellow!

          • Avatar
            Jason Brownlee January 19, 2020 at 7:06 am #

            You can install from the original github project or from my clone of the project.

            Either one is fine.

    • Avatar
      kiki May 8, 2021 at 12:42 am #

      Hi, have you a model that works with tensorflow 2.0?

      • Avatar
        Jason Brownlee May 8, 2021 at 6:37 am #

        All code examples use Keras 2.4 running on top of TensorFlow 2.

  20. Avatar
    Mahnoor Sakhawat January 30, 2020 at 4:30 pm #

    Thank you for this tutorial.
    I tried your code and it works perfectly. But when I used my own images, in the following code
    # Example of face detection with a vggface2 model

    …..

    # extract the bounding box from the first face
    x1, y1, width, height = results[0]['box']

    In the above line, the following error comes:
    IndexError: list index out of range
    Please help.

    • Avatar
      Jason Brownlee January 31, 2020 at 7:38 am #

      You may have to debug the error. Perhaps confirm your image was loaded correctly?

      • Avatar
        Mahnoor Sakhawat January 31, 2020 at 9:09 pm #

        Thank you Sir, the code works fine. I added my own images and run the code. I got the following output:

        b' Downtown_Julie_Brown': 0.295%
        b' Layne_Staley': 0.282%
        b' Eugene_H\xc3\xbctz': 0.260%
        b' Fito_Cabrales': 0.226%
        b' Stevie_Ray': 0.204%

        Running the last code, gives me this output.

        Positive Tests
        >face is a Match (0.009 <= 0.500)
        >face is a Match (0.026 <= 0.500)
        >face is NOT a Match (0.876 > 0.500)

        It does confirm that the face in my image is neither of the names which are shown above but how can I get this kind of output

        b' Channing_Tatum': 94.432%
        b' Eoghan_Quigg': 0.146%
        b' Les_Miles': 0.113%
        b' Ibrahim_Afellay': 0.072%
        b' Tovah_Feldshuh': 0.070%

        where, the correctly recognized face is getting 94.432% of likelihood. How can I see my image name here like, for example:
        image xyz ‘: 94.432%

        • Avatar
          Jason Brownlee February 1, 2020 at 5:54 am #

          You can get good predictions for faces the model knows well.

  21. Avatar
    Mahnoor Sakhawat February 3, 2020 at 9:45 pm #

    thank you for your reply. I tried the code with my own images and it works fine. The code is calculating the embedding and then comparing it at run time. I wanted to know how I can save the embedding of a class, so that I can use it to just compare it with a new image’s calculated embedding?
    Secondly, could you please clarify my concept about CNNs. We usually have a large dataset for a CNN, but here we are just calculating the embedding using a single image and then comparing the embedding. Will we get accurate results in the presence of occlusions or different light intensity, etc.? Though I am getting accurate results while testing the images with different light intensities, I am not getting why a large dataset isn’t used here.

  22. Avatar
    Mahnoor Sakhawat February 4, 2020 at 8:29 pm #

    I had another query.
    How can we reduce computation time while testing?

    • Avatar
      Jason Brownlee February 5, 2020 at 8:07 am #

      Use less data.
      Use a smaller model.
      Use a faster machine.

      • Avatar
        Mahnoor Sakhawat February 13, 2020 at 4:19 pm #

        Thank you for your reply.
        I was looking into the MTCNN face detector. Its last stage creates 5 dots on the face. Why is the above algorithm not outputting the detected face with 5 dots? Perhaps something is missing in the code? Can you point out what is missing and what is the reason behind ignoring the code which creates these dots? Isn’t it a fundamental part of MTCNN?

        • Avatar
          Jason Brownlee February 14, 2020 at 6:27 am #

          It can, in this code we only use the bounding box. You can change it to do anything you wish.

          • Avatar
            Mahnoor Sakhawat February 14, 2020 at 3:12 pm #

            Ok thank you so much.

  23. Avatar
    Ammar February 10, 2020 at 9:43 am #

    Hi,
    I am using VGGFACE2 model with this tutorial ( https://machinelearningmastery.com/how-to-develop-a-face-recognition-system-using-facenet-in-keras-and-an-svm-classifier/ )

    – when I use small dataset, vggface2 faster than facenet to predict
    – when I use big dataset, vggface2 slower than facenet to predict

    Is that ok or there is a mistake?

    • Avatar
      Jason Brownlee February 10, 2020 at 1:20 pm #

      Nice work.

      I don’t know if that is an accurate finding or not, sorry.

  24. Avatar
    Rezz March 6, 2020 at 2:39 pm #

    For this project, how did you train the model? I can’t seem to find the part where you trained the model. I want to train the model with my own image dataset using this project.

  25. Avatar
    Ed April 9, 2020 at 2:49 am #

    Hi Sir, I have the following error after following your code above; my laptop does not have any GPU.

    ImportError: DLL load failed while importing _pywrap_tensorflow_internal: The specified module could not be found.

    Failed to load the native TensorFlow runtime.

    Please help!

    Thank you.

  26. Avatar
    Ed April 11, 2020 at 5:22 am #

    Library Versions
    Keras v2.2.4
    Tensorflow v1.14.0
    Warning: Theano backend is not supported/tested for now

  27. Avatar
    Bala April 12, 2020 at 7:35 pm #

    I just used face_recognition (https://github.com/ageitgey/face_recognition/tree/master/examples) library to identify the face names. so what is the difference between vggface2 vs face_recognition library?
    Which one is the best one?

    Thanks!!

    • Avatar
      Jason Brownlee April 13, 2020 at 6:13 am #

      I’m not familiar with that library, sorry.

      Generally, a library will use a model internally.

  28. Avatar
    Vinay May 16, 2020 at 5:28 am #

    Hi,

    While running precompute_features.py, the line “batch_fvecs = resnet50_features.predict(images)” performs inference on the CPU. Any idea how this can be run on the GPU?
    I’ve got tensorflow-gpu 1.14, an Nvidia 1050i, and the CUDA and CUDNN libs in place. In fact, for MTCNN face detection it performs inference on the GPU.

    Am I missing some thing? why is it not performing inference on GPU?

    • Avatar
      Jason Brownlee May 16, 2020 at 6:25 am #

      I don’t know sorry. Perhaps you need to debug your development environment?

  29. Avatar
    Mostafa June 3, 2020 at 5:41 pm #

    Hello Dr. Brownlee. Thanks for your nice tutorial.
    Are there any Keras implementations for other architectures of SENet, such as:
    – SE-ResNet-50-256D
    – SE-ResNet-50-256D
    – SE-ResNet-50-128D

    • Avatar
      Jason Brownlee June 4, 2020 at 6:11 am #

      Maybe, I don’t know sorry. Perhaps try a google search?

  30. Avatar
    Mahnoor June 16, 2020 at 5:04 am #

    Hi Jason,

    Thank you for your posts. I had two questions:

    Perhaps VGG won’t work on a Raspberry Pi due to memory constraints, so which controller can I use to build a standalone system? And how would I know the limit of the number of different faces that can be recognized?

    What if, instead of VGG, I use a Haar cascade on the Raspberry Pi?

    What is the limit of the number of different faces that can be recognized using a Raspberry Pi 4? Does using a bigger memory micro SD card increase this limit, or is it the Raspberry Pi’s RAM which affects it?

    Regards,
    Mahnoor

    • Avatar
      Jason Brownlee June 16, 2020 at 5:44 am #

      I don’t know about that platform, perhaps test a suite of approaches and discover what is most appropriate for your project requirements.

  31. Avatar
    Twarit Nigam June 23, 2020 at 10:11 pm #

    Hi Jason,
    FAB post.

    I am using your code by creating the pickle file with known embedding and known names while training the VGGFace2 model on my dataset and then using that pickle file on test data (image files) is working great.

    But if I try applying the pickle file output on the live webcam feed data, it doesn’t work. There is some preprocessing issue with the way I am trying to read the live data.

    Have you come across an implementation of VGGFace2 and MTCNN on live feed data? If yes, would you please share?

    Re,
    Twarit

    • Avatar
      Jason Brownlee June 24, 2020 at 6:32 am #

      You will need to prepare new data/images in an identical manner as the training data.

  32. Avatar
    Oussama Laouadi June 30, 2020 at 12:31 am #

    Hi.

    why there is no VGGface2 model like the VGGface1, why using other models like resnet trained on VGGface2 dataset?

    Please correct me here: VGGFace is both a dataset and a VGG model trained on this dataset. VGGFace2 is just a dataset with no VGG model trained on it.

  33. Avatar
    bhanuchander July 14, 2020 at 7:37 am #

    It is really good. I have been using this for face authentication since last year, with weight imprinting technology. It even gives good FAR compared with dlib.

    Results here : https://github.com/Bhanuchander210/reality_of_one_shot_learning/blob/master/evaluate_results.md

  34. Avatar
    Avani July 26, 2020 at 12:17 pm #

    Hey Jason, I have a use case of classifying emoji images. They are not exactly faces but do resemble some features like expressions. I am confused about whether I should go ahead and retrain a pretrained CNN on ImageNet data or retrain this FaceNet model on new emoji images? Please guide me on what you think would be better.

    • Avatar
      Jason Brownlee July 26, 2020 at 1:40 pm #

      I would guess that a new model is required. Perhaps inspired by well performing image classification models like vgg.

      • Avatar
        Avani July 26, 2020 at 3:27 pm #

        You mean training new model from scratch? Can’t I fine tune inception/resnet/vgg already trained on Imagenet?

        • Avatar
          Jason Brownlee July 27, 2020 at 5:44 am #

          My guess is no, but perhaps try it and see.

          • Avatar
            Avani July 27, 2020 at 8:14 am #

            Sure thanks, will try!

  35. Avatar
    Sarah August 1, 2020 at 7:31 pm #

    Hi,
    Very useful post!
    Just a short question: why don’t you normalize image pixels before using the net? I thought it was recommended to always normalize the inputs. In case it’s not, when should we normalize and when not?
    Thank you very much!

    • Avatar
      Jason Brownlee August 2, 2020 at 5:41 am #

      We do, in the call to the preprocess_input() function.

      • Avatar
        Sarah August 19, 2020 at 6:35 pm #

        It subtracts the training means, but there’s no transformation to normalize the pixels between 0 and 1, am I right? Is it not necessary?
        And another question, should the input images be in RGB or BGR to use the keras-vggface library? I think is RGB but I would like to confirm it.

        Thanks!!

        • Avatar
          Jason Brownlee August 20, 2020 at 6:38 am #

          You must prepare data for the model by calling the preprocess_input() function which standardizes the pixel values.

          We do this in the tutorial.

          Images are in RGB format.

          • Avatar
            Alireza February 1, 2021 at 5:58 am #

            I think they should be BGR, because this TensorFlow release is based off Caffe, and Caffe is BGR.

  36. Avatar
    Khim Wee August 26, 2020 at 9:31 am #

    Thanks for this great article!

    I am surprised that vggface2 can also recognise some of my local celebrities!

    However it still couldn’t recognise some of the youtubers i tested.

    I will be exploring the use of transfer learning to recognise these personalities that are not currently recognised by vggface2, to improve my understanding.

    Any pointers will be greatly appreciated!

  37. Avatar
    Rita Goel September 1, 2020 at 5:37 am #

    Hi Dr. Jason,

    With this code for finding the difference between two persons – is it safe to assume that it can be used to distinguish between identical twins as well? I am doing my research on distinguishing between identical twins – could you please suggest something so I can proceed?

    thanks,
    Rita

    • Avatar
      Jason Brownlee September 1, 2020 at 6:38 am #

      I would not expect it to work for identical twins.

  38. Avatar
    Anonymous October 5, 2020 at 9:56 pm #

    Which one will perform better? VGGface or FaceNet

    • Avatar
      Jason Brownlee October 6, 2020 at 6:51 am #

      Depends on your problem. Perhaps test each and select the approach that works best for you.

  39. Avatar
    sai October 18, 2020 at 7:32 am #

    Looks like the detector brings out upper left corner. The Y axis starts with 0 at the top and goes down to max height at the origin. The X axis of the picture starts at 0 at the origin and max width right side. If it was lower left corner then the face would be from [y2:y1] (top to bottom), but we see face cropped by [y1:y2] height wise. Please correct me if i am wrong.

  40. Avatar
    kris October 21, 2020 at 8:56 am #

    can models like these be used commercially? For VGGFACE2 it says dataset is under cerative commons but nothing about the model itself. For VGGFACE it clearly calls out prohibiting commercial use.

    • Avatar
      Jason Brownlee October 21, 2020 at 9:15 am #

      Good question, I guess it is a case by case basis for each model and business. Perhaps you can contact the author of a given model and request a commercial license or use their procedure to generate a new model that you own.

  41. Avatar
    Trystan October 28, 2020 at 9:55 am #

    Hi, incredible work, thank you so much for this tutorial, it’s helped a huge amount!

  42. Avatar
    Biniyam Sol December 20, 2020 at 6:15 pm #

    Very nice helpful explanation.
    I would like to train a new model like VGGface2, which works both for face verification and identification.
    My training dataset doesn’t have labels, and I would like to train it in an unsupervised manner. How can I achieve this?

    • Avatar
      Jason Brownlee December 21, 2020 at 6:38 am #

      Thanks.

      If your images are not labeled, I don’t know how you would prepare a model for verification or identification.

  43. Avatar
    Matt Pollack December 21, 2020 at 1:12 pm #

    Hello Jason

    Your blog is amazing.

    In the “How to Perform Face Identification With VGGFace2” section you use the softmax layer for face identification (one-to-many).

    But in “How to Perform Face Verification With VGGFace2”, you use the last layer before the softmax layer for face verification (one-to-one): you get embeddings and compute similarity (one-to-one). So my question is: why didn’t we use the same principle for face identification, i.e. get the embeddings and compute similarity to many (one-to-many)?
    Thank you for your response

    • Avatar
      Jason Brownlee December 21, 2020 at 1:56 pm #

      Thanks.

      Use of the model is adapted based on different applications.

      In the first case we use the pre-trained model to classify images, e.g. multi-class classification.

      In the second case we use the pretrained model with just embeddings in a binary classification type problem.

      You can adapt the usage of the model anyway you like.

  44. Avatar
    safoora January 13, 2021 at 12:40 am #

    Hello Jason
    I’m using this model to find similarity between two faces in images.
    model = VGGFace(model='resnet50', include_top=False, input_shape=(224, 224, 3), pooling='avg')
    # perform prediction
    yhat = model.predict(samples)
    why the dimension of predict is 2048?

    • Avatar
      Jason Brownlee January 13, 2021 at 6:16 am #

      That is the output of the model, e.g. the number of nodes in the layer prior to the output layer.

  45. Avatar
    Stefan Reining February 22, 2021 at 11:00 pm #

    Hi,
    thank you for the great tutorial. I have a short question: Does the face verification in the section “How to Perform Face Verification With VGGFace2” work equally well when the persons on the images are not among the 8631 celebrities used for training? That is, does it work equally well when I want to check whether two images of non-celebrities depict the same person?

    Best,
    Stefan

  46. Avatar
    DJ March 31, 2021 at 11:09 pm #

    Hi,
    Thanks for the good explanation.

    between ‘face verification’ and ‘face identification’, which one is better and which one is used mostly?

    does this all depend on dataset and condition?

    • Avatar
      Jason Brownlee April 1, 2021 at 8:17 am #

      It depends on the problem you are trying to solve, then choose a solution that addresses your problem.

  47. Avatar
    Simone Cotugno April 22, 2021 at 12:12 am #

    Hi,
    Thanks for help us with this explanation.
    I hope you can help me with this doubt: because I have to map the range of output values into another range, I need to know the range of the output values. Can you tell me what it is?

    The last output layer function should be softmax, right? But the range I have is something like [-0.99, 0.99]. How is it possible?

    Thank you for help.

    • Avatar
      Jason Brownlee April 22, 2021 at 5:42 am #

      The range is 0-1 for each output.

      If you need a different range you can modify the output function or scale the output to the new range after the fact.

  48. Avatar
    Vivek June 26, 2021 at 5:27 pm #

    runfile('C:/Users/Thananyaa/.spyder-py3/vggface1.py', wdir='C:/Users/Thananyaa/.spyder-py3')
    Traceback (most recent call last):

    File "C:\Users\Thananyaa\.spyder-py3\vggface1.py", line 7, in
    from keras_vggface.vggface import VGG16

    File "C:\Users\Thananyaa\anaconda3\lib\site-packages\keras_vggface\__init__.py", line 1, in
    from keras_vggface.vggface import VGGFace

    File "C:\Users\Thananyaa\anaconda3\lib\site-packages\keras_vggface\vggface.py", line 9, in
    from keras_vggface.models import RESNET50, VGG16, SENET50

    File "C:\Users\Thananyaa\anaconda3\lib\site-packages\keras_vggface\models.py", line 20, in
    from keras.engine.topology import get_source_inputs

    ModuleNotFoundError: No module named 'keras.engine.topology'

    • Avatar
      Rohith A K September 27, 2021 at 1:28 pm #

      please edit the /usr/local/lib/python3.7/dist-packages/keras_vggface/models.py file, in this file please replace below mentioned line
      from keras.engine.topology import get_source_inputs

      with

      from keras.utils.layer_utils import get_source_inputs

      This file can be opened using colab terminal if you are using colab pro
      or
      once you get the error (which you mentioned) , look for the file where the error is showing. Usually it shows error in “from keras_vggface.vggface import VGGFace” at first step,
      just below this there will be one more error pointing to /usr/local/lib/python3.7/dist-packages/keras_vggface/models.py file, click on this link and comment the line as specified above and replace with new one

      • Avatar
        Adrian Tam September 28, 2021 at 8:47 am #

        Thanks for the update. Keras has changed due to Tensorflow 2.x made it official module. Hence some functions are relocated.

      • Avatar
        Will Gabriel February 15, 2022 at 10:22 pm #

        Hi Rohith A K.
        Were you able to figure out how to apply the fix you mentioned, replacing keras.engine.topology with keras.utils.layer_utils, on a Jupyter notebook rather than Colab?
        If I download my Colab notebook to my Jupyter notebook, I’d still get that error. How do we fix that?

  49. Avatar
    Locke August 29, 2021 at 6:28 pm #

    Hello

    I have a question about the algorithm behind face embedding.
    I am currently on a project about predicting BMI from the face.

    After MTCNN, the aligned faces have different sizes, and resizing to 224×224 (required by the VGG) can even distort the aspect ratio. Do you think direct resizing matters? Or can VGG still give a correct embedding?

    Or should I fix the ratio and then resize to 224×224? But this way, it will leave a black margin surrounding the face…

    • Avatar
      Adrian Tam September 1, 2021 at 7:13 am #

      I think leaving black margin should not matter. But I would believe a heavily distorted aspect ratio will impact more negatively. My reasoning is that, with the fixed convolution size, you are now putting more (or less) amount of data into each convolution operation and the feature you extracted may not be the same any more.

  50. Avatar
    Anuj September 30, 2021 at 2:10 am #

    Hello,
    I have tried following both this and the Facenet tutorial, but I run into issues related to using Python 3 rather than 2. In this case in particular, in this case when importing the pre trained model, I get a str object has no decode attribute error. (In the case of the facenet the issue was also when trying to load the models). Could you perhaps give me some advice regarding this?

    • Avatar
      Adrian Tam October 1, 2021 at 12:11 pm #

      Can you point out which line of code causing the error?

      • Avatar
        Anuj October 4, 2021 at 9:20 pm #

        I have lost it, and was unable to recreate it; it seems to have occurred from using older versions of the packages required. I rebuilt my environment, which took care of most of the issues, with a slight edit to the models.py file in vgg_kerasface to let it work for TensorFlow 2.

  51. Avatar
    Huy Tran October 1, 2021 at 2:23 pm #

    Hi Jason.
    As I understood it, VGGFace2 is trained for classification of the celebrities and then is further trained with the triplet loss function. Am I right?

  52. Avatar
    Nuha November 8, 2021 at 7:07 am #

    Hi

    i tried to execute pip install git+https://github.com/rcmalli/keras-vggface.git in the anaconda prompt but i get this error message:

    ERROR: Error [WinError 2] The system cannot find the file specified while executing command git clone -q https://github.com/rcmalli/keras-vggface.git 'C:\Users\Nuha\AppData\Local\Temp\pip-req-build-ffiv6mrk'
    ERROR: Cannot find command 'git' - do you have 'git' installed and in your PATH?

    do you know how i can fix it ?

  53. Avatar
    win January 30, 2023 at 2:08 pm #

    Hi, thank you for your great tutorial. It helps me a lot. I am wondering why you used the cosine distance instead of a simple distance like Euclidean distance? Is there any objective for it?

    • Avatar
      James Carmichael January 31, 2023 at 8:53 am #

      Hi win…You are very welcome! The choice was simply to illustrate the process. You may certainly try other options. If you do, please let us know what you find.

  54. Avatar
    Jaydeep December 24, 2023 at 4:38 am #

    Hi James,
    Thank you very much for this very useful blog.
    I am getting an error: ValueError: Input 0 of layer "model" is incompatible with the layer: expected shape=(None, None, None, 3), found shape=(None, 183, 230, 4)

    Do we need to reshape the image before using detector.detect_faces()?

Leave a Reply