How to Develop a Face Recognition System Using FaceNet in Keras

Face recognition is a computer vision task of identifying and verifying a person based on a photograph of their face.

FaceNet is a face recognition system developed in 2015 by researchers at Google that achieved then state-of-the-art results on a range of face recognition benchmark datasets. The FaceNet system can be used broadly thanks to multiple third-party open source implementations of the model and the availability of pre-trained models.

The FaceNet system can be used to extract high-quality features from faces, called face embeddings, that can then be used to train a face identification system.

In this tutorial, you will discover how to develop a face identification system using FaceNet and an SVM classifier to identify people from photographs.

After completing this tutorial, you will know:

  • About the FaceNet face recognition system developed by Google and open source implementations and pre-trained models.
  • How to prepare a face dataset, including first extracting faces via a face detection system and then extracting face features via face embeddings.
  • How to fit, evaluate, and demonstrate an SVM model to predict identities from face embeddings.

Kick-start your project with my new book Deep Learning for Computer Vision, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Update Nov/2019: Updated for TensorFlow v2.0 and MTCNN v0.1.0.

How to Develop a Face Recognition System Using FaceNet in Keras and an SVM Classifier
Photo by Peter Valverde, some rights reserved.

Tutorial Overview

This tutorial is divided into five parts; they are:

  1. Face Recognition
  2. FaceNet Model
  3. How to Load a FaceNet Model in Keras
  4. How to Detect Faces for Face Recognition
  5. How to Develop a Face Classification System

Face Recognition

Face recognition is the general task of identifying and verifying people from photographs of their face.

The 2011 book on face recognition titled “Handbook of Face Recognition” describes two main modes for face recognition:

  • Face Verification. A one-to-one mapping of a given face against a known identity (e.g. is this the person?).
  • Face Identification. A one-to-many mapping for a given face against a database of known faces (e.g. who is this person?).

A face recognition system is expected to identify faces present in images and videos automatically. It can operate in either or both of two modes: (1) face verification (or authentication), and (2) face identification (or recognition).

— Page 1, Handbook of Face Recognition. 2011.

We will focus on the face identification task in this tutorial.

FaceNet Model

FaceNet is a face recognition system that was described by Florian Schroff, et al. at Google in their 2015 paper titled “FaceNet: A Unified Embedding for Face Recognition and Clustering.”

It is a system that, given a picture of a face, will extract high-quality features from the face and predict a 128-element vector representation of these features, called a face embedding.

… FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity.

— FaceNet: A Unified Embedding for Face Recognition and Clustering, 2015.

The model is a deep convolutional neural network trained via a triplet loss function that encourages vectors for the same identity to become more similar (smaller distance), whereas vectors for different identities are expected to become less similar (larger distance). The focus on training a model to create embeddings directly (rather than extracting them from an intermediate layer of a model) was an important innovation in this work.

Our method uses a deep convolutional network trained to directly optimize the embedding itself, rather than an intermediate bottleneck layer as in previous deep learning approaches.

— FaceNet: A Unified Embedding for Face Recognition and Clustering, 2015.

These face embeddings were then used as the basis for training classifier systems on standard face recognition benchmark datasets, achieving then-state-of-the-art results.

Our system cuts the error rate in comparison to the best published result by 30% …

— FaceNet: A Unified Embedding for Face Recognition and Clustering, 2015.

The paper also explores other uses of the embeddings, such as clustering to group like faces based on their extracted features.

It is a robust and effective face recognition system, and the general nature of the extracted face embeddings lends the approach to a range of applications.

How to Load a FaceNet Model in Keras

There are a number of projects that provide tools to train FaceNet-based models and make use of pre-trained models.

Perhaps the most prominent is OpenFace, which provides FaceNet models built and trained using the Torch deep learning framework. There is a port of OpenFace to Keras, called Keras OpenFace, but at the time of writing, the models appear to require Python 2, which is quite limiting.

Another prominent project is FaceNet by David Sandberg, which provides FaceNet models built and trained using TensorFlow. The project looks mature, although at the time of writing it does not provide a library-based installation nor a clean API. Usefully, David’s project provides a number of high-performing pre-trained FaceNet models, and there are a number of projects that port or convert these models for use in Keras.

A notable example is Keras FaceNet by Hiroki Taniai. His project provides a script for converting the Inception ResNet v1 model from TensorFlow to Keras. He also provides a pre-trained Keras model ready for use.

We will use the pre-trained Keras FaceNet model provided by Hiroki Taniai in this tutorial. It was trained on the MS-Celeb-1M dataset and expects input images to be color, to have their pixel values whitened (standardized across all three channels), and to have a square shape of 160×160 pixels.

The model can be downloaded from here:

Download the model file and place it in your current working directory with the filename ‘facenet_keras.h5‘.

We can load the model directly in Keras using the load_model() function; for example:
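
A minimal sketch of this step follows, assuming the ‘facenet_keras.h5‘ file is in the current working directory (under TensorFlow 2, the same function is also available via tensorflow.keras.models):

# example of loading the keras facenet model
from keras.models import load_model
# load the model (assumes facenet_keras.h5 is in the working directory)
model = load_model('facenet_keras.h5')
# summarize the input and output tensors
print(model.inputs)
print(model.outputs)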

Running the example loads the model and prints the shape of the input and output tensors.

We can see that the model indeed expects square color images as input with the shape 160×160, and will output a face embedding as a 128-element vector.

Now that we have a FaceNet model, we can explore using it.

How to Detect Faces for Face Recognition

Before we can perform face recognition, we need to detect faces.

Face detection is the process of automatically locating faces in a photograph and localizing them by drawing a bounding box around their extent.

In this tutorial, we will also use the Multi-Task Cascaded Convolutional Neural Network, or MTCNN, for face detection, e.g. finding and extracting faces from photos. This is a state-of-the-art deep learning model for face detection, described in the 2016 paper titled “Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks.”

We will use the implementation provided by Iván de Paz Centeno in the ipazc/mtcnn project. This can also be installed via pip as follows:
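
For example, on the command line (your environment may require pip3 or sudo):

pip install mtcnn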

We can confirm that the library was installed correctly by importing the library and printing the version; for example:
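
A minimal check might look as follows:

# confirm mtcnn was installed correctly
import mtcnn
# print the version number
print(mtcnn.__version__)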

Running the example prints the current version of the library.

We can use the mtcnn library to create a face detector and extract faces for use with the FaceNet face recognition model in subsequent sections.

The first step is to load an image as a NumPy array, which we can achieve using the PIL library and the open() function. We will also convert the image to RGB, just in case the image has an alpha channel or is black and white.

Next, we can create an MTCNN face detector class and use it to detect all faces in the loaded photograph.
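
A sketch of these two steps (the filename ‘photo.jpg‘ is a placeholder for any photo containing a single face):

from PIL import Image
from numpy import asarray
from mtcnn.mtcnn import MTCNN

# load the image from file and convert to RGB, just in case
filename = 'photo.jpg'  # placeholder: path to a photo with one face
image = Image.open(filename)
image = image.convert('RGB')
# convert the image to a NumPy array of pixels
pixels = asarray(image)
# create the detector, using default weights
detector = MTCNN()
# detect faces in the image
results = detector.detect_faces(pixels)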

The result is a list of bounding boxes, where each bounding box defines the top-left corner of the detected face, as well as its width and height.

If we assume there is only one face in the photo for our experiments, we can determine the pixel coordinates of the bounding box as follows. Sometimes the library will return a negative pixel index, and I think this is a bug. We can fix this by taking the absolute value of the coordinates.

We can use these coordinates to extract the face.

We can then use the PIL library to resize this small image of the face to the required size; specifically, the model expects square input faces with the shape 160×160.
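
Continuing the snippet above, a sketch of these steps:

# extract the bounding box from the first face
x1, y1, width, height = results[0]['box']
# guard against the occasional negative pixel index
x1, y1 = abs(x1), abs(y1)
x2, y2 = x1 + width, y1 + height
# extract the face pixels from the image
face = pixels[y1:y2, x1:x2]
# resize the face to the 160x160 pixels expected by the model
image = Image.fromarray(face)
image = image.resize((160, 160))
face_array = asarray(image)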

Tying all of this together, the function extract_face() will load a photograph from the given filename and return the extracted face. It assumes that the photo contains one face and will return the first face detected.
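
A sketch of the complete function:

# extract a single face from a given photograph
from PIL import Image
from numpy import asarray
from mtcnn.mtcnn import MTCNN

def extract_face(filename, required_size=(160, 160)):
    # load image from file and convert to RGB
    image = Image.open(filename)
    image = image.convert('RGB')
    pixels = asarray(image)
    # create the detector, using default weights
    detector = MTCNN()
    # detect faces in the image
    results = detector.detect_faces(pixels)
    # extract the bounding box from the first face
    x1, y1, width, height = results[0]['box']
    # deal with negative pixel indexes
    x1, y1 = abs(x1), abs(y1)
    x2, y2 = x1 + width, y1 + height
    # extract the face and resize it to the model input size
    face = pixels[y1:y2, x1:x2]
    image = Image.fromarray(face)
    image = image.resize(required_size)
    return asarray(image)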

We can use this function to extract faces as needed in the next section that can be provided as input to the FaceNet model.

How to Develop a Face Classification System

In this section, we will develop a face classification system to predict the identity of a given face.

The model will be trained and tested using the ‘5 Celebrity Faces Dataset‘ that contains many photographs of five different celebrities.

We will use an MTCNN model for face detection, the FaceNet model to create a face embedding for each detected face, and then a Linear Support Vector Machine (SVM) classifier to predict the identity of a given face.

5 Celebrity Faces Dataset

The 5 Celebrity Faces Dataset is a small dataset that contains photographs of celebrities.

It includes photos of: Ben Affleck, Elton John, Jerry Seinfeld, Madonna, and Mindy Kaling.

The dataset was prepared and made available by Dan Becker and provided for free download on Kaggle. Note, a Kaggle account is required to download the dataset.

Download the dataset (this may require a Kaggle login), data.zip (2.5 megabytes), and unzip it in your local directory with the folder name ‘5-celebrity-faces-dataset‘.

You should now have a directory with the following structure (note, there are spelling mistakes in some directory names, and they were left as-is in this example):
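
Based on the folder names referenced throughout this tutorial, the layout should look something like the following:

5-celebrity-faces-dataset
├── train
│   ├── ben_afflek
│   ├── elton_john
│   ├── jerry_seinfeld
│   ├── madonna
│   └── mindy_kaling
└── val
    ├── ben_afflek
    ├── elton_john
    ├── jerry_seinfeld
    ├── madonna
    └── mindy_kaling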

We can see that there is a training dataset and a validation or test dataset.

Looking at some of the photos in the directories, we can see that the photos show faces with a range of orientations and lighting, and in various sizes. Importantly, each photo contains one face of the person.

We will use this dataset as the basis for our classifier, training on the ‘train‘ dataset only and classifying faces in the ‘val‘ dataset. You can use this same structure to develop a classifier with your own photographs.

Detect Faces

The first step is to detect the face in each photograph and reduce the dataset to a series of faces only.

Let’s test out our face detector function defined in the previous section, specifically extract_face().

Looking in the ‘5-celebrity-faces-dataset/train/ben_afflek/‘ directory, we can see that there are 14 photographs of Ben Affleck in the training dataset. We can detect the face in each photograph, and create a plot with 14 faces, with two rows of seven images each.

The complete example is listed below.
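
A sketch of the complete example, reusing the extract_face() function from the previous section (the folder path assumes the dataset layout described above):

# demonstrate face detection on photos from the 5 Celebrity Faces Dataset
from os import listdir
from PIL import Image
from numpy import asarray
from matplotlib import pyplot
from mtcnn.mtcnn import MTCNN

# extract a single face from a given photograph
def extract_face(filename, required_size=(160, 160)):
    image = Image.open(filename)
    image = image.convert('RGB')
    pixels = asarray(image)
    detector = MTCNN()
    results = detector.detect_faces(pixels)
    x1, y1, width, height = results[0]['box']
    x1, y1 = abs(x1), abs(y1)
    x2, y2 = x1 + width, y1 + height
    face = pixels[y1:y2, x1:x2]
    image = Image.fromarray(face)
    image = image.resize(required_size)
    return asarray(image)

# specify the folder containing photos of one celebrity
folder = '5-celebrity-faces-dataset/train/ben_afflek/'
i = 1
# enumerate the files in the folder
for filename in listdir(folder):
    # extract the face from the photo
    path = folder + filename
    face = extract_face(path)
    print(i, face.shape)
    # plot the face in a 2x7 grid
    pyplot.subplot(2, 7, i)
    pyplot.axis('off')
    pyplot.imshow(face)
    i += 1
pyplot.show()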

Running the example takes a moment and reports the progress of each loaded photograph along the way and the shape of the NumPy array containing the face pixel data.

A figure is created containing the faces detected in the Ben Affleck directory.

We can see that each face was correctly detected and that we have a range of lighting, skin tones, and orientations in the detected faces.

Plot of 14 Faces of Ben Affleck Detected From the Training Dataset of the 5 Celebrity Faces Dataset

So far, so good.

Next, we can extend this example to step over each subdirectory for a given dataset (e.g. ‘train‘ or ‘val‘), extract the faces, and prepare a dataset with the name as the output label for each detected face.

The load_faces() function below will load all of the faces into a list for a given directory, e.g. ‘5-celebrity-faces-dataset/train/ben_afflek/‘.
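
A sketch of this function, assuming extract_face() is defined as in the previous section:

# load images and extract faces for all images in a directory
def load_faces(directory):
    faces = list()
    # enumerate the files in the directory
    for filename in listdir(directory):
        # extract the face from the photo and store it
        path = directory + filename
        face = extract_face(path)
        faces.append(face)
    return faces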

We can call the load_faces() function for each subdirectory in the ‘train‘ or ‘val‘ folders. Each face has one label, the name of the celebrity, which we can take from the directory name.

The load_dataset() function below takes a directory name such as ‘5-celebrity-faces-dataset/train/‘ and detects faces for each subdirectory (celebrity), assigning labels to each detected face.

It returns the X and y elements of the dataset as NumPy arrays.
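
A sketch of this function (isdir() is imported from os.path, and load_faces() is as defined above):

# load a dataset that contains one subdirectory of images per class
def load_dataset(directory):
    X, y = list(), list()
    # enumerate folders, one per class
    for subdir in listdir(directory):
        path = directory + subdir + '/'
        # skip any files that might be in the directory
        if not isdir(path):
            continue
        # load all faces in the subdirectory and label them with its name
        faces = load_faces(path)
        labels = [subdir for _ in range(len(faces))]
        print('>loaded %d examples for class: %s' % (len(faces), subdir))
        X.extend(faces)
        y.extend(labels)
    return asarray(X), asarray(y)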

We can then call this function for the ‘train’ and ‘val’ folders to load all of the data, then save the results in a single compressed NumPy array file via the savez_compressed() function.

Tying all of this together, the complete example of detecting all of the faces in the 5 Celebrity Faces Dataset is listed below.
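
A sketch of the complete example (assuming the dataset is unzipped into ‘5-celebrity-faces-dataset‘ in the current working directory):

# face detection for the 5 Celebrity Faces Dataset
from os import listdir
from os.path import isdir
from PIL import Image
from numpy import asarray, savez_compressed
from mtcnn.mtcnn import MTCNN

# extract a single face from a given photograph
def extract_face(filename, required_size=(160, 160)):
    image = Image.open(filename)
    image = image.convert('RGB')
    pixels = asarray(image)
    detector = MTCNN()
    results = detector.detect_faces(pixels)
    x1, y1, width, height = results[0]['box']
    x1, y1 = abs(x1), abs(y1)
    x2, y2 = x1 + width, y1 + height
    face = pixels[y1:y2, x1:x2]
    image = Image.fromarray(face)
    image = image.resize(required_size)
    return asarray(image)

# load images and extract faces for all images in a directory
def load_faces(directory):
    faces = list()
    for filename in listdir(directory):
        path = directory + filename
        faces.append(extract_face(path))
    return faces

# load a dataset that contains one subdirectory of images per class
def load_dataset(directory):
    X, y = list(), list()
    for subdir in listdir(directory):
        path = directory + subdir + '/'
        if not isdir(path):
            continue
        faces = load_faces(path)
        labels = [subdir for _ in range(len(faces))]
        print('>loaded %d examples for class: %s' % (len(faces), subdir))
        X.extend(faces)
        y.extend(labels)
    return asarray(X), asarray(y)

# load the train dataset
trainX, trainy = load_dataset('5-celebrity-faces-dataset/train/')
print(trainX.shape, trainy.shape)
# load the test dataset
testX, testy = load_dataset('5-celebrity-faces-dataset/val/')
# save the arrays to one file in compressed format
savez_compressed('5-celebrity-faces-dataset.npz', trainX, trainy, testX, testy)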

Running the example may take a moment.

First, all of the photos in the ‘train‘ dataset are loaded, then faces are extracted, resulting in 93 samples with square face input and a class label string as output. Then the ‘val‘ dataset is loaded, providing 25 samples that can be used as a test dataset.

Both datasets are then saved to a compressed NumPy array file called ‘5-celebrity-faces-dataset.npz‘ that is about three megabytes and is stored in the current working directory.

This dataset is ready to be provided to the FaceNet model.

Create Face Embeddings

The next step is to create a face embedding.

A face embedding is a vector that represents the features extracted from the face. This can then be compared with the vectors generated for other faces. For example, another vector that is close (by some measure) may be the same person, whereas another vector that is far (by some measure) may be a different person.

The classifier model that we want to develop will take a face embedding as input and predict the identity of the face. The FaceNet model will generate this embedding for a given image of a face.

The FaceNet model can be used as part of the classifier itself, or we can use the FaceNet model to pre-process a face to create a face embedding that can be stored and used as input to our classifier model. This latter approach is preferred as the FaceNet model is both large and slow to create a face embedding.

We can, therefore, pre-compute the face embeddings for all faces in the train and test (formally ‘val‘) sets in our 5 Celebrity Faces Dataset.

First, we can load our detected faces dataset using the load() NumPy function.
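
A sketch of this step, assuming the file was saved as in the previous section:

# load the detected faces dataset
from numpy import load
data = load('5-celebrity-faces-dataset.npz')
trainX, trainy, testX, testy = data['arr_0'], data['arr_1'], data['arr_2'], data['arr_3']
print('Loaded: ', trainX.shape, trainy.shape, testX.shape, testy.shape)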

Next, we can load our FaceNet model ready for converting faces into face embeddings.
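
For example (again assuming ‘facenet_keras.h5‘ is in the current working directory):

# load the facenet model
from keras.models import load_model
model = load_model('facenet_keras.h5')
print('Loaded Model')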

We can then enumerate each face in the train and test datasets to predict an embedding.

To predict an embedding, first the pixel values of the image need to be suitably prepared to meet the expectations of the FaceNet model. This specific implementation of the FaceNet model expects that the pixel values are standardized.

In order to make a prediction for one example in Keras, we must expand the dimensions so that the face array is one sample.

We can then use the model to make a prediction and extract the embedding vector.

The get_embedding() function defined below implements these behaviors and will return a face embedding given a single image of a face and the loaded FaceNet model.
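
A sketch of this function:

from numpy import expand_dims

# get the face embedding for one face
def get_embedding(model, face_pixels):
    # scale pixel values
    face_pixels = face_pixels.astype('float32')
    # standardize pixel values across channels (global)
    mean, std = face_pixels.mean(), face_pixels.std()
    face_pixels = (face_pixels - mean) / std
    # transform the face into one sample
    samples = expand_dims(face_pixels, axis=0)
    # make a prediction to get the embedding
    yhat = model.predict(samples)
    return yhat[0]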

Tying all of this together, the complete example of converting each face into a face embedding in the train and test datasets is listed below.
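
A sketch of the complete example:

# calculate a face embedding for each face in the dataset using facenet
from numpy import load, expand_dims, asarray, savez_compressed
from keras.models import load_model

# get the face embedding for one face
def get_embedding(model, face_pixels):
    face_pixels = face_pixels.astype('float32')
    # standardize pixel values across channels (global)
    mean, std = face_pixels.mean(), face_pixels.std()
    face_pixels = (face_pixels - mean) / std
    # transform the face into one sample and predict the embedding
    samples = expand_dims(face_pixels, axis=0)
    yhat = model.predict(samples)
    return yhat[0]

# load the face dataset
data = load('5-celebrity-faces-dataset.npz')
trainX, trainy, testX, testy = data['arr_0'], data['arr_1'], data['arr_2'], data['arr_3']
print('Loaded: ', trainX.shape, trainy.shape, testX.shape, testy.shape)
# load the facenet model
model = load_model('facenet_keras.h5')
print('Loaded Model')
# convert each face in the train set to an embedding
newTrainX = list()
for face_pixels in trainX:
    embedding = get_embedding(model, face_pixels)
    newTrainX.append(embedding)
newTrainX = asarray(newTrainX)
print(newTrainX.shape)
# convert each face in the test set to an embedding
newTestX = list()
for face_pixels in testX:
    embedding = get_embedding(model, face_pixels)
    newTestX.append(embedding)
newTestX = asarray(newTestX)
print(newTestX.shape)
# save the arrays to one file in compressed format
savez_compressed('5-celebrity-faces-embeddings.npz', newTrainX, trainy, newTestX, testy)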

Running the example reports progress along the way.

We can see that the face dataset was loaded correctly and so was the model. The train dataset was then transformed into 93 face embeddings, each comprised of a 128-element vector. The 25 examples in the test dataset were also suitably converted to face embeddings.

The resulting datasets were then saved to a compressed NumPy array that is about 50 kilobytes with the name ‘5-celebrity-faces-embeddings.npz‘ in the current working directory.

We are now ready to develop our face classifier system.

Perform Face Classification

In this section, we will develop a model to classify face embeddings as one of the known celebrities in the 5 Celebrity Faces Dataset.

First, we must load the face embeddings dataset.
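
A sketch of this step:

# load the face embeddings dataset
from numpy import load
data = load('5-celebrity-faces-embeddings.npz')
trainX, trainy, testX, testy = data['arr_0'], data['arr_1'], data['arr_2'], data['arr_3']
print('Dataset: train=%d, test=%d' % (trainX.shape[0], testX.shape[0]))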

Next, the data requires some minor preparation prior to modeling.

First, it is good practice to normalize the face embedding vectors, because the vectors are often compared to one another using a distance metric.

In this context, vector normalization means scaling the values until the length or magnitude of the vectors is 1 or unit length. This can be achieved using the Normalizer class in scikit-learn. It might even be more convenient to perform this step when the face embeddings are created in the previous step.
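
For example:

# normalize the input embedding vectors to unit length
from sklearn.preprocessing import Normalizer
in_encoder = Normalizer(norm='l2')
trainX = in_encoder.transform(trainX)
testX = in_encoder.transform(testX)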

Next, the string target variables for each celebrity name need to be converted to integers.

This can be achieved via the LabelEncoder class in scikit-learn.
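
For example:

# label encode the target celebrity names
from sklearn.preprocessing import LabelEncoder
out_encoder = LabelEncoder()
out_encoder.fit(trainy)
trainy = out_encoder.transform(trainy)
testy = out_encoder.transform(testy)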

Next, we can fit a model.

It is common to use a Linear Support Vector Machine (SVM) when working with normalized face embedding inputs. This is because the method is very effective at separating the face embedding vectors. We can fit a linear SVM to the training data using the SVC class in scikit-learn and setting the ‘kernel‘ attribute to ‘linear‘. We may also want probabilities later when making predictions, which can be configured by setting ‘probability‘ to ‘True‘.
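
For example:

# fit a linear SVM on the normalized embeddings
from sklearn.svm import SVC
model = SVC(kernel='linear', probability=True)
model.fit(trainX, trainy)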

Next, we can evaluate the model.

This can be achieved by using the fit model to make a prediction for each example in the train and test datasets and then calculating the classification accuracy.

Tying all of this together, the complete example of fitting a Linear SVM on the face embeddings for the 5 Celebrity Faces Dataset is listed below.
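
A sketch of the complete example:

# develop an SVM classifier for the 5 Celebrity Faces Dataset
from numpy import load
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import Normalizer
from sklearn.svm import SVC

# load the face embeddings dataset
data = load('5-celebrity-faces-embeddings.npz')
trainX, trainy, testX, testy = data['arr_0'], data['arr_1'], data['arr_2'], data['arr_3']
print('Dataset: train=%d, test=%d' % (trainX.shape[0], testX.shape[0]))
# normalize the input embedding vectors
in_encoder = Normalizer(norm='l2')
trainX = in_encoder.transform(trainX)
testX = in_encoder.transform(testX)
# label encode the targets
out_encoder = LabelEncoder()
out_encoder.fit(trainy)
trainy = out_encoder.transform(trainy)
testy = out_encoder.transform(testy)
# fit the model
model = SVC(kernel='linear', probability=True)
model.fit(trainX, trainy)
# predict on the train and test sets
yhat_train = model.predict(trainX)
yhat_test = model.predict(testX)
# score the predictions
score_train = accuracy_score(trainy, yhat_train)
score_test = accuracy_score(testy, yhat_test)
print('Accuracy: train=%.3f, test=%.3f' % (score_train * 100, score_test * 100))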

Running the example first confirms that the number of samples in the train and test datasets is as we expect.

Next, the model is evaluated on the train and test dataset, showing perfect classification accuracy. This is not surprising given the size of the dataset and the power of the face detection and face recognition models used.

We can make it more interesting by plotting the original face and the prediction.

First, we need to load the face dataset, specifically the faces in the test dataset. We could also load the original photos to make it even more interesting.

The rest of the example is the same up until we fit the model.

First, we need to select a random example from the test set, then get the embedding, face pixels, expected class prediction, and the corresponding name for the class.

Next, we can use the face embedding as an input to make a single prediction with the fit model.

We can predict both the class integer and the probability of the prediction.

We can then get the name for the predicted class integer, and the probability for this prediction.
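
A sketch of these steps, assuming the model and encoders are fit as above and that testX_faces holds the test face pixel data loaded from ‘5-celebrity-faces-dataset.npz‘:

from random import choice
from numpy import expand_dims

# select a random example from the test set
selection = choice([i for i in range(testX.shape[0])])
random_face_pixels = testX_faces[selection]
random_face_emb = testX[selection]
random_face_class = testy[selection]
random_face_name = out_encoder.inverse_transform([random_face_class])
# make a prediction for the face
samples = expand_dims(random_face_emb, axis=0)
yhat_class = model.predict(samples)
yhat_prob = model.predict_proba(samples)
# get the name and probability for the predicted class
class_index = yhat_class[0]
class_probability = yhat_prob[0, class_index] * 100
predict_names = out_encoder.inverse_transform(yhat_class)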

We can then print this information.

We can also plot the face pixels along with the predicted name and probability.

Tying all of this together, the complete example for predicting the identity for a given unseen photo in the test dataset is listed below.
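
A sketch of the complete example:

# predict the identity of a random face from the test dataset and plot it
from random import choice
from numpy import load, expand_dims
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import Normalizer
from sklearn.svm import SVC
from matplotlib import pyplot

# load the face pixel data for plotting
data = load('5-celebrity-faces-dataset.npz')
testX_faces = data['arr_2']
# load the face embeddings
data = load('5-celebrity-faces-embeddings.npz')
trainX, trainy, testX, testy = data['arr_0'], data['arr_1'], data['arr_2'], data['arr_3']
# normalize the input embedding vectors
in_encoder = Normalizer(norm='l2')
trainX = in_encoder.transform(trainX)
testX = in_encoder.transform(testX)
# label encode the targets
out_encoder = LabelEncoder()
out_encoder.fit(trainy)
trainy = out_encoder.transform(trainy)
testy = out_encoder.transform(testy)
# fit the model
model = SVC(kernel='linear', probability=True)
model.fit(trainX, trainy)
# select a random example from the test set
selection = choice([i for i in range(testX.shape[0])])
random_face_pixels = testX_faces[selection]
random_face_emb = testX[selection]
random_face_class = testy[selection]
random_face_name = out_encoder.inverse_transform([random_face_class])
# make a prediction for the face
samples = expand_dims(random_face_emb, axis=0)
yhat_class = model.predict(samples)
yhat_prob = model.predict_proba(samples)
# get the name and probability for the predicted class
class_index = yhat_class[0]
class_probability = yhat_prob[0, class_index] * 100
predict_names = out_encoder.inverse_transform(yhat_class)
print('Predicted: %s (%.3f)' % (predict_names[0], class_probability))
print('Expected: %s' % random_face_name[0])
# plot the face with the predicted name and probability as the title
pyplot.imshow(random_face_pixels)
title = '%s (%.3f)' % (predict_names[0], class_probability)
pyplot.title(title)
pyplot.show()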

A different random example from the test dataset will be selected each time the code is run.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, a photo of Jerry Seinfeld is selected and correctly predicted.

A plot of the chosen face is also created, showing the predicted name and probability in the image title.

Detected Face of Jerry Seinfeld, Correctly Identified by the SVM Classifier

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Papers

  • FaceNet: A Unified Embedding for Face Recognition and Clustering, 2015.
  • Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks, 2016.

Books

  • Handbook of Face Recognition, 2011.

Projects

  • OpenFace project (and the Keras OpenFace port).
  • FaceNet by David Sandberg project.
  • Keras FaceNet by Hiroki Taniai project.
  • mtcnn project by Iván de Paz Centeno (ipazc/mtcnn).

APIs

  • keras.models load_model() API.
  • sklearn.svm.SVC API.
  • sklearn.preprocessing.Normalizer API.
  • sklearn.preprocessing.LabelEncoder API.

Summary

In this tutorial, you discovered how to develop a face identification system using FaceNet and an SVM classifier to identify people from photographs.

Specifically, you learned:

  • About the FaceNet face recognition system developed by Google and open source implementations and pre-trained models.
  • How to prepare a face dataset, including first extracting faces via a face detection system and then extracting face features via face embeddings.
  • How to fit, evaluate, and demonstrate an SVM model to predict identities from face embeddings.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.

582 Responses to How to Develop a Face Recognition System Using FaceNet in Keras

  1. Abkul June 7, 2019 at 6:28 am #

    Great tutorial.

    Was looking at whether Transfer learning, Siamese network and triplet loss approaches are applicable to animal face(eg a sheep, goat etc) recognition particularly mobileNet(or otherwise) when your crystal clear blog came up.

    Kindly shed more light on its applicability and any other auxiliary hints.

    • Jason Brownlee June 7, 2019 at 8:11 am #

      I don’t see why not.

      • Irtaza H M January 22, 2021 at 9:32 pm #

        How do I solve this error ???

        ValueError: Input 0 of layer Conv2d_1a_3x3 is incompatible with the layer: : expected min_ndim=4, found ndim=2. Full shape received: [None, 128]

        • Jason Brownlee January 23, 2021 at 7:05 am #

          Sorry to hear that you’re having trouble, some of these tips may help:
          https://machinelearningmastery.com/faq/single-faq/why-does-the-code-in-the-tutorial-not-work-for-me

          • Irtaza H M January 23, 2021 at 4:38 pm #

            Sorry Sir! but it’s not helping me out because I’m creating my own datasets of 15 people so that’s why facing that error can you help me about what changes should I change in your code ??? and how do i train my own new facenet model ????

          • Jason Brownlee January 24, 2021 at 5:55 am #

            Perhaps start with the above tutorial that works, and then adapt it for your own dataset.

            Start by ensuring the existing tutorial works on your environment.

            Then ensure you have loaded your dataset correctly.

            Finally, adapt the example to use your loaded data.

  2. Shravan Kumar June 7, 2019 at 3:16 pm #

    Hi Jason,

    This is fantastic, thanks for sharing.

    What do you suggest when we have tens of thousands of classes.

    A Facenet model itself as a classifier or a specific classifier model is to be trained. In terms of scalability and performance which is the preferred method.

    Referring to:
    “The FaceNet model can be used as part of the classifier itself, or we can use the FaceNet model to pre-process a face to create a face embedding that can be stored and used as input to our classifier model. This latter approach is preferred as the FaceNet model is both large and slow to create a face embedding.”

    • Jason Brownlee June 8, 2019 at 6:34 am #

      Good question, the facenet embedding approach is a good starting point, but perhaps check the literature for more scalable approaches.

  3. Anand June 21, 2019 at 11:06 pm #

    Hi jason,
    As per my understanding The triplet loss is used so that the model can also learn the dissimilarity between the classes rather than only learning similarity between same class.
    But here we are not training our model on the actual image dataset on which we need our classification to be done. Rather we are using SVM for that.
    So, how can we make use of triplet loss in this method of face recognition?

  4. Karan Sharma June 24, 2019 at 2:36 pm #

    Hi Jason,

    I want to try this on a Cat and Dog dataset. Do you think the pre-trained network's face embeddings will work in this case?

    • Jason Brownlee June 25, 2019 at 6:08 am #

      No, I don’t think it will work for animals.

      Interesting idea though.

      • Karan Sharma June 25, 2019 at 2:40 pm #

        Thanks for your reply.

        What do you think how much effort will it take to train facenet from scratch?

        And certainly how much data?

  5. Karan Sharma June 26, 2019 at 3:19 pm #

    Thanks for the response Jason.

  6. Karan Sharma June 28, 2019 at 4:27 pm #

    Hi Jason,

    Can MTCNN detect faces of cats and dogs from image?

    • Jason Brownlee June 29, 2019 at 6:35 am #

      I don’t see why not, but the model would have to be trained on dogs and cats.

  7. Thinh July 8, 2019 at 5:35 pm #

    Hi Jason,
    Thanks for a very nice tutor. But i cant set up mtcnn in my python2? Is there a way to install mtcnn for python2?

  8. Thinh July 10, 2019 at 7:47 pm #

    Hi Jason. nice tutor!
    But i wonder that, if i want to identify who is stranger. should i make a folder for ‘stranger’ contains alot of stranger faces???(exclude your 5 Celebrity Faces Dataset ??)

    • Jason Brownlee July 11, 2019 at 9:47 am #

      Good question.

      No, if a face does not match any known faces, it would be “unknown”. E.g. has a low probability for all known faces.

      • Thinh July 11, 2019 at 12:25 pm #

        Thank you very much. Keep writing tutorials to help thousands upon thousands of people like me around the world learning ML, DL. <3

  9. Thinh July 10, 2019 at 7:48 pm #

    yeah! I figured out that MTCNN only require Python3.
    Follow this bellow: https://pypi.org/project/mtcnn/

  10. Nghia July 16, 2019 at 6:57 pm #

    Thanks for your tutorial.
    But when I follow you, I have a warning:
    embeddings.npz is not UTF-8 encoded
    UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
    Do you know how to fix it?

    • Jason Brownlee July 17, 2019 at 8:22 am #

      You can safely ignore these warnings.

      • Nghia July 17, 2019 at 6:20 pm #

        Thank you so much.
        But when I have a new image to recognize, do I need to put it in the validation folder and rerun the code?
        And how can we use this to recognize faces in a video?

        • Jason Brownlee July 18, 2019 at 8:23 am #

          You can use the model directly, e.g. in memory.

          Perhaps you can process each frame of video as an image with the model?

  11. Karim July 21, 2019 at 8:06 pm #

    Hello Jason,
    Thanks for your wonderful tutorial, I’d like to know what is the best solution to apply recognition part if I have a very small dataset -only one face per each identity- in this case, I think SVM wouldn’t help.

    • Jason Brownlee July 22, 2019 at 8:24 am #

      I think a model that uses a face embedding as an input would be a great starting point.

  12. Esha July 27, 2019 at 10:16 pm #

    Hello Jason,
    Thank you for this amazing tutorial, I used python 3 to run this code. I would like to know why am i getting this error (No module named ‘mtcnn’) and how can I correct it?

    • Jason Brownlee July 28, 2019 at 6:46 am #

      The error suggests you must install the mtcnn library:

  13. Sabbir July 28, 2019 at 4:39 pm #

    I want to use transfer learning for masked face recognition. But I didn't find any good masked face recognition dataset. I need a masked face dataset with proper labeling of each candidate. So is there any good masked face dataset available? Where can I find this dataset?

    • Jason Brownlee July 29, 2019 at 6:10 am #

      Perhaps you can take an existing face dataset and mask the faces using opencv or similar?

      • Sabbir July 29, 2019 at 9:04 pm #

        Thanks for response. Can you refer any work or blog like your for doing mask face using opencv or similar?

        • Jason Brownlee July 30, 2019 at 6:11 am #

          Sorry, I do not have a tutorial on this topic, perhaps in the future.

  14. Al August 10, 2019 at 3:25 pm #

    Hello Jason, great tutorial.
    Im beginner in python.

    I try to understand your code, and little bit confusing when you choice random example from dataset when doing classification

    in line 28. selection = choice([i for i in range(testX.shape[0])]),
    its choose random vector value in testX.shape[0] from embeddings.npz right?

    so how if we want using spesific image from val folder?, Can you refer any work or blog to doing this

    Thanks.

  15. Al August 12, 2019 at 3:41 pm #

    Thank you so much for the response,

    well I tried and it worked.

    But I have another question, because sometime when i run the code, all worked perfectly and when I run the code again, sometime i have this error warning in load_model part although the face recognition still work

    UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
    warnings.warn(‘No training configuration found in save file: ‘

    why did this happen?

    Thanks.

    • Jason Brownlee August 13, 2019 at 6:05 am #

      Well done!

      You can safely ignore that warning message.

    • Akash May 8, 2020 at 6:12 pm #

      Hey @Al – how did you do the “image loading” part instead of randomly choosing an array from validation? could you please share the code? Thanks!

  16. Jack August 15, 2019 at 7:55 pm #

    Hi Jason, while I was executing the code “load_model(‘facenet_keras.h5’)”, the exception “tuple index out of range” is thrown, can you tell me why? thanks in advance.

  17. Steve August 21, 2019 at 6:11 pm #

    Hi Jason, again wonderful article and tutorial you provided to us. I wonder how I can customize dataset for my needs such as my friends dataset and perform training on it?

  18. Hamed August 27, 2019 at 11:53 am #

    Thanks Jason, really helpful as always but I got a weird “invalid argument” error. But I fixed it by changing ‘f’ to ‘F’ in facenet_keras.h5 because I notice it couldn’t recognize character ‘f’. Maybe because it’s trained on Ubuntu but I run your code on Windows 10. I don’t know!

    • Jason Brownlee August 27, 2019 at 2:16 pm #

      Nice work!

      • Hamed August 29, 2019 at 1:39 am #

        Thank you! Dear Jason, could you please tell me how I can get access to other properties of model. I mean I don’t need model.predict. I need other properties. Is there a way to list all of them such as different convs or avgpool. I tried __dict__ and dir() but they don’t give what I want. For example, how did you know model has a property called “.predict”? Where can I find all of them? Thank you!

        • Jason Brownlee August 29, 2019 at 6:15 am #

          You can access all the weights via get_weights()

  19. Akash August 28, 2019 at 6:02 pm #

    Jason Can you Please post a tutorial on how to convert David sandberg tensorflow model in keras using Hiroki Tanai script to convert it into keras

  20. Wajeeha Jamil August 28, 2019 at 9:37 pm #

    How can I convert this script to tensorflow lite format in order to be used in an android application?? Pleaseeee helpp !!

    • Jason Brownlee August 29, 2019 at 6:06 am #

      Sorry, I don’t have experience with that transform.

  21. Jahir August 29, 2019 at 1:48 am #

    This will work for many hundreds of people?

  22. Saurabh September 4, 2019 at 7:46 pm #

    Hello Jason,

    Thanks for sharing the interesting article!

    I have read your two articles on Face Verification: 1) this one and 2) https://machinelearningmastery.com/how-to-perform-face-recognition-with-vggface2-convolutional-neural-network-in-keras/

    Which one would you suggest? If I have to develop Face Verification system then there are few approaches (listing two approaches from your article):

    Approach 1: Detect face using MTCNN, train VGGFACE2 on the collected dataset which helps to predict the probability of a given face belonging to a particular class

    Approach 2: Detect face using MTCNN, get face embedding vector using facenet keras model and then apply SVM or Neural Network to predict classes

    Which approach would you recommend? Can you please explain?

    Thanks for sharing views.

    • Jason Brownlee September 5, 2019 at 6:53 am #

      Perhaps prototype a few approaches and see what works well for your specific project and requirements?

      • Saurabh September 5, 2019 at 5:49 pm #

        It means, I can try both approaches and have a look at efficiency, and select an approach with the best accuracy.

        Thank you!

      • Saurabh September 5, 2019 at 5:55 pm #

        Hi,

        I am looking for Speech recognition tutorial on Deep Learning using Keras.

        I have gone through your this URL: https://machinelearningmastery.com/category/deep-learning/ but I couldn’t find any tutorial.

        Could you please point to the tutorial link (if you have)?

        Thank you!

        • Jason Brownlee September 6, 2019 at 4:52 am #

          Sorry, I don’t have tutorials on that topic, I hope to cover it in the future.

  23. Alexander September 8, 2019 at 9:11 pm #

    Thanks for the tutorial.
    Unit length normalization isn’t for SVM. For SVM you typically use range scaling – MinMaxScaler, or standardization – StandardScaler. The goal is to make different features uniform. Actually, it’s a surprise that unit length normalization produced 100% accuracy in your case. That’s probably due to small data. It does not work for SVM in general and didn’t work for me.

    • Jason Brownlee September 9, 2019 at 5:15 am #

      Thanks for your note.

      I followed best practices when using face embeddings from the literature.

  24. Ahmad September 9, 2019 at 8:32 pm #

    Hi Jason,

    Great article. You have explained all the necessary steps to implement a face recognition system. I am working on a similar problem but in a bigger scale. I am in a belief that a classification based face identification is not a scalable solution. Please give me your opinion.

    If I want to recognise a thousand faces in real time manner then, what type of changes do I need to make to your implementation.

    I believe it would be really helpful if you create an article about large scale face recognition.

    • Jason Brownlee September 10, 2019 at 5:45 am #

      Good question, perhaps an efficient data structure like a kdtree for the pre-computed embeddings?

  25. Wajeeha Jamil September 9, 2019 at 11:22 pm #

    Can we extract eyes part out of the extracted face using mtcnn detector?? Any help..

    • Jason Brownlee September 10, 2019 at 5:49 am #

      I don’t see why not.

      It will find them with a point, you can draw a circle around that point and roughly extract the eyes.

      You might have to scale the faces/images to the same pixel size prior to modeling and extraction.

      Let me know how you go.

  26. azouz September 10, 2019 at 1:14 am #

    Good evening sir, can you tell me whether this application can work with a tkinter interface that displays the first name, last name, and the recognized photo?

  27. Abhijit Kumar September 16, 2019 at 5:41 pm #

    Hi Jason,
    1 . Here we are using 5 faces, what if we have thousands of faces, how to get the identity or index of those faces.

    2. If we have fingerprints or voice which pretrained model would be most suitable.

    • Jason Brownlee September 17, 2019 at 6:24 am #

      I don’t see why not.

      You may need a different model for fingerprints/voice.

      • Abhijit Kumar September 18, 2019 at 3:05 pm #

        If I have thousands of faces, SVM takes a lot of time. What do I do to get a quick result?

        • Jason Brownlee September 19, 2019 at 5:49 am #

          Perhaps try a simpler linear model?
          Perhaps try running on a large EC2 instance?
          Perhaps try running the code in parallel?

  28. arundev September 24, 2019 at 3:07 am #

    Once i have trained the model on 5 class (each class having 50 images). Now i use the model to detect images that it has not seen, it correctly guesses that the person in the image is class A for example with an accuracy ( prediction ) 65%. Is it possible to now add such image back to training and expect to get better results ?

  29. Anna September 25, 2019 at 4:59 pm #

    Awesome post, thanks for sharing.

  30. Abhinav September 25, 2019 at 7:21 pm #

    Hi Jason

    Thanks for this tutorial. Its really helpful. I wanted to know why you used train and val dataset. I mean are these two used for training purpose. What is the use of val here.?

    In the face classification, I am not able to understand where are you selecting the random photo to test against your dataset. How can I add my jpg photo to test again the dataset. Can you explain please. Thanks

    • Jason Brownlee September 26, 2019 at 6:33 am #

      Here, the validation set is a hold out or test set.

      We fit on the train set, then confirm the model has skill by evaluating it on the test set (val).

      You could add a new directory to train and val sets, to train and evaluate the model for new people.

      • Abhinav September 26, 2019 at 6:23 pm #

        Got it. Thanks
        What I have seen is that in train dataset I put my pictures more than 30 images and in val dataset I put 1 image of mine for testing. So it was recognizing me fine. But when put some other person pic in val dataset, it was still recognizing it as me

        Any idea how can this be solved

        • arun September 27, 2019 at 12:08 am #

          Yes, even i was wondering this.

          I have trained on 30 classes with
          45 images in Train folder and
          15 images in Test folder (val)

          after this upon testing with a new image which belongs to a class
          im getting good results:
          Image A – class A (99.996 %) which is correct
          Image X – class A (99.996 %) it belongs to an unknown class to the model but still it says that it belongs to class A with extremely high confidence.
          Any thoughts on why this occurs ??

          • Jason Brownlee September 27, 2019 at 8:03 am #

            You must define the problem and model the way you intend to use it.

            If you want the model to classify unknown people as unknown, you must give examples during training.

        • Jason Brownlee September 27, 2019 at 7:48 am #

          You might need to train the model on “you” vs “not-you”, or people vs unknown.

          • arun September 27, 2019 at 4:11 pm #

            Thanks for your reply.
            Could you please explain or guide to towards the direction of
            “””You might need to train the model on “you” vs “not-you”, or people vs unknown.”””

            So when we train the model, do I put an unknown folder?
            like train folder :
            class A (30 images)
            class B (30 images)
            unknown ???.

            Sorry if this doesn't make sense, it's a bit hard to understand what you mean by train the model on “you” vs “not-you”.

            Help would be appreciated.

            Thanks

          • Jason Brownlee September 28, 2019 at 6:11 am #

            Yes, if your goal is to fit a model that can detect you, and when you show it photos of other people, it knows it’s not you, then you need a dataset of lots of photos of you and photos of other people (unknown group) so the model can learn the difference.

          • Seble Kidane December 20, 2020 at 1:45 am #

            Highly appreciated for your assistance.
            What if we have more than 2 classes.
            Training dataset has 5 classes.
            During testing if we feed unseen face of one of the above classes, it may predict the face as one of the class.
            But what if we feed faces of unknown class? I would expect the model to predict unknown.
            So how we can accomplish this scenario?

          • Jason Brownlee December 20, 2020 at 5:59 am #

            You might need to add an “unknown” class during training, or add some if-statements to interpret the predicted probabilities.

  31. Parikshit September 28, 2019 at 8:20 pm #

    Hi Jason,

    Thanks for the code. it is very helpful.

    For a few images i am getting a error as follows

    AttributeError: ‘JpegImageFile’ object has no attribute ‘getexif’ or
    AttributeError: module ‘PIL.Image’ has no attribute ‘Exif’

    This error occurs when i use the Image.open command to import the image.

    few examples for the images i am getting an error are as follows:

    httpssmediacacheakpinimgcomxfecfecaefaadfebejpg.jpg (training data for elton john)
    httpwwwjohnpauljonesarenacomeventimagesEltonCalendarVjpg.jpg (training data for elton john)

    I tried searching for this issue online but was not able to find any helpful solution. Do you have any idea how i may solve this issue?

    Thanks

  32. Debanik October 7, 2019 at 7:41 am #

    is it possible in the real-time after training?

    • Jason Brownlee October 7, 2019 at 8:34 am #

      I don’t see why not.

      • Debanik Roy October 7, 2019 at 5:12 pm #

        please write a programme about how it works in real-world after training?

        • Jason Brownlee October 8, 2019 at 7:54 am #

          Do you mean making a prediction after it is trained?

          If so, the tutorial above shows that.

          Otherwise, can you please elaborate what you mean exactly?

  33. Gabriel October 7, 2019 at 11:51 pm #

    Hi Jason,
    Your content is so much helpful.

    What about a system where new faces can be registered? Would I have to retrain the model, containing the new classes (new faces)?

    Thanks.

    • Jason Brownlee October 8, 2019 at 8:05 am #

      No, just create the face embeddings and fit the classification model.

      • GAK August 29, 2021 at 12:20 am #

        Dear Jason,
        How can we create the face embeddings and fit the classification model without re-training the whole training dataset? I would be grateful if you can share me the code.

        • Adrian Tam August 29, 2021 at 12:25 pm #

          Why is the problem of retraining the whole training dataset? Usually the problem is training the model from scratch, which takes a long time to converge. But if you are based on a network trained for one particular purpose and retrain it for a different use, it would not take too long.

          • GAK August 30, 2021 at 7:13 pm #

            @Adrian
            When the dataset is big (e.g. in millions of classes) and it regularly increases, how is it possible to retrain the whole dataset every time? For example my initial dataset contains 2000 classes/subjects and every day it increases by 300 subjects. How can I proceed without retraining the existing model?
            Thank you

          • Adrian Tam September 1, 2021 at 8:09 am #

            Usually the first half of the network need not be retrained if your output classes are changed. But if you need to change the output layer from size 2000 to 2300, you need to retrain it anyway. You might think of some other ways make it smarter, e.g., always set one of your output as “not matched” and try to train the network to tell if the face is not any of the known one. Then you can always create a new network for the increased subjects.

  34. Abhishek Gupte October 9, 2019 at 7:24 am #

    Hey Jason,
    First of all thank you so much for putting out the effort and organizing this tutorial! You’re truly awesome! 🙂
    So I extracted facial embeddings of 3 people(6 high-quality high-resolution 5MP+ images per person as dataset )and trained them using SVM and used my built-in DELL WEBCAM(need I mention it generates a mirror image , ie my left hand appears right on the screen; also it’s a 0.3 MP 640×480 resolution feed) to identify faces.
    So my problem is that the probabilities are always different for the same trained face by sometimes a difference as great as 20% under the same lighting conditions! It’s mostly around 71% but sometimes dwindles to 51% for a trained face. For a stranger face it varies between 40% and 68% hence because of this variation, I can’t set a single probability value as a threshold and it’s really cumbersome.
    Can these differences be because of the difference in webcam quality and the dataset quality, that the algorithm has a tough time identifying the faces and varies the probability all the time, given the embeddings generated by the dataset are of much higher quality than those of the feed and also that the feed generates a mirror image of the faces in the dataset?

    Hope this isn’t too much trouble 🙂

    • Abhishek Gupte October 9, 2019 at 7:26 am #

      forgot to mention, the variations in probability happen whenever I run the program on different occasions

      • Jason Brownlee October 9, 2019 at 8:21 am #

        Could also be light level, etc.

        Try data prep that counters these issues?

    • Jason Brownlee October 9, 2019 at 8:20 am #

      Yes, it is likely the image quality.

      Perhaps downsample the quality of all images to the same baseline level before fitting/using models?

      • Abhishek Gupte October 9, 2019 at 8:30 am #

        I’ll do just that. Any idea how to downsample? A friend tried with same dataset but with Logitech C310 HD webcam and got a consistent probability score .It’s unlikely it’s the light level in my case as it shows variations in probability at the exact same light conditions.

          • Abhishek Gupte October 10, 2019 at 4:20 am #

            Thank you for your prompt replies!
            Also, can mirroring the feed be the cause as I’ve mentioned my webcam does that?

          • Jason Brownlee October 10, 2019 at 7:04 am #

            Probably not related.

          • Abhishek Gupte October 10, 2019 at 7:44 am #

            Okay. One quick thing.
            Does both the dataset and webCam feed have to be of the same quality? Cuz I trained my face, Emma Watson and Daniel Radcliffe’s faces (their images size around 5 kb) and my image quality around 70 kb and there’s still some variation in probability

          • Jason Brownlee October 10, 2019 at 2:16 pm #

            Generally yes, but you can achieve the desired result by using data augmentation either during training, during testing, or both to get the model used to data of varying quality – or to downsample all images to the same quality prior to training/using the model.

    • DAG January 31, 2020 at 2:30 am #

      In addition to previous suggestions, you can also limit what is analyzed. For example only take face detections of very high confidence. You can use the face detection data to reject highly oblique angles (like if the nose is further left than the left eye, as one of many examples).

      Not shown in this tutorial – but is very easy, is to increase the quality of the input by growing the face detection box vertically or horizontally to make it square… do this before resizing to low res. This prevents stretching of faces.

      You may also have a shortage of data for your known faces. If possible try to grow that dataset. Furthermore if your dataset is too sparse (you have only a few known faces) you may have trouble because of the SVM maths. It may be beneficial to litter your known faces with a moderately sized dataset from the web (they would need to be caught and handled in code – you could call them JaneDoe1, JaneDoe2, etc). By doing this the SVM should have more cases of ambiguity whereas with only a couple known faces it may have displayed inappropriate confidence.

  35. Debanik Roy October 9, 2019 at 8:02 pm #

    sir, My problem is my model is not able to distinguish between a known and unknown person in the real world.
    Do you have any idea about how to identify an unknown person in the real world?

    • Jason Brownlee October 10, 2019 at 6:56 am #

      You must train the model on known and unknown people for it to learn this distinction.

  36. Abhishek Gupte October 10, 2019 at 10:42 pm #

    I guess it’s not really “data augmentation” when 5 out of the 6 images for Daniel radcliffe are 6KB, the last 67…and my face image quality are on average 120 KB, whereas for Emma Watson 2 out of the 3 images are 7 kb and the last 70. (The images generated by the webcam feed, the “test set” are 70 kb.). I guess both the dataset and feed should be same baseline image quality right?

  37. Akhil Kumar October 10, 2019 at 11:01 pm #

    Thanks for the great tutorial,

    By creating a data set of 500 people of 50 images each and train the model, can I expect good accuracy regarding detection?
    Can I try deploying the same model on a Raspberry Pi with a Pi Camera?
    Can you suggest any idea about adding a new person’s face to the model once it is deployed?

    • Jason Brownlee October 11, 2019 at 6:21 am #

      Perhaps try it and see?

      Yes, compute face embeddings for new people as needed and re-fit the model.

  38. RK October 11, 2019 at 5:38 am #

    Hi!
    My question is that you train the network every time you want to recognise a face set.

    How can we train once and run it multiple times ,say like a new face set every day.

    Is it possible to implement this in the example you have coded?

    • Jason Brownlee October 11, 2019 at 6:25 am #

      No, the network is trained once. We use face embeddings from the network as inputs to a classifier that has to be updated when new people are added.

  39. Hai October 11, 2019 at 12:40 pm #

    Hi Jason
    i run the code and add two of my own photos in both train and val dataset, SVM predict show correct class_index for my own photos in val dataset, but the SVM predict_proba show probability as below: class_index is 2(0.01181915, 0.01283217), it is the smallest value.
    [0.14074676 0.21515797 0.01181915 0.10075247 0.15657367 0.37494997]
    [0.1056123 0.20128499 0.01283217 0.1100254 0.23492927 0.33531586]

    I see documents saying that SVM predict_proba shows meaningless results on small datasets; is it caused by that? How can I detect one-face class probability?

    second question: can you show more code on how to train unknown people class?

  40. Saurabh October 21, 2019 at 6:45 am #

    Hello Jason,

    Thank you again for sharing the nice blog.

    I went through your tutorial and I got 100 train and test efficiency. Till now everything is clear.

    But the problem arises when I apply the developed model (through your tutorial) to live frames through a webcam (my train ~700 images/class and test images ~300 images/class are captured using webcam).

    The model does more misclassification when I apply the trained model to a frame from a webcam.

    I am not sure how to normalize the face embedding vector in this case?

    Could you please guide me?

    Thanking you,
    Saurabh

  41. Saurabh October 21, 2019 at 5:20 pm #

    Hello Jason,

    Thanks for the reply. It means I should normalize the input image rather than the embedding. If the input image is normalized then I don’t need to normalize the embedding.

    Please feel free to correct me!

    Thanking you,
    Saurabh

    • Jason Brownlee October 22, 2019 at 5:44 am #

      Perhaps try scaling pixels prior to using the embedding.

      • Saurabh October 23, 2019 at 5:51 pm #

        Thank you Jason, it’s working now. Thanks for the kind help! Looking forward to grow in the Deep Learning era under your guidance.

        • Jason Brownlee October 24, 2019 at 5:37 am #

          Happy to hear that.

        • Ram February 22, 2020 at 4:35 pm #

          @saurabh How did you do it for webcam can you please explain me the procedure or share the code…….

  42. jada October 22, 2019 at 12:08 am #

    Hello Jason, Thank you for your tutorials, is there a method i can use to implement on the raspberry pi 3 ?

    kind regards

    jada

    • Jason Brownlee October 22, 2019 at 5:51 am #

      Sorry, I don’t know about “raspberry pi 3”.

  43. HB October 22, 2019 at 12:35 am #

    hi, thanks for the tutorial! it was really helpful!!

    I have followed the tutorial and got successful result with my own data set of pictures.

    Let’s say I used person A, B, C to trained the model

    Now I’m trying to introduce a unsorted pictures of the above 3 people(A, B, C) in one folder and sort them
    based on the code from your tutorial.

    However, I can’t figure out how to introduce the new and unsorted pictures into the above code.

    please help?

    Thank you in advance!

    • Jason Brownlee October 22, 2019 at 5:53 am #

      Sounds straightforward, what is the specific problem you’re having?

      • HB October 22, 2019 at 9:21 am #

        I can’t figure out how to introduce the new unsorted pictures into the code. I tried making an npz file using the new pictures in one folder and loading them into the classifier(# load faces
        data = load(‘5-celebrity-faces-dataset.npz’), but the classification result was pretty bad so im assuming what i did is not correct.

        • Jason Brownlee October 22, 2019 at 1:46 pm #

          Perhaps start with the code in the tutorial and slowly adapt it for your specific dataset?

          • HB October 22, 2019 at 6:06 pm #

            I think my explanation wasn’t clear enough…

            In following the your tutorial my directory for the pictures looked like this:
            ├── train
            │ ├── A : pictures of person A
            │ ├── B : pictures of person B
            │ └── C : pictures of person C


            └── val
            ├── A : pictures of person A
            ├── B : pictures of person B
            └── C : pictures of person C
            and got a successful result, with let’s say
            “ABC_dataset.npz” & “ABC_embeddings.npz”

            So, I’m trying to one step further, and added a folder to the directory
            ├── train
            | ├── A : pictures of person A
            | ├── B : pictures of person B
            | └── C : pictures of person C
            |
            |
            ├── val
            | ├── A : pictures of person A
            | ├── B : pictures of person B
            | └── C : pictures of person C
            |
            └── test : pictures of persons A, B, C

            and the newly added test folder contains pictures of all A, B, C.

            In an attempt to introduce the data from the “test” folder,

            I extracted arrays of the faces from the pictures of the “test” folder saved the
            extracted arrays into an npz file, let’s say “ABC_test_dataset.npz”

            And loaded “ABC_test_dataset.npz” into the last part of the tutorial

# load faces
data = load('ABC_test_dataset.npz')
testX_faces = data['arr_2']

# load face embeddings
data = load('ABC_embeddings.npz')
trainX, trainy, testX, testy = data['arr_0'], data['arr_1'], data['arr_2'], data['arr_3']

            and so on.

When I tried this, the result I got was pretty bad, so I'm assuming what I did is
the wrong way of introducing a new dataset into the code.

            Sorry for the VERY LONG question.
            Thank you!

          • Avatar
            Jason Brownlee October 23, 2019 at 6:39 am #

            Thanks for the elaboration!

            I think you’re on the right track, well done!

            What if the cause of poor performance is the specific pictures in test? What if you swap around some of the pictures from test with those in train and see if that lifts skill?

            What if you confirm the pipeline and add pictures from train into test and confirm they are predicted correctly – they should be?

            Let me know how you go.

          • Avatar
            HB October 23, 2019 at 10:15 am #

            Thanks for the answer!
            I will play with the picture data sets a bit more and tell you how it goes.

            Last questions,
            1. Is it ok to use gray-scale pictures for the training?

            2. What should be the ratio between the number of pictures in the train and val
            folders?
            For example, train : 1000 pics & val : 100? or train : 1000 & val : 50 is fine?

            Thanks!

          • Avatar
            Jason Brownlee October 23, 2019 at 1:48 pm #

            Sure. Try it and see.

            Great question. I like 50/50 splits if there are enough data. I don’t want to be fooled by an optimistic result.

            Perhaps run tests to see how sensitive the model/system is to training data size?

        • Avatar
          Priyanka July 4, 2020 at 10:39 pm #

          Hi, I’m planning on doing something similar. Please post your progress, while I work on it as well
          It would be very helpful

          Thanks

  44. Avatar
    RK October 22, 2019 at 2:12 am #

    Hey Jason!
    I am trying to develop an attendance collector from video footage.
My problem arises during the classification part: it constantly interchanges the output label names.

    Say X and Y are two people then X is always identified as Y and vice versa.
    The code is error free and there is no error in the input labels.

How can I correct this? Will this be solved if I use something apart from SVM? If so, what?
Or should I do some fine-tuning as specified in one of your earlier answers?
    Please guide me.

    Awaiting your reply.

    • Avatar
      Jason Brownlee October 22, 2019 at 5:56 am #

      Perhaps some of the labels for these people were swapped in the training data?

      • Avatar
        RK October 22, 2019 at 6:44 pm #

No, that's not the case. It's correct.

        • Avatar
          Jason Brownlee October 23, 2019 at 6:40 am #

You may have to step through your system carefully in order to debug it. I don't think it's something obvious, sorry.

    • Avatar
      Saurabh October 22, 2019 at 8:01 pm #

      Hello RK,

      I am facing the same problem. I think there is a problem with the last model i.e. binary classification.

You should try comparing the unknown face embedding with the known face embeddings.

      I think this will help you!

Kindly share the output with me as I am facing a similar problem. I will also update you if I make progress.

      • Avatar
        Jason Brownlee October 23, 2019 at 6:43 am #

        It might be related to the label encoder. The mapping of names to labels must be consistent across code examples.

        You can force the consistency with arguments to the encoder, or simply save the encoder object via pickle.

        Does that help?
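As a minimal sketch (the filename is illustrative), assuming trainy holds the training labels as in the tutorial:

import pickle
from sklearn.preprocessing import LabelEncoder

# fit the encoder once, on the training labels
out_encoder = LabelEncoder()
out_encoder.fit(trainy)

# save the fitted encoder so every script reuses the same name-to-integer mapping
with open('out_encoder.pkl', 'wb') as f:
    pickle.dump(out_encoder, f)

# in a later script, load it instead of re-fitting
with open('out_encoder.pkl', 'rb') as f:
    out_encoder = pickle.load(f)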

      • Avatar
        Saurabh October 23, 2019 at 5:59 pm #

        Hello RK,

        Finally, I got the solution.

– First, extract only the face from an image using MTCNN. Please make sure to scale the pixels.
– Then find the face embedding using the FaceNet model.
– Train a binary/multiclass classification model using the face embedding vectors obtained in the above step.
– So now you have face embedding vectors for the train images; compare them with the unknown embedding (a frame from the webcam) using cosine or another metric (a sketch follows below).
– If score < threshold, then the face is recognized, and to get a label, predict the class using the binary/multiclass classification model trained in step 3.

        @Jason, please feel free to correct me!

        Thanking you,
        Saurabh
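A minimal sketch of the comparison step described above, assuming trainX holds the known embeddings, unknown holds the embedding of the new face, and model/out_encoder are the fitted SVC and LabelEncoder from the tutorial (the 0.5 threshold is illustrative and should be tuned):

from scipy.spatial.distance import cosine

# smallest cosine distance between the unknown embedding and any known embedding
score = min(cosine(unknown, known) for known in trainX)

threshold = 0.5  # illustrative value; tune on a validation set
if score < threshold:
    # the face is recognized; get the label from the trained classifier
    yhat = model.predict(unknown.reshape(1, -1))
    name = out_encoder.inverse_transform(yhat)[0]
else:
    name = 'unknown'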

  45. Avatar
    Saurabh October 22, 2019 at 8:08 pm #

    Jason has already written the blog. You can find it here: https://machinelearningmastery.com/how-to-perform-face-recognition-with-vggface2-convolutional-neural-network-in-keras/

  46. Avatar
    RK October 24, 2019 at 1:13 am #

Thanks a lot, Saurabh! I will try implementing what you have proposed, and I will also go through the blog. Thanks a lot for helping.

    Thank you once again Jason

    Thanking you,
    RK

  47. Avatar
    RK October 24, 2019 at 6:01 pm #

    Hey Jason!
I wanted to try doing image super-resolution using GANs to improve the result of my recognition process, but I'm not finding any suitable website or blog to learn from.
Could you please guide me?
I prefer using Python for the same.
    Thank You!

    • Avatar
      Jason Brownlee October 25, 2019 at 6:38 am #

      Thanks for the suggestion, I hope to cover the topic in the future.

  48. Avatar
    Virinchi October 25, 2019 at 7:11 am #

Can this model differentiate between a live video of a human and a picture of a human shown in front of the camera?

    • Avatar
      Jason Brownlee October 25, 2019 at 1:45 pm #

      I don’t know, perhaps try it out?

      If that distinction is important to your application, I believe you could fit a model to detect “non live” examples.

    • Avatar
      Saurabh October 25, 2019 at 6:40 pm #

      Hello Virinchi,

This model isn't able to differentiate between a live video of a human and a picture of a human, because it only looks for the face; it doesn't check for liveness.

This is another research area in my view. Perhaps as a quick try, you can check whether the eyes are blinking or the person is speaking to detect liveness. There are other techniques too.

  49. Avatar
    M Awais October 29, 2019 at 2:53 am #

Will this work for matching a sketch with a real picture of that person?

    • Avatar
      Jason Brownlee October 29, 2019 at 5:30 am #

      Perhaps try it and see?

      • Avatar
        M Awais October 29, 2019 at 5:43 am #

Can you recommend any other method which would be good for matching a sketch with a picture of that person?

        • Avatar
          Jason Brownlee October 29, 2019 at 1:43 pm #

Not off the cuff. Perhaps start with some pre-trained models in order to create embeddings and see what happens.

          • Avatar
            M Awais October 30, 2019 at 2:45 am #

OK, thanks a lot, sir.

  50. Avatar
    Navi November 5, 2019 at 2:39 am #

Hi Jason, I am only looking for an algorithm to tell if an image has a face or not; the face can be far away, at any angle. Presently neither OpenCV nor any model works well. Do you know any transfer learning method to detect only whether an image has a face or not, with lots of other things in the background? Some images have only a body but no face; in those cases it should say no face.

    • Avatar
      Jason Brownlee November 5, 2019 at 6:58 am #

      Perhaps try detecting and extracting the face first, then pass to a pre-trained model to get a face embedding. Perhaps try a suite of models to see which is the best for your dataset – also fit different model types on the embedding to perform the recognition.
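As a rough sketch of the first step, MTCNN itself can act as a face/no-face test (the 0.95 confidence cut-off is an assumption to tune, and 'photo.jpg' is illustrative):

from numpy import asarray
from PIL import Image
from mtcnn.mtcnn import MTCNN

detector = MTCNN()
pixels = asarray(Image.open('photo.jpg').convert('RGB'))
results = detector.detect_faces(pixels)

# keep only reasonably confident detections
faces = [r for r in results if r['confidence'] > 0.95]
print('face found' if len(faces) > 0 else 'no face')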

  51. Avatar
    Parul Mishra November 9, 2019 at 12:19 am #

I am trying to classify 800 celebrity faces.
1 – using MTCNN to extract faces
2 – using FaceNet to prepare the embeddings
3 – using KNN to classify the faces.
The problem is: though we get negligible missed detections, we get lots of wrong identifications.
Is there any solution to reduce the wrong identifications?
Other pipeline combinations tried:
1 – MTCNN + FaceNet + SVM
2 – One-shot learning
3 – Also tried regularisation and data augmentation.
(used 100 images per person)

    • Avatar
      Jason Brownlee November 9, 2019 at 6:14 am #

      Sounds like a great project!

      Perhaps test different models?
      Perhaps test different data preparation?
      Perhaps review the errors and see if there is commonality?

    • Avatar
      Vinod January 30, 2020 at 8:45 pm #

      Hi Parul Mishra,

Can you share the code for the second mentioned approach? It would be very helpful.

(Implementation of one-shot learning.)

      Thanks

  52. Avatar
    Mustafa November 23, 2019 at 8:15 am #

Thanks for this amazing work.
Can I use this trained model in an Android application?

  53. Avatar
    Abhinav Reddy K December 3, 2019 at 3:45 pm #

    Hey!
In the Keras FaceNet Pre-Trained Model (88 megabytes) you have mentioned, how should it be downloaded? It has 2 files, models and weights, and each has a '.h5' file. Could you please tell me which one has to be downloaded?
I have tried both, but the code is not getting executed; it stops after load_model().

    • Avatar
      Jason Brownlee December 4, 2019 at 5:30 am #

      Download the model, not the standalone weights.

  54. Avatar
    Mustafa December 5, 2019 at 1:46 am #

    hey!
How can I detect all faces in the pic? This one takes only the first face.

    • Avatar
      Jason Brownlee December 5, 2019 at 6:43 am #

      You can adapt the example to operate on all faces. I don’t have the capacity to make this change for you, sorry.

  55. Avatar
    Ali December 7, 2019 at 11:20 pm #

    thanks for the amazing work sir

Is it normal to get an F1 score of 1? I expanded the dataset to around 250 pics of 15 persons
and I'm getting this:

    accuracy: train=100.000, test=100.000
    precision recall f1-score support

    0 1.00 1.00 1.00 6
    1 1.00 1.00 1.00 6
    2 1.00 1.00 1.00 6
    3 1.00 1.00 1.00 6
    4 1.00 1.00 1.00 5
    5 1.00 1.00 1.00 5
    6 1.00 1.00 1.00 5
    7 1.00 1.00 1.00 5
    8 1.00 1.00 1.00 5

    accuracy 1.00 49
    macro avg 1.00 1.00 1.00 49
    weighted avg 1.00 1.00 1.00 49

Is it okay to be 100% correct?

Also, what other algorithms can I use for comparison with SVC?

  56. Avatar
    Manjusha December 11, 2019 at 4:08 am #

    Hey!

Can I use this for real-time applications using a Raspberry Pi?

  57. Avatar
    dara December 15, 2019 at 8:06 am #

Thanks, sir.
What other algorithms can I try besides the SVM?

  58. Avatar
    paga December 15, 2019 at 9:48 pm #

Hello. Can anyone tell me why this line is returning a None value?

    results = detector.detect_faces(pixels)

    Thank you

  59. Avatar
    Jenny December 19, 2019 at 6:35 pm #

Hello, thanks for the code; it is very easy to read and go through.

    but I am struggling to understand what you mean by

    “No, the network is trained once. We use face embeddings from the network as inputs to a classifier that has to be updated when new people are added.”

So does that mean I do not need to put the new person's folder in the train folder, and can just put the folder in the val folder and run the code?

Or do I have to rerun the code from start to end with the new people in the val and train folders?

    • Avatar
      Jason Brownlee December 20, 2019 at 6:42 am #

      You’re welcome.

      Not sure I follow.

      There are 2 models. The one that gives you a face embedding and one that classifies embeddings as people. The first model does not need to be retrained. The second model is only trained once and is then used to make predictions for people that it knows about (e.g. during training).

      Does that help?

      • Avatar
        Jenny December 20, 2019 at 2:00 pm #

Sorry, I don't quite understand.

For every new person I add,
do I put the images in the train and val folders?

So you mean, as I have run the code throughout,
I only need to run the SVM part and the following code to identify the person?

Many thanks for your response.

        • Avatar
          Jason Brownlee December 21, 2019 at 7:06 am #

          If you have new people, you must train the SVM model on these people.

  60. Avatar
    Yasser December 26, 2019 at 4:18 pm #

Is it possible to create embeddings without prediction? Because I need to separate the train and test datasets from the picture that I want to predict.

  61. Avatar
    furkan December 26, 2019 at 9:19 pm #

Hi Jason. I'm new to Keras and I copied your code to see what would happen. But there is an error. Please help me. Thank you.

Traceback (most recent call last):

File "C:\Users\train\untitled0.py", line 60, in
model_scores = get_model_scores(faces)

File "C:\Users\train\untitled0.py", line 55, in get_model_scores
return model.predict(samples)

File "C:\Users\Anaconda3\lib\site-packages\keras\engine\training.py", line 1149, in predict
x, _, _ = self._standardize_user_data(x)

File "C:\Users\Anaconda3\lib\site-packages\keras\engine\training.py", line 751, in _standardize_user_data
exception_prefix='input')

File "C:\Users\Anaconda3\lib\site-packages\keras\engine\training_utils.py", line 128, in standardize_input_data
'with shape ' + str(data_shape))

ValueError: Error when checking input: expected input_7 to have 4 dimensions, but got array with shape (2, 1, 224, 224, 3)

  62. Avatar
    Dasha December 27, 2019 at 7:47 am #

Great explanation, thanks.
Since it's a biometric system, how can I find the ROC, FAR and FRR?

  63. Avatar
    Amar December 31, 2019 at 9:21 pm #

Hi…

I have a problem here: results = detector.detect_faces(pixels) gives an empty result.

  64. Avatar
    Amar January 2, 2020 at 9:32 pm #

    Hello
When I test one image of one person, it gives a different probability each time. Is that right? Or should it give me the same result?

  65. Avatar
    Ryan January 6, 2020 at 3:29 pm #

    Hello,

This was great, thanks. I want to know how to change the randomisation section to score the entire test dataset. It isn't clear, as it appears it was built to score only a single, randomly selected index from that array. Any guidance on this?

    • Avatar
      Jason Brownlee January 7, 2020 at 7:17 am #

      After the SVM model is fit, you can enumerate your dataset and perform the prediction.
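For example, a minimal sketch that scores every test sample instead of one random index, assuming testX holds the normalized embeddings, testy the encoded labels, and model/out_encoder the fitted SVC and LabelEncoder from the tutorial:

from sklearn.metrics import accuracy_score

# predict every face embedding in the test set at once
yhat = model.predict(testX)
print('test accuracy: %.3f' % accuracy_score(testy, yhat))

# or enumerate to inspect each prediction by name
for i in range(len(testX)):
    pred = model.predict(testX[i].reshape(1, -1))
    print(out_encoder.inverse_transform(pred)[0], 'expected', out_encoder.inverse_transform([testy[i]])[0])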

  66. Avatar
    Putut January 7, 2020 at 8:45 pm #

Hi Jason, it's a good tutorial, but when I try to run it with TensorFlow 2.0, MTCNN is not compatible with TensorFlow v2.0. Do you have any idea about this issue?
Thank you.

    • Avatar
      Jason Brownlee January 8, 2020 at 8:23 am #

      Yes, the tutorial works with Python 3.6, TensorFlow 2.0, and MTCNN v0.1.0.

      Perhaps confirm your versions?

  67. Avatar
    Azzurri January 13, 2020 at 9:56 am #

Hello sir, that's a great tutorial.
I'll use some of the code, expand the dataset, try different algorithms, and use Kaggle. Is that okay?

    • Avatar
      Jason Brownlee January 13, 2020 at 1:44 pm #

      Sure, as long as you cite the source and link back to this blog post.

  68. Avatar
    Amar January 13, 2020 at 6:29 pm #

Is the algorithm good for this scenario: if I take one image, make some transformations of it, and put all the images in the training dataset?

Thank you…

    • Avatar
      Jason Brownlee January 14, 2020 at 7:20 am #

      Not sure I follow. What are you trying to achieve exactly?

      • Avatar
        Amar January 17, 2020 at 1:43 am #

        Sorry for the delay.
I want to make a faceprint system. So, when people come to register, I take one picture of each person and make some transformations of the picture. Will it correctly predict the person?

        • Avatar
          Jason Brownlee January 17, 2020 at 6:03 am #

          Perhaps test it on idealized versions of your task.

  69. Avatar
    sasi January 20, 2020 at 6:33 am #

    Thanks for providing the complete code.

  70. Avatar
    vinay January 20, 2020 at 6:55 pm #

How about face matching, i.e. matching two faces? How would I approach it?

    • Avatar
      Jason Brownlee January 21, 2020 at 7:09 am #

      You will need a distance measure. I don’t have an example sorry.

  71. Avatar
    Fahad January 20, 2020 at 9:16 pm #

    How to give my own image path from local storage to predict the face?

  72. Avatar
    Sparsh Garg January 21, 2020 at 3:32 am #

Hi Jason, I used your code on my own set. In all there were about 20 people, each having approximately 30 images.
For testing, I decided to see how the code performs in real time: if a random person walks in front of a web cam, will it be able to distinguish between UNKNOWN and some other person in the dataset?
Although I got pretty good results (each person in my set is correctly identified with a probability of 99%), the unknown people were also being assigned a score of 99%.
From my understanding, the unknown category shouldn't receive such a high score. Do you know what is going wrong here?
Training and cross-val scores are coming out to be 1 and 0.99, so I am not sure. Do you think I should reduce the dataset size?

    • Avatar
      Jason Brownlee January 21, 2020 at 7:19 am #

      Perhaps experiment with various changes to the model and evaluate the impact.

    • Avatar
      Dominic Ng April 10, 2020 at 11:15 pm #

      Hey may i know how did you implement the code in real time?

  73. Avatar
    Sparsh Garg January 21, 2020 at 4:20 pm #

So should I add more layers to the model?

  74. Avatar
    siarblack January 25, 2020 at 12:55 am #

I have Python 3.6, TensorFlow 2.0, and MTCNN v0.1.0, but I get an error:
    AttributeError: module ‘tensorflow’ has no attribute ‘get_default_graph’

    • Avatar
      siarblack January 25, 2020 at 1:59 am #

I have found the reason – it is the version of Keras. v2.2 caused the error; v2.3.1 works fine.

    • Avatar
      Jason Brownlee January 25, 2020 at 8:37 am #

      Ensure you are using Keras 2.3

  75. Avatar
    Vinod January 28, 2020 at 11:52 pm #

    Hi Jason,

I have used the same code to detect the faces from my own dataset, but I am unable to store the dataset in npz format. My dataset has around 750 train pics and 400 val pics.

The Python code is getting killed after a certain time. Please let me know of a workaround.

    System Specification:
    RAM – 16 GB
    Swap – 2 GB
    OS – Ubuntu 18.04

    • Avatar
      Jason Brownlee January 29, 2020 at 6:38 am #

      Perhaps try less data?
      Perhaps try a smaller model?
      Perhaps try running on ec2?

      • Avatar
        Vinod January 30, 2020 at 12:40 am #

I found the solution for it. In the code we load the MTCNN() model every time; there is no need to do that. Load the model only once.

Declare detector = MTCNN() outside of the function (as in the sketch below). It will solve the memory starvation problem.

        Thanks
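A minimal sketch of that change, based on the tutorial's extract_face():

from numpy import asarray
from PIL import Image
from mtcnn.mtcnn import MTCNN

# create the detector once at module level, not inside the function
detector = MTCNN()

def extract_face(filename, required_size=(160, 160)):
    pixels = asarray(Image.open(filename).convert('RGB'))
    # reuse the single detector instead of constructing a new one per call
    results = detector.detect_faces(pixels)
    x1, y1, width, height = results[0]['box']
    x1, y1 = abs(x1), abs(y1)
    face = pixels[y1:y1 + height, x1:x1 + width]
    image = Image.fromarray(face).resize(required_size)
    return asarray(image)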

  76. Avatar
    DAG January 29, 2020 at 4:02 am #

    I had to change your very first code from this

    from keras.models import load_model

    to this

    from tensorflow.keras.models import load_model

    I’m not sure but since you’re using keras in TF it seems your code might need to be changed according to the above.

  77. Avatar
    DAG January 30, 2020 at 8:56 am #

    Thanks so much for this Jason Brownlee… I got everything working because of this walkthrough.

I also converted the h5 to tflite, which is just one command-line command. Additionally, you use the interpreter instead of calling "predict" directly on the model, which involves about 8 lines of additional code, found on TensorFlow's website. By doing this, the embedding is obtained 10 times faster (maybe more). I'm on CPU only, btw.

    Thank you!
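As a rough sketch of that workflow (file names are illustrative, and this uses the TF2 Python conversion API rather than the command line; it assumes the .h5 model loads in your environment):

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model

# one-off conversion of the Keras model to TFLite
model = load_model('facenet_keras.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
open('facenet.tflite', 'wb').write(converter.convert())

# at inference time, use the interpreter instead of model.predict()
interpreter = tf.lite.Interpreter(model_path='facenet.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

samples = np.zeros((1, 160, 160, 3), dtype=np.float32)  # a standardized face goes here
interpreter.set_tensor(inp['index'], samples)
interpreter.invoke()
embedding = interpreter.get_tensor(out['index'])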

  78. Avatar
    Emanuel February 3, 2020 at 1:16 pm #

Hi Jason, I've been following you for the past 3 years and you are very inspiring. Just wanted to say, I tried this code in Colab and eventually hit errors. Are there any alterations needed when using Google Colab? Thank you again.

  79. Avatar
    Ammar February 7, 2020 at 6:59 am #

Hi,
How can I improve the performance of the face prediction process? It takes 4 seconds to predict (1.8 seconds to extract the face).

  80. Avatar
    Abhishek Jain February 13, 2020 at 3:16 am #

    Hi Jason,
    I am getting this warning whenever I load facenet_keras.h5

UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
warnings.warn('No training configuration found in save file: '

    • Avatar
      Jason Brownlee February 13, 2020 at 5:43 am #

      You can safely ignore it.

      • Avatar
        Abhishek Jain February 16, 2020 at 1:55 am #

        Thanks, It worked

        • Avatar
          Jason Brownlee February 16, 2020 at 6:09 am #

          Well done.

          • Avatar
            Abhishek Jain February 20, 2020 at 2:44 am #

            Hi again,
Please guide me on how to handle unknown faces in the input.

The model works fine when it gets known faces,
but when an unknown face comes, the model classifies it into the classes on which it was trained.
Example: the model is trained to detect me, but when I showed it some other face it again said that's you.

Help me with this.

          • Avatar
            Jason Brownlee February 20, 2020 at 6:18 am #

            Perhaps create a new class of “unknown” and add many different faces to it.

  81. Avatar
    Pramesh Regmi February 21, 2020 at 3:34 am #

I have been getting improperly cropped images from the MTCNN detect_faces method. I am doing this for a short-term project and can't really get into the details inside MTCNN. I just want to know why my face recognition is not performing as published in this blog. I have used the same datasets and I have followed all instructions thoroughly.

  82. Avatar
    deva February 28, 2020 at 7:08 pm #

"IndexError: tuple index out of range" – I got this error with your facenet_keras.h5. How do I solve it?

  83. Avatar
    Kevin March 3, 2020 at 6:40 am #

    Hi Jason,

    That’s great tutorial!!

    I have a database with only one photo per person. Does the model have good results in this scenario as well?

I don't see how to make a classifier from just one sample per class. Is there a way, or should I calculate the distance between the vectors directly (without a classifier)?

    (Sorry about my english skills)

    Thanks,
    Kevin

  84. Avatar
    Tanuj March 6, 2020 at 10:22 pm #

    Hi Jason!

Could you please tell me how to incorporate a database into this, which will store all the images of the people on whom I want to train the network?

    • Avatar
      Jason Brownlee March 7, 2020 at 7:17 am #

      I don’t have an example of loading images from a database, sorry.

  85. Avatar
    Rachit Gupta March 7, 2020 at 5:51 pm #

When predicting the final images from the test set and using the pyplot.imshow(random_face_pixels) script,

I am getting an error saying 'Invalid shape (128,) for image data';

the shape of the random_face_pixels list is (128,).

I want it to show me the visual image of the prediction.

  86. Avatar
    Edwin M. March 15, 2020 at 6:53 pm #

I have implemented this FR system but I'm having a problem understanding why we are using the SVM classifier. I don't have specific questions, because I think my problem is in my formative understanding of FaceNet. Kindly tell me if the following statements are true:
1) The FaceNet model, once loaded and passed the detected face, "predicts" an embedding of the face.

2) We then compare the embedding with the stored embeddings of faces in our dataset using the SVM's predict function and determine who is the likely owner of the face.

3) Is there a way that we can implement FaceNet without having to use a classifier in the last stage?

    • Avatar
      Jason Brownlee March 16, 2020 at 5:53 am #

      The neural net model gives features from an image and the model, e.g. svm, classifies them as different people.

      Yes, but it would be fragile – requiring the entire facenet model to be re-trained for any change to your dataset. Or perhaps just the output/classifier part of the model as we would use transfer learning.

      • Avatar
        Edwin M. March 17, 2020 at 8:51 pm #

        Thanks a lot Sir. I appreciate the clarification.

  87. Avatar
    Pranay Narang March 24, 2020 at 11:04 pm #

    Hey Jason,
    The tutorial is very specific and easy to understand, a very big thank you for that.
    But I wanted to use an image in my local directory instead of a random selection.
According to the comments above, I understand that I had to read https://machinelearningmastery.com/how-to-load-and-manipulate-images-for-deep-learning-in-python-with-pil-pillow/ and was supposed to pass it to the "selections" variable.
But I got really confused, because selections has just a 1D value, and I guess I'm supposed to convert my image to a numpy array and then pass it on to selections, right?
So what should I do after converting it to a numpy array?

    • Avatar
      Jason Brownlee March 25, 2020 at 6:33 am #

      Thanks.

      The linked tutorial shows you how to load an image as a numpy array directly. Perhaps re-check it?

      • Avatar
        Pranay Narang March 26, 2020 at 2:33 am #

Hey, so should I straight up insert the numpy array into the "selection" variable?

        • Avatar
          Jason Brownlee March 26, 2020 at 7:59 am #

          Sure, try it.

          • Avatar
            Pranay Narang March 27, 2020 at 7:25 am #

Yeah, so I tried everything but I can't seem to be able to insert an image from local storage into "selections". Could you please help me out? LOL, I really need it for my project.

          • Avatar
            Jason Brownlee March 27, 2020 at 8:04 am #

            Sorry, I don’t have the capacity to write code for you.

            Perhaps you can hire an engineer on upwork.com?

  88. Avatar
    Rajul Dubey March 27, 2020 at 5:17 am #

    Hi Jason,

when I try to load the model it throws the error below:

code = marshal.loads(raw_code)
ValueError: bad marshal data (unknown type code)

    • Avatar
      Jason Brownlee March 27, 2020 at 6:21 am #

      Sorry, I have not seen this error before, perhaps try searching on stackoverflow?

    • Avatar
      Vish April 14, 2020 at 12:29 pm #

How did you solve this error?

    • Avatar
      Srikanth July 13, 2020 at 4:56 pm #

I am having the same issue and was wondering how Rajul solved it. I am using Python 3.8.3 and load_model from Keras (version 2.4.3), not tf.keras. Any help would be greatly appreciated.

      • Avatar
        Jason Brownlee July 14, 2020 at 6:14 am #

        What version of tensorflow are you use?

        • Avatar
          Srikanth July 24, 2020 at 5:03 pm #

Sorry for the late reply… using TensorFlow 2.2.

    • Avatar
      peng li July 14, 2020 at 11:30 pm #

Same error message. Looking for a solution. @Jason, could you list the versions I should use for this tutorial? Python, Keras, TensorFlow?

      • Avatar
        Jason Brownlee July 15, 2020 at 8:25 am #

        Works with most versions.

        Try TF 2.2, Keras 2.3 or 2.4, Python 3.6.

  89. Avatar
    Rajul Dubey March 29, 2020 at 5:03 am #

Thank you Jason!!! I was able to do that as well. One thing I have noticed: sometimes it confuses faces with someone else. Let's say I have a trained set for 'Jimmy Fallon'; sometimes in a video it detects a random person and says it's Jimmy Fallon, and that too with more than 90% probability. Is that normal?

  90. Avatar
    Ovais April 4, 2020 at 7:24 am #

Is it possible to recognize faces in real time?

    • Avatar
      Jason Brownlee April 4, 2020 at 8:59 am #

      Yes, perhaps if this model operated on a subset of video frames per second.
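A minimal sketch of that idea with OpenCV, assuming cv2 is installed and reusing the tutorial's detection/embedding/prediction steps where indicated (the stride of 5 frames is illustrative):

import cv2

cap = cv2.VideoCapture(0)  # default webcam
frame_id = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    if frame_id % 5 == 0:  # only process every 5th frame
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # run face detection, embedding, and SVM prediction on rgb here
    frame_id += 1
cap.release()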

  91. Avatar
    Pratik April 11, 2020 at 4:51 pm #

I tried implementing the model, but the accuracy obtained on the 5-celebrity dataset is nearly 75%. Similarly, I created my own custom dataset with around 15 images per training class and 5-8 images per test class; the test accuracy obtained was just 61%. I tried increasing the size of the training dataset to nearly 30 images per class, but the test accuracy did not increase; it remained the same. What should I do to improve the accuracy?

  92. Avatar
    Ammar April 12, 2020 at 10:23 pm #

    Hi…
    Thank you for this tutorial.

I have two questions:

1 – In the paper, it is said that FaceNet has about 140M parameters, but the model that you use has 22,808,144. What is the difference?

2 – How can I make a good embedding function that fits my model? Because there is a difference between the embedding functions for FaceNet and VGGFace2 and I do not understand it.

    • Avatar
      Jason Brownlee April 13, 2020 at 6:17 am #

      Don’t know, perhaps a difference in implementation.

      Perhaps start with an embedding from the pretrained models and only move to your own model if you want to try and lift performance.

      • Avatar
        Ammar April 13, 2020 at 6:47 am #

What I mean is the get_embedding function that you built for FaceNet and VGGFace2.

I have another question:

How can I get the value of the threshold for a predicted image?

        Thank you for help

  93. Avatar
    Yasser April 16, 2020 at 9:00 pm #

Hi…
When I want to calculate the accuracy of the FaceNet model, should the test data contain known and unknown labels? Because when I tried to use labels that are not in the dataset, it gave me an error.

    • Avatar
      Jason Brownlee April 17, 2020 at 6:20 am #

      Yes, a test dataset should be images and labels not seen by the model during training.

      • Avatar
        Yasser April 17, 2020 at 7:22 am #

I tried to use unknown faces and labels, but when I encode the labels, it gives me an error that the dataset labels do not contain the label that I want to test.
The error is in this line:
testy = out_encoder.transform(testy)

        • Avatar
          Jason Brownlee April 17, 2020 at 7:48 am #

          The labels must be known to the model during training.

          • Avatar
            Yasser April 17, 2020 at 11:37 pm #

So, how can I check for true and false negatives if the labels must be known?

          • Avatar
            Jason Brownlee April 18, 2020 at 6:00 am #

            I don’t follow.

            You only know whether a prediction is true when training the model and when evaluating the model. After that – when you are using your model in practice, you don’t know if individual predictions are correct or not. All you know is the expected performance of the model on average.

  94. Avatar
    Ammar April 26, 2020 at 10:27 pm #

    Hello doctor
    Thank you for this tutorial

    Is the MTCNN using face alignment by default?

    • Avatar
      Jason Brownlee April 27, 2020 at 5:35 am #

      What do you mean “by default”? Default in what?

      • Avatar
        Ammar April 27, 2020 at 5:54 am #

I mean, does MTCNN use face alignment when it detects faces? If not, are you using face alignment in your function, or should I add it?

        Thanks for your help

        • Avatar
          Jason Brownlee April 27, 2020 at 7:33 am #

MTCNN is used for face detection; it does support face alignment.

    • Avatar
      Jimw November 20, 2020 at 2:35 am #

I changed it with the code below, which reads the photo's EXIF Orientation tag. Hope it helps.
original code: image = Image.open(filename)
photo alignment by EXIF: image = ImageOps.exif_transpose(Image.open(filename))

  95. Avatar
    FAIZ April 28, 2020 at 1:34 pm #

Please, I badly need an algorithm or tutorial to compare a photo of a face with another and get the percentage of similarity, and then the possibility of comparing a single photo with thousands of photos of faces to get the percentage of similarity.

    • Avatar
      Jason Brownlee April 29, 2020 at 6:15 am #

      Perhaps you can use a distance measure between the embedding of the two faces.

      Perhaps also check the literature for similar projects to get ideas.
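For instance, a minimal sketch of a similarity score between two face embeddings (mapping cosine similarity to a percentage is a naive assumption, not part of the tutorial):

import numpy as np

def similarity_percent(emb_a, emb_b):
    # cosine similarity of the two embedding vectors, mapped to [0, 100]
    cos = np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return (cos + 1.0) / 2.0 * 100.0

# compare one probe embedding against a gallery of stored embeddings
# scores = [similarity_percent(probe, e) for e in gallery_embeddings]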

  96. Avatar
    Ramakrishnan .V April 28, 2020 at 4:09 pm #

    Hii Jason,
I have implemented this FaceNet model to train on a 5-person dataset and the accuracy is very good, but it takes a very long time for recognition. Can you give any advice to reduce the computation time for recognition, like running the code on a GPU or any other classification algorithm that takes less time?

    Thank you.

    • Avatar
      Jason Brownlee April 29, 2020 at 6:16 am #

      Yes, perhaps run on faster hardware or use smaller images or a smaller model.

  97. Avatar
    Asif May 8, 2020 at 6:57 am #

    Hi Jason,

Firstly, thanks for such amazing tutorials. Your website has been really helpful. My question is this: how can I input multiple face arrays to VGGFace2 or FaceNet? In both the examples you have posted, you have used the extract_face function, which outputs a single face.

    • Avatar
      Jason Brownlee May 8, 2020 at 8:00 am #

      You’re welcome.

      Each face is one sample, you can pass in multiple samples to a model as an array and get many predictions in return. Each face must be prepared first before being passed to the model as input.

  98. Avatar
    Wiss May 10, 2020 at 3:26 am #

Hello, your explanation helped me a lot, thank you for this tutorial. I'm a beginner and I found it useful and clear, but I have one question please, if you don't mind:
I've tried to make some modifications to your code at the end so it displays all the test data instead of just one random picture, but I couldn't. Could you please write out the instructions for testing the model on all of the test dataset instead of one example?

  99. Avatar
    Jitender May 12, 2020 at 10:01 pm #

    Hi,

This is really a good example and I managed to run it and predict from it. But I have one real concern here: it's good that we've predicted on the test data, but what if I want to predict from an external image, or my video cam image? It seems too cumbersome to process the raw/cam images to that level (embedding and so on). It would have been great if you'd run the examples on some external images rather than the same test data.

    • Avatar
      Jason Brownlee May 13, 2020 at 6:35 am #

      Thanks.

      We do use a separate train and test sets. Perhaps re-read the tutorial.

  100. Avatar
    Andrei May 15, 2020 at 2:26 am #

    Hi.

I used MTCNN to detect faces and it worked fine. But now, when I run the same code on the same photos, it doesn't detect faces anymore. It just returns an empty list as a result.

Maybe you know how to fix it or why it happened?

    Thanks.

    • Avatar
      Jason Brownlee May 15, 2020 at 6:05 am #

      Sorry to hear that, perhaps you modified the code. Perhaps try to copy-paste the code directly from the tutorial again.

  101. Avatar
    Meo May 17, 2020 at 3:33 am #

Hi, great work, thank you so much!
In my case I want to take more than one image in the last function of the code. How can I do that, please? Thank you!

    • Avatar
      Jason Brownlee May 17, 2020 at 6:40 am #

      You can change the code to load more than one image. I cannot change the code for you. What is the problem exactly?

  102. Avatar
    A May 27, 2020 at 10:43 pm #

Thanks for the article! This is one of the very few codes on the topic that is explained step by step. It's a great guide. I wanted to test the model with a single image or multiple new images which are not a part of the original dataset. How can I do that? My efforts have not yielded results. I would appreciate it if you could tell me the process or the functions that need to be called/modified for the same.

    • Avatar
      Jason Brownlee May 28, 2020 at 6:15 am #

      Thanks!

      See the example at the end of the tutorial for testing the model on a single image.

      • Avatar
        A June 2, 2020 at 5:26 am #

        Thanks! Got it.

Another thing: this seems to work for a limited dataset. I can't figure out why, but when a larger dataset is used, the shape of the numpy array changes when loading the dataset (function: load_dataset), and the subsequent functions fail since the changed structure of trainX, trainy, etc. is not compatible with them.

What do you suppose is the issue at play?

        • Avatar
          Jason Brownlee June 2, 2020 at 6:22 am #

          You will have more samples, that is all. Perhaps double check your image loading code and perhaps scale all images to the same input size?

  103. Avatar
    Aashray Mody June 2, 2020 at 3:16 am #

I am very new to this system. How do I find out the current directory in IDLE, and where should I keep the file 'facenet_keras.h5'? Please help me.

  104. Avatar
    Paolo June 6, 2020 at 7:22 pm #

Hi Jason, and sorry for my trivial question, but I have a problem when I load the model:

model = load_model('facenet_keras.h5')

This gives me the following error:

bad marshal data (unknown type code)

I suppose that the problem arises because facenet_keras.h5 was generated with the 1.x version of TensorFlow (while I am using 2.2). In fact, to correctly import the method I wrote:
from tensorflow.keras.models import load_model

but when I load the model I get the error mentioned above. Please, do you have any suggestions?

    • Avatar
      Jason Brownlee June 7, 2020 at 6:22 am #

      I believe the problem is because you are trying to load the model using tf.keras instead of the standalone Keras library, not the version of tensorflow.

      • Avatar
        Paolo June 9, 2020 at 1:54 am #

Thank you for your prompt answer. I have tested under Windows with TF 2.2 and Python 3.7 and it works properly; for the moment that is enough for me.

      • Avatar
        Vitomir February 27, 2021 at 7:35 am #

        I have the same problem as Paolo, but for me it doesn’t work with any load_model(). I am currently using Python 3.8, TF 2.4.1 and Keras 2.4.3.

        Any other suggestions?

    • Avatar
      Neeraj August 24, 2020 at 6:28 am #

Just to add: with all other versions being the same, for me it worked on Python 3.6 but didn't work on Python 3.8. I did not test on Python 3.7. So it is a Python version issue and not one of the libraries.

      • Avatar
        Jason Brownlee August 24, 2020 at 6:32 am #

        Thanks.

        I believe Python 3.8 is not supported by most machine learning libraries.

        • Avatar
          K Guravaiah June 22, 2021 at 4:36 pm #

What about the Python 3.9.5 version?

          • Avatar
            Jason Brownlee June 23, 2021 at 5:34 am #

            I use python 3.6 for compatibility.

            I don’t know about other versions, I believe newer versions are not supported by many libraries.

  105. Avatar
    Aashish June 17, 2020 at 3:15 pm #

Hello Sir, I have worked with FaceNet earlier but not using Keras. This model outputs a 128-dimensional embedding, but the newer FaceNet models output a 512-dimensional embedding. Is it possible to get higher dimensions using the above method?
Does this have the same accuracy as that FaceNet model?

    • Avatar
      Jason Brownlee June 18, 2020 at 6:19 am #

      Perhaps there are multiple versions of the model and you are referring to a more recent version?

      • Avatar
        Aashish June 18, 2020 at 2:51 pm #

I am searching for the recent model. This gives quite a different accuracy from the 512-dimensional one I am used to. Where can I find different versions of the model?

  106. Avatar
    Sherif June 23, 2020 at 12:57 am #

Hello Sir, I have a question regarding the part about detecting faces in a specified directory. It works just fine for exactly 9 images (it prints the face shape and shows the image as the code says), and then I come across an error, 'IndexError: list index out of range', at this line: x1, y1, width, height = results[0]['box']

I just want to understand why it does this.

    If you can help that would be so helpful.

  107. Avatar
    Nirvitaraka June 23, 2020 at 4:43 am #

    Great tutorial for beginners like me, thank you.

I generated a classifier for 2 labels, personA and personB.
When I run the code from this article as it is, I see 100% accuracy for both labels on test data.

Now I am trying to recognize all faces in an image.
I modified the extract_face function as below and generated embeddings for each face found in one image.


# extract all faces from the source image
def extract_faces(image, required_size=(160, 160)):
    faces = list()

    # convert to RGB, if needed
    image = image.convert('RGB')
    # convert to array
    pixels = asarray(image)
    # create the detector, using default weights
    detector = MTCNN()
    # detect faces in the image
    results = detector.detect_faces(pixels)

    for faceFound in results:
        # extract the bounding box
        x1, y1, width, height = faceFound['box']
        # bug fix
        x1, y1 = abs(x1), abs(y1)
        x2, y2 = x1 + width, y1 + height
        # extract the face
        face = pixels[y1:y2, x1:x2]
        # resize pixels to the model size
        img = Image.fromarray(face)
        img = image.resize(required_size)  # note: this resizes the full original image, not the cropped face in img
        face_array = asarray(img)

        faces.append(face_array)
    return faces

######################### RECOGNIZE ##############################
X = []
image = Image.open(test_img)
faceResults = extract_faces(image)
for result in faceResults:
    X.append(result)

# convert each face in the test set to an embedding
newTestX = list()
for face_pixels in X:
    embedding = get_embedding(model, face_pixels)
    newTestX.append(embedding)
newTestX = asarray(newTestX)

I executed this code on one image from the same test data which I used while generating the classifier.
However, the embeddings generated with this code and the ones generated from your code are totally different,
and hence the prediction result is also incorrect with the embeddings generated by my code.

Can you please help me spot what I am doing wrong here?

    • Avatar
      Jason Brownlee June 23, 2020 at 6:32 am #

      Perhaps confirm that image data is prepared in an identical manner during training and afterward for new data.

      • Avatar
        Sujata June 23, 2020 at 4:22 pm #

        Thank you for the response Jason, I got it working. You were correct, I was not converting the new image data in the same way as for training.

  108. Avatar
    henry June 23, 2020 at 6:32 pm #

Hi Jason, this tutorial is great. I have a question: when I have a new image (with a single face),
I get the embedded vector through the get_embedding() function. So, how can I predict this embedding vector against anyone in the *_embeddings.npz file?
face_embedded_data = np.load('file_embeddings.npz')
trainX, trainy = face_embedded_data['arr_0'], face_embedded_data['arr_1']
in_encoder = Normalizer(norm='l2')
trainX = in_encoder.transform(trainX)

out_encoder = LabelEncoder()
out_encoder.fit(trainy)

model_predict = SVC(kernel='linear', probability=True)
model_predict.fit(trainX, trainy)

embedding = get_embedding(model, face_array)
samples = np.expand_dims(embedding, axis=0)
y_hatclass = model_predict.predict(samples)
y_hatprob = model_predict.predict_proba(samples)
class_index = y_hatclass[0]
class_probability = y_hatprob[0, class_index] * 100

    • Avatar
      Jason Brownlee June 24, 2020 at 6:26 am #

      Thanks.

      You can create an embedding for an ad hoc photo then pass it through your model for a prediction.

  109. Avatar
    Praveen June 24, 2020 at 5:08 pm #

Hi Jason,
How can we decrease the loading time of the pre-trained FaceNet model from keras.load_model('facenet_keras.h5')? It takes approximately 6-7 seconds on my laptop. I am deploying the model on a Raspberry Pi, which takes even longer. Is there any solution for this?

    • Avatar
      Jason Brownlee June 25, 2020 at 6:13 am #

      I don’t know sorry, perhaps use a smaller model?

  110. Avatar
    sathish June 28, 2020 at 1:25 am #

    Hello Jason Brownlee,

I just started research and am unable to run some of David's projects. Can you suggest any GitHub repo where I can get all the code?

    Thanks

  111. Avatar
    Priyanka Jadli July 4, 2020 at 10:45 pm #

    Hi Jason. Thanks for the post here. It has been very helpful for my project.
I am now trying to train the network on my data (it has images of the faces of different people),
but there is an error I am running into.
In the create face embeddings section, I got the output showing – Loaded (20,128) (20,) (8,128) (8,).
It has also loaded the model.

But on the following line – embedding = get_embedding(model, face_pixels) –
I get the error below:
ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (1, 128)

Unsure how to proceed with this.

    Would appreciate the help

      • Avatar
        Priyanka July 6, 2020 at 10:22 pm #

I tried re-doing the code from the beginning and it's working now. I have 2 doubts:
1. I used my dataset with this code now, and it had 2 classes. The code worked perfectly except for the final code snippet.
The pyplot.imshow(random_face_pixels) call shows a TypeError: Invalid shape (128,) for image data.
And I can confirm that I have followed the code to the letter. Again, not sure why that is the case.
2. This code shows the class probability for a randomly selected image from the val folder.
Is there a tutorial on how to find the OVERALL accuracy of the entire model, instead of randomly selected images
(considering all the val images)? The papers I have read mostly don't report their accuracy based on 1 randomly
selected val image.
Hopefully I've explained point 2 well. Thanks

  112. Avatar
    Kai Tra July 7, 2020 at 1:12 am #

    Hi Jason,

After #load the dataset and #normalize input vectors, I got an error in the variable trainX. It said "ValueError: Found array with dim 4. Estimator expected <= 2." I don't know how to handle that, even though I tried to reshape it.

    Could you please help me?

  113. Avatar
    Nabil July 10, 2020 at 5:03 am #

    Hi Jason,
Thanks for your tutorial, it's great. I have a question:
how can I recognize, with this approach, a face that does not exist in the training set?
I have tried measuring the Euclidean distance between the face embeddings to distinguish faces, but the result I got is not satisfying. Your suggestion will be highly appreciated. Thank you.

    • Avatar
      Jason Brownlee July 10, 2020 at 6:08 am #

      Pass the image to the model and get a prediction as we do at the end of the tutorial.

      • Avatar
        Nabil July 10, 2020 at 8:26 am #

Suppose I pass an image that is not among the 5 celebrities. How can the algorithm identify that as an unrecognized face?

        • Avatar
          Jason Brownlee July 10, 2020 at 1:44 pm #

          It cannot.

You can add an "unknown" class during training.
You can interpret the probabilities and return an "unknown" result if none of the classes respond strongly enough (a sketch follows below).
You can re-train the model to support the new person.
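A minimal sketch of the second option, assuming model is the fitted SVC with probability=True and out_encoder the fitted LabelEncoder from the tutorial (the 0.7 cut-off is illustrative and should be tuned):

import numpy as np

yhat_prob = model.predict_proba(samples)  # samples: one embedding, shape (1, 128)
best = np.argmax(yhat_prob[0])
label = model.classes_[best]  # probability columns follow model.classes_
if yhat_prob[0, best] >= 0.7:
    name = out_encoder.inverse_transform([label])[0]
else:
    name = 'unknown'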

          • Avatar
            Nabil July 10, 2020 at 9:34 pm #

            Thanks, Jason!

          • Avatar
            Jason Brownlee July 11, 2020 at 6:11 am #

            You’re welcome.

  114. Avatar
    SunnyMo July 10, 2020 at 6:04 pm #

Hi Jason, I found one issue in your demonstration of the face-extraction part: with the bounding box from MTCNN, some images may have negative values of x or y. This is not a bug; it happens because the face is partially out of the image, like this one: 5-celebrity-faces-dataset/train\jerry_seinfeld\httpwwwwatchidcomsitesdefaultfilesuploadsightingBreitlingwatchJerrySeinfeldjpg.jpg.
If you fix it with "x1, y1 = abs(x1), abs(y1)", then the face image is not properly clipped.

  115. Avatar
    Dominique July 11, 2020 at 2:26 pm #

    Dear Jason,

    I have finished the reading and practicing of your book “Deep Learning for Computer Vision”. I would like to thank you for this book which I like very much. I have written a blog post to summarise in my own way your book: https://questioneurope.blogspot.com/2020/07/deep-learning-for-computer-vision-jason.html

    I will certainly go for a next book, hesitating between LSTM, NLP or Time series forecasting. Any advice?

    Thanks,
    Kind regards
    Dominique

    • Avatar
      Jason Brownlee July 12, 2020 at 5:40 am #

      Well done Dominique!

      Thank you for the review.

      I recommend selecting a topic that you are most excited about or that you can apply immediately. If pressed, I would suggest NLP.

  116. Avatar
    Asha Karthick July 17, 2020 at 1:27 am #

    Hi jason,
While converting the image to a numpy array, which filename should be given? Kindly reply as soon as possible.

    • Avatar
      Jason Brownlee July 17, 2020 at 6:23 am #

      A filename is only needed if you save it. You can save as any filename you want.

  117. Avatar
    Dominique July 21, 2020 at 11:40 pm #

    Dear Jason,

    I ran MTCNN code provided in your book on my own photos and I share it with you:
    https://questioneurope.blogspot.com/2020/07/running-mtcnn-on-my-own-photos.html

    Kind regards
    Dominique

    • Avatar
      Jason Brownlee July 22, 2020 at 5:33 am #

      Well done, very impressive!

      You’re one of the few that not only put the methods into practice, but also share results. I love it!

  118. Avatar
    tejas July 22, 2020 at 2:45 am #

Will this model be able to identify images out of 10k different faces if trained properly?

    • Avatar
      Jason Brownlee July 22, 2020 at 5:43 am #

      Perhaps develop a prototype and evaluate it for your use case.

  119. Avatar
    usama July 24, 2020 at 8:34 am #

Hi Jason, how can I implement this on live video?

    • Avatar
      Jason Brownlee July 24, 2020 at 10:35 am #

      Perhaps you can run the procedure on single frames from your video file.

  120. Avatar
    Adarsh Narayanan July 24, 2020 at 9:52 pm #

Hey Jason, I'm getting the following error and have been trying to find a solution for days. Could you please help me?
I am using Google Colab for the code and I placed the model and weight files in a folder named facenet_keras.h5.
Then when I run the code this error appears:
Unable to open file (unable to open file: name = 'facenet_keras.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
My path is /drive/My Drive/facenet_keras_h5, and the error is still shown.

    • Avatar
      Jason Brownlee July 25, 2020 at 6:18 am #

      Sorry,I have never used google colab, I cannot give you advice on the topic.

  121. Avatar
    SUDARSHAN SAIKIA July 25, 2020 at 2:41 am #

I am getting this error, please help me out:

NameError: name 'face_pixels' is not defined

  122. Avatar
    Kumar Amit July 25, 2020 at 10:50 pm #

    Dear Jason,

I was going through this blog from the first line till the end. Even more impressive than what you wrote is how consistently you reply to every query. Hats off to your patience.

My query: I am planning to use your face recognition guide for my kindergarten project to detect kids' faces.

1. Will this guide work well on kids' faces?
2. Does the camera feed have to be at a certain angle, beyond which detection or recognition could be a problem?
3. Since new kids are enrolled very frequently, how do I automate the training of new faces?

    Thanks !

    • Avatar
      Jason Brownlee July 26, 2020 at 6:19 am #

      Thanks.

      Perhaps test it to see if it is appropriate.

      Faces should be head on with the camera.

      Sounds like an application question, not a machine learning question. You will need to devise an appropriate procedure to maintain the system in its environment.

  123. Avatar
    Karan July 29, 2020 at 4:19 am #

Can we use this code and make an Android face recognition app from it?

  124. Avatar
    Santhosh August 3, 2020 at 3:24 pm #

When I gave the test image from my local system, it predicted the output. But when I gave the same image from a web service, model.predict() threw an error: "expected input_1 to have 4 dimensions but got (1,128)". When working locally I followed the same steps as you to process the test images before model.fit(). With the web service, I train images from one service and pass them to model.fit(), and from the match web service I try to predict, but it throws me the error.

    • Avatar
      Jason Brownlee August 4, 2020 at 6:33 am #

      Sorry, I don’t know about your webservice. I suspect it is the cause of the error.

  125. Avatar
    Santhosh August 4, 2020 at 3:56 pm #

Thanks for the reply. I'll re-frame my question without the web service part; let's assume the train and test images are local.

My project requires train images to be added to the model before testing a face. So, I process my train images one by one and fit them with the model. Up to this point, everything works fine.

Now, I have a test image and processed it as usual (by process I mean extract the face and get the embedding) and gave it to model.predict(). This is where I get the error "expected input_1 to have 4 dimensions but got (1,128)".

One more thing: if I process one of my test images before model.fit(), I get the output for that face properly.

    • Avatar
      Jason Brownlee August 5, 2020 at 6:08 am #

      The new image must be prepared in an identical manner as the training data. Same pixel scaling and same shape, then passed through the model to get the embedding, then passed to the model to get a prediction.
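As a minimal end-to-end sketch for one new photo, reusing the tutorial's extract_face() and get_embedding() and the fitted objects (the names facenet_model, svm_model and 'new_photo.jpg' are illustrative):

from numpy import expand_dims

# same preparation as training: detect and crop to 160x160, standardize, embed
face = extract_face('new_photo.jpg')            # (160, 160, 3) face pixels
embedding = get_embedding(facenet_model, face)  # (128,) vector

# same L2 normalization applied to the training embeddings
sample = in_encoder.transform(expand_dims(embedding, axis=0))

yhat = svm_model.predict(sample)
print(out_encoder.inverse_transform(yhat)[0])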

  126. Avatar
    Priyanka August 4, 2020 at 9:54 pm #

Hi, I need the architecture of the FaceNet model, but am unable to find it. If there is an official source for it, kindly help out.
The paper I found describes two different architectures and I'm unsure which one is relevant/being used over here.

    • Avatar
      Priyanka August 4, 2020 at 10:58 pm #

I'm referring to the "FaceNet: A Unified Embedding for Face Recognition and Clustering" paper, and see two architectures there – Zeiler & Fergus-based and Inception-based.

    • Avatar
      Jason Brownlee August 5, 2020 at 6:12 am #

      I believe it is in the facenet paper.

      Also you can load the model and summarize its structure to see.

  127. Avatar
    Anil Kumar August 13, 2020 at 10:42 pm #

Currently the mtcnn package does not have an alignment feature, but the original facenet repo (https://github.com/davidsandberg/facenet) does.

1. Does face alignment affect the accuracy of model predictions?
2. Is there any library to align the face detected by MTCNN?

I am currently using the 20180402-114759 model (Inception ResNet v1) from the facenet repo.

  128. Avatar
    Thiago Oliveira August 17, 2020 at 10:16 pm #

    Congratulations on the tutorial! I’m getting the following warning:

    WARNING:tensorflow:6 out of the last 11 calls to <function Model.make_predict_function..predict_function at 0x7f0016e2eae8> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.

    while using the extract_face function. Do you know what it means?

    • Avatar
      Jason Brownlee August 18, 2020 at 6:04 am #

      I have not seen that error before. Are your Keras and TensorFlow libraries up to date?

      • Avatar
        Thiago Oliveira August 18, 2020 at 6:23 am #

        Yes, I’m using keras 2.4.0 and tensorflow 2.3.0

        • Avatar
          Jason Brownlee August 18, 2020 at 1:25 pm #

          It looks like a warning, perhaps try ignoring it for now.

    • Avatar
      Neeraj August 24, 2020 at 6:31 am #

      Try moving the line

      detector = MTCNN()

      outside of the function, that is, create the detector only once. This warning should go away.
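
      In other words, a minimal sketch of the restructured function, assuming the same cropping logic as the tutorial:

      from PIL import Image
      from numpy import asarray
      from mtcnn.mtcnn import MTCNN

      # create the detector once, at module level, instead of on every call
      detector = MTCNN()

      def extract_face(filename, required_size=(160, 160)):
          image = Image.open(filename).convert('RGB')
          pixels = asarray(image)
          # reuse the single module-level detector
          results = detector.detect_faces(pixels)
          x1, y1, width, height = results[0]['box']
          x1, y1 = abs(x1), abs(y1)
          face = pixels[y1:y1 + height, x1:x1 + width]
          image = Image.fromarray(face).resize(required_size)
          return asarray(image)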

  129. Avatar
    Neeraj August 24, 2020 at 6:21 am #

    Hi Jason,

    Great tutorial. Just had one question.

    I am trying to implement masked face recognition, using your approach as a starting point, and have been somewhat successful by adding artificial masks to unmasked images before generating face embeddings.

    However, when I take masked images, the MTCNN code is not able to detect any face in them. As a result, I am not able to train using real-life examples. Do you have any recommendation, documentation, or tutorial as to what can make MTCNN detect masked faces?

    • Avatar
      Jason Brownlee August 24, 2020 at 6:33 am #

      Thanks!

      Sorry, I don't have tutorials on training an MTCNN.

  130. Avatar
    HengYL August 27, 2020 at 7:40 pm #

    Hi Jason,

    Why am I getting this error at the end of your code?

    TypeError Traceback (most recent call last)
    in ()
    43 # plot for fun
    44 #a = random_face_pixels.resize(128,128)
    ---> 45 pyplot.imshow(random_face_pixels)
    46 title = '%s (%.3f)' % (predict_names[0], class_probability)
    47 pyplot.title(title)

    5 frames
    /usr/local/lib/python3.6/dist-packages/matplotlib/image.py in set_data(self, A)
    697 or self._A.ndim == 3 and self._A.shape[-1] in [3, 4]):
    698 raise TypeError("Invalid shape {} for image data"
    --> 699 .format(self._A.shape))
    700
    701 if self._A.ndim == 3:

    TypeError: Invalid shape (128,) for image data

  131. Avatar
    Manikanteswar Punnam August 28, 2020 at 8:01 pm #

    Awesome explanation, sir.
    I have built my own FaceNet model, but I don't have any dataset.
    Where can I find a triplet dataset?

  132. Avatar
    Rasheed September 3, 2020 at 6:49 am #

    Great article. I see that you also included the val samples when creating the face embeddings .npz file. Does that mean that, in the case of video capture, we would have to pass every frame through the complete procedure, create two .npz files for each frame, and then identify each image by matching the new embedding .npz file of the current frame against our trained face embeddings .npz file?

    • Avatar
      Jason Brownlee September 3, 2020 at 7:48 am #

      No, you can create the embedding just in time for your application.

  133. Avatar
    Rasheed September 3, 2020 at 9:55 am #

    For face recognition, already having the Face_Embeddings.npz file: for the image 'Test.jpg' I have acquired testX and testY from the face after face detection.

    Should testY be the name that I will be giving to the external file Test.jpg?

    X.extend(face)
    Y.extend(face)
    testX = asarray(X)
    testY = asarray(Y)

    It gives me an error at:
    testY = out_encoder.transform(testY)

    ValueError: y should be a 1d array, got an array of shape (160, 160, 3) instead.

    • Avatar
      Jason Brownlee September 3, 2020 at 1:41 pm #

      The output encoder takes the name of the "class" or person and converts it into an integer for modeling.
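
      For example, a small sketch of the mapping, assuming trainy holds the person names loaded from the directory structure:

      from sklearn.preprocessing import LabelEncoder

      out_encoder = LabelEncoder()
      # learn the name-to-integer mapping from the training labels
      out_encoder.fit(trainy)
      # names -> integers for fitting the classifier
      trainy_int = out_encoder.transform(trainy)
      # integers -> names again for displaying predictions
      names = out_encoder.inverse_transform(trainy_int)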

  134. Avatar
    Safi September 3, 2020 at 5:34 pm #

    Hi Dr. Brownlee, thanks for sharing this awesome tutorial.

    I have been working on one-shot learning with a Siamese network and FaceNet, and have also tried KNN, SVM, VGG16, and ResNet.

    For training I use one image per class, so if I have 200 students, I have 200 images to classify using one-shot learning and FaceNet.

    First of all, the goal is to classify and embed the student images in a database and then detect the faces on a live camera. Unfortunately, the detections were not really accurate: sometimes the students are detected as unknown, and sometimes with the correct labelled name from the db.

    For instance, when an unknown person is tested via the camera, it detects this unknown person as someone who already exists in the database.

    Secondly, I would like to compute face embeddings for new people who did not exist in the db before, and then have these newly added people be detected if they are tested again.

    How can I approach this? Please advise me on how to achieve it, specifically embedding new faces into the db so that they are later detected with their correct label.

    Please share any references that would guide me to achieve that.

    Thanks Dr. Brownlee

    • Avatar
      Jason Brownlee September 4, 2020 at 6:24 am #

      You’re welcome.

      I believe some experimentation will be required. E.g. if an embedding cannot be predicted with high confidence, mark it as “new” and “unknown”. The threshold for what is and what is not confident might have to be tuned for your specific dataset/environment.
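
      For example, a minimal sketch of such a threshold, assuming the tutorial's SVM fitted with probability=True and the fitted output encoder; the 0.6 cut-off is an arbitrary placeholder that would need tuning:

      # sample is one embedding with shape (1, 128)
      probs = svm_model.predict_proba(sample)[0]
      if probs.max() < 0.6:
          # no known person predicted with enough confidence
          name = 'unknown'
      else:
          name = out_encoder.inverse_transform([probs.argmax()])[0]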

      • Avatar
        Safi September 4, 2020 at 11:23 am #

        Thanks for the quick response Dr Brownlee,

        I will take a look at that and see what I can improve.

        What about adding new faces to the db and encoding them automatically, without re-training or encoding everything again? Could you please advise or share any reference that would be useful for computing face embeddings for new people?

        Thanks Dr Brownlee.

        • Avatar
          Jason Brownlee September 4, 2020 at 1:37 pm #

          I don’t have an example of this. As I said, you will need to experiment to see what works well for your system. It is perhaps more of an engineering question.

  135. Avatar
    Rasheed September 4, 2020 at 2:36 am #

    Thanks. Resolved the issue.

    We need to assign labels to testY just the same as we did for trainY.

    Then it will recognise the image in RAM from the webcam against all of the already embedded images in the dataset, because in Feature_Embedded_Dataset.npz we have labels for each POI to compare against the image in RAM.

    By the way, the code is really resource intensive for a 2.2GHz CPU. Will buy a GPU soon for my BCS AI project though 🙂

    Thanks alot for the Tutorial!!!

  136. Avatar
    Nicolas September 9, 2020 at 4:58 am #

    I made an implementation based on your example using a webcam with low definition, and I found that removing this normalization

    face = face.astype('float32')
    # standardize pixel values across channels (global)
    mean, std = face.mean(), face.std()
    face = (face - mean) / std

    decreases the distance between embeddings for the same person

  137. Avatar
    Pinku September 16, 2020 at 5:22 am #

    Hi Dr. Jason,

    Thanks for sharing this great tutorial. I am facing the below error in the extract_face() function:

    ValueError: The channel dimension of the inputs should be defined. Found None.

    I have just copied the function and am trying to run the program on a set of images. I am not sure where I am wrong; is there anything you can shed light on?

    • Avatar
      Jason Brownlee September 16, 2020 at 6:40 am #

      Sorry to hear that.

      Perhaps try running it on the dataset used in the tutorial first to confirm the code works on your system, then perhaps prepare your image data in an identical way as the dataset used in the tutorial.

  138. Avatar
    Cloud September 20, 2020 at 8:59 pm #

    Hi Jason
    I am trying to train an SVM on 1000 classes using the FaceNet-extracted features. The SVM training takes a long time and never seems to finish. Is there any way to reduce the training time of the SVM?

    No. of classes: 1000
    No. of images per class: 500

    • Avatar
      Jason Brownlee September 21, 2020 at 8:09 am #

      Perhaps try an alternate model that trains faster, such as multinomial logistic regression?
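
      For example, a drop-in sketch using scikit-learn's logistic regression in place of the SVC; the hyperparameters are illustrative only:

      from sklearn.linear_model import LogisticRegression

      # fit a multinomial logistic regression on the face embeddings
      model = LogisticRegression(multi_class='multinomial', solver='lbfgs', max_iter=1000)
      model.fit(trainX, trainy)
      yhat = model.predict(testX)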

  139. Avatar
    davidwaf September 21, 2020 at 7:38 pm #

    Just to let you know that I have adapted your tutorial to build a working face verification system. Working quite well!!

  140. Avatar
    Khushwinder Singh September 24, 2020 at 5:07 pm #

    Hi Jason,
    First of all, thanks for such a detailed explanation of the implementation of the FaceNet model.
    I am stuck at the point where we load the model; I am getting this error:
    ValueError: bad marshal data (unknown type code)
    I searched for a solution but am not able to find anything.
    Could you please let me know what I should do?
    Waiting for your reply.

  141. Avatar
    Santhosh September 29, 2020 at 6:00 pm #

    It gives good accuracy when I train with images of 9-10 different people the same way you did; when I check the accuracy it is around 90-95%. When I train with images of 90-100 different people it gives 10% accuracy. The accuracy starts to decline as I increase the training images. How can I fix this? Thanks

    • Avatar
      Jason Brownlee September 30, 2020 at 6:25 am #

      Ouch.

      Perhaps try tuning the classification model?
      Perhaps try using an alternate classification model?

  142. Avatar
    Arvind October 13, 2020 at 10:55 pm #

    Can you explain the process of 'downloading the model file and placing it in your current working directory'?

    • Avatar
      Jason Brownlee October 14, 2020 at 6:18 am #

      Sorry, if downloading files from the internet is new for you, then I don’t think I can help you learn machine learning.

  143. Avatar
    Mohd Rameez October 14, 2020 at 2:13 am #

    Hey Jason, great post!!!
    I have a question: I am working on a personal project, face recognition using LBPH.
    I took a similar approach using an SVM classifier, but how can I use it to mark an unknown person?

    • Avatar
      Jason Brownlee October 14, 2020 at 6:23 am #

      Good question.

      Perhaps add a class of “other” during training?
      Perhaps interpret the predicted probabilities and classify as "unknown" for low probabilities?
      Perhaps develop a separate model for known vs unknown faces?
      Perhaps check the literature for common solutions to this problem?

  144. Avatar
    Edward November 2, 2020 at 12:33 pm #

    Hi Jason, first, great thanks for this tutorial. I learnt a lot.

    I have a question: is this FaceNet + SVM able to recognize over 100 faces?
    After testing myself, it works well with a total of 7 or 8 people (with 5 samples each).
    However, the 'probability' declines significantly with around 20 people (from 0.8 to 0.3). I wonder if this works for 100 people?

    • Avatar
      Jason Brownlee November 2, 2020 at 1:39 pm #

      Good question, I have not tested 100 faces. Perhaps try an alternate model to the SVM, e.g. xgboost, multinomial regression, or a neural net?

  145. Avatar
    Nicholas Hunter November 24, 2020 at 5:00 am #

    Hi, Jason, thanks for the tutorial. I seem to be missing a piece. In the following step

    # load the facenet model
    model = load_model('facenet_keras.h5')
    print('Loaded Model')

    I received the following error.

    OSError: SavedModel file does not exist at: facenet_keras.h5/{saved_model.pbtxt|saved_model.pb}

    Could you comment, please?

    • Avatar
      Nicholas Hunter November 24, 2020 at 5:14 am #

      Please delete and we shall never speak of this again. Thanks.

      • Avatar
        Jason Brownlee November 24, 2020 at 6:23 am #

        Hahah, no problem. We all make errors – it’s part of development, and talking about them helps all other readers.

    • Avatar
      Jason Brownlee November 24, 2020 at 6:23 am #

      Looks like the file is not on your computer. Perhaps download and save it in the same directory as your python script.

    • Avatar
      Abhy March 2, 2021 at 1:09 am #

      How did you solve this error?

  146. Avatar
    Nazmus December 14, 2020 at 2:58 am #

    # face detection for the 5 Celebrity Faces Dataset
    from os import listdir
    from os.path import isdir
    from PIL import Image
    from matplotlib import pyplot
    from numpy import savez_compressed
    from numpy import asarray
    from mtcnn.mtcnn import MTCNN

    # extract a single face from a given photograph
    def extract_face(filename, required_size=(160, 160)):
        # load image from file
        image = Image.open(filename)
        # convert to RGB, if needed
        image = image.convert('RGB')
        # convert to array
        pixels = asarray(image)
        # create the detector, using default weights
        detector = MTCNN()
        # detect faces in the image
        results = detector.detect_faces(pixels)
        # extract the bounding box from the first face
        x1, y1, width, height = results[0]['box']
        # bug fix
        x1, y1 = abs(x1), abs(y1)
        x2, y2 = x1 + width, y1 + height
        # extract the face
        face = pixels[y1:y2, x1:x2]
        # resize pixels to the model size
        image = Image.fromarray(face)
        image = image.resize(required_size)
        face_array = asarray(image)
        return face_array

    # load images and extract faces for all images in a directory
    def load_faces(directory):
        faces = list()
        # enumerate files
        for filename in listdir(directory):
            # path
            path = directory + filename
            # get face
            face = extract_face(path)
            # store
            faces.append(face)
        return faces

    # load a dataset that contains one subdir for each class that in turn contains images
    def load_dataset(directory):
        X, y = list(), list()
        # enumerate folders, one per class
        for subdir in listdir(directory):
            # path
            path = directory + subdir + '/'
            # skip any files that might be in the dir
            if not isdir(path):
                continue
            # load all faces in the subdirectory
            faces = load_faces(path)
            # create labels
            labels = [subdir for _ in range(len(faces))]
            # summarize progress
            print('>loaded %d examples for class: %s' % (len(faces), subdir))
            # store
            X.extend(faces)
            y.extend(labels)
        return asarray(X), asarray(y)

    # load train dataset
    trainX, trainy = load_dataset('5-celebrity-faces-dataset/train/')
    print(trainX.shape, trainy.shape)
    # load test dataset
    testX, testy = load_dataset('5-celebrity-faces-dataset/val/')
    # save arrays to one file in compressed format
    savez_compressed('5-celebrity-faces-dataset.npz', trainX, trainy, testX, testy)

    ERROR!!!

    WARNING:tensorflow:5 out of the last 13 calls to <function Model.make_predict_function..predict_function at 0x0000029E333A7730> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.

  147. Avatar
    Samson December 16, 2020 at 11:55 pm #

    # prediction for the face
    samples = expand_dims(random_face_emb, axis=0)
    yhat_class = model.predict(samples)
    yhat_prob = model.predict_proba(samples)

    In this sample, the test face embeddings are loaded from the .npz file (created earlier in the 'Create Face Embeddings' phase), so model.predict() runs fast.

    But when I try to classify images from a stream (camera), I must get new embeddings for every frame. The 'get embeddings' step is quite slow (around 2-3 seconds per frame on my computer), so the video output becomes laggy.
    Is there any way to speed up the 'get embeddings' process?

    • Avatar
      Jason Brownlee December 17, 2020 at 6:35 am #

      Good question.

      It might not be the best solution for a real-time system.
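
      One idea that may help is to batch many faces into a single forward pass rather than calling the model once per face. A rough sketch, assuming the faces were already detected, cropped, and resized to 160x160:

      from numpy import asarray

      # face_list: a list of 160x160x3 face arrays collected from recent frames
      faces = asarray(face_list).astype('float32')
      # standardize each face individually, as during training
      mean = faces.mean(axis=(1, 2, 3), keepdims=True)
      std = faces.std(axis=(1, 2, 3), keepdims=True)
      faces = (faces - mean) / std
      # one model call returns one embedding per face
      embeddings = facenet_model.predict(faces)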

  148. Avatar
    Seble Tefera December 20, 2020 at 12:54 am #

    You are one of the best at explaining complex stuff step by step, straight to the point. Your book and blogs are amazing.
    The output of MTCNN is the box and the keypoints. I was expecting both to be inputs to FaceNet.
    But it seems you used only the box information as input to FaceNet? Why did you do that?
    How can we use the keypoint information in a face recognition pipeline?

    • Avatar
      Jason Brownlee December 20, 2020 at 5:58 am #

      Thanks!

      No, FaceNet just processes a single face image; there is no object detection in that model. You need a face detection model first, then face classification.

      • Avatar
        Seble Tefera December 20, 2020 at 4:21 pm #

        Thank you for your response.
        In this work you used the below steps in this order:
        1. Face detection model, MTCNN
        2. Create embeddings using FaceNet
        3. Classify using SVM

        My question is: the outputs of face detection are the bounding box and keypoints (landmark locations like left eye, right eye, ...), but you only used the bounding box information, extracted the face, and fed it to FaceNet to create embeddings. Why didn't you use the landmark location information?

        • Avatar
          Jason Brownlee December 21, 2020 at 6:34 am #

          Correct.

          The chosen model does not require that information, so it is discarded.

          • Avatar
            Seble Tefera December 22, 2020 at 3:18 pm #

            Thank you so much Jason!!
            The typical face recognition consists of four stages: detect, align, represent and classify.
            I think for detect we used MTCNN, for represent we used facenet and for classify we used svn.
            What did we use for face alignment?

          • Avatar
            Jason Brownlee December 23, 2020 at 5:28 am #

            I guess no alignment of the image was required or performed in this case.

  149. Avatar
    Mark Zellelew December 20, 2020 at 4:02 pm #

    Dear Dr. Jason,

    “We will focus on the face identification task in this tutorial.” Per your note this task is face identification; which stage (face detection, FaceNet, or classification with SVM) of the face recognition system determines it?

    The other questions are:
    What is the difference between binary classification and face verification?
    What is the difference between multi-class classification and face identification?

    Thank you so much for your assistance.

    • Avatar
      Jason Brownlee December 21, 2020 at 6:33 am #

      Classification is an identification task, e.g. SVM, but it requires face detection and face embedding first.

      Binary classification has two classes, multi-class classification has more than two classes, you can learn more here:
      https://machinelearningmastery.com/types-of-classification-in-machine-learning/

      Not sure what face verification is – perhaps whether a face can be recognized by a model or not.

    • Avatar
      Anup Kumar March 1, 2021 at 7:12 pm #

      Classification is mostly a machine learning problem, solved with e.g. an SVM. Using a machine-learning-based classifier like SVM, LR, etc., your model is able to generalize, which means learning from the features of different images of the same class. This is not possible without a classifier. Face classification is also called face identification.

      Face verification is a one-to-one match, usually achieved using distance metrics such as Euclidean distance or cosine similarity. The distance between two face embeddings is calculated and a threshold decides whether the two faces are the same or not. The distance approach can't generalize like a machine learning model, so it is less accurate than the classification approach, but it is helpful when you have only a single image of a person.
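
      For example, a minimal verification sketch using cosine similarity between two embeddings; the threshold is a placeholder that must be tuned on your own data:

      from numpy import dot
      from numpy.linalg import norm

      def is_same_person(emb_a, emb_b, threshold=0.7):
          # cosine similarity: 1.0 means identical direction, lower means less similar
          similarity = dot(emb_a, emb_b) / (norm(emb_a) * norm(emb_b))
          return similarity >= threshold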

  150. Avatar
    Shekhar Rana December 22, 2020 at 8:08 pm #

    Hii Jason,

    Thanks for such a helpful article.

    I want clarification on one point. I have an image with a total of 10 faces, but when I run the detection algorithm (part of the code) on it and save the numpy array as an image, every time it detects only one face (the first face in the image).
    How can I detect all the faces separately?

    • Avatar
      Jason Brownlee December 23, 2020 at 5:31 am #

      Perhaps write a for-loop over all faces detected in your image.
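
      For example, a sketch that keeps every detection rather than just the first, following the same cropping logic as the tutorial's extract_face(); pixels is the full photograph as an array and detector is an MTCNN instance:

      results = detector.detect_faces(pixels)
      faces = list()
      for result in results:
          x1, y1, width, height = result['box']
          # bug fix for occasional negative coordinates
          x1, y1 = abs(x1), abs(y1)
          faces.append(pixels[y1:y1 + height, x1:x1 + width])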

  151. Avatar
    Sarvendra Singh January 9, 2021 at 12:42 am #

    Hi Jason,
    I am getting the below error while loading the model. Please suggest; I have searched a lot without finding a solution.
    Error: "ImportError: load_model requires h5py."

    from keras.models import load_model
    model = load_model('D:/ML_All/deep learning/face detection/FACENET_PRETRAINED/facenet_keras.h5')

    I am using the below versions, which are given in the link (https://github.com/nyoki-mtl/keras-facenet):
    Python 3.6.12
    tensorflow: 1.3.0
    keras: 2.1.2

    • Avatar
      Jason Brownlee January 9, 2021 at 6:43 am #

      Try updating your keras and tensorflow libraries to the latest versions.

      • Avatar
        Sarvendra Singh January 9, 2021 at 6:54 am #

        Thanks Jason for the reply. Earlier I tried TensorFlow 2.1.0 + Keras 2.3.1, but then too I was getting an error while loading the model.

        Since the FaceNet Keras implementation (https://github.com/nyoki-mtl/keras-facenet) by Hiroki Taniai used tensorflow 1.3.0 and keras 2.1.2, should we not use the same Keras/TF versions with which the pretrained model was built in order to load it? Is it mandatory to have the same versions to load a model in Keras using load_model?

        Thanks
        Sarvendra

        • Avatar
          Jason Brownlee January 9, 2021 at 8:33 am #

          I have developed and tested the tutorial against the later version of keras and tensorflow and recommend the same.

  152. Avatar
    Sarvendra Singh January 9, 2021 at 7:39 am #

    Hi Jason,
    Please let me know which versions of Keras, TensorFlow, and Python the above Keras FaceNet implementation was built with.

    I am getting too many issues while loading the FaceNet h5 model.

    Thanks
    Sarvendra

  153. Avatar
    Blaise January 10, 2021 at 9:44 am #

    Hey Jason, thanks for the tutorial. Is there a way of having the FaceNet Keras model perform faster for video face recognition when running on a CPU?
    Perhaps it could be converted to a form usable by OpenCV's dnn module?

    • Avatar
      Jason Brownlee January 10, 2021 at 1:08 pm #

      I’m sure there is, I’m not across it sorry.

      Perhaps start by operating on fewer frames per second and run on fast hardware.

  154. Avatar
    Henok Bekele January 13, 2021 at 4:03 pm #

    Hi Jason,

    Thank you for such incredible tutorial

    “Face Identification. A one-to-many mapping for a given face against a database of known faces (e.g. who is this person?).”

    Please could you guide me on what face dataset to use to evaluate the model for the face identification task?

    • Avatar
      Jason Brownlee January 14, 2021 at 6:11 am #

      You would use your own dataset.

      • Avatar
        Henok Bekele January 14, 2021 at 12:09 pm #

        For example, say I have a dataset with a total of 165 face images: 15 people with 11 different images per person.

        How can I prepare this dataset to evaluate the model for the face identification task?

        • Avatar
          Jason Brownlee January 14, 2021 at 1:20 pm #

          Good question.

          You must have photos of each person in the train and test sets so that the model knows about each person (training) and can be evaluated when identifying each person (testing).

          The split is up to you, e.g. 50/50 might be a good place to start. Perhaps more or less aggressive depending on the time you have and the reliability you need to demonstrate.

          • Avatar
            Henok Bekele January 15, 2021 at 4:18 pm #

            Let's say a 50/50 split: that means each of the 15 people will have 6 face images in training and 5 face images in test.
            So while testing the first image, how many images will I compare against (since identification is one-to-many)?
            Also, I would like to use just a similarity measure, without the SVM classifier.

          • Avatar
            Jason Brownlee January 16, 2021 at 6:53 am #

            Sorry, I don’t follow.

            If you split 50/50, then half of the images for each person will be used to train the model and half to evaluate it. Each image belongs to one person. You can evaluate model performance using classification accuracy.
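
            For example, a small sketch of scoring the classifier on the held-out half, assuming testX holds the embeddings of the test faces and testy the encoded labels, as in the tutorial:

            from sklearn.metrics import accuracy_score

            # compare predicted identities against the true identities
            yhat = svm_model.predict(testX)
            print('Accuracy: %.3f' % accuracy_score(testy, yhat))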

  155. Avatar
    Eyob Zelellew January 24, 2021 at 10:50 pm #

    Hello Dr. Jason,
    Your blog is the best and you explain things precisely. Really appreciated.

    Please could you compare Universal Background Models (UBM) and the FaceNet model?

    I don't see a paper where UBM is applied to face verification; do you know the reason?
    It looks like UBM is mainly used for speaker verification, not face verification, and I am wondering why.

  156. Avatar
    yasmeen February 4, 2021 at 8:00 am #

    Hello Dr. Jason
    I am interested in your blog. It is very useful; thank you for your explanations.
    I have a question about image size.
    I used FaceNet in PyTorch, trained on VGGFace2 with an Inception ResNet v1 backbone, to predict on my own dataset, but I want to use an image size of 256x256. I tried it and predicted embeddings.
    Now I want to ask whether this is correct, or whether the image size must be 160x160. In particular, I read a note on the FaceNet site about the size:
    "Both pretrained models were trained on 160x160 px images, so will perform best if applied to images resized to this shape."
    So I concluded that I can use image size 256.

    Another question about the threshold: I chose T = 1.03 @ 0.001 FPR, but from the model trained on 160. Is this right?

    • Avatar
      Jason Brownlee February 4, 2021 at 9:36 am #

      Sorry, I’m not familiar with the pytorch implementation and the image sizes it supports.

      Perhaps you can contact the developer of the model directly.

      • Avatar
        YASMEEN February 5, 2021 at 7:01 am #

        Thank you

  157. Avatar
    Blaise February 4, 2021 at 11:21 pm #

    Is it possible to use an mp4 video file as input, where faces within the video are detected and recognized?
    Thank you so much for your time and help; this tutorial helped me a lot.

    • Avatar
      Jason Brownlee February 5, 2021 at 5:40 am #

      Perhaps the frames of the video (images) can be used as input.

  158. Avatar
    Steve John February 6, 2021 at 4:20 am #

    Can you please tell me why I am getting the error "AttributeError: 'NoneType' object has no attribute 'astype'" on this line: face_pixels = face_pixels.astype('float32').
    Actually it is not loading anything into the trainX array. I don't know why.
    I have loaded the file like this:
    data = load('/content/drive/MyDrive/face_detection.npz', allow_pickle=True)
    trainX, trainy, testX, testy = data['arr_0'], data['arr_1'], data['arr_2'], data['arr_3']

  159. Avatar
    Jacob February 9, 2021 at 5:23 pm #

    Hi Jason,
    I am getting the following error; could you tell me how to fix it?

    WARNING:tensorflow:5 out of the last 11 calls to <function Model.make_predict_function..predict_function at 0x000002EA8FCAF310> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.

    • Avatar
      Jason Brownlee February 10, 2021 at 8:00 am #

      It looks like a warning that you can probably safely ignore for now.

      TF displays a ton of warnings!

    • Avatar
      Anil Poudel February 18, 2021 at 7:16 am #

      Hi Jacob,
      This might help you

      import logging
      import tensorflow as tf

      # suppress TensorFlow log messages below ERROR level
      tf.get_logger().setLevel(logging.ERROR)

  160. Avatar
    Nicholas Hunter March 16, 2021 at 12:07 am #

    Thanks again for the very helpful article. With your help, knowing a little bit of python and next to nothing about machine learning, I have created a model consisting of several hundred classes and several thousand pictures. It works fairly well at recognizing pictures of people it has already loaded. I wonder if it is possible to detect whether two or more classes represent the same person? For example, suppose I did not realize that the Ben Affleck in Gone Girl was the same Ben Affleck in Justice League and so I created two separate classes, Ben_Affleck_Gone_Girl and Ben_Affleck_Justice_League. Is there some way to use the model to detect that they are the same person?

    • Avatar
      Jason Brownlee March 16, 2021 at 4:49 am #

      You’re welcome.

      Well done!

      You could use a model that predicts the probabilities of class labels and look at the predicted probabilities to see what it really “thinks” about given photos.

  161. Avatar
    Raghav March 18, 2021 at 11:41 pm #

    Hi Jason!
    Thanks for your article. Appreciate the work you’re doing for the community!

    When I train my model with 3-5 images per class it works fine.
    However, when I train with a single image per class, the accuracy goes bad.
    I understand FaceNet is meant to work in a one-shot setting.
    I've also read your article on Siamese networks, which reaffirmed that it should perform well with 1 image per class.
    What's the issue in your opinion?

    • Avatar
      Jason Brownlee March 19, 2021 at 6:22 am #

      Sorry I don’t follow, why are you training on a single class?

      • Avatar
        Raghav March 19, 2021 at 9:23 pm #

        What I mean to say is that when I am training my model with 3-5 images for each person, the model accuracy is good.
        However, when I have 1 training image for each person, the testing accuracy is very poor.

        • Avatar
          Jason Brownlee March 20, 2021 at 5:20 am #

          Yes, we might expect this. The model does not have enough data to generalize to new cases.

          You may need a different model or data augmentation during training.

  162. Avatar
    Samuel March 19, 2021 at 6:49 am #

    Hello Jason, for my uni project (an Android app with face recognition and pose detection) I need to be able, from an image, to recognize people not by name but by giving them an ID. For example, if you and I appeared in the camera, the app would learn our faces; I would get ID 1 and you ID 2, or vice versa, and the next time we appeared on the camera we would still have those IDs.

    What would be a solution? I have been looking at face recognition technologies such as Amazon Rekognition, Azure Face, and TensorFlow Lite with FaceNet, but I don't know if I can do this. Can I distinguish people with no previous data and then learn from those images?

    • Avatar
      Jason Brownlee March 19, 2021 at 7:49 am #

      Sorry, I don’t know about android or the APIs you mention.

  163. Avatar
    Jay April 12, 2021 at 4:24 am #

    Hello Jason,

    I am facing an error loading the model: model = load_model('facenet_keras.h5')

    Error: bad marshal data (unknown type code)

    I am using Python 3.8 / TensorFlow 2.4.1 / Keras 2.4.3

    If I downgrade to Python 3.6.2 / TensorFlow 1.3.0 / Keras 2.1.2 (as mentioned in the GitHub repo), lots of other errors start appearing.

    Could you please suggest?

    • Avatar
      Jason Brownlee April 12, 2021 at 5:11 am #

      Sorry, I have not seen this error before. Perhaps it is related to library versions?

      • Avatar
        Jay April 13, 2021 at 1:23 pm #

        Hello Dr Jason, Thanks for the reply. Is it possible to mention the version of Python, TensorFlow and Keras you are running the code with?

    • Avatar
      USMAN May 18, 2022 at 5:25 pm #

      Hi Jay,
      how did you remove this error? Kindly help me out with this; I'm getting the same error.

  164. Avatar
    Jay April 22, 2021 at 5:05 am #

    Hello Dr Jason,

    How do we handle unknown faces?

    When we classify using e.g. an SVM, the maximum probability value for a correct prediction with, say, 5 classes comes out around 85%, but when the number of classes goes up to, say, 75, the maximum probability for a correct prediction comes out around 20%.

    The model predicts a probability for an unknown face as well. So if I want to filter out unknown faces using a threshold value, the threshold will keep changing as I add more faces.

    What is the best way to handle unknown faces?

    • Avatar
      Jason Brownlee April 22, 2021 at 5:42 am #

      Good question, I’d recommend checking the literature for common solutions to this question. Perhaps you can add a new class for “unknown”.

  165. Avatar
    teja April 22, 2021 at 10:38 pm #

    How do I delete a person from the model after training, instead of retraining the model?

    Also, is there a way to use a FaceNet library or anything built into Python instead of loading this model? (Because it takes a long time to load.)

    • Avatar
      Jason Brownlee April 23, 2021 at 5:03 am #

      Perhaps you can leave the model as-is and just ignore the inputs/outputs for the deleted person?

  166. Avatar
    Manikanta Bandla April 23, 2021 at 3:39 pm #

    How different are the 128 values returned by FaceNet for a face from the 128 encodings returned by face_recognition.face_encodings() for the same face?

    • Avatar
      Jason Brownlee April 24, 2021 at 5:15 am #

      Yes, the encodings will be different for each image, and the two libraries use different underlying models, so their values are not directly comparable even though both are 128-element vectors. Either way, a model can be used to classify the encodings as people.

  167. Avatar
    HungNguyen June 6, 2021 at 7:38 pm #

    How can I know which loss and optimizer functions this model was trained with? … Please help me 🙁 Thank you

    • Avatar
      Jason Brownlee June 7, 2021 at 5:21 am #

      The loss is specified when you call fit() when training a model yourself. For the pretrained FaceNet model, the original paper describes training with a triplet loss.

  168. Avatar
    HungNguyen June 6, 2021 at 7:42 pm #

    Pretrained model: facenet_keras.h5 by Hiroki Taniai. I have been looking for the details of this model but, you know, I still don't know.

  169. Avatar
    Ashok Kumar June 26, 2021 at 1:23 pm #

    I trained the custom face recognition classification model with 1.00000 accuracy, but it also predicts unknown faces as known class names. How can I fix this problem?

    • Avatar
      Jason Brownlee June 27, 2021 at 4:34 am #

      Perhaps you can add unknown faces to the training dataset and train the model to predict “unknown”.

  170. Avatar
    Prakhar Prasad July 14, 2021 at 7:03 pm #

    Hello Jason,

    I have an input array of shape m x 112 x 92 and I reshaped it to m x 160 x 160 x 3 to be compatible with FaceNet. The accuracy score of the model is very bad. Is it preferred to have only color images to generate the embeddings?

    Thank you
    Prakhar

    • Avatar
      Jason Brownlee July 15, 2021 at 5:25 am #

      Perhaps try some alternate images?

      Perhaps compare your data to other images that work well for the model?

      Note too that a raw numpy reshape will scramble the pixel layout; you would likely need to resize each image to 160x160 and replicate the grayscale channel three times instead.

  171. Avatar
    Vince July 20, 2021 at 3:38 am #

    Hi, Jason

    It's been a long way for me to get to your post; I learned a lot along the way, otherwise I might not have been able to understand some points in this tutorial, lol. The post is clearer than all the other posts I've read, though. Really appreciate it.

    After reading this post, I was wondering how I can group photos of different people in a user's iOS album, just like iOS does in its album app: the album app lists a bunch of faces of different people, and when the user clicks one of them, it shows all the photos that face appears in.

    For now, I have built an iOS version of the OpenFace model. I can get face embeddings for all the faces in the user's album, but how can I separate all these embeddings into different groups or give them different labels automatically, without the end user interfering? Should I group embeddings based on the MSE (mean squared error) between different faces? I tried this method; not smart enough, I think.

    Could you shed some light on this, thx

    • Avatar
      Jason Brownlee July 20, 2021 at 5:36 am #

      Thanks!

      That sounds like a cool project.

      Perhaps you can use clustering of embeddings somehow or distance measures between embeddings.
      Perhaps check the literature for similar projects?
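
      For example, a rough clustering sketch with scikit-learn's DBSCAN over the embeddings; eps and min_samples are placeholders to tune:

      from sklearn.cluster import DBSCAN

      # embeddings: array of shape (n_faces, 128) from the FaceNet model
      clustering = DBSCAN(eps=0.5, min_samples=2, metric='cosine').fit(embeddings)
      # faces sharing a label belong to one group; -1 marks unassigned faces
      labels = clustering.labels_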

  172. Avatar
    Jason Zheng. September 2, 2021 at 12:45 am #

    Hey Jason,
    First of all, thank you so much for this tutorial. It's my first deep learning exercise, and I completed all the code and it works! But I have a question for you: how can I load another image or a video to detect?
    I noticed you use data = load('FGS-faces-embeddings.npz') on the test data to show the prediction result; how can I change load('FGS-faces-embeddings.npz') to my real-time data?

  173. Avatar
    Jason Zheng. September 2, 2021 at 1:30 pm #

    Thank you for the prompt reply. I'll try it.

  174. Avatar
    Alli Zein October 6, 2021 at 12:10 am #

    Hello Jason. Amazing project!
    I was wondering what the optimal way is to use the model (detect, not train) over 300K+ images?
    Should I save their data in one .npz file as you do, or is that too large? Is there another way, like processing each single image alone, and what would the code look like then?
    It may look like a beginner's question, which I am. Thanks in advance

    • Adrian Tam
      Adrian Tam October 6, 2021 at 10:38 am #

      By "their data" do you mean the 300K+ images? That's not a good way to use .npz, because it means loading everything into memory when you read it; try keeping those images in a zip file or a folder instead.

  175. Avatar
    kingdomanma October 8, 2021 at 12:36 am #

    Awesome explanation ~
    I have built my own FaceNet model,
    but I don't have any dataset.
    Where can I find a triplet dataset?

    • Adrian Tam
      Adrian Tam October 12, 2021 at 12:25 am #

      Do you think the Kaggle link in this post would work for you?

  176. Avatar
    manu October 18, 2021 at 2:22 am #

    Hi Adrian,

    Thank you for all the explanations given on this page. I am currently trying to use a pretrained model (Swin Transformer; I found a version in TensorFlow) for a multiple-output classification task. Do you have any advice or a recommended page for this kind of problem? The authors give a block like the one above for using the pretrained model, but since I have multiple outputs I need to do it with the Functional API.

    • Adrian Tam
      Adrian Tam October 20, 2021 at 9:15 am #

      Sorry, I don’t think I have anything related.

  177. Avatar
    A.F November 4, 2021 at 7:31 pm #

    Hello! I am currently trying this project on my own data. Every time I get to creating trainX, I get an error somewhere along the way while scanning my photos, at this part:

    x1, y1, w, h = results[0]['box']

    It says that the index is out of range when detecting the face, or something like that (sorry, I'm currently running it again and can't give the exact error code and message).

    But does that mean there aren't any detectable faces in my picture, and that caused the error? Thanks!

    • Adrian Tam
      Adrian Tam November 7, 2021 at 7:41 am #

      If there is “index out of range” in this line of code, it means “results” is an empty list. Can’t tell what went wrong but you should investigate in this direction.

      • Avatar
        A.F November 7, 2021 at 4:32 pm #

        Thank you! Every time that happens I check which image caused it, and for now I just assume there are no detectable faces, so I remove the picture from the data, which solves it. I need to look into it more, but for now I think it's good enough.

  178. Avatar
    Manav Khullar December 7, 2021 at 5:23 pm #

    Hello Sir
    I was working on the example posted above. I tried it on my own face dataset and some of my friends' datasets and saw that it performs very well, and that even if we generate the embeddings from only about 5 images it still works well. But in the case of a larger dataset, if we classify with a neural net, don't we need to change the structure of the network again and again as new faces come and go?

    • Adrian Tam
      Adrian Tam December 8, 2021 at 8:03 am #

      Yes. That’s the limitation of this design.

  179. Avatar
    web designing company in trichy March 4, 2022 at 6:44 pm #

    We are eager to know about this domain; once I learn it, I want to embed it into a website to do experiments. As a leading web designing company in Trichy, we have done a lot of projects.

    • Avatar
      James Carmichael March 5, 2022 at 12:42 pm #

      Great feedback! Keep moving forward on your machine learning journey!

    • Avatar
      Hariharasudhan September 8, 2022 at 11:09 pm #

      Hi, may I know your company name? I'm also located in Trichy and looking for opportunities in the machine learning domain.

  180. Avatar
    A.S March 15, 2022 at 5:16 am #

    Hi thanks for this project

    What do I need to make this predict from a database of images?

    • Avatar
      James Carmichael March 15, 2022 at 1:28 pm #

      Hi A.S…When you say “database” are you referring to large repository on a drive location or loaded from a relational database such as MySQL or ORACLE?

      • Avatar
        A.S March 15, 2022 at 9:15 pm #

        I mean that the model can take an input image, then predict and match it if it finds a similarity with images stored in an SQL or NoSQL database (such as MongoDB or Firebase).

        Thanks for your reply.

  181. Avatar
    Reema April 8, 2022 at 4:51 am #

    Hello,
    I am facing a problem when I first try to load the model using model = load_model('facenet_keras.h5').

    I always end up with this error:
    "bad marshal data (unknown type code)"

    Do you have an idea how I can solve it?

    • Adrian Tam
      Adrian Tam April 8, 2022 at 5:20 am #

      Marshal is a serialization format internal to Python. I believe it may not relate to your h5 model file but could be some old pre-compiled code staying around. Try looking for any .pyc files and deleting them.

  182. Avatar
    Priyank May 4, 2022 at 1:01 am #

    Hi Jason,

    Thanks for the tutorial. It really helps developers like me to understand face recognition.

    I have a similar problem of classifying image pairs as same or different. I have seen your other post where you used the cosine distance to do this, but if we need to do it with an SVM classifier, is there an approach for that?

    Regards,
    Priyank

  183. Avatar
    Madhu Oruganti May 26, 2022 at 11:31 pm #

    Hi Jason,
    Thank you so much for sharing this interesting blog.

    If possible, could you please share code for the triplet loss function?

  184. Avatar
    Jake May 28, 2022 at 11:46 am #

    Hello Jason,
    Thanks for the article, it helped me a lot.

    I ran into a problem and would love to get help from you.

    I do not understand 100% of machine learning and how an SVM works exactly. I tried to extract a validation loss graph from the model you created and, after a long search, I found that I can't do it, though I am not 100% sure.

    Is it possible to get such a graph from the model? And how do I further train the existing model? After a lot of runs the accuracy did not change (and I'm not sure how to check the accuracy correctly; all I did was create a graph that shows the prediction probability for each run, and the mean didn't change).

  185. Avatar
    wneat August 20, 2022 at 4:54 am #

    Hello Jason,

    load_model('facenet_keras.h5') gives the error:

    ValueError: bad marshal data (unknown type code)

    I guess it is a version issue; I have:

    Keras version: 2.9.0
    TF version: 2.9.1

    Any way to fix this? Thank you!

    • Avatar
      James Carmichael August 20, 2022 at 7:27 am #

      Hi wneat…We have not encountered this issue. Have you tried Google Colab to rule out version issues?

    • Avatar
      Dr. Anirban Dasgupta March 15, 2023 at 4:23 am #

      I too got this error. Still no solution.

  186. Avatar
    Sarah August 21, 2022 at 7:57 pm #

    When running the FaceNet classification part of this tutorial, I get the error:
    ValueError: Expected 2D array, got 1D array instead:
    array=[].
    Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
    on the line: trainX = in_encoder.transform(trainX)
    I would be glad to find the answer in the Q&A of the post so far, but there is no search bar. I liked this post and would be thankful if you could tell me what is wrong.

  187. Avatar
    T.White October 28, 2022 at 9:14 pm #

    Is it just me, or does getting embeddings from photos take more and more time with every prediction?

    I mean, at the start it takes about 20ms per prediction. After a few thousand predictions it runs really slowly, taking 200ms for a single prediction, and this latency grows with every prediction. After some time it's more efficient to reload the model than to keep getting another set of embeddings.

    Do you have any idea why this may happen and how to solve it?

  188. Avatar
    mubarak April 3, 2023 at 8:38 pm #

    I need code for training a model, please!

  189. Avatar
    PRERONA DWIBEDI May 6, 2023 at 4:33 pm #

    Hi, please help; I am getting the same error.

    • Avatar
      James Carmichael May 7, 2023 at 5:37 am #

      Hi Prerona…Please provide the exact verbiage of the error you are receiving. This will better enable us to assist you.

  190. Avatar
    Roman May 9, 2023 at 12:36 am #

    Hi, what should I do if I get an "object is not iterable" error when trying to embed an image?
    To get the embedding I use the following code:

    from keras_facenet import FaceNet
    embedder = FaceNet()

    embedding = embedder.embeddings(embed)

  191. Avatar
    Aner June 29, 2023 at 3:45 am #

    Hi, is there a facenet_keras.h5 model available for Python 3.10?
    I get a ValueError: bad marshal data (unknown type code) when loading it with Python 3.10,

    but when I use TF 2.2, Keras 2.3 or 2.4, and Python 3.6 I get numerous other errors,
    so using a model saved with a newer version seems to be an easier solution.

    thanks

  192. Avatar
    Quy Le July 24, 2023 at 11:06 pm #

    I know it's been a long time since you posted this, but I still hope I can get a reply from you. When I implemented this myself based on your idea, I had problems loading the pretrained FaceNet model. It says "EOFError: EOF read where object expected". I searched a lot to figure out this problem but still have no idea. For further information, I'm using tensorflow and keras version 2.13.

    I’m looking forward to hearing from you.
    Your post is really amazing btw. It’s inspired me a lot.

    • Avatar
      James Carmichael July 25, 2023 at 8:42 am #

      Hi Quy…We do not have experience with that issue. Perhaps you could try your code in Google Colab. This may help determine if there is an issue with the code itself or with the Python environment.

  193. Avatar
    Pablo July 30, 2023 at 6:44 am #

    Hi, I have 2 classes with 120 images per class. When I train the SVC model the accuracy is 50%. Is this due to only having 2 classes? Or is it my dataset?

    Excellent post! Thanks in advance!

    • Avatar
      James Carmichael July 30, 2023 at 7:29 am #

      You are very welcome Pablo! Have you tried another dataset with 2 classes to compare performance?

      • Avatar
        pablo August 2, 2023 at 10:37 am #

        Hi James! I solved it! It seems I look a lot like my mom… so the model classifies me as her and her as me! As a recommendation for anyone having a problem: don't use data from someone who looks like you, haha.

  194. Avatar
    Pablo July 30, 2023 at 8:41 am #

    Thanks for answering! Yeah, I tried with Madonna and Mindy faces, 22 faces per class. It only improved from 50% to 52%. I'm using the FaceNet model from keras-facenet and it gives me embeddings of length 512, not 128. Could this be the problem?

  195. Avatar
    Reka October 14, 2023 at 3:29 pm #

    Thank you; it's really great and really helps people learning about facial recognition. Your explanation is also very easy to understand. However, several things have changed: for example, with keras-facenet you no longer need model = load_model('facenet_keras.h5') but only model = FaceNet().

    Can you also explain how to run the SVC if the test data is obtained in real time from the camera?

    Thank you and wish you success

  196. Avatar
    jhon November 2, 2023 at 11:57 pm #

    Hello, I want to load the file facenet_keras.h5 but I get this error:

    EOFError Traceback (most recent call last)
    Cell In[4], line 2
    1 # load the model
    ----> 2 facenet = tf.keras.models.load_model('facenet_keras.h5')
    4 print(facenet.inputs)
    5 print(facenet.outputs)

    File ~\AppData\Roaming\Python\Python311\site-packages\keras\src\saving\saving_api.py:238, in load_model(filepath, custom_objects, compile, safe_mode, **kwargs)
    230 return saving_lib.load_model(
    231 filepath,
    232 custom_objects=custom_objects,
    233 compile=compile,
    234 safe_mode=safe_mode,
    235 )
    237 # Legacy case.
    --> 238 return legacy_sm_saving_lib.load_model(
    239 filepath, custom_objects=custom_objects, compile=compile, **kwargs
    240 )

    File ~\AppData\Roaming\Python\Python311\site-packages\keras\src\utils\traceback_utils.py:70, in filter_traceback..error_handler(*args, **kwargs)
    67 filtered_tb = _process_traceback_frames(e.__traceback__)
    68 # To get the full stack trace, call:
    69 # tf.debugging.disable_traceback_filtering()
    ---> 70 raise e.with_traceback(filtered_tb) from None
    71 finally:
    72 del filtered_tb

    File ~\AppData\Roaming\Python\Python311\site-packages\keras\src\utils\generic_utils.py:102, in func_load(code, defaults, closure, globs)
    100 except (UnicodeEncodeError, binascii.Error):
    101 raw_code = code.encode("raw_unicode_escape")
    --> 102 code = marshal.loads(raw_code)
    103 if globs is None:
    104 globs = globals()

    EOFError: EOF read where object expected

    • Avatar
      James Carmichael November 3, 2023 at 9:52 am #

      Hi jhon…Did you type the code or copy and paste it? Also, you may want to try your model in Google Colab.

  197. Avatar
    Rajesh Choudhary November 6, 2023 at 11:39 pm #

    How can I use this with the latest or later versions of tensorflow and mtcnn, since the versions used are no longer available?

    • Avatar
      James Carmichael November 7, 2023 at 10:40 am #

      Hi Rajesh…Please clarify portions of the code that are not working with your environment. You may also benefit from utilizing Google Colab.

      • Avatar
        Rajesh Choudhary November 8, 2023 at 3:03 pm #

        Thanks for replying, Mr. James.
        I wasn't able to load the model, but I got that particular issue resolved.
        Now can you help me with how I can use this for a face-recognition-based attendance system? (I am new to this.)
