Face Recognition System using DEEPFACE (With Python Codes)

Recognition of a face as an identity is a critical capability in today's world. Facial identification and recognition appear in many real-life contexts, from identity cards and passports to other credentials of significant importance, and have become a popular way to authenticate an individual. The technology is also used across sectors and industries to prevent ID fraud and identity theft, and your smartphone likely uses it to unlock the screen. Recognition in such tasks demands three abilities: comprehending identity from unfamiliar faces, learning new faces, and acknowledging familiar ones. Although facial recognition is not a new concept, technological advances over the years have expanded it massively. This post explores how facial recognition works and its role in identity verification.

How Does Facial Recognition Work?

The process of facial recognition starts with the human face, identifying its essential features and patterns. A human face comprises a basic set of features, such as eyes, nose, and mouth. Facial recognition technology learns what a face is and how it looks by training deep neural networks and machine learning algorithms on sets of images containing human faces at different angles and positions.


The process starts by detecting the eyes, one of the easiest features to find, and then proceeds to the eyebrows, nose, mouth, and so on. By calculating the width of the nose, the distance between the eyes, and the shape and size of the mouth, the model extracts insights from the facial region. Multiple rounds of training can be performed to improve the algorithm's accuracy in detecting faces and their positions. Once a face is detected, the model is trained further with computer vision algorithms to detect facial landmark features such as eyebrow corners, the gap between the eyes, the tip of the nose, and mouth corners. Each feature is treated as a nodal point, and each face consists of around 80 nodal points. These landmark features are the key to distinguishing each face present in the database.
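The geometric measurements described above can be sketched with a hypothetical set of landmark coordinates. The point names and pixel values below are purely illustrative, not the output of any real detector:

```python
import math

# hypothetical facial landmarks as (x, y) pixel coordinates
# (illustrative values, not produced by an actual model)
landmarks = {
    "left_eye": (120, 95),
    "right_eye": (180, 95),
    "nose_tip": (150, 130),
    "mouth_left": (130, 165),
    "mouth_right": (170, 165),
}

def dist(a, b):
    """Euclidean distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

eye_gap = dist(landmarks["left_eye"], landmarks["right_eye"])
mouth_width = dist(landmarks["mouth_left"], landmarks["mouth_right"])
print(eye_gap, mouth_width)  # 60.0 40.0
```

A real pipeline computes many such pairwise measurements over all nodal points, which is what makes the resulting signature discriminative.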

After the facial features are extracted and the landmarks, face position, orientation, and other key elements are fed into the model, the model generates a unique feature vector for each face in numeric form. This unique code identifies the person among all others in the dataset. The generated feature vector is then used to search for a match across the entire database of faces gathered during the face detection process.
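The matching step described above can be sketched as a nearest-neighbour search over feature vectors using cosine distance. The names and three-dimensional vectors below are toy assumptions; real models such as VGG-Face produce vectors with thousands of dimensions:

```python
import numpy as np

# hypothetical database of face embeddings (toy 3-D vectors;
# real embeddings have hundreds or thousands of dimensions)
database = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5]),
}
query = np.array([0.85, 0.15, 0.35])  # embedding of the probe face

def cosine_distance(u, v):
    """1 minus cosine similarity; 0 means identical direction."""
    return 1 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# the closest embedding in the database is the predicted identity
best = min(database, key=lambda name: cosine_distance(database[name], query))
print(best)  # alice
```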

About Deepface 

Deepface is a facial recognition and attribute analysis framework for Python. Its namesake model was developed by the artificial intelligence research group at Facebook in 2014, and the library's core components are built on Keras and TensorFlow. It is a hybrid face recognition framework that wraps several state-of-the-art models, such as VGG-Face, Google FaceNet, and Facebook DeepFace, into one package. DeepFace's face identification accuracy reaches about 97%, making it more successful at recognizing faces than the average face recognition framework. Facebook has used DeepFace to prevent impersonation and identity theft on its platform.

Getting Started with Facial Recognition Model

We will create a face detection and facial feature recognition model using the Deepface framework to identify and distinguish between a set of images. We will also compare the results of two of the many models available in the framework and predict the age of the faces present in the images.

So let’s start!

Creating The Model

We will first install the Deepface library, which provides the modules we need. This can be done by running the following command:

!pip install deepface #install the Deepface Library 

We will now import and call our modules from the framework. We will also use OpenCV to help our model with image processing and matplotlib to plot the results.

 #calling the dependencies
 from deepface import DeepFace
 import cv2
 import matplotlib.pyplot as plt 

Next, we import the images and set their paths in the model. Here we will use three images of the same face to test our facial recognition and one image of a different face to cross-validate the result.

 #importing the images
 img1_path = '/content/Img1.jpg'
 img2_path = '/content/Img2.jpg'
 img3_path = '/content/Img3.jpg'
 #loading the images with OpenCV
 img1 = cv2.imread(img1_path)
 img2 = cv2.imread(img2_path)
 img3 = cv2.imread(img3_path)

We will now plot the images to check that they have been imported correctly.

   plt.imshow(img1[:, :, ::-1]) #reversing the channel order from BGR to RGB
   plt.show()
   plt.imshow(img2[:, :, ::-1])
   plt.show()

Here are our images

We will now call our first library model for facial analysis, VGG-Face. When we build the model, Deepface loads a deep learning network with pre-trained weights.

 #calling VGGFace
 model_name = "VGG-Face"
 model = DeepFace.build_model(model_name)

Verifying the Results

Creating a variable called result to store our output, we use the verify function to validate the images:

 result = DeepFace.verify(img1_path, img2_path) #comparing the two faces
 result #showing the result of the comparison

We will get the following output :

 {'distance': 0.2570606020004992,
  'max_threshold_to_verify': 0.4,
  'model': 'VGG-Face',
  'similarity_metric': 'cosine',
  'verified': True} 

Here, the distance tells us how far apart the two faces in the images are, i.e. how different they are. We can also see that the verification result is True, telling us that the faces in the compared images belong to the same person.
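The decision rule implied by the output above can be sketched in a few lines: a pair is verified when the cosine distance is at or below the model's threshold (0.4 here, taken from the printed result). This is an illustrative re-creation, not DeepFace's internal code:

```python
def is_verified(distance, threshold=0.4):
    """Sketch of the verification rule: the faces match when the
    cosine distance falls at or below the model threshold.
    (threshold value copied from the output above; illustrative only)"""
    return distance <= threshold

# distance from the VGG-Face result shown above
print(is_verified(0.2570606020004992))  # True -> same person
```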

Cross Validating Our Model 

We will now cross-validate our model to check that the earlier result is not spurious. For this, we will use an image of a different face and verify it against one of our first face images.

 img4_path = '/content/JAN.jpg' #setting path for different image
 img4 = cv2.imread(img4_path)
 #plotting the image
 plt.imshow(img4[:, :, ::-1 ]) 
 plt.show() 

Comparing the two images:

 #comparing the faces in images using VGG Face
 DeepFace.verify("Img1.jpg","JAN.jpg") 

Here’s our result 

 {'distance': 0.6309288770738648,
  'max_threshold_to_verify': 0.4,
  'model': 'VGG-Face',
  'similarity_metric': 'cosine',
  'verified': False} 

As we can see, the distance this time is well above the threshold, and the verification returns False, telling us that the compared faces belong to two different people!

Testing using a Different Model 

We will now build a second model using a different analysis network, FaceNet, compare our first two images again, and see how the result differs from the VGG-Face model.

 #calling the model
 model_name = 'Facenet'
 #storing the result in a variable named resp
 resp = DeepFace.verify(img1_path = img1_path , img2_path = img2_path, model_name = model_name)
 resp #generating our result 

Here’s the output for Facenet :

 {'distance': 0.42664666323609834,
  'max_threshold_to_verify': 0.4,
  'model': 'Facenet',
  'similarity_metric': 'cosine',
  'verified': True} 

Comparing the faces in the first two images, FaceNet also tells us they belong to the same person, but its distance is noticeably higher and sits close to the verification threshold. For this pair, then, VGG-Face separates the faces with a wider margin than FaceNet, though raw distances are not directly comparable across models.
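Since raw distances from different models live on different scales, one rough way to compare them is the fraction of each model's threshold that the distance consumes. This is a hypothetical heuristic for illustration, not a DeepFace API; the numbers are copied from the two outputs above:

```python
# distances and thresholds copied from the two verification outputs above
pairs = {
    "VGG-Face": (0.2570606020004992, 0.4),
    "Facenet":  (0.42664666323609834, 0.4),
}

def threshold_margin(distance, threshold):
    """Fraction of the threshold consumed; lower means a more
    confident same-person call (illustrative heuristic only)."""
    return distance / threshold

for name, (d, t) in pairs.items():
    print(name, round(threshold_margin(d, t), 2))
# VGG-Face 0.64
# Facenet 1.07
```

By this rough measure, VGG-Face's distance uses only about 64% of its threshold, while FaceNet's sits slightly above it, matching the observation in the text.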

We can also match and rank the similarity of faces using a different image of the same person.

 #storing matches and ranks by creating a dataframe
 df = DeepFace.find(img_path = '/content/Other.jpg', db_path = '/content/')
 Finding representations: 100%|██████████| 4/4 [00:41<00:00, 10.33s/it]

df.head() #show top matches

Result :
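Conceptually, the ranking that find produces can be sketched as sorting one distance per database image. The file names below reuse this article's images, but the distances and the column name are invented for illustration:

```python
import pandas as pd

# hypothetical distances mimicking what a find-style search computes:
# one row per database image; sorting puts the closest match first
# (distances are invented; column name is an assumption for illustration)
matches = pd.DataFrame({
    "identity": ["JAN.jpg", "Img2.jpg", "Img1.jpg", "Img3.jpg"],
    "VGG-Face_cosine": [0.63, 0.24, 0.21, 0.26],
})
ranked = matches.sort_values("VGG-Face_cosine").reset_index(drop=True)
print(ranked.head())
```

The same-person images cluster at the top with small distances, while the different face ("JAN.jpg") falls to the bottom of the ranking.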

Facial Feature Analysis using Deepface 

Using Deepface, we can also analyze facial features. One can analyze age, race, emotion, and gender using Deepface's analyze function.

 #creating an object to analyze facial features
 obj = DeepFace.analyze(img_path = "Img2.jpg", actions = ['age', 'gender', 'race', 'emotion'])
 print(obj["age"]," years old ",obj["dominant_race"]," ",obj["dominant_emotion"]," ", obj["gender"])
 Action: emotion: 100%|██████████| 4/4 [00:13<00:00,  3.48s/it]

Analyzing this image tells us the following:

32  years old  white   neutral   Woman

Analyzing the next face tells us the following:

28  years old  white   happy   Woman

ENDNOTES

In this article, we implemented and learned how to create a face recognition and facial feature detection model to analyze faces from a set of images. You can also try the other models available in Deepface, such as "OpenFace", "DeepID", "ArcFace", and "Dlib", and check their recognition accuracy. The full Colab file for the above can be accessed from here.

Happy Learning!



Copyright Analytics India Magazine Pvt Ltd
