Selfie Capture When We Smile – My Fun Project Using OpenCV

Most smartphones these days have a feature that automatically takes a selfie when we smile. It is amazing how accurately they detect smiles, not just on one but on multiple faces, and capture a selfie immediately. If you have ever wondered how this is possible, it is actually quite simple. Using libraries like dlib and OpenCV, it is possible to build a selfie-capturing application with just a few lines of code. The idea is to identify the mouth region using dlib, measure the distance between the corners of the lips when the user smiles, and immediately capture a picture. Let’s get started!

In this article, we will learn how to build a selfie capture application that automatically clicks pictures of you when you smile. 



  1. Find the mouth region using the 68-point landmark detector of dlib and write the relevant functions.
  2. Recognize the face and outline the mouth.
  3. Detect the smile and automatically capture and save the image.

Finding the Mouth Region Using the 68-Point Landmark Detector


The 68-point landmark detector is part of the dlib library. It assigns 68 coordinates to every human face, which makes detecting specific regions like the lips, eyes, and nose easier. If you have not already installed dlib, you can do so with:

pip install dlib

The landmark detector file (shape_predictor_68_face_landmarks.dat) can be downloaded from the dlib website.

The first step is to identify the region around the mouth.


We have identified that the outer mouth region lies between points 48 and 59. To establish the ratio of the mouth, we need the distances between the top and bottom of the lip at several positions, and the distance between the left and right corners of the mouth.

To compute these distances, we can use the Euclidean distance between landmark points.
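For two landmark points p1 = (x1, y1) and p2 = (x2, y2), the Euclidean distance is sqrt((x2 - x1)² + (y2 - y1)²). Here is a minimal illustration using the same scipy helper the code below relies on (the coordinates are arbitrary, not real landmarks):

```python
from scipy.spatial import distance as dist

# distance between two hypothetical lip-corner coordinates (x, y)
p1 = (48, 60)
p2 = (51, 64)
print(dist.euclidean(p1, p2))  # 5.0
```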

Now, with these eight points, we can successfully isolate the mouth region. To avoid confusion while programming, once the mouth landmarks are sliced out of the full 68-point array they are re-indexed from 0 rather than from 48.

We will now load the libraries and write the function to isolate the mouth region. 

from imutils.video import VideoStream, FPS
from imutils import face_utils
import imutils
import numpy as np
import time
import dlib
import cv2
from scipy.spatial import distance as dist

# face detector and 68-point landmark predictor
landmark_detect = dlib.get_frontal_face_detector()
landmark_predict = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

# start and end indices of the mouth landmarks in the 68-point array
(smile_start, smile_end) = face_utils.FACIAL_LANDMARKS_IDXS["mouth"]

def detect_lips(lip):
    # vertical distances between the upper and lower lip
    corner_A = dist.euclidean(lip[3], lip[9])
    corner_B = dist.euclidean(lip[2], lip[10])
    corner_C = dist.euclidean(lip[4], lip[8])
    avg = (corner_A + corner_B + corner_C) / 3
    # horizontal distance between the corners of the lips
    corner_D = dist.euclidean(lip[0], lip[6])
    ratio = avg / corner_D
    return ratio
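As a quick sanity check, the lip-ratio logic can be exercised on synthetic coordinates (the points below are made up for illustration, not real landmarks), with the function repeated here so the snippet runs on its own:

```python
import numpy as np
from scipy.spatial import distance as dist

def detect_lips(lip):
    # average vertical opening of the mouth
    corner_A = dist.euclidean(lip[3], lip[9])
    corner_B = dist.euclidean(lip[2], lip[10])
    corner_C = dist.euclidean(lip[4], lip[8])
    avg = (corner_A + corner_B + corner_C) / 3
    # horizontal width between the corners of the lips
    corner_D = dist.euclidean(lip[0], lip[6])
    return avg / corner_D

# synthetic outer-lip points, indices 0-11 (corners at 0 and 6)
lip = np.array([
    (0, 5), (2, 4), (4, 4), (6, 4), (8, 4), (10, 4),   # left corner + upper lip
    (12, 5), (10, 6), (8, 7), (6, 7), (4, 7), (2, 6),  # right corner + lower lip
])
print(detect_lips(lip))  # 0.25
```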

Face Recognition

Next, we will open the web camera. Before we implement the smile detector, we need to detect the face in each frame. We will use dlib’s frontal face detector for this, and then use convexHull from OpenCV to draw a contour around the mouth.

webcam = VideoStream(src=0).start()
while True:
    # grab a frame from the webcam and convert it to grayscale
    window_frame = webcam.read()
    window_frame = imutils.resize(window_frame, width=450)
    gray = cv2.cvtColor(window_frame, cv2.COLOR_BGR2GRAY)
    anchor = landmark_detect(gray, 0)
    for box in anchor:
        smile_finder = landmark_predict(gray, box)
        smile_finder = face_utils.shape_to_np(smile_finder)
        smile = smile_finder[smile_start:smile_end]
        ratio = detect_lips(smile)
        # outline the mouth region
        smileHull = cv2.convexHull(smile)
        cv2.drawContours(window_frame, [smileHull], -1, (255, 0, 0), 1)

In the above code, we have used drawContours to draw a blue contour around the mouth (OpenCV uses BGR ordering, so (255, 0, 0) is blue). Once this is done, we just need to auto-capture the image.

Selfie Capture

We will wait for the smile to be sustained for a number of consecutive frames (10 in the code below) before the selfie is captured, so that a momentary flicker does not trigger it. The images are saved in the same folder as the file you are running.


# count and tot are initialised once, before the while loop
count = 0
tot = 0

        # inside the for loop, after computing the ratio:
        if ratio <= .2 or ratio > .25:
            count = count + 1
            if count >= 10:
                tot = tot + 1
                frame2 = window_frame.copy()
                save_img = "selfie{}.png".format(tot)
                cv2.imwrite(save_img, frame2)
                print("{} captured".format(save_img))
                count = 0
        else:
            count = 0
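For readability, the trigger condition can also be pulled out into a small helper. The thresholds below are the same empirical values used in the snippet above and will likely need tuning for your camera and lighting:

```python
def is_smiling(ratio, low=0.2, high=0.25):
    # A smile stretches the corners of the lips, so the mouth ratio
    # leaves the neutral band (low, high]; thresholds are empirical.
    return ratio <= low or ratio > high

print(is_smiling(0.15))  # True  (wide smile, low ratio)
print(is_smiling(0.22))  # False (neutral mouth)
```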

The last step is to show the frame on the screen and see the output. 

 cv2.imshow("Frame", window_frame)

We will also set q as the exit button for closing the window once the user is done. 

    key2 = cv2.waitKey(1) & 0xFF
    if key2 == ord('q'):
        break

All the captured images are stored in the same folder as your project. One of the captured pictures is given below.



In this implementation, we saw how simple it is to build an application that captures a selfie when you smile. It is a fun and easy project using OpenCV and dlib.


Copyright Analytics India Magazine Pvt Ltd
