With security regarded as a matter of grave concern in every organisation, biometric authentication systems are among the most reliable ways to verify a person's identity. In this article, we briefly cover how Aditya Sharma reconstructed fingerprints using convolutional autoencoders.
Biometrics are usually classified as:
- Physiological and
- Behavioural
When it comes to physiological biometrics, fingerprints and DNA are considered the crucial elements, which experts claim cannot be copied. The reason is that finger and toe prints are unique to each individual, and they never change over a person's lifetime.
Fingerprints may look complicated, but they follow certain patterns, which can be classified into arches, loops and whorls. These patterns in turn contain ridge endings and ridge bifurcations, which are known as minutiae.
Minutiae are considered the most distinctive features used for fingerprint matching. It was earlier believed that minutiae did not carry enough information to reconstruct the original fingerprint image. However, reconstruction is now possible with the use of a convolutional autoencoder.
A Brief Intro To Convolutional Autoencoder
Before jumping into convolutional autoencoders, one should be aware of the terms convolutional and autoencoder. For those who don't know, a convolutional neural network (CNN) is a neural network that consists of one or more convolutional layers and is used for image processing, classification and segmentation. Moving on to the autoencoder, it is a neural network whose input and output remain the same. It works by compressing the input into a latent-space representation and then reconstructing the output from that representation.
Combining the two terms gives the convolutional autoencoder. The convolution operator filters an input signal to extract certain parts of its content. Convolutional autoencoders encode the input into a set of simple signals and then reconstruct the input from them.
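The idea above can be sketched as a small Keras model: convolution and pooling layers compress the image into a latent representation, and upsampling layers reconstruct it. This is a minimal illustrative sketch, not the author's exact architecture; the layer counts and filter sizes are assumptions.

```python
# Minimal convolutional autoencoder sketch in Keras.
# Layer sizes and filter counts are illustrative assumptions.
from tensorflow.keras import layers, models

def build_autoencoder(input_shape=(224, 224, 1)):
    inp = layers.Input(shape=input_shape)
    # Encoder: compress the image into a smaller latent representation
    x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(inp)
    x = layers.MaxPooling2D((2, 2), padding='same')(x)
    x = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(x)
    encoded = layers.MaxPooling2D((2, 2), padding='same')(x)
    # Decoder: reconstruct the image from the latent representation
    x = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(encoded)
    x = layers.UpSampling2D((2, 2))(x)
    x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(x)
    x = layers.UpSampling2D((2, 2))(x)
    decoded = layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
    return models.Model(inp, decoded)

model = build_autoencoder()
```

Because every layer uses `padding='same'` and the two pooling steps are mirrored by two upsampling steps, the output has the same 224 x 224 x 1 shape as the input, which is exactly what an autoencoder requires.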
Overview Of The Procedure
To begin with, one needs to take a look at the fingerprint dataset: the kind of images it contains, how to read the images, how many images to create, and how to process them before running them through the model.
One can use the FVC2002 fingerprint dataset to train the network. To gauge the model's efficiency, one can test it on two different fingerprint sensor datasets, namely the Secugen and Lumidigm sensors.
FVC2002 is a Fingerprint Verification Competition dataset, which comprises fingerprints from four different sensors, namely a low-cost optical sensor, a low-cost capacitive sensor, an optical sensor and a synthetic generator, each with its own image size.
One can use the NumPy array attribute .shape to inspect the images in the dataset and their dimensions. The images are likely to be in grayscale, and it is imperative to preprocess them before feeding them into the model.
The images are likely to be of size 224 x 224 x 1, which is the shape fed as input to the network.
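The inspection and preprocessing steps above can be sketched with NumPy alone. The dummy batch below stands in for images read from the dataset, since the actual file-reading code depends on how the dataset is stored; the scaling to [0, 1] and the added channel axis are standard assumptions for feeding grayscale images to a Keras model.

```python
# Sketch: inspect grayscale fingerprint images with .shape and
# preprocess them into the (N, 224, 224, 1) float form the network expects.
import numpy as np

def preprocess(images):
    """Scale uint8 grayscale images to [0, 1] and add a channel axis."""
    arr = np.asarray(images, dtype=np.float32) / 255.0
    return arr.reshape(-1, 224, 224, 1)

# Dummy batch standing in for images read from the dataset:
batch = np.random.randint(0, 256, size=(4, 224, 224), dtype=np.uint8)
print(batch.shape)   # (4, 224, 224) -- raw grayscale images
x = preprocess(batch)
print(x.shape)       # (4, 224, 224, 1) -- ready to feed to the network
```

The extra trailing dimension is the single grayscale channel; Keras convolutional layers expect it even when there is only one channel.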
A batch size of 128 or higher can be used, depending on the system one uses to train the model. The system does play a role in determining the learning parameters and affects prediction accuracy.
Moving forward, the autoencoder is divided into two parts, the encoder and the decoder, which together form the model. The model is compiled with the RMSProp optimiser (RMSProp uses the magnitude of recent gradients to normalise the gradients). After creating the model, it is trained with the Keras fit function, and the images are then reconstructed using the Keras predict function.
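The compile, fit and predict steps described above look roughly as follows. This is a hedged sketch: the single-layer model and the random training array are placeholders standing in for the full autoencoder and the real fingerprint data, and the loss function is an assumption (a pixel-wise reconstruction loss such as mean squared error is typical for autoencoders).

```python
# Sketch of the compile / train / reconstruct workflow in Keras.
# The tiny model and random data below are placeholders for the
# full autoencoder and the preprocessed fingerprint images.
import numpy as np
from tensorflow.keras import layers, models, optimizers

inp = layers.Input(shape=(224, 224, 1))
out = layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')(inp)
model = models.Model(inp, out)  # stand-in for the encoder + decoder model

# Compile with the RMSProp optimiser and a reconstruction loss
model.compile(optimizer=optimizers.RMSprop(), loss='mean_squared_error')

# Train: an autoencoder uses the same array as input and target
x_train = np.random.rand(8, 224, 224, 1).astype('float32')
model.fit(x_train, x_train, epochs=1, batch_size=4, verbose=0)

# Reconstruct images with predict
reconstructed = model.predict(x_train, verbose=0)
```

Note that `fit` receives `x_train` as both input and target, which is what makes the model learn to reproduce its own input.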
As mentioned earlier in the article, it is important to test the model on the two sensor datasets, i.e. Secugen and Lumidigm, which will give the final verdict on how well the model reconstructs the images.
To know more about autoencoders, check this link.