IDfy CEO Ashok Hariharan reveals how their deep fake recognition tool works

We can train a model to accept a new type of card in 72 hours. So, if we have the data, we can launch in a new country in 72 hours.

Deepfakes, facial recognition fraud and identity verification fraud are among the biggest challenges of the 21st century. As technology advances, fraudsters keep finding more innovative ways to trick verification systems, and the defences are not evolving at the pace of the crimes. IDfy is India’s earliest and most widely used AI-based identity verification firm. Analytics India Magazine got in touch with its Founder & CEO, Ashok Hariharan, to discuss the company’s proprietary AI stack that can detect 2D, 3D and photo deepfakes, among various other solutions.

AIM: What was the pain point that IDfy set out to solve?

We started working on this 11 years ago; we are a 2011-vintage startup. At that time, virtual transactions were increasing, and we could see India was heading in the direction of the US, where a lot of the interactions people have were going to be virtual. As virtual transactions increase, how do you deal with risk? It was a broad thesis: we cannot digitise or increase the speed of transactions unless we add layers of digital authentication.

It was early when we started; most of the technologies we use today were either non-existent or existed only in universities. Back then, an agent would come to your house to collect your physical documents; you would make a true copy, hand it to him, and he would probably pass it on to some street vendor. The chances of leakage and identity theft were significant. If you look at the UPI charts today, around Rs 200 crore a day is being scammed by fraudsters. There are far easier and more accurate ways of detecting some of these frauds. That is how we looked at the world. We got into the KYC solutions game from 2017 onwards; we are the largest video KYC implementers in the country.

AIM: Please explain the tech stack behind your AI-based verification solutions.

The information is captured either through our platform or on our clients’ websites/apps using our APIs. The journey includes capturing documents/pictures, tamper detection, data extraction, and verification against public sources. This is done on our media servers; we have built our own queueing system and a platform that can handle massive scale. We use Elixir, Ruby on Rails, Go, and React.js for different steps of the user journey on live video. This allows us to work with very low bandwidth requirements; for example, we can work on a 70 kbps connection. We change the frame rate based on the use case: some use cases may not need a video conversation or a high-bandwidth video connection, so we reduce the bandwidth requirement on the video but increase it on capture, because you need to capture the documents correctly. So we play around a lot to figure out the optimal way to get the best outcome.
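For illustration, here is a minimal sketch of what such use-case-driven capture tuning could look like. The use cases, bitrates and frame rates below are invented assumptions for the example, not IDfy’s actual configuration.

```python
# Illustrative sketch: choosing capture settings per use case.
# The use cases, bitrates, and frame rates are assumed values for
# demonstration only, not IDfy's actual configuration.
from dataclasses import dataclass


@dataclass
class CaptureProfile:
    video_kbps: int          # target bitrate for the live video stream
    frame_rate: int          # frames per second for the conversation
    snapshot_quality: float  # JPEG quality (0-1) for document capture


# Hypothetical profiles: low-bandwidth journeys trade video smoothness
# for higher-quality document snapshots.
PROFILES = {
    "video_kyc_conversation": CaptureProfile(video_kbps=256, frame_rate=15, snapshot_quality=0.7),
    "document_capture_only": CaptureProfile(video_kbps=70, frame_rate=5, snapshot_quality=0.95),
}


def pick_profile(use_case: str, measured_kbps: int) -> CaptureProfile:
    """Pick a profile for the use case, degrading gracefully on slow links."""
    profile = PROFILES[use_case]
    if measured_kbps < profile.video_kbps:
        # Drop the frame rate rather than failing the journey outright.
        return CaptureProfile(
            video_kbps=measured_kbps,
            frame_rate=max(2, profile.frame_rate // 2),
            snapshot_quality=profile.snapshot_quality,
        )
    return profile


print(pick_profile("document_capture_only", measured_kbps=70))
```

The idea is simply that a journey which only needs document capture can run at a much lower video bitrate, spending the available bandwidth on snapshot quality instead.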

We have 65 APIs to identify and authenticate an individual or business using their documents, pictures, and location. These are machine learning models that look at 120-odd data points on a card to figure out whether it looks genuine. Our face recognition technology for liveness detection examines a single picture of a person to determine whether it is a live picture or a picture of a picture. This is based on machine learning models that we are the only ones in the world to have.

We use standard techniques for data crawling that look at public data sources. Additionally, we use our own models for court record checks. There are about 3500 courts whose data is available publicly. But the data is unstructured. Our models can confirm the details from those data sets. 
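As a rough illustration of turning an unstructured court-record snippet into checkable fields, here is a hedged sketch; the record text, field names and regular expressions are invented for the example, and real cause lists vary widely by court.

```python
# Illustrative sketch: pulling structured fields out of an unstructured
# court-record snippet. The record text and field layout are invented
# for this example.
import re

record = (
    "In the Court of ... Case No. 123/2022 ... "
    "Petitioner: Rahul Sharma vs Respondent: State of Maharashtra, "
    "Next hearing: 01-08-2022"
)

fields = {
    "case_no": re.search(r"Case No\.\s*([\d/]+)", record),
    "petitioner": re.search(r"Petitioner:\s*([A-Za-z .]+?)\s+vs", record),
    "next_hearing": re.search(r"Next hearing:\s*([\d-]+)", record),
}

structured = {k: (m.group(1).strip() if m else None) for k, m in fields.items()}
print(structured)
# {'case_no': '123/2022', 'petitioner': 'Rahul Sharma', 'next_hearing': '01-08-2022'}
```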

Our models also work on fuzzy logic. We understand nuances between languages. Sometimes in India, Hindi words are written using the English alphabet; our fuzzy phonetic match can recognise and interpret them. Similarly, our fuzzy logic can identify an address despite differences in how it is written across documents.
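A minimal sketch of what a fuzzy phonetic match over transliterated names could look like; the normalisation rules and threshold are simplified assumptions, not IDfy’s actual algorithm.

```python
# Illustrative sketch of a fuzzy phonetic match for names transliterated
# from Hindi into Latin script. The normalisation rules and threshold
# are simplified assumptions, not IDfy's actual algorithm.
from difflib import SequenceMatcher

# A few common transliteration variants collapsed to one spelling.
VARIANTS = [("sh", "s"), ("ee", "i"), ("oo", "u"), ("aa", "a"),
            ("w", "v"), ("ph", "f"), ("x", "ks")]


def normalise(name: str) -> str:
    s = name.lower().replace(".", "").replace(" ", "")
    for src, dst in VARIANTS:
        s = s.replace(src, dst)
    return s


def phonetic_match(a: str, b: str, threshold: float = 0.8) -> bool:
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold


print(phonetic_match("Lakshmi Narayan", "Laxmi Narayana"))  # True with these rules
```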

While the tech stack itself is important, it is more important to build it such that we can scale. We are doing millions of verifications a month. Plus, we do several authentications on a single individual. All this demands our platform be able to scale. 

So we have AI models developed to auto-scale our systems. Our scale-up and scale-down times when volume suddenly fluctuates are less than 10 seconds. We have put a lot of thought into an architecture built for scaling and security. We have vaulting systems and encryption at rest, in transit, and at source. When we collect a document or data, our systems do not see it, for security reasons. Our key handshakes, with key-based and key-rotation encryption, allow the customer to rotate the key at any time.
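As a generic illustration of customer-rotatable keys, here is a sketch using MultiFernet from the Python `cryptography` package; it shows the rotation pattern in general, not IDfy’s vaulting implementation.

```python
# Illustrative sketch of encryption with rotatable keys using MultiFernet
# from the `cryptography` package. This is a generic pattern, not IDfy's
# actual vaulting implementation.
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet(Fernet.generate_key())
document = b"PAN: ABCDE1234F"           # example payload, not real data

ciphertext = old_key.encrypt(document)  # stored encrypted at rest

# Customer rotates their key: the new key goes first, the old key is
# kept only so existing ciphertexts can still be read.
new_key = Fernet(Fernet.generate_key())
vault = MultiFernet([new_key, old_key])

# Re-encrypt existing ciphertext under the newest key.
rotated = vault.rotate(ciphertext)

assert vault.decrypt(rotated) == document
print("rotated and decrypted OK")
```

The point of the pattern is that stored data can be re-encrypted under a new key without the plaintext ever being handed back to the application code.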

AIM: Tell us about IDfy’s use cases

I call this our Trinity: employees, customers, enterprises. We help verify and onboard all three entities. For enterprises, we usually do SME authentication for use cases like onboarding merchants onto e-commerce platforms. When it comes to customers, we help verify customers for banks and fintechs; 90% of the verifications for digital wallets and gaming platforms are done by us. Employees, both full-time and part-time, are onboarded safely through us. E-commerce companies use our platform to onboard delivery personnel. Using our video platform, we are able to verify an aspiring delivery person in 10 minutes, where the entire process earlier took 4-5 days.

We complete a verification in four steps. The first step in any onboarding authentication is the capture of information. This includes photos of documents or a live video call where a screenshot is captured. This can also be done via a chat interface like WhatsApp or through an API stream. The next step is document validation: we run checks to make sure that the document or the picture hasn’t been tampered with. The third step is verifying the details against sources. We run the details through government and public databases to ensure they match. We also run a FaceMatch between the document and the live selfie to make sure that the person is who they claim to be. Lastly, we examine the information to spot anomalies in the data. For example, someone says he earns Rs 10 lakh a year but is an associate at a BPO centre at Infosys. We have data today to tell us that an associate at an Infosys BPO does not make that much, so we can spot that the data being shared does not match the profile. You can then combine the data shared with the address or employment data we have collected and do anomaly detection to figure out whether there is any risk of him defaulting or lying.
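The anomaly check in the last step can be as simple as comparing a declared value against a reference band. A minimal sketch, with invented salary bands standing in for the reference data:

```python
# Illustrative sketch of a declared-income vs. role anomaly check.
# The salary bands and roles are invented for demonstration; they are
# not IDfy's data or model.

# Hypothetical expected annual income bands (in INR lakh) by role.
EXPECTED_INCOME_LAKH = {
    ("bpo", "associate"): (2.0, 5.0),
    ("it_services", "senior_engineer"): (8.0, 20.0),
}


def income_anomaly(industry: str, role: str, declared_lakh: float) -> bool:
    band = EXPECTED_INCOME_LAKH.get((industry, role))
    if band is None:
        return False  # no reference data, cannot flag
    low, high = band
    return not (low <= declared_lakh <= high)


# The example from the interview: a BPO associate declaring Rs 10 lakh a year.
print(income_anomaly("bpo", "associate", declared_lakh=10.0))  # True -> flag for review
```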

AIM: How does IDfy overcome deepfakes in ID verification?

When we have control of the camera, the fakes cannot happen. A deepfake can be attempted when a person presents it on their laptop screen, but since we control the camera, we can detect that it is a 2D surface. The second layer is liveness detection, which checks whether the picture is of a live person and not a picture of a picture; your face needs to be a 3D surface. We have 200 data points in our model that detect such features. So even if you take a picture of a wax model, we will be able to detect it, because light does not reflect off its surface the way it does off real skin.
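Conceptually, liveness decisions of this kind combine several per-feature scores into a single verdict. A toy sketch, where the feature names, weights and threshold are assumptions rather than IDfy’s 200-point model:

```python
# Illustrative sketch: combining per-feature liveness scores into one
# decision. The feature names, weights, and threshold are assumptions
# for demonstration, not IDfy's model.
FEATURE_WEIGHTS = {
    "skin_specular_reflection": 0.4,  # light should scatter off skin, not glass or paper
    "screen_moire_pattern": 0.3,      # replayed screens show moire artefacts
    "depth_parallax": 0.3,            # a flat 2D surface has no parallax
}


def is_live(feature_scores: dict[str, float], threshold: float = 0.7) -> bool:
    """Each score is in [0, 1], where 1 means 'consistent with a live face'."""
    total = sum(FEATURE_WEIGHTS[name] * feature_scores.get(name, 0.0)
                for name in FEATURE_WEIGHTS)
    return total >= threshold


# A picture-of-a-picture would typically score low on parallax and reflection cues.
print(is_live({"skin_specular_reflection": 0.2,
               "screen_moire_pattern": 0.4,
               "depth_parallax": 0.1}))  # False
```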

AIM: Which sectors are most prone to fraud in India? How can the government and enterprises help ensure more safety?

I don’t think security is the first thing that anybody thinks about, unfortunately, in this country. Everybody gets funding for having cool front ends, but nobody thinks about security. So I don’t think much thought is being put into improving security today; there has to be a significant amount of intent from the regulators to push the agenda. Today, if data is leaked, the consequences need to move far beyond just a slap on the wrist.

I’m looking forward to the PDP (Personal Data Protection) bill coming. That will drive a massive shift in how you deal with personal data security.

AIM: You have recently raised investments in a Series D round; how does this shape your expansion plan?

We want to go much deeper into our products, and we need to get deeper into fraud. People figure out new ways to commit fraud every day; you launch a new technology, and people will figure out new ways to commit fraud with it. Fraud keeps evolving in business, so we want to go deeper into the nature of fraud: figuring out high-risk profiles, transaction monitoring. That is the direction we want to go in. We will also go international, towards Southeast Asia, Africa, the Middle East and South America. Developing markets are a key focus for us.

AIM: Tell us about your global expansion plans; how do you customise your services for different countries?

Internationally, the documents that need to be identified by the machine learning models add to the complexity. The nature of fraud is different in different countries as well. For example, the “African prince” fraud is no longer as big in the US as it is in India today. Likewise, there are the SMS and KYC scams that happen in India. We don’t customise our solutions; we expand. There are layers of configuration that differ by use case, but we do not configure for individual customers; we build extensions for new markets. For example, if you’re going to Southeast Asia, there are new cards that you need to parse, and they will go through our machine learning stack. Today, we can train a model to accept a new type of card in 72 hours. So, if we have the data, we can launch in a new country in 72 hours.
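For context, quickly supporting a new card type is typically a transfer-learning exercise: keep a pretrained backbone and retrain only the classification head on images of the new card. A hedged sketch of that general pattern follows; the dataset path, class count and hyperparameters are placeholders, and this is not IDfy’s pipeline.

```python
# Illustrative sketch: adapting a pretrained image classifier to a new
# card type by retraining only the final layer. Dataset path, class count,
# and hyperparameters are placeholders; this is not IDfy's pipeline.
import torch
from torch import nn
from torchvision import datasets, models, transforms

NUM_CARD_TYPES = 13  # hypothetical: 12 existing card types + 1 new one

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                              # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CARD_TYPES)   # new classification head

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
# Expects a folder per card type, e.g. data/new_market_cards/<card_type>/*.jpg
dataset = datasets.ImageFolder("data/new_market_cards", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                                       # short fine-tuning run
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

With the backbone frozen, only a small head is trained, which is what makes a turnaround measured in days plausible once labelled images of the new card are available.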
