The rapid advancement in AI, especially generative AI, presents a clear and present danger to digital identity verification, according to Srikanth Nadhamuni, who served as chief technology officer (CTO) of Aadhaar from 2009 to 2012. He is now the CEO of Bangalore-based incubator Khosla Labs, which he co-founded with Vinod Khosla.
“Deep fakes—synthetic media that convincingly imitate real human speech, behavior, and appearance—pose a significant threat to the trust mechanisms carefully constructed within identity systems over time. In this increasingly likely future scenario, where AI-generated impersonations create chaos and erode trust in the system, the need for a ‘proof-of-personhood’ verification capability likely using a person’s biometrics becomes paramount,” the tech guru said in a LinkedIn post titled ‘The Future of Digital Identity Verification: In the era of AI Deep Fakes.’
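At its core, the biometric proof-of-personhood check Nadhamuni describes reduces to comparing a live biometric capture against an enrolled template. The following is a minimal, hypothetical sketch of that idea, assuming feature vectors (embeddings) have already been extracted from a face or iris scan; the function names, the toy vectors, and the 0.85 similarity threshold are all illustrative, not drawn from any actual Aadhaar or Khosla Labs system:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_personhood(live_embedding, enrolled_embedding, threshold=0.85):
    """Accept the live capture only if it closely matches the enrolled template.

    threshold is a hypothetical tuning parameter: higher values reject more
    impostors (and deepfakes) but also more genuine users.
    """
    return cosine_similarity(live_embedding, enrolled_embedding) >= threshold

# Toy embeddings standing in for extracted biometric features
enrolled = [0.10, 0.90, 0.30]   # stored at enrollment
genuine  = [0.12, 0.88, 0.31]   # a real live capture, close to enrolled
spoof    = [0.90, 0.10, 0.05]   # an impersonation attempt, far from enrolled

print(verify_personhood(genuine, enrolled))  # True
print(verify_personhood(spoof, enrolled))    # False
```

A production system would add liveness detection (e.g. challenge-response prompts) on top of the match, since a deepfake can reproduce appearance but struggles to respond to unpredictable real-time challenges.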
Generative AI is also giving disinformation a whole new dimension. Text-to-image models like Stable Diffusion, Midjourney and DALL-E 2 can generate hyper-realistic images that can easily be mistaken for genuine ones. This technology has opened up possibilities for generating deceptive visual content, further blurring the line between reality and falsehood.
Even though the Indian government has said it won’t regulate AI, it has confirmed that the upcoming Digital India Act (DIA) will have provisions to deal with AI-generated disinformation. “We are not going to regulate AI but we will create guardrails. There will be no separate legislation but a part of DIA will address threats related to high-risk AI,” Union Minister Rajeev Chandrasekhar said.
However, since the draft has not yet been released, it remains unclear how the Act will handle the threat generative AI poses to digital identity verification.