A digital image verification software provider, Truepic, has raised $26
million in a Series B funding round. The round is led by M12, Microsoft’s
venture fund, and supported by Adobe, Sony Innovation Fund by IGV, Hearst
Ventures and individuals from Stone Point Capital. Founded in 2015, Truepic has raised $36 million to date.
Truepic’s patented secure camera technology distinguishes real media from deepfakes, ensuring that photos haven’t been manipulated between capture and delivery. Essentially, the startup certifies an image’s authenticity at the moment of capture rather than hunting for deepfakes across the web.
The technology does this by recording provenance data for each image, such as its origin, contents, and metadata. The startup also uses cryptography to protect images from tampering before they are shared.
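The general idea of cryptographically binding an image to its provenance metadata at capture can be sketched as follows. This is a minimal illustration, not Truepic’s actual implementation: it uses a symmetric HMAC as a stand-in for the asymmetric, hardware-backed signatures a real secure camera would use, and the key, function names, and metadata fields are all hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical device key; a real system would keep an asymmetric
# private key in secure hardware, never a shared secret like this.
DEVICE_KEY = b"example-device-key"

def sign_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Bind an image and its provenance metadata into a signed record."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    payload = digest + json.dumps(metadata, sort_keys=True)
    signature = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"image_sha256": digest, "metadata": metadata, "signature": signature}

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Check that neither the image nor its metadata changed since capture."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    payload = digest + json.dumps(record["metadata"], sort_keys=True)
    expected = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

Because the signature covers both the pixel data and the metadata, editing the image or altering the claimed time or location after capture causes verification to fail.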
“All images captured with the Vision app have verified metadata, are established as unedited, and have been confirmed as originals,” stated Truepic. In addition, the company’s software can verify the time and location of a given picture, an approach CEO Jeff McGregor calls ‘provenance-based media authentication’, to prove whether or not it has been manipulated.
Truepic also allows companies to integrate its camera technology into their own apps, collecting visual evidence from users that is cryptographically verified as original and unedited.
Truepic is an important counterweight to the uncanny valley, a problem only growing sharper in the era of deepfakes and synthetic media. For example, companies like accounting giant Ernst & Young are leveraging AI to create deepfakes of employees delivering pitches and presentations in different languages. Nvidia CEO Jensen Huang used a CGI deepfake of himself to deliver part of his keynote speech.
While these are relatively innocent uses of deepfakes for entertainment, there is no doubt that robots and synthetic visuals leave humans feeling unsettled. Japanese AI VTubers, for instance, are known to evoke the uncanny valley effect among viewers.
The uncanny valley hypothesis posits that humanoid objects, including robots and AI models, become unsettling when they appear almost, but not quite, human. Jaime Banks, associate professor at Texas Tech University, describes this feeling of strange familiarity as occurring when something is human-like but not quite human.
According to the Istituto Italiano di Tecnologia (IIT) in Genoa, a robot’s gaze can trick humans into thinking they are in a social interaction and slow their decision-making. Professor Agnieszka Wykowska, the study’s lead author, notes that gaze is an important social signal in interaction. In the study, the team asked 40 volunteers to play a video game of “chicken,” in which each player had to decide between letting a car drive straight toward another car or swerving to avoid a collision. A humanoid robot sat opposite each player, and between rounds the player had to look at the robot, which would sometimes look back.
Electroencephalography results showed that the human brain processed the robot’s gaze as a social signal, which made decision-making significantly slower. Similarly, another study (Ciechanowski, Przegalinska, Magnuski, & Gloor, 2019) found that chatbots that attempt to mimic human characteristics draw negative evaluations because of the uncanny valley effect.
Deepfakes are on the rise, but given the uncanny valley effect and the unethical nature of fabricated images, companies like Truepic are emerging just as fast.