One of the most talked-about recent advances in technology is the creation of hyper-realistic digital characters with the help of AI. Powerful as it is, AI character generation has mostly made headlines for the wrong reasons.
Synthetic media, or deepfakes, were recently used to manipulate personal data and feed it into facial recognition systems, costing the Chinese government as much as $76 million. And this is just one example of the widespread use of deepfake technology for dangerous activities.
Researchers at the MIT Media Lab, together with collaborators at the University of California at Santa Barbara and Osaka University, have set out to change that narrative. In an attempt to cast AI character generation in a more positive light, the researchers have built an open-source, easy-to-use character generation pipeline. The pipeline combines AI models for motion, facial gestures, and voice, enabling the creation of a variety of audio and video outputs.
At the core of such systems are generative adversarial networks, or GANs: pairs of neural networks that compete against each other, one generating candidate outputs while the other tries to tell generated samples from real ones. This competition is what makes GANs effective at creating photorealistic images, cloning voices, and animating faces.
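To make the "two competing networks" idea concrete, here is a minimal sketch of the adversarial objective on toy 1-D data. The generator and discriminator are each reduced to a single affine layer standing in for real networks, and the parameter values are illustrative placeholders, not anything from the MIT pipeline; only the shape of the two losses is the point.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def generator(noise, w, b):
    # One affine layer standing in for a deep generator network.
    return [w * z + b for z in noise]

def discriminator(xs, v, c):
    # Logistic score: estimated probability that each sample is real.
    return [sigmoid(v * x + c) for x in xs]

# Toy "real" data: samples from a Gaussian centered at 4.
real = [random.gauss(4.0, 1.0) for _ in range(64)]
noise = [random.gauss(0.0, 1.0) for _ in range(64)]

fake = generator(noise, 1.0, 0.0)        # untrained generator: samples near 0
d_real = discriminator(real, 1.0, -2.0)  # discriminator separating near x = 2
d_fake = discriminator(fake, 1.0, -2.0)

# The discriminator minimizes this loss; the generator fights back by
# producing fakes the discriminator scores as real, minimizing g_loss.
d_loss = -sum(math.log(p) for p in d_real) / len(d_real) \
         - sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
g_loss = -sum(math.log(p) for p in d_fake) / len(d_fake)
```

Training alternates gradient steps on these two losses until the discriminator can no longer tell real from fake, at which point the generator's output has become convincingly realistic.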
But what is to stop this open-source model from being put to negative use? The researchers have made the pipeline's usage traceable: it marks its output, the AI-generated characters, with a traceable, human-readable watermark that distinguishes it from original, authentic video content. The watermark both reveals how a piece of content was generated and helps prevent dangerous use.
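The article does not detail how the pipeline's watermark is implemented, so the following is only a sketch of the general idea of a recoverable provenance mark: a short ASCII tag hidden in the least significant bits of frame pixels, so the image looks unchanged while the mark survives for later inspection. The function names and the tag string are hypothetical.

```python
def embed_watermark(pixels, message):
    """Hide an ASCII `message` in the least significant bits of `pixels`."""
    data = message.encode("ascii") + b"\x00"  # null byte marks the end
    bits = []
    for byte in data:
        bits.extend((byte >> i) & 1 for i in range(8))
    if len(bits) > len(pixels):
        raise ValueError("frame too small for this message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels):
    """Recover the embedded ASCII message, stopping at the null byte."""
    data = bytearray()
    for start in range(0, len(pixels) - 7, 8):
        byte = sum((pixels[start + i] & 1) << i for i in range(8))
        if byte == 0:
            break
        data.append(byte)
    return data.decode("ascii")

frame = [128] * 512  # flat gray frame standing in for real video pixels
tag = "AI-GENERATED: character pipeline"  # hypothetical provenance tag
marked = embed_watermark(frame, tag)
```

Because each pixel changes by at most one intensity level, the mark is imperceptible to viewers but trivially recoverable by tools, which is the property a traceable watermark needs. A production scheme would be more robust to compression and cropping than this toy version.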
Jeremy Bailenson, Founding Director of the Stanford Virtual Human Interaction Lab, told MIT News that the research paper does an excellent job of thought leadership, mapping the space of what is actually possible with AI-generated characters. The technology can be helpful in domains such as education, healthcare, and interpersonal relationships. It will also be refreshing to have a roadmap for avoiding unethical uses of AI-generated characters and for addressing concerns related to privacy and misinterpretation.
The researchers’ ultimate mission is to demonstrate positive uses of AI character generation. Interesting examples put forward by the team include reviving Albert Einstein to teach a physics class, talking through a career change with one’s older self, and anonymizing people while preserving facial communication. The pipeline can anonymize a person’s face in videos while keeping their expressions and non-verbal cues intact, which could be useful during sensitive conversations and therapy.
Researchers used the technology to create a synthetic version of Johann Sebastian Bach, who “conversed” with cellist Yo-Yo Ma during an MIT class. Students also used it to animate a historical Chinese painting.
To inspire professors, teachers, students, and healthcare professionals, the researchers have made the pipeline open source. AI-generated characters could also serve as therapy aids for people with social anxiety.
Writing in the online journal Nature Machine Intelligence, the researchers argue that as more healthcare workers, therapists, students, and educators adopt tools like these, the result could be improved health and wellbeing for all, along with more widely available personalized education.
AI-generated characters have earned a bad reputation through their association with deepfake technology. An open-source pipeline like this, however, offers a refreshing look at AI's potential for creative expression.