Intel’s Research Scientist Advocates and Hunts for Safety in AI

Along with her team, Ilke Demir aspires to find those responsibility pillars.

Ilke Demir, a senior staff research scientist at Intel Studios, has a rather offbeat wish: she wants the hype surrounding large language models to be no greater than the attention paid to their ethics.

In an exclusive interview with AIM, she commented on the state of affairs: “Every day there’s new information that the training set includes copyrighted images, or something that is claimed to be written by a large language model is actually not, or vice versa. Such things happen because the ethical aspects are left out of the process. I’m trying to find those responsibility pillars.”

This call for responsible technology is at odds with recent events in the industry. Two weeks ago, software giant Microsoft laid off its entire ethics and society team. And this is not something new. In late 2020, Google fired Timnit Gebru, co-lead of its ethical AI team. The internet giant has made several efforts to stabilise the department, but chaos still reigns supreme. A few months after Gebru’s exit, her co-lead, Margaret ‘Meg’ Mitchell, was also shown the door.

In September 2022, Meta disbanded its Responsible Innovation (RI) team; however, the company has been taking baby steps towards creating responsible services.

Intent > Technology

One of the questions Demir and her team are trying to answer: can we eliminate the impersonation aspect of deepfakes by forcing the model to create someone who doesn’t exist at all, so that no real person is being impersonated?

Deepfakes have been problematic historically, and no concrete solution has been found. “Intent overpowers technology,” Demir said. A 2019 report showed that more than 95% of deepfakes are used for adult content. If the misuse is already that extreme, then anything good someone tries to do in that space is only a very small part, she added.

She suggested that, first, the community should solve the problem of detecting deepfakes so people can make better decisions about what to believe. Last year, Intel introduced a real-time deepfake detector that analyses ‘blood flow’ in video pixels and returns results in milliseconds with 96% accuracy.
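
Intel has not published its detector’s implementation here; as a rough, hypothetical illustration of the underlying idea, the sketch below (using OpenCV and NumPy, with a generic face detector) extracts a crude remote-photoplethysmography signal, the subtle periodic colour change caused by blood flow, from a face region and checks whether its dominant frequency sits in a plausible heart-rate band. Function names and thresholds are assumptions for illustration; a production detector is far more sophisticated.

```python
# Hypothetical sketch: estimate a blood-flow (rPPG) signal from a face video
# and check whether it looks physiologically plausible. Not Intel's detector.
import cv2
import numpy as np

def rppg_plausibility(video_path, fps=30.0):
    cap = cv2.VideoCapture(video_path)
    face_det = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    greens = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_det.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        roi = frame[y:y + h, x:x + w]
        greens.append(roi[:, :, 1].mean())    # mean green channel ~ blood volume
    cap.release()
    if len(greens) < fps * 2:
        return None                           # not enough signal to judge
    signal = np.asarray(greens) - np.mean(greens)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    # Real faces show a dominant pulse roughly between 0.7-3 Hz (42-180 bpm);
    # synthetic faces often lack this coherent periodicity.
    return 0.7 <= peak <= 3.0
```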

Citing the example of the Zelenskyy video telling Ukrainian troops to surrender, Demir said deepfakes target vulnerable populations. People in a war zone would watch such a video without scrutinising the resolution, body posture, and other tell-tale signs.

She suggested that an indicator of what percentage of a video is fake should become a standard part of the industry’s tooling. Detection methods should be applied especially in time-sensitive cases, such as elections, and on social media platforms for videos that go viral. “So that people can make their own decision, share it or leave it at their own risk. I think the more we push detection, the more the intent behind deepfakes will neutralise,” she said, hopeful.

In the longer term, Demir wants all created content to be accompanied by information about who created it, how it was created, which tool was used, what the intent was, and whether it was made with consent. “We want to embed the data itself so that when people consume it, they will look at the data and identify if it is a trusted source,” she said.
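
Industry efforts such as signed provenance manifests already point in this direction. The hypothetical sketch below shows the general shape of the idea rather than any specific standard’s schema: a creator attaches a small, signed record of who made a piece of content, with which tool, with what intent and consent, and a consumer verifies that the record and the content have not been tampered with. The key handling and field names are assumptions for illustration.

```python
# Hypothetical sketch of content provenance: sign a small manifest describing
# who created a file, how, and with what consent, then verify it on consumption.
import hashlib, hmac, json

SECRET_KEY = b"creator-signing-key"   # stand-in for a real private key / PKI

def make_manifest(content: bytes, creator: str, tool: str, intent: str,
                  consent: bool) -> dict:
    manifest = {
        "creator": creator,
        "tool": tool,
        "intent": intent,
        "consent": consent,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

media = b"...raw media bytes..."
m = make_manifest(media, creator="studio-a", tool="render-v2",
                  intent="advertisement", consent=True)
assert verify(media, m)
```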

Protecting privacy via AI

We have many social media pictures we don’t want to be in, but people keep uploading them without our consent. Then there are automatic face-recognition and tagging algorithms that associate your face with your name, and crawlers scraping your face from everywhere. “Even if you untag yourself on a platform, your name is not visible there, but your name is still associated with your face,” Demir explained. “Our faces are like digital passports; we need to have control over them,” she added.

To stop this, Demir and her colleagues have developed a system called ‘My Face My Choice’. Elaborating on the method, she said, “If you don’t want to appear in a photo, your face is swapped with a quantifiably similar deepfake, and you don’t appear in the photo anymore. The photo looks very normal and natural, but you’re not there anymore.”
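
The actual system generates a realistic replacement face for each non-consenting person; the hypothetical sketch below only illustrates the surrounding pipeline, detect faces, leave consenting ones untouched, and obscure the rest, with pixelation standing in for the generative face swap described by Demir. Function names and the consent format are assumptions for illustration.

```python
# Hypothetical sketch of a consent-aware photo pipeline. Pixelation stands in
# for the deepfake replacement used by 'My Face My Choice'.
import cv2

def _overlaps(a, b, thresh=0.5):
    """Return True if detected face `a` mostly overlaps consented box `b`."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return (ix * iy) / float(aw * ah) > thresh

def anonymise_photo(image_path, consented_boxes, out_path):
    """consented_boxes: list of (x, y, w, h) faces allowed to stay visible."""
    img = cv2.imread(image_path)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        if any(_overlaps((x, y, w, h), box) for box in consented_boxes):
            continue                      # this person agreed to appear
        face = img[y:y + h, x:x + w]
        # Stand-in for generating a natural-looking but non-identifying face:
        small = cv2.resize(face, (8, 8), interpolation=cv2.INTER_LINEAR)
        img[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                           interpolation=cv2.INTER_NEAREST)
    cv2.imwrite(out_path, img)
```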

Lewis Griffin, a professor at University College London, said the tool is significant as it could have a much bigger positive impact on online privacy. However, there are several technical hurdles around security and storage to overcome before it can be deployed on large networks. “Also, it is unclear whether there would be enough demand from social media users who want their face obscured to strangers,” he added.

Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.