
Here’s The Ultimate Shield against Unauthorised Image Manipulation

MIT has come up with a new AI tool designed to protect against image manipulation without proper authorisation.


From romantic poems to Salvador Dali-inspired images, generative AI can now do it all, and often so well that it is impossible to differentiate between AI- and human-generated artworks. Ever since the Turing Test set the standard for successful AI as mimicking humans so well that the machine becomes indistinguishable from a person, technology imitating humans has been a major topic of public debate. The community has long tried to distinguish text written by humans from text generated by AI, given the risk that the technology could be misused.

MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a solution for this.

MIT Has A Solution: PhotoGuard

Scientists from MIT CSAIL have created a new AI tool called “PhotoGuard”, designed specifically to stop models like DALL-E and Midjourney from manipulating images without proper authorisation.

PhotoGuard leverages “adversarial perturbations”: minuscule alterations in pixel values that are invisible to the human eye but are picked up by computer models. These perturbations disrupt an AI model’s ability to manipulate the image effectively. PhotoGuard uses two attack methods to generate them.

The “encoder” attack targets the AI model’s latent representation of the image, causing the model to perceive the image as random. Its goal is to disrupt the process by which the latent diffusion model (LDM) encodes the input image into a latent vector representation, which is then used to generate a new image. The researchers achieve this by solving an optimization problem using projected gradient descent (PGD). The resulting small, imperceptible perturbations added to the original image cause the LDM to generate an irrelevant or unrealistic image.
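A minimal PyTorch sketch of that encoder attack follows. This is a sketch under assumptions, not the paper’s implementation: the tiny convolutional network is a hypothetical stand-in for the LDM’s image encoder, and the all-zero target latent and 8/255 perturbation budget are assumed, conventional choices from the adversarial-examples literature.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the LDM's image encoder (e.g., a VAE encoder).
encoder = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 4, 3, stride=2, padding=1),
)

def encoder_attack(x, eps=8 / 255, step=1 / 255, iters=40):
    """PGD: nudge x so its latent matches a meaningless target, while the
    perturbation stays inside an L-infinity ball of radius eps."""
    target = torch.zeros_like(encoder(x))  # assumed "meaningless" target latent
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = torch.norm(encoder(x_adv) - target)  # distance to target latent
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - step * grad.sign()        # descend toward target
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into eps-ball
            x_adv = x_adv.clamp(0, 1)                 # keep a valid image
    return x_adv.detach()

# The result looks near-identical to the input but encodes to garbage.
immunised = encoder_attack(torch.rand(1, 3, 64, 64))
```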

The “diffusion” attack, on the other hand, defines a target image and optimizes the perturbations so that the final edited image closely resembles that target. This attack is more complex: it disturbs the diffusion process itself, targeting not just the encoder but the full pipeline, including text-prompt conditioning. The goal is to force the model to generate a specific target image (e.g., random noise or a grey image) by solving another optimization problem with PGD. Because it covers the whole pipeline, the attack neutralises not only edits to the immunised image but also the influence of the text prompt.
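The diffusion attack can be sketched with the same PGD loop, assuming `edit` is a differentiable stand-in for the full text-conditioned editing pipeline (encoder, diffusion steps, decoder); differentiating through a real pipeline is far more expensive, which is part of what makes this the harder attack. The grey target and the budget values are, again, illustrative assumptions.

```python
import torch

def diffusion_attack(x, edit, eps=8 / 255, step=1 / 255, iters=40):
    """PGD: optimize perturbations so that editing the immunised image
    yields a fixed target, here a uniform grey image."""
    target = torch.full_like(x, 0.5)                  # grey target image
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = ((edit(x_adv) - target) ** 2).mean()   # drive the edit to grey
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - step * grad.sign()        # descend toward target
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay in the eps-ball
            x_adv = x_adv.clamp(0, 1)                 # keep a valid image
    return x_adv.detach()

# Toy usage with a trivial differentiable "editor" in place of a real LDM:
immunised = diffusion_attack(torch.rand(1, 3, 64, 64), edit=lambda t: t)
```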

Hadi Salman, lead author of the paper and a PhD student at MIT, told AIM, “In essence, PhotoGuard’s mechanism of adversarial perturbations adds a layer of protection to images, making them immune to manipulation by diffusion models.” By repurposing these imperceptible modifications of pixels, PhotoGuard safeguards images from being tampered with by such models. Salman wrote the paper alongside MIT CSAIL graduate students and fellow lead authors Alaa Khaddaj and Guillaume Leclerc MS ‘18, as well as PhD student Andrew Ilyas ‘18, MEng ‘18.

For example, consider an image with multiple faces. You could mask the faces you don’t want modified and then prompt the model with “two men attending a wedding.” Upon submission, the system adjusts the image accordingly, creating a plausible depiction of two men at a wedding ceremony. Now consider safeguarding the image from such editing: adding perturbations to it before upload immunises it against modification, so the final output lacks realism compared to an edit of the original, non-immunised image.
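For readers who want to see the editing half of that workflow, here is a hedged sketch using Hugging Face’s diffusers inpainting pipeline. The checkpoint and file names are illustrative; note that in this API the white regions of the mask are the ones the model repaints, so faces to be preserved should be left black.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("group_photo.png").convert("RGB")   # hypothetical input
mask = Image.open("edit_mask.png").convert("L")        # white = repaint, black = keep

result = pipe(
    prompt="two men attending a wedding",
    image=image,
    mask_image=mask,
).images[0]
result.save("edited.png")
```

Run on a PhotoGuard-immunised image, the same call would produce a visibly degraded, unrealistic edit.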

“I would be sceptical of AI’s ability to supplant human creativity. I expect that in the long run AI will become just another (powerful) tool in the hands of designers to boost the productivity of individuals to articulate their thoughts better without technical barriers,” concluded Salman.

Decoding the Problem

The recent Senate discussion around AI regulation has turned the spotlight on the pressing issues of copyright and artist incentivisation. Senior executives from OpenAI, HuggingFace, and Meta, among others, have testified before the US Congress about the potential dangers of AI and suggested the creation of a new government agency to license large AI models, revoke permits for non-compliance, and set safety protocols.

The major impetus behind this plea for regulation stems from concerns over copyright infringement. It began when the artist community filed a lawsuit against Stability AI, Midjourney, and DeviantArt, the companies behind popular image generators, seeking compensation for these companies’ use of their art without credit.

AI-generated content has faced opposition from stock image companies like Shutterstock and Getty, as well as from artists who see it as a threat to their intellectual property. Eventually, though, most of them got on board through partnerships. Adobe’s Firefly is a generative image maker designed for “safe commercial use,” and Adobe offers IP indemnification to safeguard users from legal issues related to its output. NVIDIA’s Picasso service is likewise trained on licensed images from Getty Images and Shutterstock. Shutterstock also partnered with DALL-E creator OpenAI to provide training data, and now offers full indemnification to enterprise customers who use generative AI images on its platform, protecting them against potential legal claims over the images’ usage. Google, Microsoft, and OpenAI have also started watermarking AI-generated content with the aim of mitigating copyright issues.

Read more: Lessons from YouTube for Gen AI Copyright Mess

Shritama Saha

Shritama (she/her) is a technology journalist at AIM who is passionate about exploring the influence of AI on different domains, including fashion, healthcare, and banking.