How Google, Adobe, And IBM Are Helping Newsrooms Fight Fake Images

Social media offers journalists a wealth of intelligence as they scout for stories, but harnessing its potential is a challenge given how easy it is to misrepresent information online. With false facts and narratives woven through these platforms, filtering out accurate information becomes an arduous task.

And it is not just text. Fuelled by meme culture, the images in our newsfeeds are often fake. In India, which has the second-highest number of internet users after China, these problems are magnified.


Although tools exist today to trace the origins of images found online, they are outdated and unreliable. Journalists typically run a reverse image search to check whether an image is old, but this is not an accurate way of catching fakes: if an image has been manipulated, the journalist first needs to identify the manipulation before the original can be traced.
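To see why manipulation frustrates a naive reverse image search, it helps to know that such systems often rely on perceptual fingerprints. Below is a minimal sketch of one such fingerprint, an "average hash": each bit records whether a pixel is brighter than the image's mean, so the hash survives re-compression but flips bits where pixels are edited. The 4x4 grids and function names are invented for illustration; real systems hash larger, resized images and use more robust variants.

```python
# A toy "average hash" (aHash) perceptual fingerprint, sketched in pure
# Python. Real reverse-image-search pipelines resize to a small grid (e.g.
# 8x8) and often use DCT-based hashes instead; this is only the core idea.

def average_hash(pixels):
    """Hash a grayscale image (list of rows of 0-255 ints): each bit says
    whether that pixel is brighter than the image's overall mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits; a small distance suggests a near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

original = [
    [200, 200, 50, 50],
    [200, 200, 50, 50],
    [ 50,  50, 50, 50],
    [ 50,  50, 50, 50],
]
# The same picture after a local edit: one bright pixel darkened.
edited = [row[:] for row in original]
edited[0][0] = 30

distance = hamming(average_hash(original), average_hash(edited))
print(distance)  # a small non-zero distance (here 1) flags a near-duplicate
```

An unedited re-upload hashes to distance 0, a lightly edited copy to a small distance, and an unrelated image to a large one; the hard part, which the tools below tackle, is deciding whether a small distance means harmless re-compression or a deliberate manipulation.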

In this scenario, global tech giants like Google, Adobe and IBM are stepping in to fill this vacuum by developing AI-powered tools and applications that help spot fake images.

Google’s Partnership With Storyful

As part of the Google News Initiative, the company partnered with Storyful, an Ireland-based startup, to analyse content across digital platforms with the objective of identifying dated, inaccurate or modified images. The approach helps make better sense of developments on social media and filter out fake information.

Storyful’s Source app uses Google’s AI technology to surface a public image’s history, letting users see whether it has been manipulated. This improves journalists’ ability to verify an image’s authenticity, which explains why ‘130 people from 17 countries’ – including India – have been using the app to fight misinformation.

Adobe Research & UC Berkeley

Recognising the ethical implications of its technology in a world where photo manipulation has become ubiquitous, Adobe collaborated with researchers from UC Berkeley to develop an AI-based method for detecting image edits made with Photoshop’s popular Face Aware Liquify feature.

The feature, often used to exaggerate facial expressions, automatically detects facial features in a picture that can be adjusted.

The team trained a model on fake images generated with Face Aware Liquify and showed that it outperforms humans at spotting them. The model can also predict where in the image the edits were made and, in some cases, even help recover the original image.
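The idea of "predicting where the edits were made" can be pictured with a toy difference map. In the sketch below, both the original and the edited image are in hand, so the edited region can be computed directly; the point of Adobe's model is that it learns to predict such a map (and the warping field needed to undo it) from the fake alone. All data and names here are made up for illustration.

```python
# A toy edit-localisation map: mark pixels that differ between an original
# and an edited grayscale image. This stands in for the heatmap-style output
# a trained detector produces; the real model needs no original to compare.

def edit_map(original, edited, threshold=10):
    """Return a binary mask: 1 where pixels differ by more than `threshold`."""
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(original, edited)
    ]

original = [
    [120, 120, 120, 120],
    [120, 120, 120, 120],
    [120, 120, 120, 120],
]
edited = [row[:] for row in original]
edited[1][1], edited[1][2] = 90, 95   # a small liquify-style local change

for row in edit_map(original, edited):
    print(row)
# only the edited pixels light up:
# [0, 0, 0, 0]
# [0, 1, 1, 0]
# [0, 0, 0, 0]
```

Once the edited region and the direction of the warp are known, inverting the warp is what lets the method "help recover the original image".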

While still in its infancy, the collaboration is a step towards detecting fake images and undoing facial warping.

MIT & IBM Watson AI Lab

Researchers from MIT and IBM trained an AI system to generate images from scratch and then intelligently edit objects within them. While this offers valuable insight into how neural networks operate, it could also be developed into a tool for detecting fake images.

Called GANPaint Studio, the tool lets you add objects to a scene without drawing them yourself – and erase existing objects just as easily.

At its core is a neural network that can produce its own images of a given category. These images can then be modified with semantic brushes that produce – or erase – objects such as chairs or domes by switching the corresponding units inside the network on or off.
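The "semantic brush" mechanism can be sketched conceptually: certain units in the generator's intermediate layers correlate with object classes, so zeroing (or boosting) those units over a region erases (or paints) the object there. The unit names, the 1-D "feature map", and the trivial renderer below are all invented for illustration; the real tool manipulates activations inside a deep convolutional GAN.

```python
# A conceptual sketch of unit-level GAN editing. Three hypothetical units
# carry activations across 6 spatial positions; "erasing" a concept means
# zeroing its unit in the brushed region, after which the renderer (a
# stand-in for the rest of the generator) lets the background fill in.

feature_maps = {
    "tree_unit": [0.0, 0.9, 0.8, 0.0, 0.0, 0.0],
    "dome_unit": [0.0, 0.0, 0.0, 0.7, 0.9, 0.0],
    "sky_unit":  [0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
}

def erase(feature_maps, unit, region):
    """The 'erase' brush: zero one unit's activations over a region."""
    maps = {k: v[:] for k, v in feature_maps.items()}
    for pos in region:
        maps[unit][pos] = 0.0
    return maps

def render(feature_maps):
    """Stand-in for the downstream generator layers: at each position,
    show the concept whose unit fires most strongly."""
    units = list(feature_maps)
    width = len(feature_maps[units[0]])
    return [
        max(units, key=lambda u: feature_maps[u][pos]).split("_")[0]
        for pos in range(width)
    ]

print(render(feature_maps))
# brushing out the dome at positions 3-4 lets the sky fill in:
print(render(erase(feature_maps, "dome_unit", [3, 4])))
```

Note that erasing the dome does not leave a hole – the weaker background unit takes over, which is a toy version of how the real generator plausibly fills in the scene.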

While this enables the creation of fake images, it can also help people learn to spot them.

As they built the tool, the makers discovered that the system had learned the relationships between objects: it knew, for instance, that fruit cannot float in the sky and that clouds do not belong on a table.

Because the tool works by manipulating individual units inside a GAN, it exposes the internal representations behind decisions like those above. This could give researchers a clearer picture of how neural networks learn context.

Although it is still a work in progress, the MIT-IBM team hopes the tool could one day edit video clips – something that could be especially useful for filmmakers. If an important item were left out of a scene, for instance, they could use AI to insert it later.

Anu Thomas
Anu is a writer who stews in existential angst and actively seeks what’s broken. Lover of avant-garde films and BoJack Horseman fan theories, she has previously worked for Economic Times.
