Republic, News 18 and Others Break Fake AI News

A fake AI-generated photograph of an explosion near the Pentagon surfaced on the internet, and media houses circulated the hoax image on their channels

A fake AI-generated photograph of an explosion near the Pentagon in Washington DC surfaced on social media last night, briefly rattling the US stock market. Prominent Indian media houses, including Republic and News 18, unwittingly disseminated the image, propagating misinformation through their channels. Arlington police officials have dismissed the photo, calling it AI-generated. 

The event also highlights the dangers of the pay-to-verify system. A fake but verified account, ‘Bloomberg Feed’, had tweeted the story. Jumping on the bandwagon, dozens of social media accounts circulated the misinformation without verifying its authenticity. The account that released the photo has since been suspended from Twitter. 

The image shows a massive cloud of smoke billowing near a building, but there are no people in it who could confirm the source. While we still don’t know which AI tool was used to create the image, it bears some hallmarks of being AI-generated. For example, the columns on the building are of different sizes, and the fence blends into the sidewalk at certain points.

A Band-Aid Solution 

Stock image companies such as Getty Images have long used watermarks to protect their images. A watermark is often a logo or text overlay that serves as a distinctive marker on an image. The approach has gained renewed attention, and companies like Google are now venturing into this territory. 

Last month at Google I/O, the company introduced two new features for its image search to curb the spread of misinformation. The first, ‘About this image’, provides additional context: when the image was first indexed by Google, where it first appeared, and where else it is available online. The feature will show the original source and also contextualise an image with debunking evidence provided by fact checkers.

Google also announced that its own AI-generated images will carry metadata indicating they are AI-created. Other creators can label their images with the same markup, and Google’s blog post says Midjourney, Shutterstock, and others will roll it out in “the coming months”.
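Provenance metadata of this kind typically lives inside the image file itself. As a minimal sketch — assuming nothing about Google’s actual markup format, and with the ‘ExampleImageGen’ tag purely hypothetical — the standard-library snippet below parses a PNG’s tEXt chunks, the kind of place where a labelling tool could record the generator’s name:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Walk a PNG byte stream and collect its tEXt metadata entries."""
    assert data[:8] == PNG_SIG, "not a PNG file"
    pos, out = 8, {}
    while pos < len(data):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = body.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length
        if ctype == b"IEND":
            break
    return out

def make_chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble one PNG chunk with its CRC."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Build a minimal PNG carrying a hypothetical provenance tag.
demo = (PNG_SIG
        + make_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
        + make_chunk(b"tEXt", b"Software\x00ExampleImageGen v1")
        + make_chunk(b"IEND", b""))

print(png_text_chunks(demo))  # {'Software': 'ExampleImageGen v1'}
```

A tool that honours such labels would only need to read these chunks to flag an image as machine-generated — which is also why metadata alone is a weak defence: it is trivially stripped when an image is screenshotted or re-encoded.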

Typical AI Errors 

The Pentagon incident makes a stronger case for better ways to differentiate AI-generated content from facts, though companies are still figuring out how. Here is how you can identify AI-generated images. 

Firstly, look closely at the picture. Search for the highest-possible resolution of the image and then focus on the details. Zooming in can reveal inconsistencies and errors that one might have missed at first glance.

One can also look for the image source or carry out a reverse image search. Lastly, AI image generators are infamous for their nightmarish ‘six-fingered hands’ and mismatched toes, so pay attention to hands and other body features, which are likely to be disproportionate.
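A reverse image search works by matching near-duplicate images rather than exact bytes, usually via some form of perceptual hashing. The toy sketch below illustrates that idea only; real services use far more robust algorithms, and the pixel lists here are made-up stand-ins for tiny grayscale images:

```python
def average_hash(pixels):
    """Toy perceptual hash: set one bit per pixel brighter than the mean.
    Visually similar images keep similar bit patterns even after
    recompression, which is what lets a search engine match them."""
    avg = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > avg)

def hamming(a, b):
    """Number of differing bits between two hashes (lower = more similar)."""
    return bin(a ^ b).count("1")

original  = [10, 200, 30, 220, 15, 210, 25, 205]
reupload  = [12, 198, 33, 219, 14, 215, 22, 204]   # slightly recompressed copy
unrelated = [100, 90, 110, 95, 105, 92, 98, 101]   # a different image

print(hamming(average_hash(original), average_hash(reupload)))   # 0: a match
print(hamming(average_hash(original), average_hash(unrelated)))  # 6: no match
```

Because the hash survives small pixel-level changes, a search engine can find earlier appearances of a photo even when it has been resized or recompressed — which is exactly what makes reverse image search useful for tracing a suspicious image back to its first appearance.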

Generated Reality

Since text-to-image models gained popularity on the internet, many hoax images have surfaced — Former US President Donald Trump apparently being arrested, or Tesla CEO Elon Musk holding hands with GM CEO Mary Barra. Also, who can forget Pope Francis wearing a stylish white puffer jacket walking around with coffee in one hand?

The details of these images made them appear lifelike. The bombing image quickly gained traction online, prompting several news outlets to call the incident “one of the first instances of wide-scale misinformation stemming from artificial intelligence”. While some of the fun Balenciaga-style generations manage to woo the internet, others can cause major socio-political uproar. 

The Pentagon event highlights how difficult it is going to be to separate AI-generated content from facts. Hence, it is high time for media houses globally to figure out a way to fact-check their “news” before broadcasting it through their channels.


Tasmia Ansari
Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.
