A fake, AI-generated photograph of an explosion near the Pentagon in Washington DC surfaced on social media last night, briefly rattling the US stock market. Prominent Indian media houses, including Republic and News18, unwittingly fell prey to the hoax and propagated the misinformation through their channels. Arlington police officials have dismissed the photo, calling it AI-generated.
The event also highlights the dangers of the pay-to-verify system. A fake but verified ‘Bloomberg Feed’ account had tweeted the story. Jumping on the bandwagon, dozens of social media accounts circulated the misinformation without verifying its authenticity. The account that released the photo has since been suspended from Twitter.

The image shows a massive cloud of smoke billowing near a building, but there are no people in the frame who could corroborate the scene. While we still don’t know which AI tool was used to create the image, it bears some of the hallmarks of AI generation. For example, the columns on the building are of different sizes, and the fence blends into the sidewalk at certain points.
A Band-Aid Solution
For decades, stock image companies such as Getty Images have used watermarks to protect their images. A watermark is typically a logo or text overlay that serves as a distinctive marker on an image. The approach has gained renewed attention, and companies like Google are now venturing into this territory.
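To illustrate how simple a visible watermark is to produce, here is a minimal Python sketch using the Pillow library; the file names and watermark text are placeholders, not any company’s actual pipeline.

```python
# A minimal sketch of a visible text watermark using Pillow.
# File names ("photo.jpg", "photo_watermarked.jpg") are placeholders.
from PIL import Image, ImageDraw, ImageFont

def add_watermark(src_path: str, dst_path: str, text: str) -> None:
    image = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    # Draw semi-transparent white text near the bottom-right corner.
    font = ImageFont.load_default()
    x, y = image.width - 160, image.height - 30
    draw.text((x, y), text, font=font, fill=(255, 255, 255, 128))

    # Composite the overlay onto the original and save as JPEG.
    watermarked = Image.alpha_composite(image, overlay)
    watermarked.convert("RGB").save(dst_path, "JPEG")

add_watermark("photo.jpg", "photo_watermarked.jpg", "© Example Stock")
```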
Last month at Google I/O, the company introduced two new features for its image search to stop the spread of misinformation. The first, ‘About this image’, provides additional context: when the image was first indexed by Google, where it first appeared, and where else it is available online. The feature will surface the original source and also contextualise an image with debunking evidence provided by fact-checkers.
Google also announced that its own AI-generated images will carry metadata indicating they are AI-created. Other creators will be able to label images using the same markup, which, according to Google’s blog post, Midjourney, Shutterstock, and others will roll out in “the coming months”.
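Google has not published the details of that markup, but the general idea of shipping provenance inside the file itself can be sketched with PNG text chunks via Pillow. The field names below (‘generator’, ‘ai-generated’) are illustrative assumptions, not Google’s schema.

```python
# A sketch of embedding provenance metadata in a PNG via text chunks.
# The "generator" and "ai-generated" keys are illustrative assumptions,
# not Google's actual markup format.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str, tool_name: str) -> None:
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("generator", tool_name)   # hypothetical field name
    metadata.add_text("ai-generated", "true")   # hypothetical field name
    image.save(dst_path, "PNG", pnginfo=metadata)

def read_tags(path: str) -> dict:
    # PNG text chunks are exposed on the .text attribute once loaded.
    image = Image.open(path)
    image.load()
    return dict(image.text)

tag_as_ai_generated("render.png", "render_tagged.png", "example-model-v1")
print(read_tags("render_tagged.png"))
# {'generator': 'example-model-v1', 'ai-generated': 'true'}
```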
Typical AI Errors
The Pentagon incident makes a stronger case for why the community needs better ways to differentiate AI-generated content from facts, even as companies are still figuring out how to provide them. Here is how you can identify AI-generated images yourself.
Firstly, look closely at the picture. Search for the highest-resolution version of the image available and then focus on the details. Zooming in can reveal inconsistencies and errors that one might have missed at first glance.
One can also look for the image source or carry out a reverse image search. Lastly, AI image generators are infamous for their nightmarish ‘six-fingered hands’ and mismatched toes, so pay attention to hands and other body features, which are often disproportionate.
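A full reverse image search runs on a search engine’s index, but the core matching idea, finding near-duplicates of a known image, can be sketched locally with perceptual hashing using the imagehash library. The file names and distance threshold here are illustrative assumptions.

```python
# A sketch of near-duplicate detection with perceptual hashing (imagehash).
# File names and the distance threshold are illustrative assumptions.
from PIL import Image
import imagehash

def looks_like(candidate_path: str, reference_path: str, threshold: int = 8) -> bool:
    candidate = imagehash.phash(Image.open(candidate_path))
    reference = imagehash.phash(Image.open(reference_path))
    # Subtracting two hashes gives their Hamming distance: a small value
    # means the images are visually similar, so a viral "photo" matching
    # a known AI render is a red flag.
    return candidate - reference <= threshold

print(looks_like("viral_photo.jpg", "known_ai_render.jpg"))
```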
Generated Reality
Since text-to-image models gained popularity on the internet, many hoax images have surfaced — Former US President Donald Trump apparently being arrested, or Tesla CEO Elon Musk holding hands with GM CEO Mary Barra. Also, who can forget Pope Francis wearing a stylish white puffer jacket walking around with coffee in one hand?
Why Pope Francis Is the Star of A.I.-Generated Photos – The New York Times
— Iain Brown, PhD (@IainLJBrown), April 8, 2023: https://t.co/o10H9AFk0X
The details of these images made them appear lifelike. The Pentagon image quickly gained traction online, prompting several news outlets to decry the incident as “one of the first instances of wide-scale misinformation stemming from artificial intelligence”. While some fun, Balenciaga-style generations manage to woo the internet, others can cause major socio-political uproar.
The Pentagon event highlights how difficult separating AI-generated content from facts is going to be. It is high time for media houses globally to figure out a way to fact-check their “news” before broadcasting it through their channels.