“This isn’t just photoshopping things. It’s making data look uncannily realistic,” said Bo Zhao, Assistant Professor of Geography at the University of Washington.
Recently, Bo Zhao, along with his team of researchers at the University of Washington, argued that with the widespread use of geographic information systems, Google Earth, and other satellite imagery systems, location spoofing has become much more sophisticated.
Yifan Sun, a student in the UW Department of Geography, Shaozeng Zhang and Chunxue Xu of Oregon State University, and Chengbin Deng of Binghamton University, co-authored the report.
What’s the study about?
Zhao and his team deployed machine learning algorithms that take in satellite images of urban areas, learn their visual characteristics, and then impose those characteristics onto the base map of another city, producing a deepfake image as output.
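The study's actual pipeline is a deep image-to-image translation network, but the core idea of transferring one image's "characteristics" onto another's structure can be illustrated with a much simpler stand-in: histogram matching, which remaps one image's pixel distribution onto another's. Everything below (array sizes, value ranges, the city labels in comments) is illustrative, not the researchers' code.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source pixel values so their distribution matches reference's."""
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size   # quantile of each source value
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # Map each source quantile to the reference value at the same quantile
    matched_vals = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched_vals[src_idx].reshape(source.shape)

rng = np.random.default_rng(0)
base = rng.integers(0, 128, size=(64, 64))    # stand-in "Tacoma" base map (dark)
style = rng.integers(96, 256, size=(64, 64))  # stand-in "Seattle" imagery (bright)
fake = match_histogram(base, style)           # base structure, style's tonal character
```

The output keeps the spatial structure of `base` while adopting the brightness distribution of `style`; the real study does this for texture, colour and building appearance using a neural network rather than a pixel-value remap.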
“As technology continues to evolve, this study aims to encourage a more holistic understanding of geographic data and information so that we can demystify the question of absolute reliability of satellite images or other geospatial data. We also want to have more future-oriented thinking to take countermeasures such as fact-checking when necessary,” Zhao said.
Researchers took satellite images and maps of three cities: Tacoma and Seattle in the US, and Beijing in China. The characteristics of Seattle and Beijing were fed into the AI framework to create deepfake images of Tacoma.
Image credits: Zhao et al.
(a) Tacoma in the mapping software, (b) Tacoma’s satellite image, (c) Seattle’s characteristics imposed on Tacoma, and (d) Beijing’s characteristics imposed on Tacoma.
“The techniques were already there. We’re just trying to explain the possibility of using the same techniques, and the need to develop a coping strategy for it,” said Zhao.
The study aimed to learn how to spot fake images so that geographers can start developing data literacy resources for the public good.
Counter the fake
To learn how to spot fake images, the researchers first created a lot of them. They built a generative adversarial network (GAN) that pits two AI algorithms against each other: one generates fake images, while the other tries to tell them apart from real ones. As the detector catches on to the generator’s tells, the generator adjusts to evade it, and this back-and-forth produces increasingly convincing fakes.
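That adversarial loop can be sketched in miniature. Below, a one-parameter "generator" (a shift applied to random noise) and a logistic "discriminator" stand in for the study's image networks; the data distribution, learning rates and step counts are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). The generator starts at N(0, 1) and must
# learn the shift mu that makes its output look real to the discriminator.
mu = 0.0                        # generator parameter
w, b = 0.1, 0.0                 # discriminator parameters (logistic classifier)
lr_d, lr_g, batch = 0.05, 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + mu

    # Discriminator step: push scores up on real samples, down on fakes
    p_real, p_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean((p_real - 1) * real + p_fake * fake)
    grad_b = np.mean((p_real - 1) + p_fake)
    w, b = w - lr_d * grad_w, b - lr_d * grad_b

    # Generator step: shift mu so fakes score as "real"
    p_fake = sigmoid(w * fake + b)
    grad_mu = np.mean(-(1 - p_fake) * w)
    mu -= lr_g * grad_mu

print(f"learned shift mu = {mu:.2f} (real data mean is 4.0)")
```

As the discriminator learns that real samples sit higher, its weight pushes the generator's shift toward the real mean; the same dynamic, with image networks in place of these scalars, is what makes GAN-produced satellite fakes progressively harder to detect.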
More than 8,000 images were used in total, including real pictures of Tacoma, Seattle and Beijing alongside the fakes created by imposing one city’s characteristics on another. A machine learning tool was then trained on image features such as brightness, colour, edges, texture and clarity to distinguish the fakes from genuine images. It identified almost 94 percent of the fake images, but it also flagged several real ones as fake, leaving its overall reliability at 73 percent.
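The gap between those two figures comes down to simple confusion-matrix arithmetic: high recall on fakes, poor precision on reals. The per-class counts below are hypothetical, chosen only to reproduce the article's 94 percent and 73 percent figures; the study's actual breakdown is not given here.

```python
# Hypothetical confusion counts (the article reports only the two rates)
n_fake, n_real = 4000, 4000
fake_flagged_fake = 3760   # true positives: fakes correctly flagged
real_flagged_fake = 1920   # false positives: real images mistaken for fakes

fake_recall = fake_flagged_fake / n_fake
accuracy = (fake_flagged_fake + (n_real - real_flagged_fake)) / (n_fake + n_real)

print(f"fake detection rate: {fake_recall:.0%}")  # 94%
print(f"overall accuracy:    {accuracy:.0%}")     # 73%
```

With these counts, nearly half the real images are misclassified as fake, which is exactly how a detector can catch 94 percent of fakes yet be right only 73 percent of the time overall.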
India has recently witnessed tense standoffs with China in the Galwan Valley and around Pangong Tso lake. Imagine how circulating fake satellite images of such regions could push false narratives and inflame tensions. It is therefore essential to develop systems that counter such deepfakes and curb the spread of fake news.