
AI Hype is a Treacherous Double-Edged Sword

Instead of directly using the LLMs, hackers have been using the hype surrounding the technology to execute attacks

Capitalising on the all-time-high AI hype, malicious actors are turning it into a new attack vector. A recent report details how hackers are using the hype surrounding OpenAI’s ChatGPT and Google’s Bard to execute attacks on unsuspecting netizens. 

By claiming to have packaged the tech industry’s latest and greatest AI models into a tiny executable, hackers are delivering a powerful malware-as-a-service known as RedLine Stealer to hundreds of thousands of users. Their preferred distribution channel is Facebook groups, which are filled with non-tech-savvy users likely to fall for the ruse. 

Preying on the unaware

This trend began in January, coinciding with the entry of generative AI in the mainstream. The attacks have only been increasing in intensity and volume since then as more users wish to explore the advancing AI landscape. 

Frequency of attacks. Source: Veriti 

The attack takes place in three main stages. First, the hackers hijack the credentials of Facebook pages with large follower counts. Then, they run a paid campaign advertising free downloads of ChatGPT and Bard, which in reality are thinly disguised malware. Once downloaded and executed, the malware installs itself on the victims’ computers, extracting large amounts of personal information and credentials. 

Researchers at Veriti, the company behind the research paper, have said, “One of the most concerning risks associated with generative AI platforms is the ability to package the AI in a file (e.g., as mobile applications or as open source). This creates the perfect excuse for malicious actors to trick naïve downloaders.”

What’s more, deploying the malware requires relatively little coding knowledge. Called RedLine Stealer, it is sold as a service on various darknet forums and Telegram groups. In the Telegram groups found by the researchers, the price for RedLine Stealer ranged from $150 per month to $800 for a lifetime subscription. 

This move is a departure from the previous methods of leveraging the capabilities of LLMs to deploy attacks. Hackers have looked to ChatGPT and its alternatives to orchestrate prompt injection attacks, social engineering through impersonation, and man-in-the-middle attacks. Effectively, instead of directly using the LLMs, hackers have been using the hype surrounding the technology to execute attacks. 

Since AI is currently at one of the highest points in its hype cycle, it is reasonable to assume that these kinds of attacks will only increase in frequency. Even as AI hype brings investment, regulatory attention, and new participants to the industry, these schemes represent its dark side.

The flip side of AI hype

AIM has spoken in the past about the hugely disruptive nature of ChatGPT and how it shattered the hype cycle. In just three months, OpenAI’s baby exceeded expectations and changed the mainstream perception of AI. At last count, the application had over 100 million users, showcasing its disruptive capabilities. 

While there is no disagreement that ChatGPT was the fastest-growing application of all time, it may simply be representative of the current state of AI. The aftereffects of this hype cycle, however, have only just begun to be felt. Given the high volume of users on the platform, hackers are targeting the less tech-savvy among them to spread their attacks. 

What’s more, attackers are capitalising on the hype surrounding Google Bard as well. The platform, currently offered on a waitlist basis, makes an even better lure: hackers claim to offer early access to it to bait more unsuspecting victims. 

As the mainstream conversation around AI continues to evolve, its usage, regulation, and potential for exploitation must also be examined. OpenAI, for example, has instituted a bug bounty program that pays out up to $20,000 to people who find security vulnerabilities in its products, incentivising the discovery of bugs and exploits in LLMs.

Apart from bug bounty programs instituted by those in charge of AI development, governments and regulators also need to take stock of how to prevent attacks like RedLine Stealer. Whether through public awareness resources or education on AI models in school curricula, a better understanding of the nature of LLMs will surely act as a deterrent to such attacks. 



Anirudh VK

