
WormGPT is a Warning for Enterprises to Upskill Their Employees

Reports have emerged that hackers have released WormGPT, a GPT-J-powered hacking tool billed as the first ChatGPT for hackers.


Ever since LLMs entered the mainstream, concerns have been raised over their ability to produce large amounts of written content quickly and easily. Now, those concerns have come to fruition, as the black hat hacker community has finally tapped into the capabilities of LLMs for malicious attacks. 

Reports have emerged that hackers have released a GPT-J-powered hacking tool known as WormGPT. The tool capitalises on an already pervasive attack vector known as business email compromise, infamous for being one of the world’s top cyber threats. 

Delving deeper into the effects of WormGPT only underscores the urgent need for AI-focused cybersecurity training. Hackers are getting their hands on more capable technology with the AI wave, putting the onus on companies to educate their workforce about the potential dangers of using AI. 

WormGPT explained

Business email compromise, or BEC, is one of the most widely used attack vectors for hackers to spread malicious payloads. In this method, hackers impersonate a party doing business with a company in order to execute a scam. While these emails are usually flagged as spam or suspicious by email providers, WormGPT gives fraudsters a new set of tools. Built on the open-source GPT-J model and reportedly trained on a vast array of malware-related data sources, it lets attackers craft convincing fake emails that sell the act of impersonation. 

According to a post on a commonly used hacker forum, WormGPT does not have any of ChatGPT’s limitations. It can generate text for a variety of black hat applications, a hacker term referring to illegal cyber activities. The model can also be run locally, leaving no trace on external servers the way an API call would. With the safety rails removed, the model’s output is not constrained by any alignment method, yielding uncensored text ready for use in illegal activities. 
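For context on the “run it locally” claim: loading an open model like GPT-J on one’s own hardware is trivial with off-the-shelf tooling, which is exactly why local inference leaves no provider-side log of prompts or outputs. The sketch below is not WormGPT (whose weights and training data are not public) but a minimal, benign example of local GPT-J inference with the Hugging Face transformers library:

```python
# Minimal sketch: running the open-source GPT-J base model entirely on
# local hardware. Nothing here touches a hosted API, so no prompts or
# outputs are logged or moderated by a provider.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-j-6B"  # public ~6B-parameter checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision fits on a single ~16 GB GPU
    device_map="auto",          # requires `accelerate`; spreads weights over GPU/CPU
)

prompt = "Write a short reminder email about the quarterly report."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Everything in that pipeline, from the downloaded weights to the generated text, stays on the operator’s machine, in contrast to a hosted service where requests can be logged, filtered, and refused.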

The main issue with an application like WormGPT is that it lets attackers whose first language is not English produce clean copy. Moreover, these emails also stand a better chance of passing through spam filters, as they can be customised to the attackers’ requirements.
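Because LLM-generated emails are grammatically clean, the classic tell-tale signs of a scam (typos, broken English) lose their value, pushing defences towards metadata instead of prose quality. The toy screener below is purely illustrative (the function names and heuristics are our own, and production filters rely on far richer signals such as SPF/DKIM/DMARC and sender reputation); it flags header-level BEC signals like a From/Reply-To domain mismatch or a lookalike sender domain:

```python
# Illustrative sketch only: a toy BEC screener that inspects header
# metadata rather than writing quality, since fluent AI-generated text
# no longer trips grammar- or spelling-based cues.
from email import message_from_string
from email.utils import parseaddr


def domain_of(header_value: str) -> str:
    """Extract the domain from a From:/Reply-To: header value."""
    _, addr = parseaddr(header_value or "")
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""


def bec_signals(raw_email: str, trusted_domain: str) -> list[str]:
    msg = message_from_string(raw_email)
    signals = []

    from_domain = domain_of(msg.get("From", ""))
    reply_domain = domain_of(msg.get("Reply-To", ""))

    # Classic BEC tell: replies get routed to a different domain.
    if reply_domain and reply_domain != from_domain:
        signals.append(f"Reply-To domain ({reply_domain}) differs from From ({from_domain})")

    # Lookalike domain: resembles, but does not match, the trusted one.
    if from_domain != trusted_domain and trusted_domain.split(".")[0] in from_domain:
        signals.append(f"Possible lookalike sender domain: {from_domain}")

    return signals


if __name__ == "__main__":
    sample = (
        "From: CEO <ceo@acme-corp.net>\n"
        "Reply-To: payments@freemail.example\n"
        "Subject: Urgent wire transfer\n\n"
        "Please process this invoice today."
    )
    for s in bec_signals(sample, trusted_domain="acme.com"):
        print("FLAG:", s)
```

The design point is that these checks work regardless of how polished the body text is, which is precisely what matters once attackers can generate flawless copy on demand.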

WormGPT greatly lowers the barrier to entry for hackers because it is as easy to use as ChatGPT, with none of the protections. Moreover, emails generated with the model convey a professional tone, likely increasing their efficacy in carrying out an attack. 

For hackers unwilling to pay for WormGPT, the aforementioned forum hosts multiple ChatGPT jailbreaks to help users extract malicious output from the consumer bot. AIM has covered the security issues of jailbreaks extensively in the past, but custom-trained models represent a new level of AI-powered attack. 

Coping with AI attacks

As mentioned previously, BEC is one of the biggest cyberattack avenues. In 2022 alone, the FBI received over 21,000 BEC complaints, amounting to losses of about $2.7 billion. What’s more, 2021 was the seventh year in a row that BEC topped the list of cyber threats for enterprises. Companies also suffer leakage of sensitive information through BEC, which can open the door to further attacks. 

WormGPT isn’t the only way generative AI is causing problems for companies either. LLMs can be used to write malware automatically, carry out social engineering attacks, find vulnerabilities in software code, and even help in cracking passwords. Even legitimate use of generative AI poses a threat to the enterprise, especially in terms of data leakage.

Generative AI has also seen slow uptake by companies due to a lack of security infrastructure around this powerful technology. While cloud service providers have begun entering the burgeoning AI market, companies are still in need of a strong, security-first LLM offering. By educating the workforce on the dangers of generative AI, companies can protect themselves from data leakage. The dangers of AI-powered attacks must also be emphasised, so as to enable employees to spot potential cyberattacks. 

Companies are falling behind on cybersecurity readiness: a recent survey found that only 15% of organisations have a mature level of preparedness for security risks. With the rise of generative AI, companies need to pour resources into keeping their workforces up to date with the latest AI-powered threats. 

Anirudh VK
