After WormGPT, FraudGPT Makes it Easier for Cybercriminals

The anxiety associated with Generative AI has materialised into a new threat—‘phishing as a service’

Recently, Netenrich security researcher Rakesh Krishnan reported in a blog post that he had found evidence of a tool called FraudGPT.

FraudGPT has been circulating in darknet forums and Telegram channels since July 22, 2023, and is available through subscription at a cost of $200 per month, $1,000 for six months, or $1,700 for a year. 

The LLM this tool is built on is unidentified; however, its author claims it has garnered more than 3,000 confirmed sales and reviews. The actor behind the tool goes by the alias CanadianKingpin and claims that FraudGPT can be used to write malicious code, create undetectable malware, find leaks, and identify vulnerabilities.

The ease of access to Generative AI models has allowed individuals with limited technical knowledge to carry out tasks once beyond their capabilities, increasing efficiency and reducing costs. With the arrival of large language models in the mainstream, a host of new use cases has emerged.

However, the threat landscape has changed drastically as well. All that anxiety has materialised into a new threat: malicious actors providing 'phishing as a service'. Where cybercriminals previously needed sophisticated coding and hacking skills, these new tools are available to anyone and can act as a launchpad for inexperienced attackers. This not only increases the threat but scales it manifold.

What is FraudGPT Capable of? 

FraudGPT's advertised features include generating malicious code to exploit vulnerabilities in computer systems, applications, and websites. It is also claimed to create undetectable malware that evades traditional security measures, making it difficult for antivirus programs to detect and remove threats.

Another advertised capability of FraudGPT is identifying non-Verified by Visa (non-VBV) BINs: bank identification numbers whose cards skip Visa's extra verification step, allowing hackers to conduct unauthorised transactions without additional security checks. Moreover, the tool can automatically generate convincing phishing pages that mimic legitimate websites, increasing the success rate of phishing attacks.
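For defenders, one cheap countermeasure against such lookalike phishing pages is to flag domains that sit suspiciously close to a known brand domain. The Python sketch below illustrates the idea with a simple string-similarity check; the brand watchlist and threshold are assumptions for illustration, not a production rule set.

# Flag domains that closely resemble, but do not match, a watched brand.
from difflib import SequenceMatcher

KNOWN_BRANDS = ["paypal.com", "google.com", "microsoft.com"]  # hypothetical watchlist

def is_lookalike(domain: str, threshold: float = 0.85) -> bool:
    domain = domain.lower().strip()
    if domain in KNOWN_BRANDS:
        return False  # an exact match is the real site, not a lookalike
    return any(
        SequenceMatcher(None, domain, brand).ratio() >= threshold
        for brand in KNOWN_BRANDS
    )

print(is_lookalike("paypa1.com"))   # True: one-character substitution
print(is_lookalike("example.org"))  # False: not close to any watched brand

Real phishing-detection stacks combine many such signals (certificate age, page content, URL structure); this single heuristic only shows why machine-generated lookalikes remain catchable in principle.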

In addition to crafting phishing pages, FraudGPT can create other hacking tools, tailored to specific exploits or targets. It can also scour the internet to find hidden hacker groups, underground websites, and black markets where stolen data is traded.

Furthermore, the tool can craft scam pages and letters to deceive individuals into falling for fraudulent schemes. It can help hackers find data leaks, security vulnerabilities, and weaknesses in a target’s infrastructure, facilitating easier breaches.

FraudGPT can also generate content to aid in learning coding and hacking techniques, providing resources to improve cybercriminals’ skills. Lastly, it assists in identifying cardable sites, where stolen credit card data can be used for fraudulent transactions.

FraudGPT follows hot on the heels of WormGPT, which was launched on 13 July 2023 and is popular among cybercriminals for its ability to draft business email compromise (BEC) attacks.

BECs are among the most widely used attack vectors hackers rely on to spread malicious payloads.

Built on the open-source GPT-J model and trained on a variety of malware-related data sources, WormGPT enables fraudsters to craft convincing impersonation emails that can slip past spam filters.
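On the defensive side, one classic BEC tell is a display name that claims to be a known colleague paired with a sending address outside the organisation's domain. The minimal Python sketch below checks just that one signal; the executive list and corporate domain are hypothetical, and a real mail filter would also weigh SPF/DKIM results and reply-to mismatches.

# Flag mail whose display name impersonates a known executive while the
# sending domain is not the organisation's own.
from email.utils import parseaddr

KNOWN_EXECUTIVES = {"jane doe", "john smith"}  # hypothetical staff directory
CORPORATE_DOMAIN = "example.com"               # hypothetical corporate domain

def looks_like_bec(from_header: str) -> bool:
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    impersonates_exec = display_name.strip().lower() in KNOWN_EXECUTIVES
    return impersonates_exec and domain != CORPORATE_DOMAIN

print(looks_like_bec('"Jane Doe" <jane.doe@examp1e.com>'))  # True: lookalike domain
print(looks_like_bec('"Jane Doe" <jane.doe@example.com>'))  # False: legitimate sender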

Threat to Enterprises

The adoption of Generative AI by companies has been slow due to concerns about the lack of robust security infrastructure around this powerful technology. Although cloud service providers are entering the AI market, there is still unmet demand for a secure large language model (LLM) offering, which companies like Google are looking to meet.

Educating the workforce about the potential dangers of generative AI is crucial for safeguarding against data leakage and other cyber threats.

AI-powered attacks pose significant risks, making it essential to train employees to identify and respond to potential cyberattacks. Unfortunately, many companies are lagging in cybersecurity readiness: according to one survey, only 15% have a mature level of preparedness for security risks.

Cybersecurity Nightmare

The rapid pace of AI model development has made it difficult for security experts to identify and combat automated, machine-generated outputs, giving cybercriminals more efficient ways to defraud and target victims. The tools also cut the other way, as a leakage risk: engineers in Samsung's semiconductor group inadvertently leaked critical information while using ChatGPT to quickly correct errors in their source code, with three recorded incidents of employees leaking sensitive information in under a month.
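One practical guardrail against this kind of leakage is screening prompts for secret-like strings before they leave the company network. The Python sketch below is a toy version of that idea; the patterns are assumptions covering a few common credential formats, nowhere near a complete data-loss-prevention rule set.

# Screen outbound chatbot prompts for strings that look like credentials.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)(?:api[_-]?key|password)\s*[:=]\s*\S+"), # key/password assignments
]

def contains_secret(prompt: str) -> bool:
    return any(pattern.search(prompt) for pattern in SECRET_PATTERNS)

prompt = "Fix this config for me: api_key = sk-12345"
if contains_secret(prompt):
    print("Blocked: prompt appears to contain credentials.")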

While many bad actors have already been trying to jailbreak large language models like GPT-4, Bard, Bing, and LLaMA to their advantage, the sophistication and automation such models offer pose a significant threat to cybersecurity.

Nonetheless, certain safety measures can help safeguard against phishing emails and cyberattacks. One option is detection tools for AI-generated text; however, confidence in such tools took a blow when OpenAI discontinued its own AI classifier, citing its low rate of accuracy. Research papers such as 'Can AI-Generated Text be Reliably Detected?' have likewise questioned whether AI-generated text can be recognised dependably.
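For intuition on how many such detectors work: a common heuristic scores a text's perplexity under a reference language model, on the theory that machine-generated text looks unusually predictable to another model. The Python sketch below computes that score with the small open GPT-2 model via Hugging Face transformers; any threshold for labelling text 'AI-generated' would be an assumption, and as the research above suggests, the signal is easy to defeat with paraphrasing.

# Score text by its perplexity under GPT-2; lower scores hint (weakly)
# at machine-generated text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # loss = mean token negative log-likelihood
    return torch.exp(out.loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))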


The evolution of Generative AI has led to concerns over criminal use, exemplified by WormGPT and FraudGPT. Born from GPT-J, WormGPT creates malware without limitations, while FraudGPT crafts undetectable malware and malicious content. Both produce phishing emails, SMSes, and code, and resemble ChatGPT on non-malicious tasks.

To address this threat, organisations can adopt AI-driven security tools, provide continuous training, share threat intelligence, monitor the dark web, enforce strong policies, and prioritise ethical AI development. The growing interest in AI within the underground community amplifies the need for vigilance. While their current capabilities may not be groundbreaking, these models signify a concerning step towards the weaponisation of AI. Proactive measures encompassing technology, collaboration, education, and ethics are imperative to ensure responsible AI advancement and curb potential misuse.



Shyam Nandan Upadhyay

Shyam is a tech journalist with expertise in policy and politics, and exhibits a fervent interest in scrutinising the convergence of AI and analytics in society. In his leisure time, he indulges in anime binges and mountain hikes.