AI Goes Phishing

Artificial intelligence could make cyberattacks more dangerous and harder to detect

In 2020, Google claimed to block more than 100 million scam emails every day, with 18 million of them related to COVID-19. According to Barracuda Networks, malicious emails rose by 667% with the onset of the pandemic.

Mobile devices were the most vulnerable. Verizon’s 2020 Data Breach Investigations Report (DBIR) showed that attackers had considerable success with combined text-, email- and link-based phishing, especially across social media, to steal passwords and gain access to privileged credentials.

Now, machine learning models are evolving to understand and filter out phishing threats to internet users, governments and companies. Microsoft, for example, blocks billions of phishing attempts on Office 365 alone.

Over the years, hackers have become better at evading detection and sneaking in malicious content, for instance through URLs that point to legitimate-looking yet compromised websites and redirectors.

AI can detect spam and phishing attacks with both accuracy and speed.

Automated detection

AI goes beyond signature-based detection, which hackers have learnt to evade by tweaking elements such as HTML code or image metadata. With machine learning, detection focuses on characteristics and behaviours associated with phishing rather than on known signatures, so an attack whose signature has been altered can still be detected and blocked.
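As a loose illustration of the difference (hypothetical content and made-up heuristics, not any vendor's actual rules), the sketch below shows how an exact-match signature breaks after a one-character change, while simple characteristic checks still fire:

```python
import hashlib
import re

original = "<html><a href='http://paypa1-secure.example-login.com/verify'>Verify now</a></html>"
tweaked = original + " "  # attacker adds a single character

# Signature-based: an exact hash of known-bad content
known_bad_hashes = {hashlib.sha256(original.encode()).hexdigest()}

def signature_match(content: str) -> bool:
    return hashlib.sha256(content.encode()).hexdigest() in known_bad_hashes

# Characteristic-based: crude hand-written stand-ins for features a learned model might weigh
def looks_phishy(content: str) -> bool:
    urls = re.findall(r"https?://[^\s'\"<>]+", content)
    suspicious = 0
    for url in urls:
        host = url.split("/")[2]
        suspicious += host.count("-") >= 2          # many hyphens in the hostname
        suspicious += bool(re.search(r"\d", host))  # digits imitating letters
        suspicious += "login" in host or "verify" in url.lower()
    return suspicious >= 2

print(signature_match(original), signature_match(tweaked))  # True, then False after the tweak
print(looks_phishy(original), looks_phishy(tweaked))        # True both times
```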

Phishing attacks are constantly evolving to evade newer technologies, and cybersecurity tools need to keep up. AI continuously learns from open-source threat intelligence feeds and from the organisation’s own environment.

(Figure source: Abdul Basit et al.)

Studies have shown that robust ML techniques achieve high detection accuracy. AI uses machine learning and data analysis to examine an email’s content, context and metadata as well as user behaviour.
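As a rough sketch of the content side (toy, hand-made examples; a real deployment would add context, metadata and behavioural features and far more training data), such a classifier might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: 1 = phishing, 0 = legitimate
emails = [
    "Your account is suspended, verify your password at the link below",
    "Urgent: confirm your banking credentials to avoid closure",
    "Team lunch moved to 1pm on Friday, see you there",
    "Attached are the meeting notes from yesterday's sprint review",
]
labels = [1, 1, 0, 0]

# Bag-of-words features plus a linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Probability that a new message is phishing
print(model.predict_proba(["Please verify your password immediately"])[:, 1])
```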

Behavioural analysis

AI and ML algorithms can learn how individual users communicate. They study patterns of typical behaviour, writing style and the context of messages, and assess communication patterns to build a baseline of normal behaviour. Characteristics such as grammar and syntax create a unique profile for each user. Impersonation and spear-phishing attacks such as Business Email Compromise (BEC) and Email Account Compromise (EAC) scams, which may pass other filters, can be detected this way.
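A minimal sketch of the baseline idea, using invented messages and character n-gram similarity as one of many possible stylometric signals: a new message that reads very differently from the sender's history gets flagged for extra checks.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# The sender's historical style (made-up examples)
past_messages = [
    "hey - can you send over the Q3 numbers when you get a sec? thx",
    "running 10 mins late, start without me. will dial in",
    "looks good to me, ship it. nice work btw",
]
incoming = "Kindly process an urgent wire transfer of $48,200 to the vendor account attached herewith."

# Character n-grams capture habits of spelling, punctuation and phrasing
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
baseline = vectorizer.fit_transform(past_messages)
new_vec = vectorizer.transform([incoming])

# Low similarity to the sender's usual style suggests possible impersonation
score = cosine_similarity(new_vec, baseline).max()
print(f"style similarity: {score:.2f}")
```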

Challenges

AI models are only as good as the data they are fed, and their trustworthiness depends on that data. Data bias is therefore a pertinent risk. Enterprises often deploy these tools assuming the datasets are representative, which may not be true.

AI training data can be poisoned by malicious actors, compromising the security controls an organisation relies on. Technology has no inherent disposition and will act the way it is taught; however, these tools do not operate in a vacuum, but interact with their environment all the time. AI algorithms can be exploited and even weaponised to pursue nefarious objectives. The ability to create synthetic data that mimics human-generated content could be the beginning of deepfake spear-phishing.
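A small, synthetic demonstration of one poisoning tactic, label flipping (not a description of any real incident): if an attacker can relabel part of the "phishing" training data as benign, the trained model tends to miss more real phishing.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a phishing dataset: 1 = phishing, 0 = benign
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Attacker relabels 40% of the positive training examples as benign
rng = np.random.default_rng(0)
poisoned = y_tr.copy()
phish_idx = np.where(poisoned == 1)[0]
flip = rng.choice(phish_idx, size=int(0.4 * len(phish_idx)), replace=False)
poisoned[flip] = 0

clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, poisoned)

# Recall on the positive class typically drops: the poisoned model misses more phish
print("clean recall   :", recall_score(y_te, clean_model.predict(X_te)))
print("poisoned recall:", recall_score(y_te, poisoned_model.predict(X_te)))
```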

According to a Europol report, artificial intelligence could make cyberattacks more dangerous and harder to detect. Just as organisations deploy AI to protect against malware, it is possible that hackers have begun making use of AI and ML tools too.

AI models may also suffer as adversaries identify the patterns being detected and change their mode of operation, rendering the existing data and models obsolete. A model tuned to familiar phishing behaviours over time becomes less effective at detecting novel threats.
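One simple, common mitigation is to watch the model's own behaviour for drift. The sketch below (made-up numbers and an illustrative threshold) flags days when the share of mail being flagged deviates sharply from its recent baseline, which can signal that attackers have changed tactics and the model needs review or retraining.

```python
from collections import deque

history = deque(maxlen=30)  # rolling window of daily phishing flag rates

def check_drift(todays_rate: float, threshold: float = 0.5) -> bool:
    """Return True if today's flag rate deviates more than 50% from the recent mean."""
    if len(history) >= 7:
        baseline = sum(history) / len(history)
        if abs(todays_rate - baseline) > threshold * baseline:
            return True  # do not fold the anomalous day into the baseline
    history.append(todays_rate)
    return False

# A sudden drop in flagged mail on the last day triggers a review
for day, rate in enumerate([0.021, 0.019, 0.020, 0.022, 0.018, 0.021, 0.020, 0.009]):
    if check_drift(rate):
        print(f"day {day}: flag rate {rate:.3f} drifted from baseline - review/retrain")
```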

AI/ML blind spots 

Known unknowns and unknown unknowns remain a major threat to models. Although ongoing research aims to find answers, such unknowns may not elicit any threat response from AI/ML tools.

Datasets may also contain labelling errors that mislead the AI. ML learns from labelled examples that tell the machine what to look for when predicting future malware, and datasets can become obsolete and irrelevant. An MIT study found that major ML datasets contained significant errors, including mislabelled images: a 3 to 4 percent average error rate across datasets and a 6 percent error rate for ImageNet, one of the most widely used image datasets.
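A rough sketch of one common way to surface likely label errors (synthetic data; the confidence threshold is illustrative): compare each example's given label against out-of-fold model predictions and flag confident disagreements, the idea behind "confident learning".

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Corrupt 2% of the labels to simulate annotation mistakes
rng = np.random.default_rng(0)
bad = rng.choice(len(y), size=20, replace=False)
y_noisy = y.copy()
y_noisy[bad] = 1 - y_noisy[bad]

# Out-of-fold predicted probabilities, so each example is scored by a model that never saw it
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y_noisy,
                          cv=5, method="predict_proba")
conf_in_given_label = proba[np.arange(len(y_noisy)), y_noisy]

# Flag examples where the model is very unconvinced by the assigned label
suspects = np.where(conf_in_given_label < 0.1)[0]
found = len(set(suspects) & set(bad))
print(f"flagged {len(suspects)} examples, {found} of {len(bad)} known corruptions among them")
```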
