Technology offers businesses better ways to defend their systems; the same technology, however, also offers hackers better ways to compromise those systems. Thus, companies need to understand how AI will impact cybersecurity before they rely on it for network defence.
To better understand how AI can enhance a business's defensive capabilities and how threat actors can use AI to improve the sophistication and scope of their attacks, we got in touch with Steve Ledzian, Vice President & Chief Technology Officer, APAC at FireEye. As a dual-use technology, artificial intelligence has the power both to defend organisations' critical data and to attack it. To deal with this, FireEye has been leveraging machine learning alongside artificial intelligence to identify malware and safeguard businesses.
FireEye is an intelligence-led cybersecurity company headquartered in California that leverages advanced machine learning-based detection and prevention engines in its endpoint security solution. With over 6,800 customers across 67 countries, in industries such as cloud providers, government, finance and healthcare, FireEye's AI- and ML-infused threat intelligence adds context and priority to cyberattacks, helping its customers defend proactively against future threats.
Let’s understand it better from Ledzian. Here is the edited excerpt:
What are the different attacks which are happening globally and in India specifically?
Attacks come in all shapes and sizes, from attackers with multiple and diverse motivations. Lately, we are seeing a lot of Business Email Compromise (BEC) and compromised accounts on cloud email providers. These attacks are more sophisticated than an attacker merely sending a single email, encompassing a full attack lifecycle. Additionally, ransomware is one of the more severe types of cyber-attack, with an adverse impact on targeted organisations in India and across the globe. Aside from financially motivated attacks, organisations still need to worry about espionage-based attacks and cyber intrusions.
What are the new tactics implemented by hackers to attack assets in cyberspace?
Attackers are rapidly evolving ransomware tactics. Many ransomware attacks now include an element of extortion where the attackers steal information before encrypting it and making it inaccessible. In this way, they are imposing a double impact on their victims. First, the victim organisation experiences a disruption in operations as they cannot access the files and data they need. Second, because of the exfiltration of data, the organisation is also experiencing a data breach where attackers threaten to make their private data public information if they refuse to pay the ransom. We see a lot of these attacks happening on weekends and evening hours and have observed ransom amounts continuing to rise, often reaching millions or double-digit millions of dollars.
What's the use of AI in cybersecurity? How can AI technology enhance businesses' defensive capabilities? How does AIOps help cybersecurity?
AI applied to cybersecurity often comes in the form of machine learning (ML). Machine learning can be used for many purposes within the cybersecurity domain. Most commonly, it’s used to detect and identify malware, but it can also be used as a tool by security analysts to perform thorough investigations, enhancing fraud mitigation capabilities.
Machine learning is also helpful in addressing routine, well-defined tasks. It is best applied to augment rather than replace human security analysts. With machine learning addressing the routine, tedious, time-consuming and repetitive tasks, human analysts are freed up to work on higher-order problems, which are complex and not well-defined, requiring more creativity to address and solve. This leads to happier analysts who can now focus their energy on more meaningful, value-added and challenging tasks.
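To make the malware-detection use case above concrete, here is a minimal, self-contained sketch of a linear classifier trained on toy file features. Everything in it is illustrative: the feature names (byte entropy, count of suspicious API imports, a "packed" flag), the sample data and the perceptron model are assumptions for demonstration, not FireEye's actual MalwareGuard features or architecture.

```python
# Tiny perceptron separating "benign" from "malicious" feature vectors.
# Feature layout (hypothetical): [byte entropy, suspicious API imports, is_packed]

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y is +1 (malware) or -1 (benign)
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified: nudge the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy training data (invented values for illustration only)
benign  = [[4.1, 0, 0], [5.0, 1, 0], [3.8, 0, 0]]
malware = [[7.6, 9, 1], [7.9, 7, 1], [7.2, 8, 0]]
X = benign + malware
y = [-1] * len(benign) + [1] * len(malware)

w, b = train_perceptron(X, y)
```

A production malware classifier would of course use far richer features and models, but the division of labour is the same: the model handles the routine, well-defined scoring so analysts can focus on the ambiguous cases.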
How is FireEye participating in this revolution of artificial intelligence? How is the company using machine learning to bolster its internal defences and tools?
FireEye uses machine learning to identify malware in an engine called "MalwareGuard", which is used across its various product lines. This year, MalwareGuard, as part of FireEye Endpoint Security, was recognised as the winner of the US Navy's Artificial Intelligence Applications to Autonomous Cybersecurity Challenge. Last year, FireEye released a tool called 'StringSifter', which helps malware analysts perform more efficient analysis. FireEye was also recognised by Forbes this year for its use of machine learning in threat attribution.
FireEye network, endpoint, and email security controls deployed across the globe are built to allow massive amounts of telemetry to flow back to a central source, where it can be standardised, automated and analysed at scale. This approach has been key to the company's success, as it can use the telemetry data across global client sites to monitor the cyber threat situation across the entire world.
The big insight was an analogy: assessing the similarity of cyber attack threat clusters maps closely to the ML-based natural language processing methods used to automatically assess the similarity of text documents. This insight would never have occurred without intensive back-and-forth between the threat analysis domain experts and the data scientists.
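The document-similarity analogy above can be sketched in a few lines: treat each threat cluster as a "document" whose "words" are observed techniques and indicators, then compare clusters with TF-IDF-weighted cosine similarity, exactly as NLP compares texts. The cluster names and indicator tokens below are invented for illustration and are not FireEye's actual clustering features.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF weight each document (a list of tokens), with smoothed IDF."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: c * (math.log((1 + n) / (1 + df[t])) + 1)
                     for t, c in tf.items()})
    return vecs

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical threat clusters described by observed techniques/indicators
clusters = [
    ["spearphish", "powershell", "cobaltstrike", "exfil-ftp"],   # cluster A
    ["spearphish", "powershell", "cobaltstrike", "exfil-http"],  # cluster B
    ["sqli", "webshell", "cryptominer"],                         # cluster C
]
vecs = tfidf_vectors(clusters)
sim_ab = cosine(vecs[0], vecs[1])  # high: shared tradecraft
sim_ac = cosine(vecs[0], vecs[2])  # zero: no overlapping indicators
```

Clusters A and B score as near neighbours because they share most of their tradecraft, while C shares nothing with A; at scale, the same idea lets analysts surface candidate attributions automatically.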
How can cyber threats use AI to improve the sophistication and scope of their attacks?
That's right. It's not just the defenders who are leveraging machine learning; attackers are putting it to use as well. One of the most interesting applications of machine learning is generating synthetic media for malicious purposes. This includes fake images, fake videos, or fake audio that can be used to influence unwitting victims.
Detection, attribution, and response become major challenges when cyber threat actors can anonymously generate and distribute fake content using proprietary training datasets. Organisations, and the industry as a whole, should help AI researchers, policymakers, and other stakeholders mitigate the harmful use of open-source models. The machine learning approaches used to create synthetic media are built on generative models, which have been misused to manipulate information and public comment websites, and have even cloned the voices of C-level executives to trick employees into handing over money.
Over time, fake news and synthetic media have become far cheaper, simpler and easier to create, both financially and in terms of the computing power required. At the same time, generation capabilities are moving beyond headshots and facial images toward more advanced video, driven by free, open-source models and the advent of low-code and no-code applications. It has therefore become imperative for the research community to focus on developing technical detection methods and advancing capabilities to mitigate the threat of synthetic media.
What’s the scope of India fighting against cyber threats using AI? What may be the implications for a country like India with weaker cybersecurity defence and IoT standards?
AI has lots of interesting applications, but it's important to understand that many successful cyber attacks don't rely on AI at all. Social engineering, stolen credentials, cloud-based attacks, and phishing attacks are still very successful even without resorting to AI or machine learning. Having a cyber defence that addresses the fundamentals should be the utmost priority and focus. If you think you already have the fundamentals covered, verify that by testing your defences with an externally driven Red Team Assessment. Red Teams are live-fire exercises by friendly adversaries who attack your network the same way a real-world attacker would, but in this case the attacker works for you and acts on your instructions, without putting your organisation and data at risk. This "cyber sparring" is one of the best ways to build muscle around cyber resilience.