21st-may-banner design

AI is More Hurtful than Helpful for Cybersecurity

Even though AI is looked upon as a messiah of the tech industry, the reality begs to differ

Share

Listen to this story

Microsoft has announced a vision to tackle cybersecurity challenges that have plagued the tech company in recent years. The newly introduced ‘Secure Future Initiative’ leans heavily on AI. 

Microsoft’s vice chairman and president Brad Smith noted, “In the recent months, we’ve concluded within Microsoft that the increasing speed, scale, and sophistication of cyberattacks call for a new response.”

Even though AI is usually lauded and often looked upon as a messiah of the tech industry, the reality begs to differ. As companies similar to Microsoft continue to scramble to understand how deeply AI can be integrated into securing systems, they appear to be digging their own computational graves. 

Manually investigating security risks is a cumbersome process but the number of issues are manageable. Rise in generative AI has given birth to problems which did not even exist before. Keeping up with the risks generated through AI is a hard nut to crack as the technology is developing at a much faster rate, leaving no time for companies to magnify upon the security weaknesses. 

Analysts have said that language models are so complex that it is nearly impossible to audit them in-depth. “The concern that most security leaders have is that there’s no visibility, monitoring, or explainability for some of those features,” Jeff Pollard, a cybersecurity analyst at Forrester Research recently told The Wall Street Journal

New Fear Unlocked

On the one hand, generative AI has given the world tons of models and algorithms to play around with. Yet on the other, it is also prone to introducing security risks due to their nature of being trained on preexisting data — including code. 

At a conference, David Johnson, a data scientist at the European Union’s law-enforcement agency Europol, pinpointed, “That code can contain a vulnerability, so if the model subsequently generates new code, it can inherit that same vulnerability.”

By signing up for generative AI, companies also unlock fears in new forms like “prompt injections,” where the bad guys use “prompts” or text-based instructions to manipulate these AI models into sharing sensitive information. In less than a year of OpenAI’s ChatGPT being released, several incidents have come forth hinting towards the deficiency of security. 

South Korean tech giant Samsung banned the use of ChatGPT after its staff accidentally leaked sensitive data via OpenAI’s chatbot. iPhone maker Apple and e-commerce giant Amazon also joined the growing list of companies cracking down on employees using the hottest AI chatbot of the year.

After these incidents, ChatGPT itself faced a data breach during a nine-hour window on March 20. The creators of ChatGPT at OpenAI issued a statement which noted that approximately 1.2% of the ChatGPT Plus subscribers who were active during this time period had their data exposed. While the percentage seems minuscule, the number was not small as over a million users’ data was breached during the event. 

No Quick Fix 

Getting an accurate accounting of total global economic losses due to cybercrime and cyberattacks is difficult, but Microsoft believes that total losses have been greater than $6 trillion and could close in on $10 trillion by 2025.

Two months ago, Brian Finch, co-leader of the cybersecurity, data protection & privacy practice at law firm Pillsbury Law told CNBC, “Given the economics of cyberattacks — it’s generally easier and cheaper to launch attacks than to build effective defenses — I’d say AI will be on balance more hurtful than helpful.“

As companies adopt AI internally, it is clear that the human-in-loop architecture is the key for security. Consequently, companies have started shifting towards “zero trust” models where defenses are set up to constantly challenge and inspect network traffic and applications in order to verify that they are not harmful.

As of now AI systems are not capable enough to outsmart hackers behind computer screens. So, co-existing is a critical factor till the time AI becomes dependable enough.

Share
Picture of Tasmia Ansari

Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.
Related Posts

CORPORATE TRAINING PROGRAMS ON GENERATIVE AI

Generative AI Skilling for Enterprises

Our customized corporate training program on Generative AI provides a unique opportunity to empower, retain, and advance your talent.

Upcoming Large format Conference

May 30 and 31, 2024 | 📍 Bangalore, India

Download the easiest way to
stay informed

Subscribe to The Belamy: Our Weekly Newsletter

Biggest AI stories, delivered to your inbox every week.

AI Forum for India

Our Discord Community for AI Ecosystem, In collaboration with NVIDIA. 

Flagship Events

Rising 2024 | DE&I in Tech Summit

April 4 and 5, 2024 | 📍 Hilton Convention Center, Manyata Tech Park, Bangalore

MachineCon GCC Summit 2024

June 28 2024 | 📍Bangalore, India

MachineCon USA 2024

26 July 2024 | 583 Park Avenue, New York

Cypher India 2024

September 25-27, 2024 | 📍Bangalore, India

Cypher USA 2024

Nov 21-22 2024 | 📍Santa Clara Convention Center, California, USA

Data Engineering Summit 2024

May 30 and 31, 2024 | 📍 Bangalore, India

Subscribe to Our Newsletter

The Belamy, our weekly Newsletter is a rage. Just enter your email below.