MITB Banner

Branded Content

Guardians of the Syntax: Securing Enterprise LLM Systems against Emerging Threats

Chinmaya Jena highlights that the problem with LLM is that there is no differentiation between the data plane and the control plane.
Listen to this story

Large language models, such as OpenAI’s GPT and all the new ones cropping up in the AI world, bringing significant risks to enterprise systems. While traditional threats like password hacking, SQL injection, and malware attacks persist, new threats unique to LLMs are now raising their head. 

This has given rise to a new field of cybersecurity within LLMs.

Chinmaya Kumar Jena,  Senior Director of Studio at Tredence, offers insights into the specific threats faced by enterprise-level AI systems today and how they address them. With extensive experience in the field, Jena has observed the rapid evolution of AI systems closely and noticed the increasing need for improved cybersecurity. 

“It all started with ‘Attention is All You Need’,” said Jena. “But over the past year, there has been a significant surge in experimentation with various APIs,” he added, saying that companies have gradually started trying out multiple open-sourced large language models and  building enterprise systems (e.g., crawling data and building semantic search engines, text- to-SQL etc).

“It takes a lot for an enterprise-grade system to be called reliable, resilient and responsible,” Jena said. He explained that reliability means it should be robust, monitored, and explainable. Resilience means it should be safe and private, and being responsible means it must be inclusive, ethical, sustainable, and fair when giving the correct answers. 

Jena highlights that the problem with LLM is that there is no differentiation between the data plane and the control plane. “There is no isolation of data,” he adds.

The problems in the systems

Jena explained that there are mainly five problems when it comes to enterprise data when using LLMs, and broadly generative AI.

  • Data exposure: When interacting with LLMs, sensitive data, such as trade secrets, can be exposed. For example, sending a PowerPoint presentation for summarisation could lead to the exposure of confidential information.
  • Code exposure: Source code could be inadvertently shared with LLMs, putting proprietary code at risk.
  • Mutating malware: These are malware threats that evolve during runtime, making them challenging to detect and mitigate, which Jena says are ‘zero day vulnerabilities’.
  • Data poisoning: This occurs when LLMs are trained on biassed or manipulated data, leading to incorrect or harmful outputs.
  • Prompt injection: This is a security vulnerability where an attacker manipulates the input prompt to a language model to obtain unauthorised sensitive information or cause unintended behaviour. 

Jena said that Tredence did not use ChatGPT directly, but started using it through Microsoft’s Azure OpenAI Service. This allowed them to assess all security risks for better data governance. Tredence has implemented in-house security, which is Microsoft’s VNet, and data governance using vector databases. 

“The usage of vector databases allows every user to access the data according to their roles while reducing the costs,” said Jena, giving examples of finance and HR departments, both of which cannot access each others’ data.

Apart from this, Tredence also uses NVIDIA’s NeMo guardrails and Guardrails AI. “We do input and output filtering, monitor the prompts and have a feedback mechanism to check if the response makes sense,” Jena added.

Generative AI policies

Jena said Tredence has a unique generative AI policy to ensure robust security. This includes setting up a generative AI working group, a risk framework system, and generative AI security control system. “Our generative AI policy is setting the tone for how the technology would be controlled and how it would make the user accountable,” said Jena.

“We also have a working group that decides which kind of data will go into the system or not,” he explained. The team also defines risk mitigation strategies and threat modelling, and assesses the existing system on whether they are ready to be generative AI-enabled or not. It also mitigates the existing or unique risk arising from the models. 

“Azure assures us that the code and the embeddings that we put while using the Copilot is not accessed by them or shared with any other enterprise,” highlighted Jena, adding that it is not used for improving the models as well. “We have signed an agreement which ensures data and source code privacy,” he added.

The evolving threats and adaptations

Tredence also employs network security and firewalls to block DDoS attacks and external unauthorised access. To adapt to evolving threats, Tredence continuously adjusts its approach to cybersecurity. Recent developments include adhering to standards such as OWASP Top 10 for LLMs. 

With threats continuing to evolve rapidly, Jena believes that the future of LLM cybersecurity is dynamic. Adhering to new standards, such as ISO 42001, which is specifically tailored for AI, and continuously adjusting security practices will be crucial. There is also a shift towards hosting models on-premises to improve security and reduce costs.

Tredence remains at the forefront of cybersecurity for LLMs, helping Fortune 500 companies develop secure end-to-end systems. By staying informed on regulatory changes and threat modelling, the company aims to keep pace with the rapidly evolving landscape of AI security.

Security compliance, regulatory compliance, and threat modelling help Tredence combat fresh security threats created by new LLM models. “It will continue to evolve in the upcoming years, and we must keep pace with it. That’s our mantra,” concluded Jena.

Contributed as part of AIM Branded Content. Know more here.

This article is contributed by
Picture of Mohit Pandey

Mohit Pandey

Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.
More from AIM

Subscribe to Our Newsletter

The Belamy, our weekly Newsletter is a rage. Just enter your email below.