Amazon is warning its employees against sharing sensitive information with ChatGPT. Earlier, in conversation with AIM, Infosys’ Gary Bhattacharjee argued that companies have a much better chance of protecting their IP with models like Salesforce’s CodeGen, available on Hugging Face, which is trained specifically on open-source code, than with GPT, which scrapes pretty much everything from the web.
Amazon and Infosys are not alone. Plenty of companies are wary of exposing their data to ChatGPT, since the AI utilises input prompt data to further train the model. On the other hand, the only way to make these AI models better is to train them on as large a dataset as possible.
And for finance, healthcare, government, and other highly regulated industries, ChatGPT-like technologies are a complete no-go. To expand access across these industries, however, cloud companies are working with silicon vendors to ensure data security through confidential computing.
Confidential computing and LLMs
Recently, OpenAI released the ChatGPT API for users to integrate the chatbot into their apps and products. Data submitted through the OpenAI API, CEO Sam Altman said, will not be used to train the model.
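For developers, the switch is straightforward. As a rough illustration, here is a minimal sketch of such an API call using OpenAI’s Python client as it shipped alongside the ChatGPT API (the pre-1.0 openai package); the prompt itself is hypothetical.

```python
# Minimal sketch: calling the ChatGPT API with OpenAI's Python client
# (pre-1.0 `openai` package). Per OpenAI's stated policy, data sent
# through the API is not used to train the model.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code API keys

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT at launch
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise this contract clause."},  # illustrative prompt
    ],
)

print(response["choices"][0]["message"]["content"])
```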
Meanwhile, NVIDIA’s blog indicates that the company is bringing GPU acceleration to VM-style confidential computing with its Hopper architecture GPUs. Hopper will be able to scale AI workloads in every data centre, from small enterprise deployments to exascale high-performance computing (HPC) and trillion-parameter AI.
According to a recent report by Tom’s Hardware, NVIDIA has begun shipping the H100 (Hopper), the successor to the A100. While media reports highlighted the significant performance improvements and higher AI training throughput, what went largely unnoticed was the H100’s integration of confidential computing.
For OpenAI to fulfil its commercial ambitions, going deep into data privacy was absolutely critical, despite the heavy cost (it would reportedly need about 30,000 of these GPUs to run the model). “Confidential computing is a more scalable way to solve for data security or privacy challenges related to ChatGPT as opposed to tokenisation or application-level encryption techniques,” says Kishan Patel, area vice president of sales at Anjuna Security.
How does it work?
Confidential computing is a hardware-based security approach that allows organisations to safeguard their most sensitive data even while it is being processed. The technology leverages a hardware-based trusted execution environment (TEE): a hardware-enforced security enclave completely isolated from the rest of the system.
The Register explains it as a two-step process. First, to enable confidential computing on GPUs, data is encrypted before being transferred between the CPU and GPU across the PCIe bus, using secure encryption keys exchanged between NVIDIA’s device driver and the GPU. Second, once the data reaches the GPU, it is decrypted within a hardware-protected, isolated environment inside the GPU package, where it can be processed to generate models or inference results.
This isolation ensures that applications and data remain protected from attacks originating in firmware, operating systems, hypervisors, virtual machines, and even physical interfaces such as USB ports or PCI Express connectors.
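Conceptually, the flow resembles the toy Python sketch below, which uses AES-GCM from the cryptography library to mimic the two steps. It is an illustration under stated assumptions, not NVIDIA’s implementation: the real key exchange and decryption happen between the device driver and the GPU hardware, and the shared session key here is a stand-in for that exchange.

```python
# Conceptual sketch of the two-step flow described above. The real key
# exchange and decryption are handled by NVIDIA's driver and the GPU
# hardware, not by application code like this.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Stand-in for the driver<->GPU key exchange: both ends share a session key.
session_key = AESGCM.generate_key(bit_length=256)

def driver_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Driver side: encrypt data before it crosses the PCIe bus."""
    nonce = os.urandom(12)  # 96-bit nonce, standard for AES-GCM
    return nonce, AESGCM(session_key).encrypt(nonce, plaintext, None)

def gpu_decrypt(nonce: bytes, ciphertext: bytes) -> bytes:
    """GPU side: decrypt inside the hardware-protected environment."""
    return AESGCM(session_key).decrypt(nonce, ciphertext, None)

# The payload is opaque while "in transit" across the bus...
nonce, wire_data = driver_encrypt(b"training batch / model weights")
# ...and only becomes plaintext again inside the protected GPU package.
assert gpu_decrypt(nonce, wire_data) == b"training batch / model weights"
```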
NVIDIA vs AMD vs Intel
NVIDIA isn’t the only player in town; Intel and AMD are also making steady strides.
Last year, Intel introduced Project Amber, which aims to provide a security foundation for confidential computing, especially for the training and deployment of AI models. Most recently, the company added Trust Domain Extensions (TDX), its hardware-based VM isolation technology, to its 4th Gen Xeon processors.
Similarly, AMD has partnered with Google Cloud to provide an additional layer of security for the chip designer’s Epyc processors. At the time, AMD was the only vendor offering confidential computing capabilities in mainstream server CPUs.
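On a Linux machine, one quick way to see whether a CPU advertises such capabilities is to inspect the feature flags in /proc/cpuinfo. The sketch below assumes the flag names that current kernels commonly expose (sev, sev_es and sev_snp on AMD hosts; tdx_guest inside an Intel TDX guest); exact names vary by kernel version, so treat them as illustrative.

```python
# Hedged sketch: scan /proc/cpuinfo (Linux) for confidential-computing
# feature flags. Flag names below are assumptions based on how current
# kernels commonly expose AMD SEV and Intel TDX support.
FLAGS_OF_INTEREST = {
    "sev": "AMD Secure Encrypted Virtualization",
    "sev_es": "AMD SEV with Encrypted State",
    "sev_snp": "AMD SEV with Secure Nested Paging",
    "tdx_guest": "running inside an Intel TDX trust domain",
}

cpu_flags: set[str] = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            cpu_flags.update(line.split(":", 1)[1].split())
            break  # flags are identical across logical CPUs

for flag, meaning in FLAGS_OF_INTEREST.items():
    present = "yes" if flag in cpu_flags else "no"
    print(f"{flag:10s} {present:3s}  ({meaning})")
```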
The Register notes that chip companies have strong incentives to work with cloud providers like Microsoft, Google, IBM, and Oracle, which buy their processors in substantial volumes. Security researchers at these cloud companies can scrutinise every detail of a device’s implementation and run their own custom tests, which matters because independent researchers have repeatedly uncovered flaws in both Intel SGX and AMD SEV.
Additionally, there are ongoing efforts from open-source communities: the RISC-V ecosystem, for instance, is implementing confidential computing in an open-source project called Keystone.
However, while the efforts above target security at the CPU level, NVIDIA’s Hopper architecture brings VM-style confidential computing to GPUs. Given that NVIDIA has been going all in on AI, this gives the company a further edge.
According to one research estimate, the confidential computing market could grow 26x in five years, reaching up to $54 billion by 2026. It would not be an overstatement, then, to say that cloud security will be one of the biggest drivers of the AI chip race.