
OpenAI Introduces New Enterprise-Grade Features for API Customers

OpenAI's new API features boost enterprise security and cost management, streamlining AI adoption across industries. 


OpenAI has announced new enterprise-grade features for its API customers. The additions include enhanced security, better administrative controls, improvements to the Assistants API, and more options for cost management.

This latest announcement builds upon previous enterprise offerings, with a focus on API customers. The new features include Private Link for secure communication between Azure and OpenAI, native Multi-Factor Authentication for access control, and a new Projects feature for granular control and oversight over individual projects within the organisation.

OpenAI also introduced updates to its Assistants API, including improved retrieval with ‘file_search’, streaming support for real-time responses, and new ‘vector_store’ objects for simplified file management and billing.
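The shape of these Assistants API updates can be illustrated with a short sketch using the openai Python SDK. This is only an assumed illustration: the file names, model, and assistant instructions below are hypothetical, and exact method paths may differ between SDK versions.

```python
# Minimal sketch of the updated Assistants API features, assuming the
# openai Python SDK (v1.x); file names and instructions are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# New vector_store object: files are uploaded once and managed (and billed) in one place.
vector_store = client.beta.vector_stores.create(name="product-docs")
uploaded = client.files.create(file=open("handbook.pdf", "rb"), purpose="assistants")
client.beta.vector_stores.files.create(vector_store_id=vector_store.id, file_id=uploaded.id)

# Improved retrieval via the file_search tool, backed by the vector store.
assistant = client.beta.assistants.create(
    model="gpt-4-turbo",
    instructions="Answer questions using the attached documentation.",
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)

# Streaming support: tokens arrive as they are generated instead of all at once.
thread = client.beta.threads.create(
    messages=[{"role": "user", "content": "Summarise the refund policy."}]
)
with client.beta.threads.runs.stream(thread_id=thread.id, assistant_id=assistant.id) as stream:
    for text in stream.text_deltas:
        print(text, end="", flush=True)
```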

To help organisations manage costs, the company now offers discounted usage on committed throughput for GPT-4 and GPT-4 Turbo, as well as reduced costs on asynchronous workloads through its new Batch API.
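The Batch API takes a JSONL file of requests and processes them asynchronously within a completion window at a discounted rate. Below is a rough sketch of that flow, again assuming the openai Python SDK; the input file and model name are placeholders.

```python
# Rough sketch of the Batch API flow with the openai Python SDK (v1.x);
# requests.jsonl is a hypothetical file of JSON-lines chat completion requests.
from openai import OpenAI

client = OpenAI()

# Each line in requests.jsonl looks roughly like:
# {"custom_id": "req-1", "method": "POST", "url": "/v1/chat/completions",
#  "body": {"model": "gpt-4-turbo", "messages": [{"role": "user", "content": "..."}]}}
batch_input = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

# Submit the batch; responses come back asynchronously at reduced cost.
batch = client.batches.create(
    input_file_id=batch_input.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)

# Poll for completion, then download the output file of responses.
status = client.batches.retrieve(batch.id)
if status.status == "completed":
    output = client.files.content(status.output_file_id)
    print(output.text)
```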

The company works with a wide range of enterprises, including Morgan Stanley, Salesforce, Healthify, Stripe, Khan Academy, and Duolingo. According to the blog post, it plans to add more features focused on enterprise-grade security, administrative controls, and cost management to support the safe and effective deployment of AI across various industries and use cases.

In addition to enhancing enterprise API capabilities, OpenAI introduced an instruction hierarchy to protect large language models (LLMs) from vulnerabilities such as prompt injections and jailbreaks. This new security layer ensures that when faced with conflicting instructions, the model prioritises higher-privileged ones, such as system messages, enhancing robustness and safety. Misaligned instructions, which conflict with these primary directives, will be disregarded by the model, thereby preventing manipulation and unauthorised actions.

K L Krithika

K L Krithika is a tech journalist at AIM. Apart from writing tech news, she enjoys reading sci-fi and pondering impossible technologies, trying not to confuse them with reality.