How APIs Can Save AI Research Labs: Lessons From OpenAI

“It is not a dream, it is a simple feat of scientific engineering, only expensive — blind, faint-hearted, doubting world!”

Nikola Tesla

Discovering a new medicine is a billion-dollar research endeavour, but at least it can draw in the money because the results speak for themselves: saved lives. AI, by contrast, is riddled with speculation and scepticism, so it is an uphill task for researchers to sell their ideas or churn out the profits needed to keep fuelling their labs. OpenAI, for example, started as a non-profit research lab but changed its stance when it partnered with Microsoft. A year later, the company announced that it is making its exotic deep learning innovations available to the public through an API that comes with a price tag. Customers can now access state-of-the-art machine learning models without the headaches of training from scratch; training GPT-3 alone is estimated to have cost over $4 million.

Today, the API can run models with weights from the GPT-3 family, with speed and throughput improvements. In the next section, we look at how OpenAI plans to roll out its new strategy and the key takeaways for other AI R&D labs and budding researchers.



The Simpler The Better

Unlike most AI systems, which are designed for a single use case, OpenAI's API provides a general-purpose “text in, text out” interface that lets users try it on virtually any English-language task. As the team puts it:

“We’ve designed the API to be both simple for anyone to use but also flexible enough to make machine learning teams more productive.”

“In fact, many of our teams are now using the API so that they can focus on machine learning research rather than distributed systems problems.”


Modern-day ML models are so large that smaller organisations cannot afford to build them. With the API, OpenAI tries to bring the advantages of its mega models to smaller businesses and organisations.
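To make the “text in, text out” idea concrete, here is a minimal sketch of what such a call could look like. The endpoint path, field names, and the `davinci` engine name are assumptions based on OpenAI's public description of the API at launch, not a verified contract:

```python
import json

def build_completion_request(prompt, max_tokens=64, temperature=0.7):
    """Assemble the JSON body for a completion-style 'text in, text out' call.

    The field names here mirror OpenAI's publicly described completion
    interface; treat them as illustrative rather than guaranteed.
    """
    return json.dumps({
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    })

# Actually sending the request (sketch only; needs the `requests`
# package, network access, and a real API key):
#
# import requests
# resp = requests.post(
#     "https://api.openai.com/v1/engines/davinci/completions",
#     headers={"Authorization": "Bearer " + API_KEY},
#     data=build_completion_request("Translate 'bonjour' into English:"),
# )
# print(resp.json()["choices"][0]["text"])
```

The appeal of this shape is exactly what the quotes above describe: the same plain-text payload works for translation, summarisation, Q&A, or code, with no model-specific plumbing on the user's side.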

Watching Out For Malicious Players

Ever since the release of GPT, the text generator, OpenAI has been at the receiving end of criticism. OpenAI knows the potential adverse effects of its technology and has admitted as much in its latest paper on GPT-3. Now these controversial machine learning models will be available to the public. To keep an eye on the consequences, the API was launched as a private beta rather than being made generally available; this way, the team believes, it can better control how the models are used.

No one can fully anticipate the consequences of a rapidly evolving technology; one can only put checks in place. OpenAI states that it will terminate API access for users who turn it to applications such as harassment, spam, radicalisation, or astroturfing.

Apart from this, OpenAI is also conducting research into potential misuses of the models served by the API, including with third-party researchers via an academic access programme.

Research Needs Revenue

Today, we consider Marconi to be the father of wireless technology. However, Nikola Tesla was pursuing similar endeavours at the same time as Marconi. The difference between their successes was something as fundamental as funding!

This is true even today. The R&D department usually takes the first hit when an organisation faces an economic downturn or a pandemic. Last month, for example, Uber announced that it would be winding down its AI wing. So it is extremely important for any AI lab to maintain cash flow.

The models developed by OpenAI are very large and take a lot of expertise to develop and deploy, which makes them very expensive to run. So OpenAI will soon announce a pricing plan for its API customers, which it believes will, in addition to being a revenue source, help cover the costs of pursuing its mission.

Leaving Room For Innovation

What OpenAI got right with its new strategy is that its tools are accessible to a wide range of users who are willing to pay for top technology delivered in the simplest possible manner. The team has also left room for users to improve on the existing tools, a win-win scenario for both parties. And while courting customers with its technology, OpenAI has been vocal about the potential misuses of that technology and the steps it will take to stop them.

The ultimate objective of all AI efforts is to achieve minimal or no human interference: AGI. And for this to happen, research labs should devise strategies to give their ideas a commercial twist.

Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.
