“It is not a dream, it is a simple feat of scientific engineering, only expensive — blind, faint-hearted, doubting world!”
– Nikola Tesla
Discovering a new medicine is a billion-dollar research endeavour. At least it can draw in the money, as the results are self-explanatory: life-saving. But in the case of AI, which is usually riddled with speculation and scepticism, it is an uphill task for researchers to sell their ideas or churn out the profits needed to keep fuelling their AI labs. For example, OpenAI, which started as a non-profit research lab, changed its stance when it partnered with Microsoft. A year later, it announced that it is making all its exotic deep learning innovations available to the public through an API that comes with a price tag. Now, customers can access state-of-the-art machine learning models without the headaches of training from scratch; training GPT-3 is estimated to have cost over $4 million!
Today, the API can run models with weights from the GPT-3 family, with speed and throughput improvements. Below, we take a look at how OpenAI plans to roll out its new strategy, and the key takeaways for other AI R&D labs and budding researchers.
The Simpler The Better
The OpenAI team made sure that their API, unlike most AI systems which are designed for a single use case, is built to be both simple and flexible enough to make machine learning teams more productive.
In OpenAI’s own words, “many of our teams are now using the API so that they can focus on machine learning research rather than distributed systems problems.”
OpenAI’s API is designed to provide a general-purpose “text in, text out” interface that allows users to try it on virtually any English-language task.
We’ve designed the API to be both simple for anyone to use but also flexible enough to make machine learning teams more productive.
– OpenAI
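The “text in, text out” idea can be made concrete with a small sketch. The endpoint URL, field names (`prompt`, `max_tokens`) and overall request schema below are assumptions chosen to follow common REST conventions, not OpenAI’s documented API; the point is only that the caller sends plain text and gets plain text back, with no model internals exposed.

```python
import json

# Assumed endpoint for illustration only; the real API path and
# request schema may differ from this sketch.
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt: str, max_tokens: int = 64) -> str:
    """Serialise a text prompt into a JSON request body.

    The whole interface reduces to: text in (the prompt),
    text out (the model's completion, capped by max_tokens).
    """
    payload = {
        "prompt": prompt,          # the "text in"
        "max_tokens": max_tokens,  # cap on the generated "text out"
    }
    return json.dumps(payload)

body = build_request("Translate to French: Hello, world.")
```

Because the interface is just text over HTTP, the same request shape works for translation, summarisation, question answering, or any other English-language task — only the prompt changes.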
Modern-day ML models are large, and smaller organisations cannot afford to build them. With the API, OpenAI tries to bring the advantages of its mega models to smaller businesses and organisations.
Watching Out For Malicious Players
Ever since the release of GPT-2, the text generator, OpenAI has been at the receiving end of criticism. OpenAI knows the adverse effects of its technology and has admitted as much in its latest paper on GPT-3. Now, these controversial machine learning models will be available to the public. To keep an eye on the consequences, the API was launched in a private beta rather than being made generally available. This way, the team believes, the API gives it better control over how the technology is used.
No one can fully anticipate the consequences of a rapidly evolving technology; the best one can do is deploy checkpoints along the way. OpenAI states that it will terminate API access if users turn it to applications such as harassment, spam, radicalisation, or astroturfing.
Apart from this, they are also conducting research into the potential misuses of models served by the API, including with third-party researchers via an academic access program.
Research Needs Revenue
Today, we consider Marconi to be the father of wireless technology. However, Nikola Tesla was pursuing similar endeavours at the same time as Marconi. The difference between their successes was something as fundamental as funding!
This is true even today. The R&D department usually takes the first hit when an organisation faces an economic downturn or a pandemic. For example, last month Uber announced that it would be winding down its AI wing. So, it is extremely important for any AI lab to maintain its cash flow.
For example, the models developed by OpenAI are very large and take a lot of expertise to develop and deploy, which makes them very expensive to run. So, OpenAI will soon announce a pricing plan for its API customers, which, it believes, in addition to being a revenue source, will also help cover costs in pursuit of its mission.
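To see why usage-based pricing can sustain such a lab, a rough back-of-the-envelope calculation helps. The per-token rate below is purely hypothetical (OpenAI had not published prices at launch), as are the usage figures; the sketch only illustrates how per-request charges compound into meaningful revenue.

```python
# Hypothetical rate for illustration only -- not OpenAI's actual pricing.
PRICE_PER_1K_TOKENS = 0.06  # assumed USD per 1,000 tokens

def monthly_cost(tokens_per_request: int,
                 requests_per_day: int,
                 days: int = 30) -> float:
    """Estimate a customer's monthly bill under simple per-token pricing."""
    total_tokens = tokens_per_request * requests_per_day * days
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

# A customer averaging 500 tokens per request at 1,000 requests a day:
bill = monthly_cost(tokens_per_request=500, requests_per_day=1000)
# 15 million tokens a month -> $900 under the assumed rate
```

Under these assumed numbers, even a modest customer base generates recurring revenue at a scale that can offset the heavy cost of serving large models.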
Leaving Room For Innovation
What OpenAI got right with its API strategy is that its tools are accessible to a wide range of users who are willing to pay for top technology delivered in the simplest possible manner. The team has also left room for users to build on the existing tools. This is a win-win for both parties. While luring customers with its technology, the team has also been vocal about the potential misuses of that technology and the steps it will take to stop them.
The ultimate objective of all AI efforts is AGI — systems that require minimal or no human intervention. And for this to happen, research labs should devise strategies to give their ideas a commercial twist.