OpenAI Saga Shows Why Open Source is Necessary

This reiterates that such technology should not be limited to a select few, warranting a shift towards more decentralised frameworks.

The recent turn of events at OpenAI has prompted enterprises to reconsider their dependency on the company, sparking a fervent debate on the imperative role of open-source communities in AI development. Tech experts, CEOs, industry insiders, and open-source proponents, including Hugging Face CEO Clem Delangue and Stability AI's Emad Mostaque, have rallied for a shift toward more decentralised and open-source AI frameworks.

Recently, 70 signatories, including Meta's chief AI scientist Yann LeCun, called for more openness in AI development in a letter published by Mozilla, amid concerns that a few companies could monopolise AI, with some firms lobbying against open AI R&D.

That is precisely what happened: the leadership fiasco at one company threatened service disruption for all GPT-based businesses, including almost 80% of Fortune 500 companies across industries. While support from Microsoft ensured smooth sailing, a mass exodus, as threatened by a majority of OpenAI employees, would have been disastrous.

This reiterated the dangers of excessive reliance on a single company's proprietary models, echoing past concerns seen in the cloud computing industry. Hence, more than 100 startups and growing businesses built around OpenAI's large language models turned to competitors like Cohere to safeguard their interests.

The Balance Tips Towards Open Source 

Interestingly, amidst OpenAI's leadership crisis, Andrej Karpathy, who had otherwise been notably quiet, posted cryptically about centralisation and decentralisation.

“Thinking a lot about centralisation and decentralisation these few days,” Karpathy posted on X. This could also stem from the fact that OpenAI, which was meant to be an open AI research lab, turned into a tight-knit, closed company with ambitions for profit.

However, this is no surprise: Karpathy is an active contributor to the open-source ecosystem and, while holding an important position within OpenAI, had also built a ‘Baby Llama’ model based on Meta’s Llama 2.

The recent shakeup could make this very approach, a balance between closed and open-source models, more relevant.

Enterprises will need to rethink their strategies and adopt a multi-model approach that draws equally on open-source and closed-source models. Balancing the convenience of proprietary models with the transparency offered by open-source alternatives becomes increasingly integral to shaping the future of AI.

Meta’s collaborations with cloud players like AWS and Microsoft to release its open-source Llama 2 model indicate ease of mass adoption, thanks to established and reliable infrastructure from cloud giants. Hence, companies like Meta, which released Llama and Llama 2; Hugging Face, which hosts thousands of open-source models; Cohere, with its Sandbox, a library of open-source models; and TII, which came up with Falcon, stand to gain from the increasing appeal of open source.

Why Open Source is the Best 

The cost structures diverge significantly between proprietary and open-source models. Proprietary models often operate on usage-based pricing or subscription tiers, imposing specific costs per task or per month. In contrast, many open-source models are freely distributed, although fine-tuning or customisation might require additional resources. Understanding these cost dynamics is integral to assessing their impact on profit margins and operational budgets.

Thus, Meta’s Llama 2 has been wildly successful, topping several charts, competing in capability, and being adopted by firms like IBM to build its watsonx platform. This situation would also warrant far more fanfare and adoption for Meta’s upcoming Llama 3, which is said to be on par with OpenAI’s GPT-4.

Latency is another critical concern, especially for real-time applications. Larger proprietary models, such as GPT-4, can suffer from longer response times due to API inference, with responses taking up to 20 seconds in some cases, potentially affecting user experiences. In contrast, tailored open-source models designed for specific tasks can offer faster response times, providing a competitive advantage in time-sensitive scenarios.

The aspect of flexibility and transparency underscores a fundamental contrast between proprietary and open-source models. Proprietary models often lack visibility into their underlying code, hindering user understanding and compromising consistency in user experiences. On the contrary, open-source models prioritise transparency and flexibility, empowering users with insights into model behaviours and enabling alignment with specific business objectives.

Security and governance present another dichotomy. Proprietary models often tout enhanced security features and built-in content moderation, reassuring users of data protection and adherence to content policies. However, concerns about data compliance and potential leaks often accompany reliance on external proprietary models. Open-source models, while lacking out-of-the-box security measures, can be brought within secure business perimeters for local fine-tuning on sensitive data, mitigating some of these risks.

Balanced Board

The situation reiterates the point that such consequential technology should not be controlled by a select few, warranting a board composed with a balanced philosophy that considers the larger good.

A board structured for disagreement and deliberation would prove a better fit than a lop-sided one. Disagreements on open-source boards often result in partners leaving or the company dissolving. However, this doesn't necessarily impact the larger ecosystem: the fate of the source code varies, ranging from being sold, to seeding a new venture, to being shared among the remaining partners through a fair-use agreement.

Shyam Nandan Upadhyay

Shyam is a tech journalist with expertise in policy and politics, and exhibits a fervent interest in scrutinising the convergence of AI and analytics in society. In his leisure time, he indulges in anime binges and mountain hikes.