For AI to reach its potential and for society to benefit from it, AI needs to be decentralised, i.e., different stakeholders in the AI community should have equal access to resources such as datasets, compute power and the source code of different AI models. But that is not the case today.
Today, most of the breakthroughs in the field of AI come from big organisations. From text-to-image generators such as DALL-E 2 and Imagen to large language models (LLMs) such as GPT-3, the headline models have all come from large organisations.
However, none of these AI models is open source. AI today remains fairly centralised, meaning the inner workings of such models are known to only a handful of people.
While large organisations such as Meta, Google and Microsoft are slowly embracing open-source culture and releasing some of their models, many still believe that AI should remain centralised. Balaji Srinivasan, former CTO at Coinbase, believes that centralised AI itself is unethical.
The centralisation of AI stems from its resource requirements: the large datasets and computing power needed to build state-of-the-art models often lie in the hands of large organisations. The concentration of these resources with a few is frequently seen as unethical. Further, according to many members of the AI community, the lack of transparency and interoperability in a centralised AI system, along with the limited participation of other stakeholders in AI innovation, makes it unethical.
One Twitter user even said that centralised AI couldn’t be fully ethical without being fully transparent. For AI to reach its full potential, it is imperative that the broader community have a good understanding of the different AI models, how they function and how they can be improved. Unfortunately, that is not the case today.
Further, these large organisations often cite ethical concerns when asked about open-sourcing their model. “The fundamental dichotomy here lies in opening or closing these models. There are security reasons to keep them closed, but doing so maintains its construction in terms of dataset and training equally closed and therefore inaccessible,” Giada Pistilli, ethicist at Hugging Face, said.
Is Decentralised AI the solution?
Decentralised AI brings together technologies such as blockchain and AI; in fact, blockchain is the major driver of decentralised AI. It paves the way for a decentralised ecosystem where different stakeholders can come together to create AI architectures, eliminating the need for a central controlling authority.
The very nature of decentralised AI will facilitate AI innovation in such a way that society can reap the benefits. A decentralised ecosystem means an AI model can be trained by multiple stakeholders, not just its creators. Since multiple stakeholders will have access to the inner workings of the model, its potential can be leveraged to overcome challenges plaguing society today.
The growing demand for Decentralised AI has resulted in the development of projects such as SingularityNET, which is a decentralised network built on the blockchain. The network aims to break the monopolistic hold of large organisations over AI.
SingularityNET allows researchers or companies to monetise their AI solutions. Since the platform is blockchain-based, they also get access to various other AI-related resources and algorithms. The platform supports data exchange and sharing across different algorithms, which helps develop multi-tier AI applications.
The shortcomings of centralised AI have also led to the emergence of initiatives such as BigScience, a collective of independent, academic and industrial researchers interested in AI, NLP, the social sciences, law, ethics and public policy.
Talking about the significance of an initiative like BigScience, Pistilli said there is a growing need for research to be conducted in a setting where stakeholders can have input and influence over the design process. This process will allow them to help shape the values and priorities of the research project and decide what data and evaluations should be used.
To counter the dominance of large organisations over LLMs, the BigScience team launched BLOOM, which is the first multilingual LLM trained in complete transparency.
BLOOM is open source, and any researcher can now download, run and study it. Any individual or institution that agrees to the terms of the model’s Responsible AI Licence can use the model on a local machine or on the cloud.
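As an illustration of what that access makes possible, a researcher could load one of the smaller published BLOOM checkpoints with Hugging Face’s transformers library. The sketch below is a minimal, assumed workflow, not an official recipe: it presumes the transformers and torch packages are installed, that the `bigscience/bloom-560m` checkpoint is available on the Hub, and the helper names are our own.

```python
# Minimal sketch (assumed workflow): running a small BLOOM checkpoint locally.
# Requires the `transformers` and `torch` packages; `bigscience/bloom-560m`
# is one of the smaller published checkpoints in the BLOOM family.

MODEL_ID = "bigscience/bloom-560m"

def generation_settings(prompt: str, max_new_tokens: int = 50) -> dict:
    """Bundle the prompt and decoding options (illustrative helper)."""
    return {"prompt": prompt, "max_new_tokens": max_new_tokens}

def generate(prompt: str, max_new_tokens: int = 50) -> str:
    """Download the checkpoint on first use and return a text continuation."""
    # Lazy import so the sketch can be read and tested without the heavy dependency.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    settings = generation_settings(prompt, max_new_tokens)
    inputs = tokenizer(settings["prompt"], return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=settings["max_new_tokens"])
    return tokenizer.decode(output[0], skip_special_tokens=True)

# e.g. text = generate("Decentralised AI matters because")
```

The first call fetches the weights from the Hub, after which the model runs entirely on the local machine, which is precisely the kind of study-and-modify access the licence is meant to enable.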
Similarly, last month saw the release of Stable Diffusion, an open-source AI text-to-image generator which, according to its creator, Emad Mostaque, is about 30 times more efficient than DALL-E 2, delivering comparable image quality on a consumer graphics card.
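In practice, running Stable Diffusion on a consumer GPU looks something like the sketch below, which uses Hugging Face’s diffusers library. The checkpoint name and helper functions are assumptions for illustration, and a CUDA-capable graphics card is assumed for reasonable speed (the code falls back to CPU otherwise).

```python
# Illustrative sketch (assumed workflow): text-to-image with Stable Diffusion
# via the `diffusers` library, which must be installed along with `torch`.

MODEL_ID = "runwayml/stable-diffusion-v1-5"  # an assumed public checkpoint

def pick_device() -> str:
    """Prefer a CUDA GPU when one is available, otherwise fall back to CPU."""
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        return "cpu"

def text_to_image(prompt: str, out_path: str = "out.png") -> None:
    """Generate one image for `prompt` and save it to `out_path`."""
    # Lazy import keeps the heavy dependency off the module top level.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID)
    pipe = pipe.to(pick_device())
    image = pipe(prompt).images[0]  # the pipeline returns PIL images
    image.save(out_path)

# e.g. text_to_image("an astronaut riding a horse")
```

That a model of this class can run end to end on a single consumer card is what sets Stable Diffusion apart from its closed, datacentre-bound counterparts.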
Mostaque has also launched Stability AI, the company behind Stable Diffusion, to empower researchers with funding and computing power to boost their research.