With recent advancements in AI and the community embracing a more open-source culture, it becomes even more critical to promote the ethical and responsible use of AI. Last month, Stable Diffusion, the newly released open-source AI image generator that’s grabbing all the headlines, was leaked on the infamous discussion forum 4chan, where users were quick to churn out NSFW content.
Instances like these make us rethink the importance of AI governance, and this is where ‘Open & Responsible AI licences (OpenRAILs)’ come in.
OpenRAILs are a new class of AI-specific licences that enable open access, use and distribution of AI artefacts while requiring their responsible use. These licences emerged from the BigScience research workshop as multiple stakeholders pushed for the transparent, open and ethical development of large language models.
“When assessing how to licence the BLOOM model, we realised it was not possible to place an open-source licence due to the potential uses that this type of artefact might have. Thus, we felt obliged to come up with a new type of licence striking a balance between open access and responsible use of ML artefacts, based on our community values stated in the Ethical Charter,” Carlos Muñoz Ferrandis, tech and regulatory affairs counsel at Hugging Face, told AIM.
Open & responsible AI licences
OpenRAILs permit flexible downstream use, re-distribution and royalty-free access to the licensed content, as well as the release of any derivative works created from it. Additionally, they embed a set of use-based restrictions that limit the application of the licensed AI artefact in specified critical scenarios.
“Existing open source licences do not take the technical nature and capabilities of the model as a different artefact to software/source code into account, and are therefore not adapted to enabling more responsible use of ML models.
“Moreover, following the open-source definition, open-source licences cannot embed ethics-informed use-based restrictions and therefore are not designed to promote, as such, a responsible use of the licensed artefact,” Carlos said.
However, in the case of OpenRAILs, the licence will allow open access and free re-distribution of the ML artefact and/or distribution of derivatives of it by licensees.
“In this way, a responsible use of the licensed artefact is maximised by becoming a widespread legal provision included in any downstream versions and derivatives of the artefact,” Carlos said.
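The propagation mechanism described above, where use-based restrictions travel with the artefact into every downstream derivative, can be sketched as a toy Python model. This is illustrative only: the class names, fields and restriction strings are hypothetical, not part of any real licence tooling, and an actual OpenRAIL achieves the propagation legally, through the licence text, rather than in code.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RailLicence:
    """A hypothetical RAIL-style licence: open access plus use restrictions."""
    name: str
    # Use-based restrictions that every downstream copy must keep.
    restricted_uses: frozenset = frozenset()


@dataclass
class Artefact:
    """A hypothetical ML artefact (model, dataset, etc.) under a licence."""
    name: str
    licence: RailLicence

    def derive(self, new_name: str) -> "Artefact":
        # A derivative inherits the same licence, restrictions included,
        # mirroring the "widespread legal provision" the article describes.
        return Artefact(new_name, self.licence)

    def use_permitted(self, purpose: str) -> bool:
        # Open use is the default; only the listed purposes are restricted.
        return purpose not in self.licence.restricted_uses


licence = RailLicence("OpenRAIL-ish", frozenset({"disinformation", "surveillance"}))
model = Artefact("base-model", licence)
finetuned = model.derive("finetuned-model")

print(finetuned.use_permitted("research"))        # True
print(finetuned.use_permitted("disinformation"))  # False
```

Because `derive` passes the same licence object along, the use-based restrictions apply to the fine-tuned derivative automatically; no downstream licensee can drop them without changing the licence itself.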
How impactful will OpenRAILs be?
Carlos believes OpenRAILs have the potential to become common practice in the AI space in the years to come. However, he also points out that misuse of openly available ML artefacts can always happen, and this is not specific to AI. Software piracy, for instance, has existed for years, and billions are made from it every year.
Nonetheless, OpenRAILs might act as a pertinent tool in striving for a more responsible open culture in the AI space, as they can deter potential misusers. “OpenRAILs should be seen as value carriers, vehicles enabling the spread of a respectful use of the licensed artefact, and broadly, a responsible and open AI sharing culture. As any other licence, OpenRAILs can be enforced in case of an alleged breach of the licence. Licence enforcement will always depend on the licensor and how far he/she wants to go with it,” Carlos said.
Further, communication efforts will also be needed to educate the community about these licences as well as their interpretation. “In collaboration with the RAIL Initiative, we constantly welcome constructive feedback to improve licence’s understanding and inform future licence development. OpenRAILs should be an initiative by the ML community at large; we need the entire community involved with us,” Carlos said.
Now, when we talk about the overall AI community, many large organisations, and researchers from those organisations, are part of it as well. And oftentimes, these organisations have been accused of not open-sourcing their models or of using AI solely for profit.
When asked how large organisations would be able to embrace OpenRAILs, Carlos said that these companies were also part of the ML community and that he clearly saw them adopting this type of licensing practice in the long run. “You can already see Meta releasing its big models, such as OPT-175 under RAIL licences, enabling restricted access for research purposes and including a responsible use clause with specific restrictions based on the potential of the licensed artefact,” he said.
Giada Pistilli, principal ethicist at Hugging Face, believes OpenRAILs are governance tools inscribed in a broader framework.
Speaking from an ethical point of view, she told AIM, “It is not a tool that succeeds in isolation or is sufficient enough to make up for the potential risks posed by a specific model. Nevertheless, it is part of a broader framework in which stakeholders (licensors but also users) take responsibility for the ML artefact uses and misuses.
“In this framework, we would ideally find clear technical documentation (eg, data & model cards) for technical compliance, plus a code of ethics document (eg, ethical charters et simili) to set the ML artefact’s project goals and thus define its ethical compliance. Without naivete, these governance tools are only as effective as the organisations’ willingness to abide by their terms, update them and, of course, enforce them,” said Pistilli.
AI governance and OpenRAILs
Today, governments across multiple jurisdictions are looking into AI regulation. So, how will OpenRAILs fit into all this? Both Pistilli and Carlos believe AI regulation will not be the only, or even the optimal, governance solution for AI.
Currently, the European Union is working on the Artificial Intelligence Act (AIA). In July, the UK released a paper outlining the government’s approach to regulating AI in the country. In the US, too, different states have enacted their own AI rules and regulations; however, there is no overarching federal law.
“Notwithstanding the invaluable safeguards for consumers and pro-innovation tools for ML companies that upcoming regulations might bring, the AI space needs evidence-based governance tools now, not in 3-4 years’ time.
“Besides, OpenRAILs and AI regulations should not be seen as mutually exclusive mechanisms but complementary ones, promoting a balance between open innovation and responsible use of the technology.
“Moreover, our efforts are also focused on helping national and international organisations better understand the ML environment through our open source tools, education programmes, and so on,” they said in a joint statement.
Powerful inventions typically change lives for better and worse. Two of the most dramatic examples are nuclear technology and widespread industrialisation, which has led to a hotter planet and now-expensive options to backtrack. In the exciting world of AI, we are just beginning to understand the positive and negative externalities. OpenRAILs seem to be at least a step in the right direction. However, much remains to be uncovered, and a lot is at stake for practitioners.
Education is key. Lawmakers and policymakers will do the hard work of research, analysis and drawing up the right principles. However, it is up to practitioners to stay updated on this knowledge and remain compliant. The temptation for a creative mind to jump into building something new and inspiring will always be there. So, while nurturing this innovative spirit, companies should invest in the right ways to keep their employees empowered with knowledge. Part of this is staying aware of the conditions that local laws impose on permissible usage, in addition to whatever guidance OpenRAILs may provide.
Despite laws and knowledge, practitioners’ individual values and ethics remain relevant. Licensors can enforce use-based restrictions, but what constitutes misuse is a matter of interpretation that may change over time. Hence, individuals need to be aware and thoughtful of the consequences of their actions. What was told to Spider-Man applies to data science superheroes as well: “With great power comes great responsibility!” The forward march of AI development is inevitable. The debate on responsible usage is just beginning.