
OpenAI & Co Join Hands to Find Regulatory Loopholes

OpenAI, along with nine other companies, is signing up for a new set of guidelines focused on building future AI-generated content responsibly.



Amid all the hype and discussion around the ethical implications of AI, a group of ten companies (OpenAI, TikTok, BBC R&D, Adobe, Bumble, CBC/Radio-Canada, Synthesia, D-ID, Witness, and Respeecher) has signed up for a new set of guidelines focused on building, creating, and sharing AI-generated content responsibly.

The initiative comes from the Partnership on AI (PAI), an AI research non-profit, which consulted 50 organisations, including big-tech companies along with academic, civil society, and media organisations, to put together the voluntary recommendations.

PAI describes the framework as a living document, one that will evolve with developments in AI technology and bring in more companies over time. Claire Leibowicz, the head of AI and media integrity at PAI, said, “We want to ensure that synthetic media is not used to harm, disempower, or disenfranchise but rather to support creativity, knowledge sharing, and commentary.”

Why is it needed?

The recommendations were moulded in collaboration with three types of companies: firstly, builders of the technology and infrastructure, such as OpenAI; secondly, creators of synthetic media, such as Synthesia; and thirdly, distributors and publishers of synthetic media, such as the BBC and TikTok. This sounds like good news, since input from different kinds of companies should lead to more transparency about how the technology develops going forward.

One of the most essential parts of these guidelines is a pact by the companies to research and adopt ways of informing users when they are interacting with something generated by AI, whether through watermarks and disclaimers or through traceable elements in the models’ training data. The policies are also mostly concerned with greater transparency around how models are created and how synthetic media is used.
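To make the disclosure idea concrete, here is a minimal sketch of what labelling AI-generated content could look like in practice, assuming a simple metadata-based disclaimer rather than any mechanism the PAI framework actually specifies. The key names ("ai_generated", "generator") and the model name are hypothetical; the sketch uses Python’s Pillow library:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str, model_name: str) -> None:
    """Attach a machine-readable AI-disclosure label to a PNG image.

    The metadata keys used here are hypothetical illustrations,
    not part of the PAI framework or any published standard.
    """
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # simple disclosure flag
    metadata.add_text("generator", model_name)  # which model produced the image
    image.save(dst_path, pnginfo=metadata)

# Example: label an image produced by a hypothetical model
tag_as_ai_generated("generated.png", "generated_tagged.png", "example-model-v1")
```

A plain text chunk like this is trivially stripped when an image is re-encoded, which is exactly why researchers quoted later in this piece argue for robust, mandatory watermarking rather than optional labels.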

This stems from the fact that even though research firms like OpenAI and DeepMind, which are backed by big-tech companies like Microsoft and Google, are putting guardrails on their technologies and being extra careful before deploying them, new players like Stability AI are open-sourcing their software and releasing it into the public domain. For example, Stable Diffusion, its open-source model, is able to generate inappropriate content and deepfakes, which is a point of concern.

Something doesn’t feel right? 

Moreover, the copyright issues around the data these models are trained on are yet to be resolved. Hany Farid, a professor at the University of California who researches synthetic media, told MIT Technology Review that he is a little sceptical of the guidelines, explaining that voluntary guidelines and principles from the very companies that build and distribute the technology would never work. Farid also insisted that all AI-generated content should be mandatorily watermarked.

David Holz, the founder of Midjourney, admitted in an interview with Forbes that his AI image generator was using images and assets of other artists in its dataset without their consent. This understandably led to outrage among artists. There seems to be no regulation in place whatsoever; even John Oliver recently mocked this state of affairs.

Gary Marcus, Professor of Psychology and Neural Science at NYU, also told AIM in an interview that current AI models are not good at being responsible and ethical. He suggested that regulators should treat AI just like any other technology to avoid potential misuse, which means governing AI research under a set of rules and frameworks.

Mira Murati, CTO of OpenAI, expressed her belief in a recent interview with Time magazine that policymakers should regulate AI, stating that it is not too early to do so. She emphasised how important it is for OpenAI and similar companies to raise awareness of AI in a responsible and controlled way. However, she acknowledged that this group is small and that more input is required from regulators, governments, and other stakeholders beyond just the technological aspect.

Self-Regulation is as Good as No Regulation

Since AI already pervades the daily lives of billions of people, governments across the globe are increasingly bringing AI into their acts and laws. In November 2021, the 193 Member States of UNESCO’s General Conference adopted the Recommendation on the Ethics of Artificial Intelligence. The document defines principles, policies, and values to guide the governments of participating countries in building frameworks that ensure AI is deployed for the common good. The US, which is not a member of UNESCO, also unveiled its AI Bill of Rights, aiming to prevent the harm caused by the rise of AI systems.

UNESCO AI Director Mariagrazia Squicciarini, speaking with AIM, defined AI ethics as ‘putting technologists at the service of people, and not people being just used by technologists’. She suggested that fixing the bias in their data relies on companies’ self-assessment capabilities and accountability.

This suggests that the voluntary guidelines are probably a sensible first step from the companies. But fixing bias remains unaddressed: the guidelines make no mention of removing toxic content or bias from the datasets on which the AI models are trained, which raises several serious concerns. It is also probable that this banding together of companies is a step towards avoiding government intervention in the development of these technologies.

For example, a similar scenario emerged in the social media landscape last year, when big-tech companies decided to set up a self-regulatory body, essentially to avoid the Indian government’s intervention. However, given the mixed stances of the companies involved, the idea was widely criticised and did not come to fruition, even with the government continually pushing for it. Something similar is highly likely to happen in AI’s case as well.


Mohit Pandey

Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.