
No, European Draft AI Act Does Not Target Open-Source Software

The act has introduced a range of bans on what the European Parliament refers to as "intrusive and discriminatory uses of AI systems."

It’s often said that laws are made for the lawyers. It’s their job to understand, interpret and educate clients. However, in the age of the internet, where anyone can acquire mass readership, such interpretations take a different turn. A recent article by Technomancers.ai is gaining traction because of one such interpretation.

The internet is flooded with opinions that the draft act might target open-source software, citing the article mentioned above. According to the article, if the recently proposed European Draft AI Act is approved, companies in the EU will face significant penalties, amounting to either €20,000,000 or 4% of their global revenue, if they offer any model without obtaining expensive licenses through an extensive process.

It further highlights that the act will have implications for open-source developers and hosting services such as GitHub, making them responsible for the availability of unlicensed models. It also argues that, by imposing hefty penalties, the European Union is putting pressure on big tech companies in a way that could harm small businesses.

What the Act really says

However, the draft AI act does not target the open-source ecosystem. In fact, it cites research conducted by the European Commission, which says that free and open-source software has the potential to contribute substantially to the European Union’s GDP, ranging between €65 billion and €95 billion. Furthermore, the act highlights that developers of free and open-source AI components are not obligated to comply with the regulations.

However, this exemption applies only when the components are not commercialised or put into use by a provider as part of a high-risk AI system. The developers will also not be required to adhere to the requirements if a third party uses their open-source models; but if that third party builds a new product on top of the open-source component, it will be the one liable to get it certified.

The intention is to alleviate regulatory burdens placed on the developers. However, the act does advocate for developers to adopt widely accepted documentation practices, such as model and data cards.

What is banned?

While the act does not demand much from open-source developers, it certainly does from big tech. According to the draft legislation, the creators of systems like ChatGPT, Midjourney, and DALL-E will have to bear the responsibility of conducting comprehensive assessments to identify and mitigate various risks before making these tools publicly available.

One crucial aspect of the assessment involves evaluating the environmental impact of training these energy-intensive systems. Furthermore, the legislation mandates that companies disclose any use of training data that is protected by copyright law. Google’s recent decision not to release Bard in Europe is also being linked to this requirement.

Additionally, the regulation mandates the establishment of a public database specifically for “high-risk” AI systems deployed by public and government authorities. This database serves as a crucial tool to promote transparency and ensure that EU citizens are well-informed about the deployment and impact of such technologies. 

By providing accessible information, users can have a clearer understanding of how and when they are being affected by AI systems, fostering accountability and empowering citizens in their relationship with AI technology.

No more public experiments! 

Moreover, the act includes robust provisions that impose significant restrictions on the use of mass facial recognition programs in public spaces and predictive policing algorithms that rely on personal data to identify potential future offenders. These measures aim to protect individuals’ privacy and prevent potential misuse of AI technologies in law enforcement. 

It is believed that these measures are meant to essentially end AI experiments like predictive policing, which caused a major uproar when a particular AI algorithm identified people belonging to a specific community.

Also, the act has introduced a range of bans on what the European Parliament refers to as “intrusive and discriminatory uses of AI systems”. These amendments, which expand the original list of prohibited activities, target specific applications of AI technology that raise concerns.

The newly affected use cases encompass a variety of scenarios. For instance, the act prohibits the use of “real-time” remote biometric identification systems in publicly accessible spaces, as well as the use of “post” remote biometric identification systems, with the exception of law enforcement for investigating serious crimes, subject to obtaining judicial authorisation. 

Additionally, the act forbids the use of biometric categorisation systems that rely on sensitive characteristics such as gender, race, ethnicity, citizenship status, religion, and political orientation. Predictive policing systems based on profiling, location, or past criminal behaviour are also deemed off-limits. Furthermore, the act prohibits the deployment of emotion recognition systems in law enforcement, border management, the workplace, and other settings.

Wrapping Up

While the act still needs to be passed by the Parliament, the fact that it will have a global impact is undeniable. Oftentimes, if a company complies with a European regulation, it is likely to follow the same rules globally, and hence, the requirement that big tech disclose the copyrighted material on which their models are trained is a big one.

And while companies like Google are yet to bend to the regulations (as seen with the recent launch of Bard), the act will provide a base for copyright activists fighting companies like OpenAI and Stability AI in the US.

PS: The story was written using a keyboard.
Lokesh Choudhary

Tech-savvy storyteller with a knack for uncovering AI's hidden gems and dodging its potential pitfalls. 'Navigating the world of tech', one story at a time. You can reach me at: lokesh.choudhary@analyticsindiamag.com.