The European Parliament has taken a significant step towards regulating artificial intelligence systems by voting in favor of the EU AI Act. This act, if passed into law, would be the first of its kind globally and aims to provide a comprehensive framework for AI regulation within the European Union.
The EU AI Act has several key objectives. Firstly, it seeks to protect fundamental civil rights by ensuring that AI systems do not infringe upon privacy or dignity and do not enable discrimination. It addresses concerns surrounding the potential for AI to be used in ways that violate individuals’ rights and aims to establish safeguards to prevent such abuses.
Secondly, the act aims to mitigate risks to health and safety posed by AI systems. This includes addressing concerns related to biased or discriminatory AI algorithms, which could result in unfair outcomes in areas such as hiring, lending, or criminal justice. The EU AI Act strives to ensure that AI systems are developed and used in a manner that is transparent, accountable, and unbiased.
Thirdly, the act seeks to foster innovation and competitiveness in the field of AI. It aims to create a favorable environment for AI development and deployment, promoting trust, reliability, and ethical considerations. By providing clear rules and guidelines, the EU intends to encourage responsible AI practices that benefit both businesses and society as a whole.
Under the proposed regulations, AI systems are classified into four levels of risk: unacceptable, high-risk, limited risk, and minimal risk. AI applications deemed to pose unacceptable risk include those that infringe upon fundamental rights, engage in subliminal manipulation, or enable social scoring based on behavior or appearance. Predictive policing tools and remote biometric identification systems in public spaces are also considered high-risk applications.
The EU AI Act places stringent requirements on high-risk AI systems. Developers of such systems will need to undergo a conformity assessment before placing them on the market. This assessment will evaluate factors such as data quality, documentation, transparency, and human oversight. Additionally, high-risk AI systems must adhere to specific requirements regarding robustness, accuracy, and cybersecurity.
On the other hand, most currently deployed AI systems, such as text generators like ChatGPT, video games, and spam filters, are considered low- or no-risk applications. The EU aims to strike a balance between ensuring the safety and rights of individuals and allowing the continued use and development of AI technologies that pose minimal risks.
To enforce compliance with the regulations, the EU AI Act empowers national supervisory authorities to carry out inspections, issue warnings, and impose fines for non-compliance. Violations may result in penalties of up to €30 million ($33 million) or 6% of a company’s annual global revenue, whichever is higher.
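The penalty cap described above is simply the larger of a flat amount and a revenue percentage. A minimal sketch of that arithmetic, using the figures from this article (the function name is ours, purely illustrative):

```python
def max_penalty_eur(annual_global_revenue_eur: float) -> float:
    """Illustrative EU AI Act fine ceiling: the higher of a flat
    EUR 30 million or 6% of annual global revenue (figures as
    reported in this article; not legal advice)."""
    return max(30_000_000, 0.06 * annual_global_revenue_eur)

# For a company with EUR 1 billion in annual global revenue,
# 6% (EUR 60 million) exceeds the EUR 30 million floor.
print(max_penalty_eur(1_000_000_000))  # 60000000.0
```

For smaller firms, the €30 million floor dominates: a company with €100 million in revenue would still face a ceiling of €30 million rather than €6 million.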
While the EU Parliament’s approval of the AI Act is a significant milestone, the act is not yet law. It still requires final approval from member states and further discussions among EU institutions. The anticipated timeline suggests that the regulations will not come into effect until 2025, following the next European Parliament elections.
In the meantime, the EU plans to collaborate with US counterparts to establish a voluntary code of conduct for AI. This cooperation aims to align AI principles and standards between the EU and the US and potentially extend to other like-minded countries, ensuring a global approach to AI regulation.
Overall, the EU AI Act represents a comprehensive and ambitious effort to regulate AI within the EU. By addressing potential risks, protecting civil rights, and promoting responsible innovation, the act seeks to create a trustworthy and sustainable AI ecosystem that benefits individuals, businesses, and society as a whole.

