The European Union is considering a ban on the use of AI for a slew of use cases, including mass surveillance and social credit scores, according to a leaked draft proposal from the European Commission, first reported by Politico. An official statement is expected next week.
A giant leap towards responsible use of AI. EU AI Regulation almost finished. What to expect:
• CE markings in EU approved AI systems
• ban on biometric mass surveillance (face recognition)
• national AI authorities
• fines up to €20M/4% of annual revenue for the bad guys https://t.co/ry9qHgCiEM
— Teemu Roos (@teemu_roos) April 14, 2021
One of the proposals in the draft recommends that the European Commission ban certain use cases of AI outright and restrict its use in other applications that fail to meet certain standards. The document recommends a ban on the use of AI for mass surveillance and for building social credit scoring systems.
The draft proposal also seeks special authorisation for the use of ‘remote biometric identification systems’ and demands that people be explicitly notified when they interact with AI systems, ‘unless it’s obvious’. It further calls for oversight of “high-risk” AI systems: those that pose a direct threat to safety, such as self-driving cars, and those that directly affect a person’s livelihood, such as systems used for hiring, assigning recidivism scores, or allocating personal loans.
The new #AI regulation is market and innovation-oriented. By regulating high-risk applications only, it leaves out many other uses, including the necessary assessment of workplace risks, which go way beyond HR #algorithmic management https://t.co/V0myZykenm via @Verge
— Aida Ponce Del Castillo (@APonceETUI) April 15, 2021
Member states of the EU would be required to set up assessment boards to test and validate high-risk AI systems. The draft proposal also calls for a ‘European Artificial Intelligence Board’, with representatives from every member state, to help the European Commission decide which systems should be classified as high-risk.
Companies that don’t comply could face fines of up to €20 million or 4% of their annual turnover.
While the US and China have focused their attention on developing ever more powerful AI systems, they have fallen short in setting up an airtight regulatory framework to protect individual safety and rights. The EU, by contrast, already has the GDPR (General Data Protection Regulation) in place to address such issues, and this draft proposal is in line with the EU’s “human-centric” approach to developing AI.
Can’t wait for the massive PR back paddling of Big Tech to rebrand their #AI technology in Europe
“Noooo we’re not doing AI, we’re just using logistic regression!” https://t.co/d3YF4QZzZH
— Luca Foschini (@calimagna) April 14, 2021
However, the leaked draft has drawn flak from policy experts, who have called for its ambiguous language to be tightened. They want more clarity on what constitutes AI and on how detrimental or high-risk use cases are defined.
Transparency gets a nod in the leaked draft EU AI regulation. But thin on detail. Does transparency apply to all risk levels and, if so, what will be mandatory? https://t.co/X4lrFoVcqo #AI #transparency #AIAAIC #EU
— AIAAIC (@AiControversy) April 15, 2021