Artificial intelligence is advancing at a rapid pace and has gradually made inroads into every aspect of our lives. Given the impact AI has on individuals and society as a whole, thinkers, experts, and policymakers are pushing for the regulation of this technology.
The European Commission, the executive arm of the EU, has proposed a new law, the Artificial Intelligence Act, intended to make Europe the global hub of trustworthy artificial intelligence (AI). It was proposed to guarantee the security and fundamental rights of individuals and enterprises while strengthening AI uptake, investment and innovation across the EU. New machinery rules were also to complement this approach, adapting safety requirements to enhance consumer confidence in a new, versatile generation of products.
Law for AI regulation
According to a report published by the Center for Data Innovation, the new law designed to regulate AI within Europe could cost the European economy EUR 31 billion ($36 billion) over the next five years. The Artificial Intelligence Act, if adopted, would be the most restrictive AI regulation in the world.
While these findings came as a surprise, the reasoning behind such a harsh outcome is not without logic. The Center for Data Innovation argues that a small or medium-sized enterprise with a turnover of EUR 10 million that deploys a high-risk AI system would face compliance costs of up to EUR 400,000. The Commission defines such systems as those that could affect people’s fundamental rights or safety.
Further, the Commission has stated that it wants 75 per cent of EU companies to use AI by the end of this decade. Since the AIA’s list of high-risk AI applications is lengthy and the penalties for non-compliance are severe, its extensive set of requirements is expected to come at a high cost. The Commission maintains that the AIA’s purpose is to ensure that “high-risk” artificial intelligence in Europe is covered by a detailed and comprehensive set of legal requirements. These requirements impose a significant financial burden on the developers and deployers of AI systems and may hinder innovation.
The proposed Act will not only limit the use and development of AI in Europe but will also impose high costs on EU businesses and consumers, the Center for Data Innovation’s report states. The Commission, however, says the report’s findings appear to be flawed.
It cannot be denied that such findings place considerable pressure on both member states and lawmakers. One cannot afford to turn a blind eye to such reports.
If the report’s findings are accurate, the AIA will hurt companies, skilled workers will become less available, and businesses will struggle to meet the AIA’s compliance requirements. According to the report, this will further suffocate the vitality of Europe’s digital ecosystem.
Policies are meant to regulate effectively and assist economic growth rather than choke it. In April this year, the United States Federal Trade Commission (FTC) issued a bold set of guidelines on truth, equity and fairness in AI, while in June, the World Health Organization (WHO) laid down six guiding principles for AI’s design and use in healthcare.
Artificial intelligence is everywhere. It is rapidly developing, progressing and contributing to the global economy. Yet it raises numerous concerns and anxieties, especially considering the legal and human rights issues AI involves. Given the existing loopholes and biases in AI, there is no doubt that the European Commission aims to regulate AI to make it more human-centric, sustainable, inclusive, and trustworthy. But, as the report contends, its approach is not fully supported by data and economic logic.
According to Ben Mueller, a senior policy analyst at the Center for Data Innovation and the author of the report, while the European Commission has repeatedly asserted that the draft AI legislation will support growth and innovation in Europe’s digital economy, a realistic economic analysis suggests that this argument is, at best, disingenuous.