
Will evolving regulations stymie AI innovations?


“A model is as good as the underlying data,” said Jayachandran Ramachandran, SVP of Artificial Intelligence Labs at Course5 Intelligence, during his MLDS talk “Will evolving regulations stymie AI innovations? What the future holds?”. He discussed how industries and governments recognise this problem and develop regulations and recommendations, and touched on the recommendations and implications related to the European Union’s draft AI regulation.

Today, most countries have an AI policy and strategies in place. The EU is at the forefront of AI regulations and drafts. “The EU draft in 2021 is acting as a benchmark for other countries,” Ramachandran noted. The draft seeks to ensure the AI policy is human-centric, sustainable, secure, inclusive and trustworthy. Additionally, the draft focuses on a seamless transition of AI from the lab to the market.

Extensive implementation

Any system deployed for users based in the EU falls within the scope of this AI regulation; systems whose users are based outside the EU do not. The breadth of this regulation is far-reaching because it also extends to international companies deploying their systems in the EU.

Risk categorisation in the regulation

The AI regulations cover four categories of risk: unacceptable risk, high risk, limited risk and minimal/no risk.

Unacceptable-risk use cases are completely prohibited from being deployed in the EU. High-risk systems are permitted, but the company has to meet certain AI requirements and conformity assessments. Limited-risk systems are permitted but subject to information transparency obligations, and minimal-risk systems are permitted without restrictions.
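The tier-to-obligation mapping above can be sketched as a small lookup; the enum names and helper function below are illustrative, not terms from the draft itself:

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk categories described in the EU draft."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Obligation attached to each tier, paraphrased from the draft.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited from deployment in the EU",
    RiskTier.HIGH: "permitted, subject to AI requirements and conformity assessment",
    RiskTier.LIMITED: "permitted, subject to transparency obligations",
    RiskTier.MINIMAL: "permitted without restrictions",
}


def obligation_for(tier: RiskTier) -> str:
    """Return the deployment obligation for a given risk tier."""
    return OBLIGATIONS[tier]
```

A compliance pipeline could use such a mapping to route a classified system into the appropriate assessment track.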

The unacceptable-risk category aims to prohibit subliminally manipulative applications, including applications that gamify work in harmful ways. For instance, a company could claim to provide 10-minute deliveries, pressuring delivery staff to compete with an imaginary superhero delivery worker, which could be physically and mentally dangerous for them. The section on unacceptable risk also covers the exploitation of children or mentally disabled people.

Additionally, general-purpose social scoring could lead to personal details determining access to needs like a student loan. Lastly, the regulations forbid remote biometric identification for law enforcement in public spaces, at a time when many countries are moving towards mass surveillance of citizens.

High-risk systems include safety components of regulated products. For instance, AI is used to detect health issues in the medical devices industry; such systems need to go through third-party auditing. The regulation also classifies stand-alone AI systems such as biometric identification, critical infrastructure management, law enforcement, migration and border control management, and more. For such systems, the regulation suggests a set of risk management processes: usage of high-quality data, established documentation for traceability and auditability, human oversight and constant validation, robustness, accuracy and cybersecurity.

Limited-risk use cases are permitted but subject to transparency obligations, such as notifying humans when they are interacting with an AI, especially for aspects like biometric recognition, emotion recognition or deepfakes.

Minimal or no risk is permitted with no restrictions.

Conditions for high-risk categories 

The high-risk category is subject to a 5-step validation process:

  1. Determine the classification of the high-risk system
  2. Ensure design, development and quality systems are in place
  3. Conduct the conformity assessment
  4. Affix the CE marking to the system and sign a declaration of conformity
  5. Release the model in the European market
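The five steps above can be sketched as an ordered checklist; the step names and `next_step` helper below are paraphrases for illustration, not official terms from the draft:

```python
from typing import Optional

# The draft's five-step validation process for high-risk systems,
# paraphrased as an ordered checklist.
HIGH_RISK_STEPS = [
    "classify the system as high-risk",
    "ensure design, development and quality systems are in place",
    "conduct the conformity assessment",
    "affix the CE marking and sign the declaration of conformity",
    "release the model in the European market",
]


def next_step(completed: int) -> Optional[str]:
    """Return the next pending step, or None once all five are done."""
    if completed >= len(HIGH_RISK_STEPS):
        return None
    return HIGH_RISK_STEPS[completed]
```

Because the steps are strictly ordered, a system that has completed, say, two of them cannot skip ahead to CE marking.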

With the impending regulations, AI model testing will be very important for upcoming systems. “In traditional software development, logic is coded using programming languages, after which the data is processed and we get the desired results,” said Jayachandran. “In machine learning, on the other hand, we don’t code the logic; the logic is based on data.” Writing test cases is challenging in such cases, but compliance needs to be assured on multiple dimensions.

Nine dimensions of testing

The nine testing dimensions to validate a raw AI model include correctness of the system, model security, data privacy, efficiency, explainability, fairness, relevance, reproducibility and drift.
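One way to keep track of these dimensions is a simple report structure that flags any dimension left untested; the field names and `validation_report` function below are assumptions for illustration, not a standard schema:

```python
# The nine testing dimensions named in the talk.
DIMENSIONS = [
    "correctness", "security", "privacy", "efficiency", "explainability",
    "fairness", "relevance", "reproducibility", "drift",
]


def validation_report(results: dict) -> dict:
    """Build a full report, marking any missing dimension as 'untested'."""
    return {dim: results.get(dim, "untested") for dim in DIMENSIONS}
```

Forcing every dimension to appear in the report makes gaps in test coverage visible rather than silently omitted.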

The first stage identifies the use-case risk category of the AI’s function and assesses the test requirements. In the data understanding phase, data security, privacy and GDPR compliance need to be ensured. Data preparation entails flagging synthetic data appropriately, checking for class imbalance, preparing test scenarios and documenting the findings. It is better to avoid black boxes in the modelling stage and to ensure explainability and constant logging. In evaluation, assessing model explainability is important while testing multiple dimensions and performing the conformity assessment. The EU will provide a sandbox to test the application. The model should be registered in the EU database, with a user manual and user training, before release. Lastly, monitoring and supporting the model with human oversight is critical.
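The class-imbalance check mentioned in the data preparation stage can be sketched in a few lines; the function names and the 10x threshold below are illustrative choices, not values from the draft regulation:

```python
from collections import Counter


def class_imbalance_ratio(labels):
    """Ratio of the most to least frequent class label; 1.0 means balanced."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())


def is_imbalanced(labels, threshold=10.0):
    """Flag datasets whose majority class outnumbers the minority by more
    than `threshold` times (an illustrative cutoff)."""
    return class_imbalance_ratio(labels) > threshold
```

A check like this would run during data preparation, with the result and any mitigation (resampling, reweighting) recorded in the documentation trail the regulation asks for.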

“We all should adopt these practices. Bringing them into our regular AI development lifecycle ensures some level of standardisation and documentation, so it is easy to manage systems and take them forward,” Jayachandran said.

Avi Gopani

Avi Gopani is a technology journalist that seeks to analyse industry trends and developments from an interdisciplinary perspective at Analytics India Magazine. Her articles chronicle cultural, political and social stories that are curated with a focus on the evolving technologies of artificial intelligence and data analytics.