
Will evolving regulations stymie AI innovations?


“A model is as good as the underlying data,” said Jayachandran Ramachandran, SVP of Artificial Intelligence Labs at Course5 Intelligence, during his MLDS talk “Will evolving regulations stymie AI innovations? What the future holds?”. He discussed how industries and governments are recognising this problem and developing regulations and recommendations, and touched on the recommendations and implications related to the European Union’s draft AI regulation.

Today, most countries have AI policies and strategies in place, and the EU is at the forefront of AI regulation. “The EU draft in 2021 is acting as a benchmark for other countries,” Ramachandran noted. The draft seeks to ensure that AI is human-centric, sustainable, secure, inclusive and trustworthy. Additionally, it focuses on a seamless transition of AI from the lab to the market.

Extensive implementation

Any system deployed for users based in the EU falls under the scope of this AI regulation; systems whose users are all based outside the EU do not. The breadth of the regulation is far-reaching because it also extends to international companies deploying their systems in the EU.

Risk categorisation in the regulation

The AI regulations cover four categories of risk: unacceptable risk, high risk, limited risk and minimal/no risk.

Unacceptable-risk use cases are completely prohibited from being deployed in the EU. High-risk systems are permitted, but the company has to meet certain AI requirements and conformity assessments. Limited-risk systems are permitted but subject to information transparency obligations, and minimal-risk systems are permitted without restrictions.
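In engineering terms, the four tiers behave like a triage table that every proposed use case passes through before anything else. A minimal Python sketch of such a lookup (the use-case names and the default-to-high fallback are illustrative assumptions, not part of the draft, which defines high-risk cases in its annexes):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "permitted with conformity assessment"
    LIMITED = "permitted with transparency obligations"
    MINIMAL = "permitted without restrictions"

# Illustrative mapping only; a real classification is a legal decision,
# not a dictionary lookup.
USE_CASE_TIERS = {
    "subliminal_manipulation": RiskTier.UNACCEPTABLE,
    "general_social_scoring": RiskTier.UNACCEPTABLE,
    "biometric_identification": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up the risk tier for a use case, defaulting to HIGH so that
    unknown cases trigger a manual review rather than slipping through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("chatbot").value)  # permitted with transparency obligations
```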

The unacceptable-risk category prohibits subliminally manipulative applications, including applications that try to gamify work. For instance, a company could claim to provide 10-minute deliveries, pressuring its delivery staff to compete with an imaginary superhuman delivery worker; this could be physically and mentally dangerous for them. The category also covers applications that exploit children or mentally disabled people.

Additionally, general-purpose social scoring is prohibited, since it could let personal details determine access to needs like a student loan. Lastly, the regulations forbid remote biometric identification for law enforcement in public spaces, at a time when many countries are beginning to conduct mass surveillance of their citizens.

High-risk systems include safety components of regulated products; for instance, AI used to detect health issues in medical devices. Such systems need to go through third-party auditing. The regulation also classifies stand-alone AI systems such as biometric identification, critical infrastructure management, law enforcement, and migration and border control management as high-risk. For such systems, it suggests a set of risk management processes: usage of high-quality data, established documentation for traceability and auditability, human oversight and constant validation, and robustness, accuracy and cybersecurity.
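As one illustration of the traceability and auditability requirement, each training run can be logged as a record tying the model to the exact data, code and human sign-off that produced it. A minimal Python sketch (all field names and values are illustrative assumptions, not prescribed by the draft):

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TrainingRunRecord:
    """One traceability entry: what data and code produced which model."""
    model_name: str
    model_version: str
    dataset_checksum: str      # ties the model to the exact data used
    code_commit: str           # ties the model to the exact training code
    validation_metrics: dict   # accuracy, robustness checks, etc.
    human_reviewer: str        # records the human-oversight sign-off
    timestamp: str

def checksum(data: bytes) -> str:
    """Hash the training data so any later change is detectable."""
    return hashlib.sha256(data).hexdigest()

record = TrainingRunRecord(
    model_name="triage-classifier",
    model_version="1.4.0",
    dataset_checksum=checksum(b"toy training data"),
    code_commit="9f2c1ab",
    validation_metrics={"accuracy": 0.94, "f1": 0.91},
    human_reviewer="j.doe",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```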

Limited-risk use cases are permitted but subject to transparency obligations, such as notifying humans when they are interacting with an AI, especially for aspects like biometric recognition, emotion recognition or deepfakes.
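At its simplest, the transparency obligation amounts to labelling machine output before it reaches the user. A trivial sketch (the disclosure wording is an assumption):

```python
def with_ai_disclosure(reply: str) -> str:
    """Prefix every AI-generated reply with an explicit disclosure
    so the user knows they are not talking to a human."""
    return f"[You are interacting with an automated AI system]\n{reply}"

print(with_ai_disclosure("Your order will arrive tomorrow."))
```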

Minimal or no risk is permitted with no restrictions.

Conditions for high-risk categories 

The high-risk category is subject to a five-step validation process (sketched in code after the list):

  1. Determining the classification of the high-risk system
  2. Ensuring design, development and quality systems are in place
  3. Conducting the conformity assessment
  4. Affixing the CE marking to the system and signing a declaration of conformity
  5. Releasing the model in the European market
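A hypothetical way to operationalise these steps is a release pipeline that refuses to ship until every gate has passed; the fifth step, release, is then the outcome of clearing the first four. A toy sketch (the state flags and predicates are assumptions, not regulatory terminology):

```python
# Each gate mirrors one of the steps above; in practice each check would
# be backed by real evidence (audit reports, signed documents, etc.).
STEPS = [
    ("classification determined", lambda s: s["risk_tier"] is not None),
    ("design & quality systems in place", lambda s: s["qms_approved"]),
    ("conformity assessment passed", lambda s: s["assessment_passed"]),
    ("CE marking affixed & declaration signed", lambda s: s["ce_marked"]),
]

def ready_for_market(state: dict) -> bool:
    for name, check in STEPS:
        if not check(state):
            print(f"blocked at: {name}")
            return False
    print("release to the European market")
    return True

ready_for_market({
    "risk_tier": "high",
    "qms_approved": True,
    "assessment_passed": True,
    "ce_marked": False,   # last gate fails, so release is blocked
})
```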

With the impending regulations, AI model testing will be very important for upcoming systems. “In traditional software development, logic is coded using programming languages, after which the data is processed and we get the desired results,” said Jayachandran. “On the other hand, [with machine learning] we don’t code the logic; the logic is based on data.” Writing test cases is challenging for such systems, but compliance needs to be assured on multiple dimensions.
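Because the logic is learned rather than coded, test cases end up asserting behavioural properties against thresholds instead of exact outputs. A minimal pytest-style sketch on synthetic data (the model, dataset and threshold values are illustrative assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train a small model on synthetic data purely for illustration.
X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

def test_minimum_accuracy():
    # Correctness: the model must clear an agreed accuracy floor.
    assert model.score(X_te, y_te) >= 0.80

def test_prediction_stability():
    # Robustness: tiny input perturbations should rarely flip predictions.
    noise = np.random.default_rng(0).normal(0, 0.01, X_te.shape)
    flipped = np.mean(model.predict(X_te) != model.predict(X_te + noise))
    assert flipped < 0.02
```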

Nine dimensions of testing

The nine testing dimensions to validate a raw AI model include the correctness of the system, model security, data privacy, efficiency, explainability, fairness, relevance, reproducibility and drift.
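In practice, these dimensions can be tracked as a simple report card that flags anything untested or failing. A small sketch (the scoring scheme is an assumption; each score would come from a dedicated test suite):

```python
DIMENSIONS = [
    "correctness", "security", "privacy", "efficiency", "explainability",
    "fairness", "relevance", "reproducibility", "drift",
]

def compliance_report(results: dict) -> None:
    """Print one line per dimension, flagging gaps in coverage."""
    for dim in DIMENSIONS:
        print(f"{dim:>15}: {results.get(dim, 'not tested')}")

compliance_report({"correctness": "pass", "fairness": "pass", "drift": "fail"})
```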

The first stage identifies the use-case risk category of the AI’s function and assesses the test requirements. In the data understanding phase, data security, privacy and GDPR compliance need to be ensured. Data preparation entails flagging synthetic data appropriately, checking for class imbalance, preparing test scenarios and documenting the findings. In the modelling stage, it is better to avoid black boxes and to ensure explainability and constant logging. In evaluation, assessing model explainability is important, along with testing the multiple dimensions and performing the conformity assessment; the EU will provide a sandbox to test the application. Before release, the model should be registered in the EU database, with a user manual and user training in place. Lastly, monitoring and supporting the model with human oversight is critical.
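For the drift dimension in particular, a common monitoring score is the Population Stability Index, which compares the binned distribution of live inputs against the training-time baseline. A self-contained sketch (the alert thresholds mentioned in the comment are industry conventions, not part of the regulation):

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI drift score; conventional thresholds are ~0.1 (watch)
    and ~0.25 (act), though teams tune these for their own use case."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero / log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1, 10_000)  # training-time feature distribution
live = rng.normal(0.3, 1, 10_000)      # shifted production distribution
print(f"PSI = {population_stability_index(baseline, live):.3f}")
```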

“We all should adopt [these practices]. Bringing these practices into our regular AI development lifecycle ensures some level of standardisation and documentation, so it is easy to manage systems and take them forward,” Jayachandran said.



Avi Gopani

Avi Gopani is a technology journalist who analyses industry trends and developments from an interdisciplinary perspective at Analytics India Magazine. Her articles chronicle cultural, political and social stories with a focus on the evolving technologies of artificial intelligence and data analytics.
