Yes, AI Is Racist & China Has Proven It With Its Facial Recognition Tech To Track Ethnic Muslims

Image Source: WNYC Studios

As artificial intelligence becomes more pervasive, one country that has taken substantial strides in the field is China. The country, known for its AI surveillance technology, is now applying it to profile its ethnic Muslim population.

With AI becoming a potent tool to transform economies, global superpowers realise the technology's potential to define the new world order and, needless to say, are building up their AI capabilities at breakneck speed.

Since 2015, the Chinese AI market has witnessed a giant leap in technology adoption: while the market was estimated at $1.6 billion in 2015, it is projected to reach $14.3 billion by 2020. According to recent reports, thanks to its concentration of the world’s best AI unicorns, the country is well ahead in the battle for AI supremacy, leaving the US behind on several fronts.

However, in its race to become the world's dominant AI player, the Chinese government has courted several controversies with its policies.

The latest is its use of AI and facial recognition to identify the Muslim minority population in its western region. The Uighurs, an ethnically Turkic Muslim community, form a minority in the country and have long been at the receiving end of crackdowns at the government's behest. The community is known to have been under constant surveillance, with the government collecting DNA and biometric data to track its members' movements in the name of strengthening security.

An estimated 1.6 million Uighurs are believed to be in the country, and the recent report states that police authorities ran facial scans more than 500,000 times over an extended period.

It also stated that the country's police departments are known to rely extensively on the technology to identify Uighurs.

The system, which identifies people on the basis of their skin tone and facial features, was trained on labelled data, and many prominent startups have tied up with the government to develop the software.

“Included in the code alongside tags like “rec_gender” and “rec_sunglasses” was “rec_uygur,” which returned a 1 if the software believed it had found a Uighur. Within the half million identifications the cameras attempted to record, the software guessed it saw Uighurs 2,834 times. Images stored alongside the entry would allow the police to double check,” the report said.
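To make the quoted description concrete, the sketch below shows roughly what such an attribute-tagged detection record could look like. It is purely illustrative: only the tag names "rec_gender", "rec_sunglasses" and "rec_uygur" come from the report, while the camera IDs, image paths and records themselves are hypothetical.

```python
# Illustrative sketch only: apart from the tag names quoted in the report
# ("rec_gender", "rec_sunglasses", "rec_uygur"), every field and value
# below is invented for demonstration -- this is not real data or code
# from any vendor.

# A single face-recognition event, as the report describes it: a set of
# attribute flags, where "rec_uygur" is 1 if the software believes it has
# found a Uighur, plus a stored image the police can use to double-check.
detections = [
    {"camera_id": "cam_017", "image_path": "frames/000231.jpg",
     "rec_gender": 1, "rec_sunglasses": 0, "rec_uygur": 1},
    {"camera_id": "cam_017", "image_path": "frames/000232.jpg",
     "rec_gender": 0, "rec_sunglasses": 1, "rec_uygur": 0},
]

# Filtering on the ethnicity flag is what turns an ordinary attribute
# classifier into a profiling tool: every flagged record, with its image,
# can be pulled up for review.
flagged = [d for d in detections if d["rec_uygur"] == 1]
print(f"{len(flagged)} of {len(detections)} detections flagged as Uighur")
```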

However, AI experts the daily spoke to believe that the software may not always accurately predict ethnicity, as external factors like lighting and camera position can also affect the outcome.

AI Bias Rules In China

As the Chinese government strengthens its surveillance capabilities and expands the use cases of AI, the development brings into the limelight the ongoing debate about the ethics of AI.

Across the world, major tech companies have come under severe scrutiny for bias in AI systems that have produced racist and sexist outcomes, forcing many to pull the plug on such systems owing to their faulty results.

Though one of the biggest arguments in favour of the technology is that it is not the AI systems themselves that are biased, but rather the developers' intrinsic biases that are passed on to the systems, the argument doesn't hold water with AI critics, who have constantly warned about the ill effects of the technology if it isn't curbed.

Despite international rights organisations crying foul, the Chinese government has repeatedly refuted claims about the ill-treatment of Uighurs. In 2018, it even went to the extent of building re-education schools, forcing millions of Uighurs to learn Mandarin Chinese, in what the government claimed was a move to reintegrate the community into mainstream Chinese culture.

Akshaya Asokan

Akshaya Asokan works as a Technology Journalist at Analytics India Magazine. She has previously worked with IDG Media and The New Indian Express. When not writing, she can be seen either reading or staring at a flower.
