As artificial intelligence becomes more pervasive, one country that has taken substantial strides in the field is China. The country, known for its AI surveillance technology, is now applying it to profile its ethnic Muslim population.

With AI becoming a potent tool to transform the economy, global superpowers realise the potential of the technology to define the new world order, and countries are building up their AI capabilities at breakneck speed.

Since 2015, the Chinese AI market has witnessed a giant leap in technology adoption: while the market was estimated at $1.6 billion in 2015, its estimated value by 2020 is $14.3 billion. According to recent reports, owing to its concentration of the world's best AI unicorns, the country is well ahead in the battle for AI supremacy, leaving the US far behind on various fronts.

However, in its race against time to become the dominant player in the world, the Chinese government has courted various controversies with its policy choices.

The latest is its use of AI and facial recognition to identify the minority Muslim population in its western region. The Uighurs, ethnically Turkic Muslims, form a minority community in the country and have been targeted at the government's behest. The community is known to have been under constant surveillance, with the government collecting DNA and biometrics to track its movements in the name of strengthening security.

An estimated 1.6 million Uighurs are believed to be in the country, and the recent report states that police authorities ran facial scans more than 500,000 times over an extended period.
It also stated that the country's police departments are known to rely extensively on technology to identify Uighurs.

The system, which identifies people on the basis of their skin tone and facial features, was trained on labelled data, and many prominent startups have tied up with the government to develop the software.

"Included in the code alongside tags like 'rec_gender' and 'rec_sunglasses' was 'rec_uygur,' which returned a 1 if the software believed it had found a Uighur. Within the half million identifications the cameras attempted to record, the software guessed it saw Uighurs 2,834 times. Images stored alongside the entry would allow the police to double check," the report said.

However, AI experts the daily spoke to believe that the software need not always predict ethnicity accurately, as external factors like lighting and camera position can also affect the outcome.

AI Bias Rules In China

As the Chinese government aims to strengthen its surveillance capabilities and continues to expand the use cases of AI, the development brings to the limelight the ongoing debate about the ethics of AI.

Across the world, major tech companies have come under severe scrutiny for bias in AI systems that have produced racist and sexist outcomes, forcing many to pull the plug owing to the faulty results.

Though one of the biggest arguments in favour of the technology is that it is not the AI systems that are biased, but rather the developers' intrinsic bias that is passed on to the systems, the argument does not hold water with AI critics, who have been constantly warning about the ill effects of the technology if it is not curbed.

Despite international rights organisations crying foul, the Chinese government has periodically refuted claims about the ill-treatment of Uighurs.
In 2018, it had even gone to the extent of building re-education schools, forcing millions of Uighurs to learn Mandarin Chinese, in what the government claimed was a move to reintegrate the community's members into mainstream Chinese culture.