According to a BBC report, China is testing emotion recognition technology on Uyghurs in Xinjiang. The BBC’s source, a software engineer who claimed to have installed the camera systems in police stations in the province, said the Chinese government uses Uyghurs as test subjects for various experiments.
The unnamed source said the emotion detection camera, placed 3m from the subject, is trained to detect the slightest changes in facial expressions and skin pores. The source likened it to a lie detector, but far more advanced. The software outputs a pie chart, in which the red segment indicates an anxious state of mind.
The source claimed the software is designed for “pre-judgement without any credible evidence”.
Last year, True Light College, a Hong Kong-based secondary school for girls, made headlines for using a surveillance system in online classes. The school used software called 4 Little Trees, an AI-based program that could read children’s emotions as they learned. The algorithm measures micro-movements of facial muscles to identify emotions such as happiness, sadness, anger, fear, or surprise. According to the founder of 4 Little Trees, up to 83 schools in Hong Kong adopted the system in the past year.
Many firms are working on similar emotion recognition technology, an extension of facial recognition technology. Google and Microsoft offer basic emotion analysis. Amazon claimed that its Rekognition facial recognition software assesses up to eight emotions, including fear. The company cited retail as one of the use cases, saying stores can use the emotion recognition tech to understand the behaviour patterns of shoppers and identify trends early from a live camera feed.
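Rekognition’s documented face-analysis response attaches a confidence score to each detected emotion. A minimal sketch of how a client might pick the dominant emotion from such a response; the response dict below is a hard-coded sample with made-up confidence values, not real API output or a live API call:

```python
# Illustrative sketch: parsing a Rekognition-style DetectFaces response
# to find the dominant emotion. The sample_response values are invented
# for demonstration purposes.

def dominant_emotion(face_detail):
    """Return the (type, confidence) pair with the highest confidence."""
    emotions = face_detail["Emotions"]
    top = max(emotions, key=lambda e: e["Confidence"])
    return top["Type"], top["Confidence"]

sample_response = {
    "FaceDetails": [
        {
            "Emotions": [
                {"Type": "CALM", "Confidence": 62.1},
                {"Type": "FEAR", "Confidence": 21.4},
                {"Type": "SURPRISED", "Confidence": 9.0},
                {"Type": "HAPPY", "Confidence": 7.5},
            ]
        }
    ]
}

if __name__ == "__main__":
    for face in sample_response["FaceDetails"]:
        label, confidence = dominant_emotion(face)
        print(f"{label}: {confidence:.1f}%")
```

Note that even in this toy form, the output collapses a spread of confidence scores into a single label, which is precisely the simplification critics of the technology point to.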
In the past, Disney has used software to test volunteers’ reactions to films such as Star Wars: The Force Awakens and Zootopia. Automobile companies such as Ford, BMW and Kia Motors use emotion recognition software to monitor drivers’ alertness.
According to a MarketsandMarkets report, the global market for emotion detection technology was worth $19.5 billion in 2020 and is expected to reach $37.1 billion by 2026, growing at a CAGR of 11.3 percent over the forecast period. The report cited higher adoption of IoT, AI, machine learning, and deep learning technologies, growing demand in the automation industry, and the need for socially intelligent artificial agents as the major drivers of growth.
Meanwhile, New York-based research centre AI Now Institute said in a 2019 report that emotion-detection tech is based on a ‘markedly shaky foundation’.
AI experts say emotion recognition systems are based on the assumption that humans manifest emotions in similar ways. Something as simple as a raised eyebrow may have different meanings in different cultures. Luke Stark, assistant professor in the Faculty of Information and Media Studies at the University of Western Ontario, said in an interview: “Emotions are simultaneously made up of physiological, mental, psychological, cultural, and individually subjective phenomenological components. No single measurable element of an emotional response is ever going to tell you the whole story. Philosopher Jesse Prinz calls this ‘the problem of parts’.”
In a recent essay for Nature, Professor Kate Crawford said many such algorithms are based on psychologist Paul Ekman’s studies of nonverbal behaviour, conducted in the 1960s. According to Ekman, there are six basic emotions: happiness, sadness, fear, anger, surprise and disgust. His work and ideas have formed the basis for emotion-detection technologies used by giants such as Microsoft, IBM, and Amazon. “Six consistent emotions could be standardised and automated at scale — as long as the more complex issues were ignored,” Crawford contends. She said such technology has not been independently audited for its effectiveness.
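The standardisation Crawford describes can be made concrete with a toy sketch: raw model scores (invented here) are normalised into a probability per Ekman label, and the highest-probability label is reported as “the” emotion. Everything outside the six categories is, by construction, invisible to the system:

```python
# Illustrative only: a six-way classifier head over Ekman's basic
# emotions. The input scores are made up; the point is that the design
# forces every face into exactly one of six predefined labels.
import math

EKMAN_LABELS = ["happiness", "sadness", "fear", "anger", "surprise", "disgust"]

def softmax(scores):
    """Normalise raw scores into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores):
    """Map six raw scores to a per-label distribution and a single winner."""
    probs = softmax(scores)
    dist = dict(zip(EKMAN_LABELS, probs))
    winner = max(dist, key=dist.get)
    return dist, winner

if __name__ == "__main__":
    dist, winner = classify([2.0, 0.1, 1.5, 0.3, 0.2, 0.1])
    print(winner)  # the single label the system reports
```

The fixed label set is what makes the approach easy to automate at scale, and also what the critics quoted above object to: cultural context and mixed or ambiguous states have no place in the output.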
Ekman’s study underplays the importance of cultural context. It assumes a definitive correlation between facial expressions and a person’s emotional state, despite studies indicating that facial expressions are not reliable indicators of emotion.