Stanford Researcher Claims His Algorithm Can Divine Your Political Leanings From Facial Features

  • Ubiquitous facial recognition technology can expose individuals’ political orientation, as faces of liberals and conservatives consistently differ, said Stanford researcher Michal Kosinski.

The Cambridge Analytica exposé opened the world’s eyes to the power of Big Data over our social, political and lifestyle choices. Realpolitik found a true accomplice in data analytics, mining the minds of millions to further its agenda. Today, political parties across the world leverage technology to game the system. In political campaigns, targeted messaging is the difference between winning and losing elections, and to identify the right audience, algorithms look for ‘tells’ in people’s social media behaviour. Advances in ML, AI and data science mean the world has become a huge Skinner box. Now, Stanford researcher Michal Kosinski has taken things to the next level with a facial recognition algorithm that, he claims, can reveal an individual’s political views from facial data.

Kosinski, known for his controversial “gaydar” study, has now showcased how artificial intelligence could be used to classify individuals by their liberal or conservative views. “Ubiquitous facial recognition technology can expose individuals’ political orientation, as faces of liberals and conservatives consistently differ,” wrote the author. He firmly believes the study’s findings have critical implications for the protection of privacy and civil liberties.

However, just like his study of detecting sexual orientation using deep neural networks, this paper has also opened a Pandora’s Box.

Alexander Todorov, a psychology professor at the Princeton Neuroscience Institute, has questioned the ethics of conducting such a study and pointed out flaws in the paper. He said such technologies can create privacy threats and be put to devious political ends. Prominent researchers such as Timnit Gebru and MIT Media Lab’s Joy Buolamwini have been working continuously to identify and mitigate the deleterious effects of inherent biases in AI technologies.


How Does It Work?

Michal Kosinski’s study revolves around linking personal attributes to facial appearance. According to the researcher, facial recognition algorithms, like humans, can deduce not only someone’s gender, age, ethnicity or emotional state, but also their intelligence, sexual orientation and political inclination.

For the study, the team headed by Kosinski used data from more than one million participants from the US, the UK and Canada, including their self-declared political inclination, age and gender, coupled with images from the participants’ social media profiles.

Distribution of political orientation of the participants.

According to the team, these generic facial images contain many potential cues to political orientation, ranging from facial expression and self-presentation to facial morphology. The researchers believe that the ethnic diversity of the dataset, along with participants spanning the conservative-liberal spectrum, increases the likelihood that these findings apply to other countries, cultures and types of images.

The researchers leveraged an open-source facial recognition algorithm that extracts more than two thousand data points per face. These facial data points were then used, via cross-validation, to compare each image with average liberal and conservative faces.

Procedure for predicting political inclination with the help of facial images.

To omit unwanted background and non-facial features, the images were tightly cropped to 224 × 224 pixels. The team then used the VGGFace2 model to convert these facial images into face descriptors, “or 2,048-value-long vectors subsuming their core features,” wrote Kosinski.
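The paper does not publish its preprocessing code, but a tight centre crop of this kind is straightforward. A minimal NumPy sketch (function name and dummy image are hypothetical, for illustration only):

```python
import numpy as np

def center_crop(image: np.ndarray, size: int = 224) -> np.ndarray:
    """Crop an H x W x C image array to a size x size square around its centre,
    discarding background outside the central region."""
    h, w = image.shape[:2]
    top = max((h - size) // 2, 0)
    left = max((w - size) // 2, 0)
    return image[top:top + size, left:left + size]

# A dummy 300 x 400 RGB array stands in for a real profile photo.
photo = np.zeros((300, 400, 3), dtype=np.uint8)
cropped = center_crop(photo)
print(cropped.shape)  # (224, 224, 3)
```

In practice the study would crop around detected facial landmarks rather than the geometric centre; the sketch only shows the 224 × 224 target shape that VGGFace2 expects.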

Typically, the similarity between these descriptors is used to decide whether two images show the same person’s face. For this study, however, the team compared each face descriptor with the average descriptors of liberals and conservatives. This was done with a cross-validated logistic regression model predicting self-reported political orientation: conservative vs liberal.
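The setup described above, cross-validated logistic regression over descriptor vectors, can be sketched with scikit-learn. This is an illustrative reconstruction on synthetic data, not the paper’s code; the two Gaussian clouds merely stand in for descriptor distributions, and the labels are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for 2,048-value face descriptors: two slightly
# shifted Gaussian clouds play the role of the two groups of faces.
n, dim = 500, 2048
X = np.vstack([rng.normal(0.0, 1.0, (n, dim)),
               rng.normal(0.05, 1.0, (n, dim))])
y = np.array([0] * n + [1] * n)  # 0 = liberal, 1 = conservative (hypothetical)

# Cross-validated logistic regression, mirroring the study's stated setup.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(round(scores.mean(), 2))
```

The reported accuracy figure in the paper is the mean of such held-out fold accuracies; the number printed here reflects only the synthetic data.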

The team also tried alternative methods, such as a deep neural network classifier and a simple ratio between the average cosine similarities to liberals and conservatives, which produced virtually identical results, added Kosinski.
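The cosine-ratio baseline mentioned above can be sketched in a few lines of NumPy. Again an illustrative reconstruction, not the paper’s code; the toy 4-value descriptors stand in for 2,048-value VGGFace2 vectors:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two descriptor vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_by_cosine_ratio(descriptor, liberal_mean, conservative_mean):
    """Label a face descriptor by the ratio of its cosine similarity to the
    average liberal descriptor versus the average conservative descriptor."""
    ratio = cosine(descriptor, liberal_mean) / cosine(descriptor, conservative_mean)
    return "liberal" if ratio > 1.0 else "conservative"

# Toy 4-value descriptors (hypothetical values, for illustration).
lib_mean = np.array([1.0, 0.2, 0.0, 0.1])
con_mean = np.array([0.1, 0.0, 0.2, 1.0])
face = np.array([0.9, 0.1, 0.1, 0.2])
print(predict_by_cosine_ratio(face, lib_mean, con_mean))  # "liberal"
```

That so simple a rule reportedly matched the trained classifiers suggests the group averages alone carry most of the signal the model exploits.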

The results, showcasing the accuracy of the algorithm predicting political orientation.

The results showed that the facial recognition algorithm can classify people by political affiliation with 72% accuracy on a massive database of eight lakh US dating website users. The team noted a similar accuracy of 71% with Canadian dating website users and 70% with the UK database. The algorithm’s accuracy rose to 73% on US Facebook users’ data.

Wrapping Up

Kosinski said white people, older people and males are more likely to be conservatives. However, experts maintain that no facial or personality trait can determine an individual’s political affiliation.

It remains to be seen whether facial recognition algorithms of this kind can ever bring a positive outcome for society. Researchers believe, instead, that such technologies pose a massive privacy threat and can lead to dangerous social engineering strategies.

Read the entire paper here.



Copyright Analytics India Magazine Pvt Ltd
