Can Social Media Algorithms Ever Be Ethical?

According to Datareportal, 4.33 billion people use social media, equal to 55 per cent of the world's population. In the past 12 months alone, 521 million new users joined social media, an annual growth of 13.7 per cent, or roughly 16 new users per second.

“I feel tremendous guilt,” admitted Chamath Palihapitiya, former Vice President of User Growth at Facebook, to an audience of Stanford students in 2017. “The short-term, dopamine-driven feedback loops that we have created are destroying how society works.”

Algorithms are geared towards maximising user attention. Behavioural design strategies use neurological and behavioural insights to develop customer interactions and influence user behaviour. “The success of an app is often measured by the extent to which it introduces a new habit,” said app developer Peter Mezyk in an interview with Business Insider. The more time a user spends on the app, the more advertising revenue flows into companies’ pockets; attention is currency. 

YouTube's recommendation algorithm knows what people want so well that an average mobile viewing session lasts over 60 minutes. The algorithm drives much of YouTube's revenue and is worth billions of dollars. Chief Product Officer Neal Mohan has said recommended videos account for 70 per cent of the time users spend on YouTube.

The system comprises two neural networks: one for candidate generation and one for ranking. The candidate-generation network narrows down the massive video library using the user's activity history, including user-level demographics, the IDs of videos watched and search history, and outputs a few hundred videos broadly applicable to the user; it relies on collaborative filtering, surfacing what users with similar tastes have enjoyed previously. The ranking network then scores these candidates using a richer set of features, so that relevant videos are not missed, and orders them by their predicted value to the user. The networks are trained end to end. YouTube's use of deep neural networks also made Google one of the first companies to deploy production-level deep learning for recommender systems.
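The two-stage pattern can be sketched in a few lines. This is a toy illustration only: the embeddings, video names and the freshness signal below are made-up stand-ins, and the real networks learn these representations rather than hard-coding them.

```python
import heapq

# Hypothetical learned embeddings: video id -> taste vector.
VIDEO_EMBEDDINGS = {
    "cooking_101": [0.9, 0.1, 0.0],
    "guitar_basics": [0.1, 0.8, 0.1],
    "knife_skills": [0.8, 0.2, 0.1],
    "music_theory": [0.0, 0.9, 0.2],
    "cat_compilation": [0.2, 0.2, 0.9],
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def generate_candidates(user_embedding, k=3):
    """Stage 1: cheaply narrow the whole catalogue to k broadly
    relevant videos (candidate generation, reduced to a dot product)."""
    return heapq.nlargest(
        k, VIDEO_EMBEDDINGS,
        key=lambda vid: dot(user_embedding, VIDEO_EMBEDDINGS[vid]))

def rank(candidates, user_embedding, freshness):
    """Stage 2: re-order the short list with richer features
    (here, similarity plus a hypothetical freshness signal)."""
    def score(vid):
        return dot(user_embedding, VIDEO_EMBEDDINGS[vid]) \
            + 0.3 * freshness.get(vid, 0.0)
    return sorted(candidates, key=score, reverse=True)

user = [0.85, 0.15, 0.05]          # a viewer who mostly watches cooking videos
freshness = {"knife_skills": 1.0}  # recently uploaded
feed = rank(generate_candidates(user), user, freshness)
print(feed)
```

The point of the split is scale: stage one runs a cheap similarity search over millions of items, and only the surviving few hundred candidates get the expensive, feature-rich scoring in stage two.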

Addictive algorithms

At its I/O conference in 2018, Google announced plans to introduce tools to cap binge watching. However, a Google research paper published in 2019 proposed updating its algorithms to recommend even more targeted content and increase engagement.

“The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them? We’re pointing them at people’s brains, at children,” said Tristan Harris, a former tech ethicist at Google and co-founder of The Center for Humane Technology, in an interview.

The Facebook algorithm decides which posts show up in a user's feed. Facebook relies on multiple layers of machine learning models and rankings developed to predict the posts most valuable and meaningful to the user; personalisation and relevant content remain the company's top priority. TikTok's recommendation engine similarly leverages AI and data mining to build its 'For You' feed.

Recommendation algorithms are designed to keep users in the loop:

  • Autoplay: as soon as one video ends, another with relevant content begins. TikTok and Instagram Reels offer no option to disable autoplay, making it difficult to stop watching.
  • Endowment effect: the more time a user invests in building a virtual world in a game or a profile on social media, the harder it becomes to detach from or delete the app.
  • Social pressure: for example, messaging apps show grey ticks when a message is delivered and blue ticks when it has been viewed.
  • Social reward and feedback, such as likes and comments.

Ethical issues

Algorithms that show users what they want to see, and get better at predicting what they will consume, eventually create an echo chamber. Because YouTube's algorithm is optimised for maximum engagement, for example, it tends to offer choices that reinforce already-held beliefs, likes and dislikes while shutting out other views, all of which makes for an addictive experience. Reports show that controversial and extreme videos are rewarded, contributing to misinformation and political radicalisation. In many respects, the political discourse people engage in today is a direct product of social media.

The purpose of algorithms is to help make choices. The user picks from an array of options, and that choice is fed back to train the algorithm. Over time this creates a feedback loop in which the algorithm's output becomes part of its own input.
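The narrowing effect of that loop can be simulated in a few lines. This sketch assumes a deliberately pessimistic user model, one who consumes everything recommended, and uses made-up topic names; real systems are far more complex, but the rich-get-richer dynamic is the same.

```python
import random

random.seed(0)  # deterministic run for illustration

TOPICS = ["politics", "cooking", "music", "sport", "science"]

# Start with one click per topic: no strong preference yet.
clicks = {t: 1 for t in TOPICS}

def recommend(clicks, n=10):
    """Sample n recommendations in proportion to past clicks --
    the 'output becomes part of the input' loop described above."""
    topics, weights = zip(*clicks.items())
    return random.choices(topics, weights=weights, k=n)

for _ in range(20):
    for topic in recommend(clicks):
        # Worst-case stand-in for engagement optimisation: every
        # recommendation is consumed and fed back as training data.
        clicks[topic] += 1

share = max(clicks.values()) / sum(clicks.values())
print(f"Dominant topic share after 20 rounds: {share:.0%}")
```

Because early random fluctuations are amplified on every round, whichever topic happens to get clicked slightly more at the start tends to crowd out the rest, even though the user began with no preference at all.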

To a large extent, algorithms are treated as purely engineering challenges rather than socio-technical problems: tech experts are more concerned with finding effective solutions than with their societal impact, and algorithms pick up biases over time. Despite strong arguments and evidence that algorithms narrow options, some research studies cast doubt on this. Dutch communications researcher Judith Moller and colleagues used algorithms to recommend newspaper articles and reported a more diverse set of outputs than human editors' picks.

Machine learning algorithms have mastered recommending what a user will engage with, and research into engineering diversity aims to widen the range of choices while better capturing user interests. That is as far as the algorithms themselves go. Social media companies also need regulation, but balancing profit against human interest remains a challenge.


Copyright Analytics India Magazine Pvt Ltd
