
Why is fairness in recommender systems required?

A system is considered fair when its recommendations are unbiased towards any group or individual, whether consumers or providers.



We all give and receive suggestions in our everyday lives. Machine learning replicates this suggestion mechanism through recommender systems, which filter out unwanted information and produce different outputs based on characteristics that vary from user to user. While recommending, these systems may at times be biased or unfair; the bias can take many forms, such as model bias or data bias. This article focuses on fairness for the subjects (consumers) and the objects (providers) of recommender systems. Following are the topics covered in this article.

Table of contents

  1. Brief about the Recommender system
  2. The necessity of fairness in the recommender system
  3. Fairness in recommendation systems
  4. Consumer Fairness
  5. Provider Fairness

From selecting books to selecting friends, recommender systems assist us in making decisions. Let’s learn more about recommendation systems.

Brief about the Recommender system

Recommender systems (RS) deliver item suggestions to consumers by applying artificial intelligence techniques. For example, an online bookstore may use a machine learning algorithm to classify books by genre and propose additional books to a customer who wishes to buy one.

Recommender systems are classified into three types according to the information they rely on: collaborative, content-based, and hybrid filtering.

  • A collaborative recommender system evaluates user data when processing information for a suggestion. For example, by accessing user profiles on an online music store, the RS may obtain data such as the age, country, and city of all users, as well as the songs they have purchased. Using this information, the system can identify individuals who have similar music tastes and then recommend tracks that those similar users have purchased (a minimal code sketch of this idea follows this list).
  • A recommender system that uses content-based filtering makes suggestions based on the item data it has access to. Consider the case of a person looking for a new computer in an online store. When a user searches for a certain computer (item), the RS collects information about that computer and searches a database for machines with comparable characteristics, such as price, CPU speed, and memory capacity. The results of this search are then returned to the user in the form of suggestions.
  • A hybrid recommender system combines the two preceding classes, recommending items based on both user and item data. A recommender system on a social network, for example, may recommend profiles that are similar to the user’s by comparing their interests (collaborative filtering). In a subsequent stage, the system may treat the recommended profiles as items and access their data to look for further comparable profiles (content-based filtering). Both sets of profiles are eventually returned as suggestions.
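
As a minimal sketch of the collaborative filtering idea above, the snippet below scores unseen items for a target user from the ratings of users with similar tastes. The ratings matrix and the choice of cosine similarity are illustrative assumptions, not a specific production algorithm.

```python
import numpy as np

# Hypothetical user-item rating matrix (rows: users, columns: items); 0 means "not rated".
ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 0],
    [1, 0, 5, 4, 5],
    [0, 1, 4, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user_idx, ratings, top_n=2):
    target = ratings[user_idx]
    # Similarity of the target user to every other user.
    sims = np.array([cosine_sim(target, other) for other in ratings])
    sims[user_idx] = 0.0
    # Predicted score per item: similarity-weighted average of the other users' ratings.
    scores = sims @ ratings / (sims.sum() + 1e-9)
    scores[target > 0] = -np.inf  # never re-recommend items the user has already rated
    return np.argsort(scores)[::-1][:top_n]

print(recommend(0, ratings))  # items liked by users similar to user 0
```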

Aside from the standard recommendation process, in which users are shown items they might be interested in, recommendations can be generated in a variety of ways.

Context-aware suggestions are generated based on the context in which the user is placed. A context is a collection of information about the user’s present condition, such as the time of day at their current location (morning, afternoon, evening) or their activity (idle, running, sleeping). The quantity of context information that must be analysed is large, making context-aware suggestions a challenging research topic.

Risk-aware recommendations are a subset of context-aware suggestions that take into account scenarios in which critical information, such as the user’s vital information, is available. They are risk-aware because a bad decision might endanger the user’s life or inflict real-world damage. Examples include advising a user on which medications to take or which equities to purchase, sell, or invest in.


The necessity of fairness in the recommender system

The recommendation system is built on a feedback loop with three components: the user, the data, and the model. These components interact across three phases.

  • Collection: This denotes the phase of gathering data from users, which includes user-item interactions and other incidental information (e.g., user profile, item attributes, and contexts).
  • Learning: This refers to the development of recommendation models from the collected data. At its core, the model predicts how likely a user is to adopt a target item based on past interactions, a problem that has been studied extensively over the last few decades.
  • Serving: This step delivers the suggestion results to users to meet their information needs. This stage will have an impact on users’ future behaviours and decisions.

Biases can be introduced at each of these phases, and because of them the recommendation system may end up being unfair to a subject or an object. Let’s dive into fairness in recommendations and understand the root causes of and solutions to these problems.

Fairness in recommendation systems

To accomplish fairness, a common technique is to define a variable (or variables) that indicates membership in a protected class, such as race in an employment setting, and to build algorithms that eliminate bias relative to this variable. To apply this approach to recommender systems, we must acknowledge the critical importance of personalisation. The concept of recommendation implies that the best items for one user may differ from those for another. It is also worth noting that recommender systems exist to facilitate transactions. As a result, many recommendation applications involve multiple stakeholders and may raise fairness concerns for more than one group of participants.

Consider a recommender system that suggests employment openings to job searchers. An operator of such a system may aim, for example, to guarantee that male and female users with comparable qualifications receive job suggestions with comparable rank and income. As a result, the system would need to fight against biases in recommendation output, including biases caused purely by behavioural differences: for example, male users may be more prone to click optimistically on high-paying positions.

Such biases are difficult to overcome because there is no consensus on a global preference ranking over items. Personal preference is the essence of recommendation, especially in domains where individual taste is crucial, such as music, literature, and movies. Even in the job domain, some users may prefer a somewhat lower-paying job if it comes with additional perks like flexible hours, a shorter commute, or better benefits. To achieve this salary-related policy goal, a site operator will need to go beyond a purely personalisation-oriented approach, identify salary as the key outcome variable, and control the recommendation algorithm so that it is sensitive to the salary distribution for protected groups.
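
One simple way to monitor such a policy goal is to compare the salary distribution of recommended jobs across user groups. The sketch below uses made-up data and column names; it only illustrates the kind of audit an operator might run, not any particular platform’s method.

```python
import pandas as pd

# Hypothetical log of job recommendations shown to users, with the user's group and the job's salary.
recs = pd.DataFrame({
    "user_group": ["male", "female", "male", "female", "female", "male"],
    "salary":     [95000,  72000,    88000,  70000,    91000,    99000],
})

avg_salary = recs.groupby("user_group")["salary"].mean()
print(avg_salary)
print("Salary gap between groups:", avg_salary.max() - avg_salary.min())
```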

Fairness varies according to the stakeholders

Different recommendation situations can be characterised by different configurations of stakeholder interests. A recommender system’s stakeholders fall into three categories: consumers, providers, and the platform or system.

  • Consumers are the ones who receive suggestions. They come to the platform because they are having difficulty making a decision or searching for something, and they expect recommendations to help them.
  • Providers are the organisations that supply or otherwise stand behind the recommended items and profit from the consumer’s decision.
  • The platform has developed the recommender system to connect consumers with providers and has some way of profiting from the process.

The system will ultimately have aims that are a function of the other stakeholders’ utilities. Multisided platforms thrive when they can attract and keep critical masses of participants from all sides of the market. In our employment example, if a job seeker does not find the system’s recommendations useful, he or she may choose to ignore this component of the system or switch to a competing platform. The same is true for providers: if a certain site does not surface its postings as recommendations or does not supply suitable candidates, a firm may pick another platform to publicise its job openings.

Recommendation methods on multisided platforms can raise concerns about multisided fairness. Specifically, fairness-related criteria may be at work on more than one side of a transaction, and hence the transaction cannot be judged solely on the outcomes that accrue to one side. The fairness challenges that arise for these groups define two types of systems: consumer fairness and provider fairness.

Consumer Fairness

Fairness here is a concept of nondiscrimination based on membership in protected groups, specified by a protected trait such as gender or age. A consumer-fair recommender system considers the differential impact of its suggestions on protected classes of recommendation consumers.

Group fairness is the absence of discrimination against a particular group, defined as the absence of a differential impact on the outcomes produced for them. Despite the involvement of many stakeholders, unfairness in recommender systems can have a particularly negative impact on the individuals who receive the suggestions, the consumers. As a result, group consumer fairness requires that recommendations have no disproportionate impact on protected consumer groups. Providing guarantees on this property is a critical strategic goal for the field’s responsible progress.
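
To make the notion of differential impact measurable, one can compare an outcome metric (for instance, a per-user recommendation utility such as NDCG) between protected and non-protected consumers. The sketch below assumes per-user utilities have already been computed; the numbers are purely illustrative.

```python
import numpy as np

# Hypothetical per-user recommendation utility (e.g., NDCG@10) and protected-group flags.
utility   = np.array([0.62, 0.55, 0.71, 0.48, 0.66, 0.52])
protected = np.array([True, True, False, False, True, False])

protected_mean     = utility[protected].mean()
non_protected_mean = utility[~protected].mean()

# Differential impact: the gap in average utility between the two groups (0 would be ideal).
print("Protected group utility:    ", round(protected_mean, 3))
print("Non-protected group utility:", round(non_protected_mean, 3))
print("Utility gap:                ", round(abs(protected_mean - non_protected_mean), 3))
```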

In the motivating example, a credit card business recommends credit offerings to its customers. Because the items all come from the same bank, there are no provider-fairness difficulties; multistakeholder considerations do not arise in systems of this nature. Several designs might be considered. One interesting option is to build a recommender system on the principle of fair classification. We may establish a mapping from each user to a prototype space, possibly using latent features extracted from the rating data. Each prototype might be designed to have statistical parity with respect to the protected class. A significant aspect of this sort of system is ensuring a finite loss with regard to the input.
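
A rough way to picture the prototype idea is to cluster users in a latent feature space and then check, for each prototype (cluster), whether the share of protected users matches the overall share, which is what statistical parity would demand. Everything below (the random latent features, the number of prototypes) is an illustrative assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical latent user features (e.g., learned from rating data) and protected-class flags.
latent_features = rng.normal(size=(200, 8))
protected = rng.random(200) < 0.3

# Map each user to one of a small number of prototypes.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(latent_features)

overall_share = protected.mean()
for k in range(5):
    members = kmeans.labels_ == k
    share = protected[members].mean()
    # Statistical parity asks each prototype's protected share to be close to the overall share.
    print(f"prototype {k}: protected share {share:.2f} (overall {overall_share:.2f})")
```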

Some consumer fairness algorithms

  • SLIM: To decrease unfairness, it was proposed to build suggestions for a user from a neighbourhood with an equal number of peers from each group. A regularisation term was added to SLIM, a collaborative filtering approach, to balance protected and non-protected neighbours. Fairness was measured with a variant of the risk ratio: the score drops below or rises above 1 when, on average, the protected group is recommended fewer or more movies of the desired genre, and a value of 1 implies perfect equity.
  • Latent Block Model: It is intended to provide fair recommendations by co-clustering users and items while maintaining statistical parity for some sensitive features. It employs an ordinal regression model with the sensitive attributes as inputs. Fairness was assessed by checking that, for any two products, the proportion of users with the same preference was similar across demographic categories.
  • NLR: The authors evaluated consumer unfairness between user groups defined by their degree of engagement on the platform (more or less active). As mitigation, a re-ranking technique was used, selecting items from each user’s baseline top-n list to maximise overall recommendation utility while constraining the model to minimise the difference in average recommendation performance across the user groups.
  • Random sampling without replacement: The authors re-sampled user interactions in the training set so that the representation of user interactions across groups was balanced, and then re-trained the recommendation models on the balanced training set (a minimal resampling sketch follows this list). The mitigation entailed building a recommendation model by decreasing the dissimilarity between true ratings and predicted ratings while also maximising the degree of independence between the predicted ratings and the sensitive labels. Prediction errors were measured with MAE, and independence was assessed by the equality of the predicted rating distributions between groups.
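
As referenced in the last item above, here is a minimal resampling sketch: each group’s interactions are downsampled without replacement to the size of the smallest group before retraining. The DataFrame columns and values are hypothetical.

```python
import pandas as pd

# Hypothetical interaction log: one row per user-item interaction, tagged with the user's group.
interactions = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2, 3, 4, 4, 5],
    "item_id": [10, 11, 10, 12, 13, 11, 10, 14, 12],
    "group":   ["F", "F", "M", "M", "M", "F", "M", "M", "F"],
})

# Downsample each group (without replacement) to the size of the smallest group,
# so the retrained model sees a balanced number of interactions per group.
min_size = interactions["group"].value_counts().min()
balanced = (
    interactions.groupby("group", group_keys=False)
    .apply(lambda g: g.sample(n=min_size, random_state=42))
)
print(balanced["group"].value_counts())
```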

Provider Fairness

A provider-fair system is one in which fairness must be maintained exclusively for the providers. Consider an online microfinance portal that collects loan requests from field partners all around the world who lend small sums of money to local entrepreneurs. The loans are sponsored interest-free by the organisation’s members, the majority of whom live in the nation. The organisation does not currently provide a personalised recommendation function, but if it did, one would expect that one of its goals would be to maintain an equitable distribution of funding across its many partners in the face of well-known user biases. The consumers of the suggestions are contributors who gain no direct advantage from the system, hence there are no consumer-side fairness problems.

P-fairness may also be a factor where there is an interest in promoting market variety and preventing monopolistic domination. In the online craft marketplace Etsy, for example, the system may want to guarantee that new entrants to the market receive a fair proportion of recommendations despite having fewer customers than established merchants. This sort of fairness is not required by law but rather is built into the platform’s economic model.

Provider fairness (P-fairness) systems involve difficulties that consumer fairness (C-fairness) systems do not. In particular, the providers in the P-fairness case are passive: they do not seek out recommendation opportunities but must instead wait for users to come to the system and request recommendations.

Consider the preceding employment example. We want positions at minority-owned firms to be suggested to well-qualified candidates at the same rate as jobs at other types of businesses. The opportunity to propose a specific minority-owned firm to a suitable applicant is rare and must be recognised as such. As in the C-fairness case, we will want to limit the loss of personalisation that comes with any promotion of protected providers.

Diversity-aware systems treat recommendation as a multi-objective optimisation problem, aiming to maintain a particular level of accuracy while ensuring that recommendation lists are varied in terms of some representation of item content. These strategies can be repurposed for P-fairness recommendation by treating the protected-group items as a separate class and then optimising for diverse suggestions relative to this variable.
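
One way to sketch that repurposing is a greedy re-ranking that trades relevance against exposure for protected providers: protected items get a small score bonus only while they are under-represented in the list being built. The weights, target share, and candidate scores below are illustrative assumptions, not a specific published algorithm.

```python
# Hypothetical candidates: (item_id, relevance score, whether the provider is in the protected group).
candidates = [
    ("a", 0.92, False), ("b", 0.90, False), ("c", 0.85, True),
    ("d", 0.80, True),  ("e", 0.78, False), ("f", 0.60, True),
]

def rerank(candidates, k=4, target_share=0.5, boost=0.15):
    """Greedily build a top-k list, boosting protected-provider items while
    their share of the list built so far is below the target."""
    selected, remaining = [], list(candidates)
    while remaining and len(selected) < k:
        share = sum(p for _, _, p in selected) / len(selected) if selected else 0.0
        def score(c):
            _, rel, prot = c
            return rel + (boost if prot and share < target_share else 0.0)
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

for item_id, rel, prot in rerank(candidates):
    print(item_id, rel, "protected" if prot else "non-protected")
```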

Achieving individual P-fairness coverage requires a more dynamic approach to managing recommendation opportunities. The closest analogue is perhaps online bidding for display advertising, where limited ad budgets serve to spread impressions across competing advertisers. In this scenario, individual P-fairness is achieved within the constraints of the personalised mechanism by giving the protected group buying power equal to that of the non-protected group.
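
The sketch below imitates that budget idea: each protected provider gets a fixed budget of boosted impressions that is spent as its items are recommended, so no single provider can absorb all of the promotional exposure. The catalogue, budgets, and boost value are hypothetical.

```python
from collections import defaultdict

# Hypothetical catalogue: item -> (relevance for the current request, provider, provider is protected).
catalogue = {
    "job1": (0.9, "bigCorp",       False),
    "job2": (0.8, "minorityFirmA", True),
    "job3": (0.7, "bigCorp",       False),
    "job4": (0.6, "minorityFirmB", True),
}

# Each protected provider starts with the same budget of boosted impressions,
# playing the role that ad spend plays in display advertising.
budgets = defaultdict(lambda: 2)

def recommend_one(catalogue, budgets, boost=0.25):
    def score(item):
        rel, provider, prot = catalogue[item]
        return rel + (boost if prot and budgets[provider] > 0 else 0.0)
    best = max(catalogue, key=score)
    _, provider, prot = catalogue[best]
    if prot and budgets[provider] > 0:
        budgets[provider] -= 1  # spend one unit of boosted exposure
    return best

# Simulate a few recommendation requests arriving over time (e.g., from different users).
for _ in range(4):
    print(recommend_one(catalogue, budgets))
```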

Conclusion

A recommendation system can be unfair on the user side or on the provider side. A recommender system is fair when it considers the differential impact of its suggestions on protected classes of consumers while also protecting the objectives of the providers. In this article, we covered the concept of fairness in recommendation systems from both the consumer and the provider perspectives.



Sourabh Mehta

Sourabh has worked as a full-time data scientist for an ISP organisation, experienced in analysing patterns and their implementation in product development. He has a keen interest in developing solutions for real-time problems with the help of data both in this universe and metaverse.