“Netflix and Amazon compete to provide customers with the best recommendations. But algorithmic feedback dynamics could lead to real problems.”
Algorithms competing to recommend a better movie are not a problem, but when they are tasked with hiring or granting loans, the stakes are much higher. Researchers at Stanford have published a paper, titled “Competing AI”, investigating how algorithms that compete for clicks and the associated user data become increasingly specialised for the subpopulations that gravitate to their services. The researchers argue that this phenomenon can have serious implications for both companies and consumers.
In this work, the authors demonstrate that competition leads to specialisation, and that the quality-of-service users experience is diminished when there are either too few or too many competing predictors.
Implications of Feedback Dynamics
“…too little or too much competition, both can hurt the quality of prediction experienced.”
The researchers explored the feedback dynamics at play when companies deploy machine learning algorithms that compete for customers while simultaneously using customer data to train their models.
Whenever these companies do well, they win new customers, which means a new set of data. Retraining on that data can, in turn, skew the model towards the newer subpopulation. “…by updating these models on this new set of data, they’re actually then changing the model and biasing it toward the new customers they’ve won over,” said one of the researchers.
During their experiments, the researchers observed that when machine learning algorithms compete, they inevitably specialise: each becomes better at predicting what its own subpopulation of users wants. The authors further explained that the amount of data doesn’t matter, as these effects arise regardless. “The disparity gets larger and larger over time – it gets amplified because of the feedback loops,” they added.
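The dynamic described above can be sketched in a few lines of Python. This is a toy illustration, not the paper’s actual model: the two subpopulations, the starting positions, and all parameters are made up. Two predictors begin nearly identical; each round, every user picks the predictor that currently serves them best, and each predictor retrains on the users it won. The predictors drift apart until each specialises in one subpopulation.

```python
import random

random.seed(0)

# Two hypothetical subpopulations with different "true" preferences.
users = ([random.gauss(0.0, 0.1) for _ in range(100)]
         + [random.gauss(1.0, 0.1) for _ in range(100)])

# Two competing predictors that start out nearly identical.
predictors = [0.45, 0.55]

for _ in range(30):
    # Each user picks the predictor that currently serves them best.
    won = [[], []]
    for u in users:
        best = min(range(2), key=lambda i: abs(predictors[i] - u))
        won[best].append(u)
    # Each predictor retrains only on the users it won this round,
    # nudging its prediction toward their mean preference.
    for i in range(2):
        if won[i]:
            target = sum(won[i]) / len(won[i])
            predictors[i] += 0.5 * (target - predictors[i])

print(predictors)  # the predictors drift apart, one per subpopulation
```

Even though both predictors could in principle serve everyone, the winner-takes-the-data loop pushes each towards the mean of the subpopulation it happens to win, and the initial tiny difference between them is amplified round after round.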
The researchers further explained how this phenomenon could be detrimental in the long run using the example of a bank granting loans. Algorithms decide who gets a loan based on credit scores and other factors that vary by country. In a scenario where certain members of society, say 25-year-olds, apply more often, the algorithm gets better at deciding for this subpopulation and can underperform when it encounters a new class of applicants. This not only repels customers but also compounds structural inequality in society.
Competing ML predictors, the authors wrote, can emerge in diverse settings: rival search engines predicting the most relevant web links for a user’s query, or banks using ML predictors to assess client credit and offer loan packages. Companies routinely compete to increase their user base. While the nuances of the competition vary across settings, the researchers explained, competition generates temporal dynamics and feedback loops for the learning algorithms: a predictor’s performance at one point in time shapes the training data it observes, which in turn affects its performance and bias over time.
The researchers stated that there is an optimal number of competing predictors that provides the best quality-of-service for users. This optimal number depends on several factors. One critical factor is how well the users can individually identify the predictor that’s best suited for them.
In this work, the Stanford researchers proposed a model of competing predictors that enables both empirical and theoretical investigations. The contributions of this work can be summarised as follows:
- The authors demonstrate that too little or too much competition can both hurt the quality of the predictions users experience.
- Competing predictors create feedback loops that shape the training data each predictor receives, leading to biased predictions over time.
- In practice, companies behind ML predictors may merge, intentionally differentiate (which could lead to further specialisation), or spend money to acquire data.
Check the original paper here.