With its immense potential in the pre- and post-COVID world, artificial intelligence has been deployed in almost every industry across the globe. However, the technology has always been scrutinised for unethical usage. In fact, a recent case in which misused facial recognition led to the wrongful arrest of a US citizen forced tech giants such as IBM, Microsoft, Amazon and Google to suspend their facial recognition technology for police authorities. Such growing concerns about AI can have a significant impact on the economy through unethical and damaging commercial choices.
One such example is showcased in the paper discussed here, where the technology is deployed to set rates for selling insurance products to customers. In this case, insurance companies use AI to set different insurance premiums for different customers. However, wrong decisions by the technology can damage the business, for instance through incorrect fines, discriminatory decisions or misuse of customer information. Such decisions can put banks and insurance companies under heavy penalties for misconduct.
In fact, according to a report, US financial regulators have issued penalties totalling $2.29 billion, while penalties in India have reached $455,000. Therefore, in environments where the technology takes full control of decisions without any manual intervention, it becomes critical for businesses to know on what basis the technology tends to opt for unethical strategies.
In view of this, researchers from the University of Warwick, Imperial College London, EPFL (Lausanne) and Sciteb have formulated a mathematical principle that provides a simple formula for businesses to identify the unethical strategies of an AI and understand the impact of its decisions. Since wrong strategies cannot usually be defined in advance, the reliable approach is to estimate the proportion of these questionable strategies relative to the distribution of profitable ones.
Authored by Nicholas Beale of Sciteb, Heather Battey of the Department of Mathematics at Imperial College London, Anthony C. Davison of the Institute of Mathematics at EPFL, and Professor Robert MacKay of the Mathematics Institute of the University of Warwick, the paper, “An unethical optimisation principle”, was published in Royal Society Open Science. It shows why businesses need to rethink the way artificial intelligence operates, how it comes to pick unethical strategies, and how the unethical optimisation principle can help.
According to Professor Robert MacKay of the University of Warwick, the suggested unethical optimisation principle can help businesses identify a problematic strategy, which usually lies in an “infinite strategy space.” “Optimisation can be expected to choose disproportionately many unethical strategies, an inspection of which should show where problems are likely to arise, and thus suggest how the AI search algorithm should be modified to avoid them in future,” said MacKay.
Unethical Optimisation Principle
The paper stated — “If an AI aims to maximise risk-adjusted return, then under some conditions it is, to a major extent, likely to pick an unethical strategy unless the objective function allows sufficiently for this risk.”
Explaining further, the paper states that an AI searches the strategy space (S) for a strategy (s) that maximises the apparent return function A(s) for the company. However, in some cases the technology picks unethical strategies from S that are unacceptable to stakeholders and may attract penalties such as “fines, reparations, compensation and boycotts.”
According to the paper, if the risk-adjusted cost is greater than zero [C(s) > 0], the strategy is called unethical, or Red; if the risk-adjusted cost is equal to zero [C(s) = 0], the strategy is termed ethical, or Green.
Hence, with Q(s) accounting for the remaining differences between the true and apparent returns, the true risk-adjusted return T(s) of the strategy adopted by the AI may be expressed as:
T(s) = A(s)−C(s)+Q(s)
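Under these definitions, the bookkeeping can be sketched in a few lines of Python. The strategy names and numbers below are purely illustrative, not drawn from the paper:

```python
# Sketch of the paper's notation: for each strategy s, A(s) is the apparent
# risk-adjusted return the AI optimises, C(s) >= 0 is the risk-adjusted cost
# of penalties if s is unethical, and Q(s) is the remaining estimation error.

def true_return(A, C, Q):
    """True risk-adjusted return T(s) = A(s) - C(s) + Q(s)."""
    return A - C + Q

def classify(C):
    """Red (unethical) if C(s) > 0, Green (ethical) if C(s) == 0."""
    return "Red" if C > 0 else "Green"

# Illustrative strategies: s2 looks best to the AI (highest A) but its
# penalty cost makes its true return worse than the ethical s1.
strategies = {
    "s1": {"A": 1.2, "C": 0.0, "Q": 0.05},
    "s2": {"A": 1.5, "C": 0.8, "Q": -0.02},
}

for name, s in strategies.items():
    print(name, classify(s["C"]), round(true_return(**s), 3))
```

The point of the toy numbers is that an optimiser seeing only A(s) would pick s2, even though its true return T(s) is lower once the cost of the penalty is counted.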
The paper further takes the example of a case where top insurance companies were scrutinised for providing lower premium quotes to individuals with traditional English names like “John.” Here, the AI worked on the data fed into it; however, the true returns of the company depended on various factors, such as “the behaviour of those drivers who ask for quotes.” In this case, the researchers highlight how using AI increases the odds of choosing an unethical strategy compared with choosing strategies at random.
Even if the proportion of questionable strategies is small, the probability of picking one can be high. Therefore, unless returns are fat-tailed, business owners and regulators should be extremely careful about letting AI systems make unsupervised decisions; only when the potential returns are high could the risk of an unethical strategy being chosen even arguably be justified.
Figure: Dependence of the asymptotic unethical odds ratio (Λ∗) on the tail index (ν) and additional volatility (γ).
This figure shows how the tail index influences the odds ratio in the heavy-tailed case. Because the AI algorithm cannot make perfectly accurate predictions, the error term accounts for the remaining differences between the true risk-adjusted return and the apparent risk-adjusted return, even when the risk-adjusted cost is zero.
The paper defines an unethical odds ratio, which converts the proportion of unethical strategies into the probability that the optimiser picks one.
In fact, Heather Battey, co-author of the paper from Imperial College, stated that the principle shows how a more advanced AI is more likely to choose unethical strategies than a less sophisticated system “that would pick a strategy arbitrarily.”
Through its mathematics, the paper lays out the idea that if there is any advantage to picking unethical (Red) strategies, the AI is quite likely to opt for them. “Any advantage for unethical leads to it beating ethical ones with probability one, in the limit, because unethical returns have a higher upper limit than ethical ones,” the paper states.
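This limiting argument can be illustrated with a toy simulation. The distributions below are chosen purely for illustration and are not the paper's: ethical returns are bounded above by 1.0, unethical returns by 1.1. Even with ten times fewer unethical strategies, the best strategy overall is unethical with probability approaching one as the strategy space grows:

```python
import random

random.seed(1)

def prob_best_is_red(n, trials=500):
    """Estimate P(overall best strategy is Red) with n Green strategies
    and n // 10 Red strategies per trial (illustrative toy model)."""
    wins = 0
    for _ in range(trials):
        # Green (ethical) returns are capped at 1.0 ...
        best_green = max(random.uniform(0, 1.0) for _ in range(n))
        # ... while Red (unethical) returns have a higher upper limit, 1.1.
        best_red = max(random.uniform(0, 1.1) for _ in range(max(1, n // 10)))
        wins += best_red > best_green
    return wins / trials

for n in (10, 100, 1000):
    print(n, prob_best_is_red(n))
```

As n grows, the Green maximum crowds up against 1.0 while the Red maximum crowds up against 1.1, so the Red strategy wins in almost every trial: the higher upper limit, not the average, is what drives the probability towards one.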
Thus, an AI optimised to its maximum potential is more likely to treat customers unfairly and can even discriminate against specific communities. Alongside this, the principle suggests that businesses must rethink how their AI operates over large strategy spaces so that unethical approaches are eliminated during the optimisation process.
The paper, published in Royal Society Open Science, provides a more detailed exploration of the mathematics.
To summarise, as artificial intelligence advances towards taking full control of decisions without manual intervention, it is critical for regulators and businesses to create a framework for its ethical usage. This mathematical principle can help businesses identify the proportion of unethical strategies, estimate their impact on the business, and use that knowledge to eliminate business risk.