As the scope of artificial intelligence applications in the financial sector increases, so does concern about its potential risks. AI use cases can be seen widely across banks, insurance companies and capital markets, where the technology is used for automation, intelligent analysis and decision-making support, thereby creating new business models. In fact, according to an Accenture report, AI applications will be the primary way banks interact with their users in the future.
According to a research report by Boston Consulting Group (BCG), China has achieved remarkable progress in adopting AI in the financial sector, and by 2027, 23 percent of its financial jobs will have changed, with AI promising wide gains in improving efficiency and automating repetitive work.
It is perhaps this job market disruption, along with unpredictable and risky outcomes and biased AI systems, that led the WEF to warn that AI could destabilise the financial system in the future. The WEF is not alone in warning about the increased risk of AI seeping into financial systems: a Thomson Reuters report also cautioned that firms reaping huge benefits from automating large swathes of back-office work should do so judiciously.
Let’s look at the three core functional areas in the finance sector that AI has impacted profoundly.
These can be classified as front-office functions, such as trading; back-office operations, such as risk management and market research; and special-purpose AI that solves specific problems in customer engagement, financial management or cybersecurity.
These risks apply to financial institutions and end consumers alike. BCG's findings indicated a risk to consumer privacy, since personal data could be misused following a hacker attack, and the data collected or even computed by AI models could produce biased results. FinTech companies that work with stakeholders usually tackle a specific element of the finance value chain, build a prototype and launch a product, but remain fraught with regulatory concerns. Beyond keeping pace with technological progress, Indian regulators are under increasing pressure to reassess the current legal framework, supervisory models and resources.
Risks Posed By AI In Finance
Financial Policy And Regulations: AI applications will wield considerable influence on financial regulations. For example, a report indicated that the European legislation Markets in Financial Instruments Directive II (MiFID II) puts in place tighter policies and regulation in the face of emergent technology: MiFID II, which came into force in January 2018, mandates that firms leveraging algorithmic models based on AI and ML have a robust development process in place. Similarly, India's recently introduced draft Data Protection Bill puts the spotlight on financial players and restricts the use of financial data by institutions and FinTech firms, which will impact their operations.
Consumers At Increased Risk: Professor Joachim Wuermeling of Deutsche Bundesbank noted in a report that loan applicants are rated by AI models using data points drawn from their transactions, social profiles and other sources — data that may or may not paint an accurate picture, leaving applicants with little visibility into why an algorithm rejected their loan.
Need For Expansive, Rigorous Testing Of AI Systems: The Thomson Reuters research pointed out that AI systems for financial use cases are trained on a very narrow set of objectives, which can often lead to unpredictable outcomes. Incomplete testing of AI systems can also affect outcomes. The report emphasised that stricter, more expansive testing and quality control must become a major component of AI implementation.
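To make the point concrete, here is a minimal sketch (not from the report) of what "more expansive" testing can mean in practice: evaluating a model slice by slice rather than on one aggregate number, so a failure confined to a single subgroup is not averaged away. The rule-based "model", data and field names are all hypothetical.

```python
def approve_loan(applicant):
    """Toy scoring rule standing in for a trained model (hypothetical)."""
    return applicant["income"] >= 30000 and applicant["defaults"] == 0

# Invented test cases with expected decisions.
applicants = [
    {"income": 45000, "defaults": 0, "region": "urban", "expected": True},
    {"income": 25000, "defaults": 0, "region": "urban", "expected": False},
    {"income": 45000, "defaults": 1, "region": "rural", "expected": False},
    {"income": 32000, "defaults": 0, "region": "rural", "expected": True},
]

def accuracy_by_slice(data, key):
    """Report accuracy separately for each value of `key`,
    instead of one overall score across all applicants."""
    slices = {}
    for row in data:
        hits, total = slices.get(row[key], (0, 0))
        hits += approve_loan(row) == row["expected"]
        slices[row[key]] = (hits, total + 1)
    return {k: hits / total for k, (hits, total) in slices.items()}

print(accuracy_by_slice(applicants, "region"))
```

A real quality-control suite would add stress cases (missing fields, extreme incomes) and track these per-slice scores across model versions.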
Trusting AI Way Too Much: Traditionally, decisions about a customer's creditworthiness in consumer finance were made through defined, transparent rules. With the advent of AI, this transparency may not be achievable: as AI is baked into financial operations, banks run the risk of algorithms making biased decisions. The key sources of bias in AI systems are the data, the training process and the programming.
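A hypothetical sketch of the first of those sources, biased data: a "model" that simply learns historical approval patterns inherits whatever skew those patterns contain. The pincodes, records and threshold below are invented for illustration.

```python
# Historical decisions where pincode acts as a proxy variable:
# one area was approved more often than the other (invented data).
historical = [
    ("110001", True), ("110001", True), ("110001", False),
    ("560091", False), ("560091", False), ("560091", True),
]

def approval_rate(records, pincode):
    decisions = [approved for pc, approved in records if pc == pincode]
    return sum(decisions) / len(decisions)

# "Training": memorise the per-pincode approval rate from history.
model = {pc: approval_rate(historical, pc) for pc, _ in historical}

def predict(pincode, threshold=0.5):
    """Approve if the historical rate for this pincode clears the threshold."""
    return model[pincode] >= threshold

# Two otherwise identical applicants get different outcomes
# purely because of where they live: the skew survives training.
print(predict("110001"), predict("560091"))
```

Real credit models are far more complex, but the mechanism is the same: if the training data encodes a historical bias, nothing in the fitting procedure removes it.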
Automation Supplanting Back-Office Jobs: As AI applications such as chatbots phase out customer service jobs, intelligent systems will soon supplant a swathe of roles, requiring policy-makers and stakeholders to look for measures that build resilience against the impact of automation. For example, HDFC Bank has launched Eva, an AI-based banking chatbot that provides answers in simple language in less than 0.4 seconds; the bank also has its first humanoid branch assistant, called Ira.
As financial institutions in India face competitive pressure to make strategic investments in emergent technology, business leaders face a dual challenge:
- Implementation and regulation of AI in the banking and finance sector
- Understanding the governance decisions in the adoption and implementation of AI applications
Besides, there are also legal and ethical concerns related to deploying AI technologies.
In India, FinTech startups are collaborating with banks and other stakeholders, conducting proofs of concept (POCs) of certain applications and implementing them in day-to-day operations. Though India trails in the deployment of AI-led capabilities in the finance sector, the competitive advantage is well recognised by banks and insurance companies, which are running accelerators and innovation centres in partnership with FinTech startups. The most prominent uses of AI in the Indian finance sector are chatbot adoption, automation of back-end processes, credit scoring, loan decisions, algorithmic trading and wealth management.