Founded in 2018, Bengaluru-based Artivatic AI uses AI to help insurance companies build personalised risk profiles of customers, track and understand their financial and behavioural journeys, and develop real-time intelligence from those patterns.
“InsurTech is a specialised branch of fintech earmarked for insurance use cases. It leverages forever-evolving AI capabilities and mines multi-source big data via ML algorithms to acquire better insights into our users and offer the best advice and analysis. Artivatic is an AI firm, and we’re streamlining insurance and healthcare as our basic model via insurance tools,” said Layak Singh, CEO of Artivatic AI.
In an exclusive interview with Analytics India Magazine, Layak spoke about how the firm embeds ethics into its AI systems.
AIM: How does Artivatic leverage AI?
Layak Singh: Artivatic has gone beyond the requirement of servicing only clients to offering 360-degree support to all of our stakeholders, from insurance providers and TPAs (third-party administrators) to agents, underwriters, users, and other peripheral ones. In this fashion, we are taking a system deeply rooted in legacy quicksand straight into the future.
For consumers, we make insurance products; for agents, we provide a single platform that takes care of all their challenges. The backend team of underwriters also finds its work easier with Artivatic’s support. We also have checks and balances in place to detect if any agent, say, is committing fraud, and we help providers run the same checks. So we have products for claims, underwriting, agents, all of it.
AIM: Tell us about Artivatic’s AI governance framework.
Layak Singh: Prevalent AI governance techniques, especially the ones used at Artivatic, are sufficient in most instances. In the rare cases where they aren’t, Artivatic brings together academic experts, industry pundits, and top-notch AI specialists to identify the risks and advise on how best to mitigate them, since we believe nothing beats a collaborative approach.
At the company level, let’s say you apply for an insurance cover and hide the fact that you’re diabetic. When you upload your documents, our AI will notice while scanning them that something is amiss about the blood test values and automatically flag issues with the application. We also map, say, your PAN number to your signature in the documents, your photos, etc., so our underwriters have the information to assess the likelihood of deceit case by case. If the chance of deceit is around 50%, the claim or application will be rejected; if the data checks out as, say, 80% accurate, we process it. In particular, Artivatic’s health model is useful in predicting the probability of claimants developing certain diseases in the future.
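To make the triage logic Singh describes concrete, here is a minimal sketch of what consistency checks combined with score thresholds could look like. Artivatic’s actual models and pipelines are proprietary, so every field name, weight, and cut-off below is a hypothetical illustration, not the company’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Application:
    declared_diabetic: bool
    hba1c: float                 # glycated haemoglobin from the uploaded blood report (%)
    pan_matches_signature: bool  # PAN-to-signature consistency check
    photo_matches_id: bool       # photo-to-ID consistency check

def deceit_score(app: Application) -> float:
    """Combine simple document-consistency checks into a 0-1 deceit score."""
    score = 0.0
    # HbA1c >= 6.5% is the common clinical threshold for diabetes, so an
    # undeclared diabetic condition is the strongest red flag here.
    if not app.declared_diabetic and app.hba1c >= 6.5:
        score += 0.6
    if not app.pan_matches_signature:
        score += 0.2
    if not app.photo_matches_id:
        score += 0.2
    return min(score, 1.0)

def triage(app: Application) -> str:
    """Route an application based on its deceit score."""
    score = deceit_score(app)
    if score >= 0.5:
        return "reject"         # likely misrepresentation
    if score <= 0.2:
        return "process"        # data looks consistent
    return "manual-review"      # hand off to an underwriter

# An applicant who hides diabetes but whose blood report shows HbA1c of 7.1%:
app = Application(declared_diabetic=False, hba1c=7.1,
                  pan_matches_signature=True, photo_matches_id=True)
print(triage(app))  # -> reject
```

In a production system, the boolean checks would themselves be outputs of OCR and ML models rather than hand-set flags, but the thresholding step works the same way.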
AIM: What explains the growing conversations around AI ethics, responsibility, and fairness of late? Why is it important?
Layak Singh: What would be great is government guidance in the form of an international norms agreement, which would ease the burden on individual companies and countries. International treaties are the best way to clarify industry expectations and set a level playing field, while serving as a gold standard for preventing violations and promoting responsible AI use.
The need of the hour is for world leaders to take the time to reason together and fashion a collaborative stand on AI and related matters. That way, the sector will receive state sponsorship, and there will be broad trust and buy-in from the public.
AIM: How do you mitigate biases in your AI algorithms?
Layak Singh: Yes, we need to keep this underlying factor in mind all the time. To reduce the chances of biases creeping into our AI, we first define and narrow down the business problem we mean to solve, keeping our end-users in mind, and then configure our data collection methods to make room for diverse, valid inputs, as these keep the AI model limber and flexible.
We also ensure that we clearly understand our training data, as this is where most biases are introduced and can be avoided. To that end, we build a diverse ML team: people with different backgrounds ask different questions and interact with the AI models in different ways. This helps identify errors before a model goes into production and is the best way to reduce bias both at the outset and while retraining models.
We also test and deploy models with all feedback in mind and keep the feedback channel open. Leaning on forums and discussions for feedback ensures continual improvement of our models, with constant audits and reviews keeping performance optimal all round.
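As one concrete example of the kind of audit mentioned above, a simple and widely used check is the disparate-impact ratio, which compares approval rates across groups. This is a generic fairness metric, not a confirmed part of Artivatic’s stack; the toy data and the 0.8 “four-fifths rule” threshold below are illustrative.

```python
from collections import defaultdict

def disparate_impact(decisions: list[tuple[str, bool]]) -> float:
    """decisions: (group_label, approved) pairs for a batch of applications.
    Returns the min/max approval-rate ratio across groups; 1.0 means parity,
    and the common 'four-fifths rule' flags values below 0.8."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved  # bool counts as 0/1
    rates = [approvals[g] / totals[g] for g in totals]
    return min(rates) / max(rates)

# Toy audit batch: group A is approved 2/3 of the time, group B only 1/3.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(f"disparate impact: {disparate_impact(decisions):.2f}")  # 0.50 -> flagged
```

Running such a check on each retraining batch is one way the “constant audits and reviews” described above can be made routine rather than ad hoc.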
AIM: Do you have a due diligence process to make sure the data is collected ethically?
Layak Singh: Artivatic generally does not believe in using third-party plugins; almost everything runs on in-house platforms and procedures, which adds an extra layer of protection. And when we do have to use a particular API, we source it directly from the company we are dealing with.
AIM: What are your efforts in helping brands foster a trusted, transparent relationship with consumers?
Layak Singh: At Artivatic, we recognise that helping insurance brands foster long-term relationships with their consumers is the best way forward, and all our efforts are centred on that aim. Whether it’s offering agents a platform to go about their activities quickly and efficiently, a claims portal that processes claims without hassle, or bringing providers and users onto a single, integrated platform, every one of our efforts caters to this underlying need for trust between our stakeholders.