INT. (Indus Net Technologies) is a full-stack software engineering solutions company focusing on the banking, insurance, financial services and pharmaceuticals industries. Over the last 25 years, the company has served nearly 500 clients with human-centric and outcome-driven solutions. INT. has a presence in India, UK, USA, Singapore and Canada.
AIM: How does INT. leverage AI?
Dipak Singh: INT.’s analytics solutions largely rely on AI. In the insurance, banking, pharma, healthcare, and retail domains, we use machine learning/deep learning models for predictive analytics.
The first step is to identify the obstacles to using AI to improve effectiveness. The next step is collecting data from multiple sources, and lastly, AI-based solutions are developed as AI applications.
We use computer vision, natural language processing, and other AI techniques. We develop AI applications to augment the decisions of business users, not to substitute for them, enhancing business continuity in the long run.
AIM: What explains the growing conversation around Responsible AI?
Dipak Singh: At present, AI ethics is a hot topic as companies develop and deploy AI applications at scale. We focus on serving both external and internal stakeholders. For example, in the case of an automated text-generation app, the idea is to make the machine capable of recognising specific pejorative phrases to prevent it from producing hate speech, racist language, and so on.
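The guardrail described above can be illustrated with a minimal sketch. This is not INT.'s actual implementation; the blocklist entries and function names here are hypothetical placeholders. A production system would typically combine curated, regularly updated lexicons with ML-based toxicity classifiers rather than a simple word set.

```python
# Minimal sketch (hypothetical, not INT.'s implementation) of screening
# generated text against a blocklist of pejorative phrases.
import re

# Placeholder blocklist; real systems use curated, maintained lexicons.
BLOCKLIST = {"pejorative_a", "pejorative_b"}

def is_safe(text: str) -> bool:
    """Return False if the text contains any blocklisted token."""
    tokens = set(re.findall(r"[a-z_]+", text.lower()))
    return BLOCKLIST.isdisjoint(tokens)

def filter_generation(candidates):
    """Keep only candidate generations that pass the blocklist check."""
    return [c for c in candidates if is_safe(c)]
```

A blocklist alone cannot catch paraphrased or context-dependent harm, which is why the interview stresses teaching the model to distinguish such phrases rather than relying only on post-hoc filtering.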
The need for ethically sound AI is urgent to ensure the responsible use of AI technology under a code of ethics. Identifying potential bias and handling approachability and transparency are priority items on INT.'s AI ethics checklist.
AIM: How does INT. ensure adherence to its Responsible and Ethical AI policies?
Dipak Singh: Data science is an interdisciplinary field using scientific processes, methods, systems, and algorithms for extracting insights and knowledge from noisy, unstructured, and structured data.
At INT., we have established Standard Operating Procedures (SOPs) for developing and operating in the AI environment. These apply actionable knowledge and insights from major data fields across multiple application domains to keep the AI/ML teams streamlined.
Each team member is trained on these policies/SOPs regularly to ensure they are well-versed in the framework, keeping everyone on the same page in terms of skills and competencies.
AIM: How do you mitigate biases in your AI algorithms?
Dipak Singh: Selection bias, reporting bias, implicit bias, and group attribution bias are common issues. We perform extensive exploratory analysis to uncover any anomalies or biases before feeding the data into the model.
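The exploratory checks mentioned above can be sketched in a few lines. This is an illustrative example under assumed data, not INT.'s pipeline: the field names `group` and `label` and both helper functions are hypothetical. It shows one common pre-modelling check, comparing per-group sample counts and outcome rates to surface possible selection or group-attribution bias.

```python
# Sketch (hypothetical) of a pre-modelling representation check:
# count samples and positive-label rates per group, then flag groups
# that make up too small a share of the dataset.
from collections import Counter

def representation_report(records, group_key, label_key):
    """Return per-group sample counts and positive-label rates."""
    counts = Counter(r[group_key] for r in records)
    positives = Counter(r[group_key] for r in records if r[label_key] == 1)
    return {g: {"count": n, "positive_rate": positives[g] / n}
            for g, n in counts.items()}

def flag_underrepresented(report, min_share=0.2):
    """Flag groups whose share of the dataset falls below min_share."""
    total = sum(v["count"] for v in report.values())
    return [g for g, v in report.items() if v["count"] / total < min_share]
```

A skewed group share or a large gap in positive rates between groups would prompt resampling or further data collection before the data is fed into a model.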
AIM: Do you have a due diligence process to ensure that data is collected ethically?
Dipak Singh: We at INT. have created SOPs for AI framework development, and hence any data being utilised is thoroughly reviewed.
We adhere to the SOPs when dealing with third-party data models and datasets. In addition, we have a set of instructions to assist employees in executing routine functions. This not only helps achieve quality output, efficiency, and performance uniformity, but also reduces miscommunication and failures to comply with industry regulations.
AIM: How does your company protect user data?
Dipak Singh: INT. is an ISO 9001:2015 certified organisation, and we have been operating in this industry for the past 24 years. We are top-notch in planning, leadership, support, and organisational contextualisation. We deal with various government bodies and place a high value on data security and customer privacy through the cutting-edge policies that every internal stakeholder follows.
AIM: Did you encounter any biases or ethical issues lately within your organisation/industry/product?
Dipak Singh: When it comes to bias, AI is no exception, especially where maintaining transparency is concerned. The outcome of an AI solution is heavily influenced by how we train the model.
It is highly susceptible to discriminatory outcomes, inaccuracies, and embedded bias. There have been a few instances of biased behaviour in our AI applications from time to time, but since our data collection and preparation methods are robust, we were able to rectify the issues quickly. Moreover, we also follow good practices for gathering data and maintaining customers' privacy.