Founded in 2011, Vuram is a global hyper-automation services company specialising in low-code enterprise automation. “Vuram’s hyper-automation technology stack encompasses business process management (BPM), robotic process automation (RPA), optical character recognition (OCR), document processing, AI, machine learning, and analytics,” said Archit Agrawal, Product Manager, Vuram.
AIM: Tell us about Vuram’s Responsible and Ethical AI frameworks.
Archit Agrawal: One of Vuram’s core principles is to make sensible judgments at every level of the organisation that do not harm individuals or society as a whole. To ensure that teams represent a broad range of experiences and opinions, we strive to maintain a rich and varied team that spans genders, ages, races, disciplines, and backgrounds.
- Fairness and inclusiveness: AI systems should treat and represent all people fairly, engaging and empowering everyone equally.
- Privacy and security: AI systems should be safe and secure, with no sensitive information exposed.
- Accountability and interpretability: AI systems should be receptive to input and give appropriate explanations. A human should be involved in the process.
- Reliability and safety: AI systems should be created using the most up-to-date safety and security safeguards to avoid adverse outcomes.
Best practices for Responsible AI:
- Design AI systems keeping users in mind.
- Engage a wide range of users and use-case scenarios, and incorporate feedback both before and throughout development.
- Identify multiple metrics and indicators for assessing, training, and monitoring.
- Examine the raw data directly. Use aggregates and summaries in the case of sensitive data.
- Understand the limitations of the data set and the model.
- Test against a golden data set and proactively search for unintended or incorrect results; keep testing.
- Identify unintended bias before scaling.
- Continue to monitor and update the system after deployment.
- Strengthen compliance with current laws and regulations, monitor upcoming ones, and develop policies to mitigate risk.
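The golden-data-set practice above can be sketched in a few lines. This is a minimal illustration, not Vuram's actual pipeline: the toy classifier, the example messages, and the accuracy threshold are all hypothetical.

```python
def classify(text: str) -> str:
    """Toy stand-in for a deployed model: flags a message as 'spam'
    if it contains a trigger word. Purely illustrative."""
    triggers = {"winner", "free", "prize"}
    return "spam" if any(w in text.lower().split() for w in triggers) else "ham"

# Golden data set: curated examples with known-correct labels,
# kept fixed so every model version is scored against the same bar.
GOLDEN_SET = [
    ("You are a winner claim your prize", "spam"),
    ("Meeting moved to 3pm", "ham"),
    ("Free tickets inside", "spam"),
    ("Please review the attached report", "ham"),
]

def golden_accuracy(model, golden):
    correct = sum(1 for text, label in golden if model(text) == label)
    return correct / len(golden)

accuracy = golden_accuracy(classify, GOLDEN_SET)
print(f"golden-set accuracy: {accuracy:.2f}")
```

Because the golden set never changes, a drop in this score between releases is a direct signal that a retrained model has regressed on cases the team already understands.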
AIM: What explains the growing conversation around Responsible AI? Why is it the need of the hour?
Archit Agrawal: Leaders are under increasing scrutiny to ensure their companies’ ethical use of AI systems goes beyond the letter and spirit of current regulations. When it comes to high-stakes AI applications like autonomous weaponry and surveillance systems, ethical disputes about what is “right” and “wrong” are raging. And there’s a lot of worry and scepticism about how we will be able to imbue AI systems with human ethical judgement, particularly because moral norms vary by culture and are difficult to define in software.
Numerous reports of AI prejudice, discrimination, and privacy violations have already surfaced in the media, prompting leaders to wonder how they can assure that nothing goes wrong when they roll out their AI systems.
Unintentional bias in AI systems can produce incorrect outcomes, creating fairness issues that harm the business.
AIM: How does Vuram ensure adherence to its Responsible and Ethical AI policies?
Archit Agrawal: To ensure that team members are taught how to uphold our AI ethical obligations, we use mandatory ethics training modules, toolkits, seminars, and workshops. A human-centred design-thinking session, for example, helps teams understand our commitment to developing ethical machine learning technology.
AIM: How do you mitigate biases in your AI algorithms?
Archit Agrawal: We analyse the algorithm and data to determine where there is a high risk of unfairness.
- Check to see if the training data set is representative and large enough to avoid common biases like sampling bias.
- Calculate model metrics for specific groups in the data set as part of subpopulation analysis. This can assist in determining whether the model’s performance is consistent across subpopulations.
- Monitor the model for bias over time. An ML algorithm’s output can shift as it learns or as the training data changes.
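The subpopulation analysis described above reduces to computing the same metric per group and comparing. Here is a minimal sketch; the group names, labels, and records are hypothetical:

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, true_label, predicted_label).
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

def per_group_accuracy(records):
    """Compute accuracy separately for each subpopulation."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

scores = per_group_accuracy(records)
print(scores)  # {'group_a': 0.75, 'group_b': 0.5}
# A large gap between groups signals the model is not performing
# consistently across subpopulations and needs investigation.
```

In practice the same pattern applies to any metric (precision, recall, false-positive rate) and to intersections of groups, not just single attributes.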
Create a debiasing approach that includes a mix of technological, operational, and organisational steps:
- Using tools to uncover potential sources of bias and to reveal characteristics in the data that affect the model’s accuracy.
- Using an internal governance team and third-party auditors.
- Establishing a workplace where metrics and processes are transparent.
As we discover biases in training data, we improve human-driven processes. Model construction and evaluation might reveal biases that have been hidden for a long time. We discover these biases and use this knowledge to understand the reasons for bias while constructing AI models. We also improve the actual process to decrease bias through training, process design, and cultural changes.
Additionally, we must always determine beforehand when humans should be involved and when automated decision-making should be used.
Adoption of an interdisciplinary strategy: The importance of research and development in reducing bias in data sets and algorithms cannot be overstated. Eliminating bias is a multidisciplinary effort involving ethicists, social scientists, and professionals who are most familiar with the complexities of each application field. As a result, businesses should seek out such professionals for their AI initiatives.
Diversity: Having a diverse AI team helps avoid unintended AI biases.
Tools to reduce bias: IBM Watson OpenScale, AI Fairness 360, Google’s What-If Tool
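One fairness metric commonly surfaced by tools in this space is disparate impact: the ratio of favourable-outcome rates between an unprivileged and a privileged group, where values below roughly 0.8 are a common red flag (the "four-fifths rule"). A minimal sketch with hypothetical approval data:

```python
def favourable_rate(outcomes):
    """Share of favourable outcomes (1 = favourable) in a group."""
    return sum(outcomes) / len(outcomes)

# Hypothetical loan-approval outcomes per group (1 = approved).
privileged = [1, 1, 1, 0, 1, 1, 0, 1]    # approval rate 0.75
unprivileged = [1, 0, 0, 1, 0, 0, 1, 0]  # approval rate 0.375

disparate_impact = favourable_rate(unprivileged) / favourable_rate(privileged)
print(f"disparate impact: {disparate_impact:.2f}")  # 0.50
# Below the ~0.8 threshold, so this hypothetical model would be
# flagged for potential adverse impact and investigated.
```

Libraries such as AI Fairness 360 package this and many related metrics, but the underlying computation is as simple as the ratio above.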
AIM: Do you have a due diligence process to ensure the data you use is collected ethically?
Archit Agrawal: When it comes to third-party data and models, our governance and AI teams double-check all the paperwork and underlying contracts before incorporating them into our products and initiatives.
The more data used to train systems, the more accurate and insightful the forecasts and predictions become. Our data science teams are careful about where they get the data from and how they use it.
We ensure that data sets accurately reflect all of the populations being studied, as underrepresentation of some groups might result in skewed outcomes. Our data science teams assess how they sampled the data used to train their models.
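A basic representativeness check like the one described compares each group's share of the training sample against a reference population share. This sketch is illustrative; the group names, counts, and 80% tolerance are assumptions, not Vuram's actual thresholds:

```python
# Reference shares of each group in the population being studied.
population_share = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

# How many sampled training records fall into each group.
sample_counts = {"group_a": 640, "group_b": 310, "group_c": 50}

total = sum(sample_counts.values())
underrepresented = []
for group, target in population_share.items():
    observed = sample_counts.get(group, 0) / total
    # Flag groups whose sample share falls well below the population share.
    if observed < 0.8 * target:
        underrepresented.append((group, round(observed, 3), target))

print(underrepresented)  # [('group_c', 0.05, 0.2)]
```

Flagged groups can then be addressed by targeted collection, resampling, or reweighting before training.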
AIM: How does Vuram protect user data?
Archit Agrawal: We take the following steps to ensure data privacy:
- Restrict access
- Audit systems
- Plugin updates
- Patches, firewalls and encryption
- Human error and device control
- Strategising a plan B
- Digital twin
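As one concrete illustration of keeping sensitive information from being exposed, identifiers can be pseudonymised with a keyed hash before records leave a restricted zone. This is a minimal sketch, not a description of Vuram's actual controls; the key, field names, and record are hypothetical, and a real key would live in a secrets vault:

```python
import hashlib
import hmac

# Illustrative placeholder only; never hard-code a real key.
SECRET_KEY = b"rotate-me-and-store-in-a-vault"

def pseudonymise(value: str) -> str:
    """Replace an identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "amount": 120}
safe_record = {
    "name": pseudonymise(record["name"]),
    "email": pseudonymise(record["email"]),
    "amount": record["amount"],  # non-sensitive fields pass through
}
print(safe_record)
```

Because the same input always maps to the same token, pseudonymised records can still be joined and aggregated without revealing the underlying identities.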
AIM: Did you come across any biases or ethical issues within your organisation? If yes, how did you address them?
Archit Agrawal: Because AI is not deterministic, we must often re-train it. We recently had such concerns with our Personal Information Extractor model, which performed well on training data but not on real-world data, so we opted to shut down the system after conducting an impact analysis. We then had to re-train the model and revalidate it.