In 2018, PinnacleWorks launched SuperBot, an AI-driven omnichannel conversation platform that transformed the communication process for 100+ organisations by helping them connect with their users 24×7 over channels such as WhatsApp and Facebook. SuperBot later evolved from a chat agent into a voice-based communication agent, handling large volumes of inbound and outbound calls every day.
“Artificial intelligence can positively impact the world if it is deployed responsibly. In the right hands, it has the potential of transforming lives in unimaginable ways by opening up new frontiers of opportunity. Companies and solutions like PinnacleWorks and SuperBot can help bring about this transformation,” said Sarvagya Mishra, co-founder and director of SuperBot (PinnacleWorks).
In an exclusive interview with Analytics India Magazine, he spoke about how the company embeds ethics into its AI-powered voice agent.
AIM: How does SuperBot leverage AI?
Sarvagya Mishra: All our products are powered by AI and ML. We believe in the evolution of smart technologies and are on a mission to actively drive their development. So, it’s only natural for us to equip our products with state-of-the-art technologies. A case in point is SuperBot, our AI-powered, NLU-engine-backed agent, which resonates strongly with our vision of ‘Always Evolving’.
SuperBot is an intelligent voice agent that can drive conversations with humans via multiple telephony channels. Leveraging cutting-edge natural language understanding and machine learning algorithms, our solutions extend superlative features, including multiple human voice-overs, support for regional languages, and customisable speed, pitch, and voice modulation to our clients. The platform can manage over a million inbound and outbound calls, thereby equipping businesses with the capability to place and receive over 10,000 calls simultaneously.
AI assists SuperBot for Healthcare with regular reminders, appointment booking, healthcare package promotions, and an around-the-clock helpline. It also proves pivotal in pushing leads, collecting payments, confirming doctor availability, booking or rescheduling appointments, and fetching patient information and reports. SuperBot for Healthcare helps customers reach optimal resolutions by leveraging smart heuristic algorithms trained for several unique use cases. It ensures superior accuracy, efficacy, and cost optimisation while providing robust 24×7 customer service availability and higher satisfaction.
Similarly, SuperBot for Education is India’s first AI-based counselling agent for educational institutions. It covers 80% of the potential queries of educational institutes with a turnaround time of 2-4 seconds. It can also be integrated with more than 20 conversation channels, including Facebook Messenger, Twitter, Skype, Google Assistant, WhatsApp, and the company website.
AIM: Tell us about SuperBot’s AI governance framework.
Sarvagya Mishra: We have adopted several governance methods to ensure that our AI-based products are developed responsibly. They include:
1. Reviewing the use cases of the platform and its capabilities.
2. Streamlining the application of machine learning algorithms to enable a more efficient and effective product.
3. Ensuring the technology is tested and validated in real-time by our internal and external stakeholders.
4. Ensuring the technology is maintained in the best possible state before making it available to customers.
5. Ensuring we constantly review our processes and how we operate to provide a transparent, accountable, and fair environment to our customers.
6. Monitoring AI’s performance and reviewing the feedback.
7. Using data that is clean, well-labelled, and most meaningful for our customers.
8. Placing a high priority on data privacy and security frameworks to ensure that data is protected and secured at every step of our development process.
9. Abiding by all laws and regulations, and preventing any misuse of the solution without intruding on user privacy.
This framework ensures that the necessary checks and balances are in place while addressing the key issue of round-the-clock helpline availability that most industries are facing at present. It not only resolves the queries of end-customers during peak or odd hours, thereby creating superior customer satisfaction, but also improves business prospects and reduces the loss of business due to missed queries.
AIM: What explains the growing conversation around AI ethics, responsibility, and fairness? Why is it important?
Sarvagya Mishra: AI is the most transformative technology of our era. But it brings to the fore some fundamental issues as well. One, a rapidly expanding and pervasive technology powered by mass data, may bring about a revolutionary change in society; two, the nature of AI is to process voluminous raw information which can be used to automate decisions at scale; three, all of this is happening while the technology is still in the nascent stage. If we think about it, AI is a technology that can impact our lives in multiple ways – from being the backbone of devices that we use to how our economies function and even how we live. AI algorithms are already deployed across every major industry for every major use case.
Since AI algorithms are essentially sets of rules that can be used to make decisions and operate devices, they could make judgement calls that harm an individual or a larger population. For instance, consider the AI algorithm for a self-driving car. It is trained to be cautious and follow traffic rules, but what happens if it decides that breaking the rules is more beneficial? That could lead to accidents. Moreover, as AI systems become more complex and more popular, there is a risk that they will become very difficult for us to understand. The possibilities are endless. The lack of transparency, accountability, and explainability in the approach is likely to compound this problem.
This is where ethics and fairness come into the picture. We need systems that can help us make informed decisions based on the available information while remaining accountable for the outcomes of those decisions. They need to be calibrated so that they don't cause harm or injustice. Moreover, the benefits of AI must reach everyone; otherwise, unequal access to the technology and its advantages could exacerbate existing inequalities.
AIM: How does SuperBot ensure adherence to its AI governance policies?
Sarvagya Mishra: We have a dedicated team that is responsible for aligning data science and ML activities within the company with our AI governance processes. These processes are based on research papers and best practices from leading global companies. We have further created an environment where teams can learn from the mistakes made by not only our agents but also other platforms using AI for their products. We also regularly drive training programs that include workshops and case studies to keep the teams on track.
AIM: How do you mitigate biases in your AI algorithms?
Sarvagya Mishra: Human biases can be introduced into an AI system in multiple ways: through the training data used for machine learning algorithms, or through the biases carried by the humans building it. When it comes to addressing human-introduced biases, we follow a stringent code of ethics to ensure they do not lead to discrimination or ethical concerns of any kind. Our efforts are aimed at reducing such biases. We also ensure that our AI-based systems have greater cultural awareness and sensitivity towards different ethnicities.
We also have a dedicated legal and product team whose sole purpose is to keep track of our data procurement processes. These teams ensure compliance with all required rules, regulations, and legal requirements before any data procurement, whether via our own activities or third-party integrations.
Many ethical concerns are addressed if the data is clean and its labelling is done extensively and accurately. We have a robust training process that ensures that these factors are integral to our AI applications.
AIM: How does SuperBot protect user data?
Sarvagya Mishra: Data leakage has become a serious concern in today’s era, and we never want our users to be exposed to it. We have implemented high-level security checks across the system, databases, and user privileges. On top of that, all user information is masked as soon as it enters our systems. No one within our organisation is allowed to access this user information. Only the clients using the services have the privileges to read, write, update, or delete their data on our server via their respective CRMs or other third-party integrations. Firewalls are deployed at multiple layers to guard against potential cyberattacks.
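To make the masking idea concrete, here is a minimal sketch of the kind of field-level masking described above. SuperBot's actual implementation is not public; the field names (`caller_id`, `phone`), the salted-hash pseudonymisation scheme, and the masking rule are illustrative assumptions only.

```python
import hashlib
import re

def mask_phone(number: str) -> str:
    """Hide all but the last two digits of a phone number (assumed rule)."""
    digits = re.sub(r"\D", "", number)  # strip spaces, '+', dashes
    return "*" * (len(digits) - 2) + digits[-2:]

def pseudonymise(value: str, salt: str = "example-salt") -> str:
    """Replace an identifier with a salted one-way hash so internal
    systems can correlate records without seeing the raw value."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

# Hypothetical inbound-call record before it is stored
record = {"name": "A. Caller", "phone": "+91 98765 43210"}
masked = {
    "caller_id": pseudonymise(record["name"]),
    "phone": mask_phone(record["phone"]),
}
print(masked)  # raw name and full number never reach storage
```

In a scheme like this, only the client-facing layer (e.g. the CRM integration) would hold the mapping back to the raw values, which is consistent with the access model described above.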
Our platform is based on the foundation of trust by giving our clients the power to act on their data and leverage machine learning technology to create customised experiences for different segments.
We sign a detailed service-level agreement with our customers that covers every aspect of the service. We even go the extra mile by clarifying any area where there could be complexity, potential ambiguity, or a lack of certainty. This forms the basis of our trusted and transparent relationship with our clients.
AIM: Did you come across any biases or ethical concerns/issues lately within your organisation/industry/product? If yes, how did you address them?
Sarvagya Mishra: Our NLU engine empowers businesses to scale using our calling agents and even our text-based services. However, people often try to misuse it for cold calling, which the product is not meant for and which is not permissible under Indian law. Protecting the product from such misuse remains a major challenge.
We address this problem with a multi-pronged approach. On top of training the algorithm to detect such violations, we conduct a thorough background check of the client and their use cases, and correlate them with the consent received for their dataset. Clients must sign a legally binding MoU that protects both the solution and the client’s data from misuse.