Sreekanth Menon is Vice President – Data Science at Genpact, where he manages the AI/ML practice and delivery, focused on creating next-generation AI/ML solutions for Fortune 500 clients. He helps shape Genpact’s strategy of bringing together talent from different areas of AI/ML, including data engineers, algorithm specialists, and full-stack developers, to deliver solutions at global scale.
Analytics India Magazine caught up with Menon to understand more about the AI landscape and Genpact’s solutions in this space.
AIM: What are the AI-based offerings from Genpact?
Sreekanth Menon: Genpact’s AI/ML Product and Service offerings are spread across four broad requirements from clients:
AI/ML Advisory Services: Helping our clients make the transition from insights to actions using Genpact’s advisory guardrails around their analytics and AI/ML initiatives.
Genpact AI Accelerators: Domain-led, pre-trained accelerators for analytics augmentation, leading to a differentiated system of engagement. Ready-to-use, modular AI/ML solutions that accelerate time to value in AI/ML engagements for clients by more than 2x.
Genpact Full Stack & Implementation Services: Using our cross-functional team experience to design, build, and deploy domain-led AI/ML solutions, our full-stack and implementation services enable high-risk experimentation and accelerated go-to-market models for AI/ML engagements.
Genpact Pragmatic Business Focused AI: Harnessing Genpact’s lineage of subject matter expertise to build and deliver pragmatic AI/ML models targeted to solve business problems specific and unique to industry verticals and functional service lines.
AIM: What are the biggest challenges facing AI today?
Sreekanth Menon: One of the biggest challenges we face today is educating businesses about what AI/ML solutions can and cannot do for them. Often, we find that there are many misconceptions about how to develop and deploy AI/ML solutions and what kind of measured returns can be expected.
Genpact solves this problem by having regular conversations with our clients in the form of assessments, blueprinting, workshops, etc.; in essence, collaborating with clients to help identify which parts of their business can benefit from AI/ML interventions.
AIM: What is the current state of XAI, and what promise does the future hold?
Sreekanth Menon: Businesses are now entering a crisis of trust: there are concerns about data privacy, security, and traceability. Additionally, compliance and regulatory measures require businesses to be more transparent about the way data is used. Under such market conditions, explainable AI is critical to ensuring transparency. Experimentation is an integral part of developing robust AI models, but the complex mathematics and interpretability issues make decision-making challenging for both data scientists and business leaders.
The AI community has responded with principle-based frameworks and guidelines for transparency and responsibility in the use of data and AI/ML solutions. However, several supervisory committees have suggested frameworks that are either too stringent or too lenient, making it difficult to operationalise abstract transparency and ethical principles successfully.
Businesses that take a proactive approach towards explainable and responsible AI/ML development will likely find themselves leading these conversations in the future.
AIM: How do you address the issue of AI explainability?
Sreekanth Menon: Most of the push-back from the industry regarding the implementation of AI/ML solutions comes from the perception that the models behave as ‘black boxes’, where decisions and predictions cannot be explained in language that humans or businesses can understand. Hence the need for “explainable AI”, which refers to methods and techniques in the application of artificial intelligence (AI) such that the results of a solution can be understood by human experts and expressed in business language for decision-making. Explainability is a multi-faceted topic: it covers both individual models and the ecosystem in which they are deployed. It refers not only to whether a model’s outputs are interpretable, but also to whether the whole operational business process can be properly accounted for.
Genpact’s CORA platform was designed based on the following guiding principles of Responsible AI to form a foundation for deploying AI/ML solutions that are safe, reliable, and non-discriminatory:
- Domain Infused Business Metrics Evaluation
- Fairness and Legal Compliance
- Interpretability and Explainability
- Data Pattern Change Mitigation
- Reliability and Safety
- Privacy & Security
At Genpact, we build machine learning-based models that can be translated into simple business rules for quick, efficient decisions, which also builds trust in the models. For example, when a decision tree-based model is deployed, for every prediction we provide the tree branch, translated into simple English, as a rule explaining the effect of each variable on the final decision (i.e., the probability). A more complex example: neural network models, which are widely considered black boxes, are now within the realm of explainability, achieved by visualising the hidden layers and showing how features are learnt during training in real time.
We use methods such as Reversed Time Attention Model (RETAIN), Bayesian deep learning (BDL), Local Interpretable Model-Agnostic Explanations (LIME), Layer-wise Relevance Propagation (LRP), Shapley Additive exPlanations (SHAP), and Gradient-weighted Class Activation Mapping (Grad-CAM) for building explainable machine learning models.
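As an illustration of the tree-branch-to-English translation described above (this is a minimal sketch, not Genpact's actual tooling; the tree structure, feature names, and thresholds are all hypothetical):

```python
# Minimal sketch: walk a decision tree for one sample and emit the branch
# it followed as a plain-English rule. The toy credit-risk tree below is
# hypothetical, purely for illustration.

def explain_path(tree, sample):
    """Trace one prediction through the tree, collecting a readable rule."""
    conditions = []
    node = tree
    while "leaf" not in node:  # descend until a leaf node is reached
        feature, threshold = node["feature"], node["threshold"]
        if sample[feature] <= threshold:
            conditions.append(f"{feature} <= {threshold}")
            node = node["left"]
        else:
            conditions.append(f"{feature} > {threshold}")
            node = node["right"]
    rule = " AND ".join(conditions)
    return f"IF {rule} THEN probability of default = {node['leaf']:.0%}"

# Hypothetical tree: each internal node splits on a feature, leaves hold
# the predicted probability.
tree = {
    "feature": "debt_to_income", "threshold": 0.4,
    "left": {"leaf": 0.05},
    "right": {
        "feature": "late_payments", "threshold": 2,
        "left": {"leaf": 0.30},
        "right": {"leaf": 0.80},
    },
}

print(explain_path(tree, {"debt_to_income": 0.55, "late_payments": 4}))
# → IF debt_to_income > 0.4 AND late_payments > 2 THEN probability of default = 80%
```

In practice the same idea applies to trees fitted by standard libraries; the point is that each prediction carries its own branch, stated as a business rule.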
AIM: What are the current trends in data science? Do you still think it is the most sought-after career path?
Sreekanth Menon: In the past decade, businesses have started to realise the potential of artificial intelligence and machine learning (AI/ML) solutions for transformational gains. For instance, market research firm IDC predicts that the global AI market is set to grow to over $500 billion annually by 2024.
There is continued strong demand for data science, and the plethora of career opportunities will only grow in the near future, with a focus on emerging data science roles such as:
AI Ethics Experts – The role covers risk and governance but also needs coordination with government agencies, non-profits, legal teams, users, and privacy groups, in addition to technology teams, to understand the rising implications of ethics in AI development and deployment. It requires a humanistic background in addition to technical literacy.
Security/Privacy Experts – With advances in technology, cyber threats have become increasingly prominent. Machine learning is now being used extensively to prevent cyber-attacks by allowing cybersecurity systems to carry out human-like tasks and provide a first line of protection.
AI/ML ‘Translators’ – AI business analysts with a strong understanding of the client, its business model, and the business processes or products at which the AI solutions are targeted. They are a conduit between business, data scientists, and data engineers, and also need to be conversant in tech speak.
AIOps Engineer – AIOps refers to multi-layered technology platforms that automate and enhance IT operations through analytics and machine learning (ML). AIOps engineers implement a comprehensive analytics and ML strategy across the combined IT data. The desired outcome is automation-driven insights that yield continuous improvements and fixes.