
Will The Recent Facebook-CA Debacle Create A New Category Of AI Jobs?

Fears of artificial intelligence have always centred on job losses, and with companies swiftly automating a wide swath of tasks once performed by human workers, the displacement wave is getting bigger and more real by the day. Companies and senior management have been trying to allay the fears of the IT workforce by carrying out in-depth studies into the new categories of jobs AI creates, roles that require a range of new skills, especially soft skills, and that have absolutely no precedent.


Earlier this year, global business and technology consulting leader Cognizant took the lead by identifying 21 new job roles expected to emerge over a 10-year timeline, ranging from the highly technical to those requiring only moderate technical knowledge. Now Accenture's new study, How Companies Are Reimagining Business Processes With IT, identifies entirely new categories of jobs that will complement the work of AI researchers and machine learning specialists.

Analytics India Magazine reviews the findings in the Indian context to find out how the country is uniquely positioned for these new jobs. Interestingly, Indian enterprises that have already taken a lead in AI are setting up domestic Centres of Excellence in research to cement their leadership in AI technologies. A chunk of the new jobs would be created by these enterprises and by Indian startups like Senseforth that are making a massive contribution to the AI knowledge economy.

Humans-in-the-loop needed to create good AI?

But before we dive into the new roles that will spring up alongside highly technical AI/ML jobs, let's see how the recent data breach debacle raised concerns about data privacy and brought into sharp focus the need to regulate the use of artificial intelligence and, more importantly, its moral, societal and legal consequences. The Facebook-Cambridge Analytica fiasco has once again called for frameworks that can govern AI systems and regulate the reach of AI technologies. The scandal underscores that humans are crucial to defining the breadth and depth of AI systems, and to deciding whether machines should be entrusted with every problem.

It also calls for new roles in data stewardship, along with citizen AI scientists who can contribute to the design and implementation of these AI systems and assess their ethical impact. In fact, after Facebook's public battering, the society-in-the-loop (SITL) approach of involving various stakeholders is gaining credence. In an attempt to make AI systems unbiased and more compliant with societal norms, there is a growing need not just for policymakers who can embed ethics in AI systems but also for specialists who can evaluate those systems ethically. Enterprises and startups see huge potential for people who can work as AI translators, interpreters, coaches or facilitators.

The Accenture study reveals three new categories of AI-related job roles that will drive innovation in the future:

1) Trainers: From writing scripts for customer service chatbots to training natural language systems, AI trainers and language translators are helping make chatbots and AI-based voice assistants more conversant. Meanwhile, startups are employing empathy trainers, who teach digital systems to exhibit a degree of sympathy and concern for humans. MIT Media Lab spin-off Kemoko Inc has hired specialists as empathy trainers to bring more meaning to Alexa's or Siri's conversations. Its machine learning system teaches other AI-based systems to be more sympathetic, with humans in the loop training the machine learning system that eventually helps digital assistants become more human-like.

Trainers in India: Closer home, India's budding chatbot market is already employing an array of information modellers who go beyond scriptwriting to build the right set of soft skills into a bot and make it more human. For example, reports hint that Senseforth AI Research is hiring behavioural psychologists and fiction writers to build better automated conversations.

2) Explainers: Explainers, or AI translators, are often described as the go-between linking machine learning experts, the engineering team and senior management. Acting as an interface, AI translators or explainers would not only explain the rationale behind an algorithm but also how it was used to arrive at a certain decision. The Accenture report points out that in the future, enterprises will have to rely on analysts to explain the workings of algorithms to non-technical professionals. In cases of negative outcomes, the analyst would be responsible for auditing the algorithm's performance.

Indian context: While India is still evolving its regulatory mechanisms and drafting a framework for data security and ethics review committees, there are as yet no reported instances of enterprises or startups employing AI translators or explainers who can tackle AI's black-box problem. That said, Indian startups are playing an exceptional role in identifying and realising the benefits of AI across diverse sectors.

3) Sustainers: Sustainers, or AI ethics managers as mentioned in an earlier article, are those who build more accountability and transparency into AI systems. AI policy and strategy is quickly gaining credence across the world, and well-known ethics researchers are already working with IT bellwethers like Microsoft, DeepMind, Google and Amazon to weave a human element into AI systems, steer company AI policy and improve the digital-human interface.

For example, Microsoft's newly minted research group FATE (Fairness, Accountability, Transparency and Ethics in AI) works on collaborative research projects that address the need for transparency, accountability and fairness in AI and ML systems. According to well-known AI ethics researcher Miles Brundage, as the economic relevance of AI increases, there will be a need for voices from different backgrounds to ensure fairness and ethics.

Growing need for sustainers in India: With India opening its arms to AI, there will be a growing need for AI researchers at Indian think tanks and at the newly opened Wadhwani Institute for AI, India's first AI research centre. Enterprises and India's robotics startups will also need to employ strategists and policymakers who can straddle the fields of law, governance, ethics and policy-making. Besides working as external advisors, one can also pursue opportunities at global consultancies such as Nielsen, Deloitte and E&Y, which dish out advice on how to tackle the ethical challenges surrounding the emerging technology of artificial intelligence.



Copyright Analytics India Magazine Pvt Ltd