Not a day goes by when one doesn’t come across the term ‘ethical watchdog’. With enterprises mainstreaming AI applications and consumer AI taking off, concerns around data misuse and algorithmic bias abound. The recent GDPR guidelines, which put the focus on the need for ethics and governance in AI, are a laudable attempt to bring together the legal and technical communities to help shape better policy in the future.
According to an IDC report, 40% of digital transformation initiatives will use AI services by 2019, and 75% of commercial enterprise apps will use AI by 2021. Developers will soon become the drivers of growth and the critical population to watch, as IT organizations hire AI engineers and data scientists to support the large number of DX initiatives that depend on AI.
Given the rise of an algorithmic economy, ethics and data protection will come into sharp focus, noted Giovanni Buttarelli, European Data Protection Supervisor. A proliferation of consumer AI applications powered by sophisticated algorithms means that governance is gaining momentum alongside AI itself. Making technology work in the interests of human beings will become a critical component.
Rise of AI Policy & Strategy in enterprises
In view of recent trends, AI will open new avenues and career paths for professionals. While AI will remain a tech-dominated field, there will be a growing need for academicians and researchers from universities and think tanks, with backgrounds in fields like economics, sociology, philosophy and emerging technology policy, to work with interdisciplinary teams, steer company AI policy and improve the digital-human interface.
Large enterprises like Microsoft and Google-owned DeepMind, which hold strong positions in the AI ecosystem, are already putting the building blocks in place by setting up ethics and impact teams focused on understanding the social effects and ethical challenges surrounding the emerging technology of artificial intelligence.
Microsoft set up FATE: Microsoft’s newly minted research group FATE (Fairness, Accountability, Transparency, and Ethics in AI) works on collaborative research projects that address the need for transparency, accountability, and fairness in AI and ML systems. The group also publishes in a wide array of disciplines, including machine learning, information retrieval, systems, sociology, political science, and science and technology studies.
The group addresses important ethical questions such as: how best to use AI to assist users and offer enhanced insights, while avoiding exposing them to discrimination in health, housing, law enforcement, and employment? In the same vein, how can AI applications balance the need for efficiency and exploration with fairness and sensitivity to users? And as we move toward relying on intelligent agents in our everyday lives, how do we ensure that individuals and communities can trust these systems?
DeepMind sets up Ethics & Society team: In October last year, London-headquartered DeepMind set up an Ethics & Society research unit to build artificial intelligence applications that work for the benefit of all. To that effect, DeepMind has hired scientists and practitioners from diverse fields – academia and charitable organizations. The company has brought in American economist and Columbia professor Jeffrey Sachs, Oxford AI professor Nick Bostrom, and climate change campaigner Christiana Figueres to advise the DeepMind team and support open research and investigation into the wider impacts of its work.
Partnership on AI: In 2016, big tech companies like Amazon, Facebook, Google, Microsoft, Apple, and IBM joined hands to build a first-of-its-kind industry-led consortium that included well-known academicians and nonprofit researchers to help build ethical technologies and ensure the trustworthiness of AI. From developing best practices to advancing public understanding, the consortium regularly engages experts from various disciplines such as psychology, philosophy, economics, finance, sociology, public policy, and law to provide guidance on AI-related issues and their impact on society.
The Berkman Klein Center and the MIT Media Lab: These two institutes are conducting evidence-based research with the aim of guiding key decision-makers in the public and private sectors and delivering high-impact pilot projects that bolster the use of AI for the public good. Through their research efforts, the centres will also build up an institutional knowledge base on the ethics and governance of AI and strengthen the interface between industry and policy-makers.
How to build a career in AI policy & strategy?
The recent upheaval caused by the GDPR regulations, which come into effect this year, has forced a lot of enterprises, big and small, to review and rewrite their data governance guidelines and overhaul their systems. If there was ever a time to jumpstart a career in this field, it is now. Research groups are expanding their stakeholder communities and are always on the lookout for key positions – such as Director of Research, Director of Partnerships, and Program Associate. You can access the details here.
DeepMind recently announced it is hiring a Policy & Ethics Researcher; some of the requirements are:
- Excellent grasp of technology policy and its implications for society
- Ability to quickly assimilate complex issues and work across science areas
- Knowledge of and interest in sectors vital to artificial intelligence policy (desirable)
You can access the job description here.
Skills needed to make a career in AI Policy & Strategy
The three core areas that AI researchers on ethics teams work on are AI policy, AI governance and AI strategy. This work, according to Oxford’s Future of Humanity Institute, touches on a range of topics and areas of expertise, such as international relations, international institutions and global cooperation, international law and international politics.
According to Miles Brundage, a well-known AI policy researcher at the University of Oxford’s Future of Humanity Institute, the main roles in this area include:
- direct research
- working within governments, think tanks and industry
Candidates work on a range of topics, including improving public opinion about AI, bridging short-term and long-term AI policy, and case studies comparing AI with related technologies.
Most of these job openings look for graduates who have a background in international relations, economics, psychology or law, have a deep interest in emergent technology and, most importantly, have a good grasp of complex technical and regulatory issues.
Core on-the-job requirements and skills for AI policy and strategy professionals are:
- A graduate degree
- Project management skills
- Ability to carry out both quantitative and qualitative research on emerging technology policy
- Ability to devise forward-thinking policy positions for companies and work with different stakeholders
- Ability to develop proposals for new academic projects
- Ability to map AI trends and forecast how AI is progressing
- Ability to lead external engagement with key stakeholders
Those who wish to work in AI advocacy can find jobs at big tech companies and government think tanks, where the role entails building broader awareness of AI through conferences and literature and advocating for its growth.
Some of the job titles are:
- Policy & Ethics Researcher
- AI Policy Researcher
- AI Policy Practitioner
- Research Associate
- Program Associate
Who are the best hires for AI policy and strategy roles?
Usually, professionals from the government sector, non-profits and academia can land top slots in enterprises looking for external advisors from diverse fields to contribute ideas and ensure the fairness of commercial AI applications. Professionals who have a strong technical background in AI are well suited to develop and oversee AI ethics committees and evaluate the recommendations those committees put forward.
Now, how can these professionals sharpen their skills and hone their credibility?
- According to Miles Brundage, as the economic relevance of AI increases, there will be a need for voices from different backgrounds to ensure fairness and ethics.
- One can start, Brundage proposes, by doing short-term policy research and staying tuned to the ever-evolving AI landscape.
- One can also brush up on AI and ML skills by doing online certifications and sharpening one’s command of AI terminology
- Keep a tab on research coming from well-known research companies like DeepMind, Google and NVIDIA
- Try working closely with AI’s technical experts to gain an understanding of the conceptual frameworks needed to develop an AI policy framework
- Brush up on political science skills and develop stronger research and policy development skills
- According to Brundage, graduates and undergraduates should look back at international cyber-conflict for inspiration when devising long-term AI policy
- He also says tools like statistical analysis and game theory can prove useful for AI policy analysis.
- People from law, economics and social science backgrounds can make a shift towards this career
- Lastly, sign up for an AI ethics course – a lot of universities such as Stanford, Cornell, Harvard, University of Edinburgh & online courses offer undergraduate and graduate level programs.
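Brundage’s point about game theory can be made concrete with a toy model. The sketch below, with entirely hypothetical payoffs, frames two AI labs choosing between cooperating on safety standards or racing ahead as a prisoner’s-dilemma-style game, then brute-forces the pure-strategy Nash equilibria; it is an illustration of the kind of analysis an AI policy researcher might run, not any specific published model.

```python
# Illustrative sketch: simple game theory applied to AI policy analysis.
# Two labs each choose to "cooperate" on safety standards or "race" ahead.
# The payoffs are hypothetical, chosen to mirror a prisoner's dilemma.
from itertools import product

ACTIONS = ["cooperate", "race"]

# payoffs[(a, b)] = (payoff to lab A, payoff to lab B)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "race"):      (0, 4),
    ("race", "cooperate"):      (4, 0),
    ("race", "race"):           (1, 1),
}

def is_nash(a, b):
    """A profile is a pure-strategy Nash equilibrium if neither lab
    gains by unilaterally switching its own action."""
    pa, pb = payoffs[(a, b)]
    best_a = all(payoffs[(alt, b)][0] <= pa for alt in ACTIONS)
    best_b = all(payoffs[(a, alt)][1] <= pb for alt in ACTIONS)
    return best_a and best_b

equilibria = [(a, b) for a, b in product(ACTIONS, ACTIONS) if is_nash(a, b)]
print(equilibria)  # both labs racing is the only equilibrium here
```

With these payoffs, racing dominates cooperating for each lab individually, so the only equilibrium is mutual racing even though mutual cooperation pays both labs more – exactly the kind of coordination failure that policy interventions (standards bodies, consortiums like the Partnership on AI) aim to break.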
Where can one expect the most job openings?
From consulting firms such as Deloitte, McKinsey and Nielsen to government think tanks (Niti Aayog, the AI Task Force), academic institutions (the Wadhwani Institute for AI, India’s first AI research centre), enterprises and India’s robotics startups, there is a growing need for AI strategists and policymakers who can straddle the fields of law, governance, ethics and policy-making. Besides these areas, external advisors can also pursue opportunities in for-profit organizations and government bodies that keep an eye on competing nations and the changing landscape of AI.