A new BCS report, Priorities for the National AI Strategy, found the UK capable of leading the world in creating AI that works in humanity’s interest. To get there, the report argues, more people from non-technical and diverse backgrounds must join the field of AI, creating a ‘political class’ that understands AI well enough to deliver the right solutions for society.
“The public has become extremely distrustful of AI systems,” said Bill Mitchell, lead report author and director of policy at BCS. “You need to prove to them that you are competent, ethical and accountable.” BCS’s latest proposal urges the UK government to put heightened AI standards at the heart of its National Artificial Intelligence Strategy; the government has said it hopes to publish the strategy by the end of the year.
A recent YouGov survey found that 53 per cent of UK adults had ‘no faith’ in any organisation to use algorithms when making judgements about them. This comes as no surprise after last year’s data science mishaps: the computer model used to decide lockdown restrictions drew massive criticism, and the scoring system that replaced cancelled standardised exams was dismissed by Boris Johnson as a ‘mutant algorithm’.
The UK is home to some of the largest AI companies, including DeepMind, Graphcore, Darktrace and BenevolentAI. The country also brims with leading universities, research centres and institutions it can tap into to become a leader in creating empathetic AI.
BCS proposed new standards for training and ethics to ensure that data scientists are seen as professionals in the way doctors or lawyers are. This means requirements that must be met before working in AI, and penalties for breaking the rules, in both the public and private sectors.
To keep pace with AI’s growing dominance, the report says, the UK needs ‘AI education’ in schools and re-skilling courses for adults, along with better equipment, broadband access and education programmes for people in poverty to reduce the ‘digital divide’. The report also emphasised developing the AI technologies that will be key in the fight against global climate change, and suggested that the government review the kinds of investments needed to scale up “High-Performance Clusters and data centres for industry, academia, and public service use, which align with the recommendations on investment in the AI Council Roadmap covering future R&D and innovation.”
The report encouraged the UK to set the ‘gold standard’ in AI professionalism by pursuing these priorities through a pro-innovation, pro-ethical regulatory framework built on fair competition. Public trust is the cornerstone of professionalism, founded on competency, ethical values and accountability.
Various papers and studies place the UK second or third behind the US and China, giving the country room to lead the world on AI standards with robust, well-designed rules. That standing means major tech companies are unlikely to leave the UK if stringent regulations are imposed, and are therefore more likely to cooperate with the government on ethical AI.
Reid Blackman, a philosophy professor and head of the technology-ethics consulting firm Virtue, has emphasised the importance of transparent, explainable and fair frameworks that the average person can understand and trust, as opposed to those that are ‘too high-level to be helpful’. He pointed to the ethics culture in medicine to illustrate the high level of trust in the healthcare system; the same could be achieved for AI if a similar culture were adopted. At a bare minimum, generally accepted guidelines should let people know when they are communicating with an AI.
But the UK is not the only player pushing for ethical AI. In April this year, the European Commission proposed its first legal framework on AI. The regulation takes a risk-based approach, classifying AI systems into four groups: unacceptable risk, high risk, limited risk and minimal risk. Covering EU citizens and companies operating in the bloc, it aims to foster ‘human-centric, sustainable, secure, inclusive, and trustworthy AI’. If adopted, the proposal would be a massive step by the EU towards taking a solid stance against certain AI applications.
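To make the four-tier structure concrete, here is a minimal sketch in Python. The tier names come from the Commission’s proposal; the example systems mapped to each tier are illustrative assumptions on our part, drawn from the Commission’s own published examples rather than from this article.

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers in the European Commission's proposed AI framework."""
    UNACCEPTABLE = "unacceptable risk"  # banned outright
    HIGH = "high risk"                  # strict obligations before deployment
    LIMITED = "limited risk"            # transparency duties only
    MINIMAL = "minimal risk"            # no new obligations

# Illustrative mapping only: these example systems and their tier
# assignments are assumptions, not classifications from the article.
EXAMPLES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "CV-screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value}")
```

The key design idea of the proposal is visible even in this toy form: obligations attach to the tier, not to the individual application, so regulators only need to classify a system once to know which rules apply.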
While the US and China are engaged in a tech war, the UK and the EU stay focused on setting the gold standard.