“Self-fulfilling prophecies are a problem when it comes to auditing the accuracy of algorithms.”
Carissa Véliz
American big tech companies are currently fighting on two fronts: competition from China and governments around the world bent on reining them in with antitrust and other legal instruments. While competition from the Far East was inevitable, the internet companies, with combined valuations exceeding the GDPs of most countries, have plenty to worry about from antitrust lawsuits. Allegations run rife that Google and Facebook colluded to corner the ad-tech market, while Facebook has accused Apple of using privacy as an excuse for unfair market dominance. That said, these frenemies have three things in common: they build most of the cutting-edge technology (think AI, cloud), they are big, and they are relentlessly pursued by governments.
Ad-tech rigging, review spamming and other manipulations meant to fool algorithms and regulators go against the foundation of capitalism: fair competition. As Big Tech metamorphoses into a set of AI powerhouses, it is only fair to ask whether existing antitrust legislation has the power to stop a Skynet-style AGI in its tracks.
“AI engineers are much better-positioned than their clients to know whether the AI they produce is safe.”
Cullen O’Keefe, research scientist at OpenAI, explored the idea of antitrust compliance in more detail in a paper last year. Drawing parallels with other domains, he asked whether there could be a viable route to an antitrust-compliant horizontal agreement: an understanding among research labs and AI engineers not to produce unsafe AGI. The agreement, he suggested, could be part of the code of ethics of a new or existing professional organisation of engineers.
Unlike medicine, AI development rarely functions in an environment where the maker (researcher, coder) has an immediate obligation to serve the user. According to Brent Mittelstadt of the University of Oxford, this lack of a common goal nourishes a competitive rather than cooperative AI development process, which makes balancing public and private interests intractable. The Oxford researcher argues that AI still lacks:
- Common aims and fiduciary duties
- Professional history and norms
- Proven methods to translate principles into practice
- Robust legal and professional accountability mechanisms
On a similar note, Matt Levine of Bloomberg has touched upon antitrust compliance in the AI industry. A company might not agree outright to build malfunctioning AI no matter the incentive, he says, but deep down it is a classic prisoner’s dilemma for the founders: the fear of being outplayed. If one company refrains from building algorithms that enable deepfakes, chances are someone else will build them and take all the glory. According to Levine, the companies might want to get together with all their competitors and agree not to build such AI at all. “But is that an agreement in restraint of trade? Are you colluding to restrict the supply of AIs that customers want (evil enslavement AIs)?” he asks. Levine argues that it is tricky for industries to self-regulate because they run the risk of making it look like collusion. A third-party regulator, that is, the government, would have to step in. But then there is the danger of lobbying.
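Levine’s framing can be made concrete with a little game theory. Here is a minimal sketch in Python; the payoff numbers are hypothetical, chosen only to exhibit the structure of the dilemma: whatever the rival lab does, each lab is individually better off shipping the risky system, even though mutual restraint is the best joint outcome.

```python
# Hypothetical payoff matrix for two AI labs deciding whether to
# "restrain" (hold back risky AI) or "ship" (release it anyway).
# Payoffs are illustrative, not empirical: (lab_a_payoff, lab_b_payoff).
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),  # mutual restraint: safe, shared market
    ("restrain", "ship"):     (0, 5),  # the shipper takes all the glory
    ("ship", "restrain"):     (5, 0),
    ("ship", "ship"):         (1, 1),  # race to the bottom
}

def best_response(opponent_move: str) -> str:
    """Return lab A's payoff-maximising move given lab B's move."""
    return max(("restrain", "ship"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Whatever the rival does, shipping dominates -- the classic dilemma.
for rival in ("restrain", "ship"):
    print(f"If the rival plays {rival!r}, the best response is {best_response(rival)!r}")
```

Shipping is the dominant strategy here, which is exactly why unilateral restraint is unstable and why, on Levine’s account, the labs would need either an industry-wide agreement (with its collusion risk) or an outside regulator.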
So, can there be a regulatory framework, a law that holds companies liable for the actions of their algorithms without discouraging researchers from pushing the boundaries?
Where does the buck stop?
At what stage would an AI researcher relinquish responsibility for the acts of the machine? Lance Eliot of Stanford writes that ethical and policy questions about the relationship between humans and machines are challenging because they require close inspection of proprietary algorithms and vetting them for legal compliance. Such review, writes Eliot, requires consideration of incentives, scope and more. The complexity of the algorithms’ data processing, and the question of how far humans can truly control self-learning machines, make this an auditing nightmare. And the resulting violations of antitrust laws can cause irreversible damage. Governments should respond to such antitrust behaviour by revamping laws and making the marketplace a level playing field. Mittelstadt of Oxford addresses many of these challenges in his paper titled “AI Ethics – Too Principled to Fail?” He even recommends a few checkpoints for building an ethical AI ecosystem:
Principles to practice
Going forward, shared principles are not enough to guarantee trustworthy or ethical AI. AI research will remain a competitive, not cooperative, process until there is a tectonic shift in regulatory practice. In the transition from principles to practice, conflicts will arise, and resolving those conflicts, writes Mittelstadt, is where the real work of AI ethics starts.
Licensing AI
Imagine deepfakes or GPT causing irreversible damage to an individual or an organisation, and a hypothetical regulatory board cancelling the license of the AI researchers responsible. Oxford’s Mittelstadt finds the absence of licensing in AI a “regulatory oddity”. “We license professions providing a public service, but not the profession responsible for developing technical systems to augment or replace human expertise and decision-making within them,” he wrote.
"A federal judge said #Google must face much of a lawsuit accusing the company of illegally recording and disseminating private conversations of people who accidentally trigger its voice-activated Voice Assistant on their smartphones." #privacy
— Carissa Véliz (@CarissaVeliz) July 3, 2021
https://t.co/lynOeQnQNw
Carissa Véliz of the Institute for Ethics in AI at the University of Oxford recently wrote that AI should be tested the way the FDA tests medicines. Extending the analogy, Véliz recommends adopting randomized controlled trials, like the ones used in medical research, in AI development. “Such trials could do the same for AI…we would stand a much better chance of developing both a more powerful and a more ethical AI.”
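Véliz’s analogy is straightforward to sketch. Below is a minimal, hypothetical example of what an RCT-style evaluation of an AI system could look like, assuming a deployed baseline to compare against and an agreed outcome metric; none of these names come from Véliz’s proposal, they are stand-ins for illustration.

```python
import random
from statistics import mean

def rct_evaluate(users, baseline_system, candidate_ai, outcome_metric, seed=0):
    """Toy randomized controlled trial for an AI system.

    Each user is randomly assigned to the control arm (status-quo baseline)
    or the treatment arm (the candidate AI). `outcome_metric(user, decision)`
    scores how well a decision served that user (higher is better).
    Returns the estimated average treatment effect.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    control, treatment = [], []
    for user in users:
        if rng.random() < 0.5:
            control.append(outcome_metric(user, baseline_system(user)))
        else:
            treatment.append(outcome_metric(user, candidate_ai(user)))
    return mean(treatment) - mean(control)

# Hypothetical usage: a positive effect means the candidate AI outperformed
# the baseline on the chosen metric for this user population.
# effect = rct_evaluate(users, human_review, model_predict, satisfaction_score)
```

A real trial would add pre-registration, power analysis and significance testing; the point is only that, as in medicine, the candidate system would earn deployment through measured outcomes rather than its maker’s assurances.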