Desperate times call for desperate measures, and no big tech company is feeling the heat more than Meta Platforms Inc. A report published by The Wall Street Journal last week revealed the strict new policy the company has imposed on some employees, requiring them to either find new positions elsewhere within the company or face termination. Meta has announced plans to cut costs by 10%. The company's results for the previous quarter looked grim: it had lost close to 50% of its market value by the second quarter of this year, and its outlook predicted higher-than-expected losses for the third quarter.
In a bid to rid itself of all excesses, the company let the axe fall first on its Responsible Innovation Team (RIT). The team was a crucial part of Meta's efforts to redress the many blows dealt to its reputation in the past few years. The company has had more than its fair share of scandals, including Cambridge Analytica (a case that was recently settled), accusations of breeding political extremism and spreading misinformation during the US elections, violations of children's privacy in Ireland, and its costly bet on the metaverse.
Turbulent times in Meta
In 2018, Margaret Stewart, a vice president of product design at the company, established the team to tackle the "potential harms to society" caused by Facebook's products. Ironically, just last year, Stewart published a blog post titled 'Why I'm optimistic about Facebook's Responsible Innovation efforts', stating that she inherently believed a lot of good could come from technology and that Meta was ready to put in the work for it. "Goodness isn't inevitable. It comes through sustained hard work, investing time in foresight work early in the development process, surfacing and planning mitigations for potential harms, struggling through complex trade-offs, and all the while engaging with external stakeholders, including members of affected communities," Stewart explained.
Despite dissolving the team, which comprised about two dozen engineers and ethics specialists, Meta has promised that its work will continue, albeit in a scattered way. Eric Porterfield, a company spokesman, said that employees from the RI team would be distributed among teams working on safety and ethical product design for specific issues. He also stated that they weren't guaranteed new jobs.
How real are AI ethical teams in companies?
While most media reports wasted no time in underscoring Meta's readiness to let go of its ethics division, a section of AI experts questions the motivation behind an AI ethics team in the first place. Is it mainly a PR exercise and a ploy to distract from the actual troubles with the business?
Pedro Domingos, author of 'The Master Algorithm' and widely known for his work on Markov logic networks, has long been critical of the activism of AI ethicists like former Google scientist Timnit Gebru. Domingos applauded Meta's decision to disband the RI team, calling AI ethics "phony." The University of Washington professor has often described AI ethics as a one-sided field that isn't welcoming of differing opinions.
Domingos' concerns aren't entirely unfounded. It is remarkably easy for an AI startup or company to jump onto the AI ethics bandwagon: management and marketing teams claim to strictly adhere to ethical AI guidelines without any due diligence. The practice is popular enough to have acquired a name, 'AI ethics washing', and includes maintaining an ethical AI division as window dressing to silence knee-jerk criticism.
What is AI ethics washing?
There is a good reason for the rise of ethics washing. Building an ethical framework and incorporating it into a business is a costly process. Until a few years ago, when the concept of AI ethics was still nascent, tech company leaders expressed their reluctance openly. Ethics is a complicated minefield that cannot be navigated with ease.
In 2019, even as a group of Microsoft employees protested against the company's military contracts, Microsoft's president and chief legal officer, Brad Smith, plainly said that American tech companies had a long history of supporting the US military and that Microsoft would continue to do so. "The U.S. military is charged with protecting the freedoms of this country. We have to stand by the people who are risking their lives," Smith said.
In 2018, Google came under fire for providing the US Department of Defense with AI technology for warfare. The pilot programme, called 'Project Maven', which involved other tech companies as well, would help the US government analyse drone footage using AI. Google eventually stepped back after a wave of resignations and internal dissent. With governments so deeply involved, is it even possible to have transparent AI ethics? It is these contradictions that Domingos and others want to examine.
The Wall Street Journal report quoted Zvika Krieger, the former RIT head, who said that the team had been effective in small ways rather than the overarching beacon it was meant to be. The team had been involved in Facebook's decision to exclude a race filter from dating profiles, a choice that other dating apps later copied.
Stewart also mentioned in her blog post that the RI team was behind Meta's COVID-19 products, working to "fight misinformation about the virus" and to assess "whether a feature could be unintentionally offensive or insensitive". However, even with these positive undertakings, Meta was drowning under a pile of snafus.
In this context, is it better to simply discard pretences and put a concentrated focus on real ethical issues, as Domingos suggests? Or is a front necessary?