Soon after the launch of GPT-4, OpenAI CEO Sam Altman appeared before Congress to ‘educate’ regulators on the potential harms of AI. From warning about job losses to declaring that safety is ‘vital’ to OpenAI’s work, Altman perfectly played the role of a measured AI doomer, whipping regulators into a frenzy to curtail the quickly growing AI market. While many criticised this move as a way to cement OpenAI’s lead in the ecosystem, it now seems to have backfired.
According to reports, the Federal Trade Commission (FTC) has opened an expansive investigation into OpenAI’s activities, mainly over concerns about harm to personal reputations and the risk of leaking personal data. Earlier this week, the regulator sent OpenAI a document detailing its concerns over the company’s products. This not only underscores the AI company’s hypocrisy, but also represents a serious regulatory threat, one that could end the free rein it has enjoyed in the emerging AI market.
FTC’s opening salvo
Looking at the document published by The Washington Post, the FTC has requested a wide range of information from the company, including a complete list of the third parties using its APIs. What’s more, the regulator has asked OpenAI to pull back the curtain on its top models, requiring the company to describe in detail the research behind its products.
The FTC has also requested the training data OpenAI used, as well as information on its reinforcement learning from human feedback (RLHF) process. It has further asked OpenAI to shed light on the ChatGPT security incident from March this year, which allowed some personal information to be leaked.
Other requested information includes details on the process of retraining and refining LLMs, risk and safety assessments, and personal information protection. This is the meat of the request, as the FTC’s concerns are made clear in Section 24 of the interrogatories.
The regulator also wants to know the capacity of OpenAI’s LLMs to generate statements about individuals, especially statements containing personal information. While the regulator has also expressed concerns over the LLMs’ capacity to make ‘misleading, disparaging, or harmful statements’, the crux of the matter lies in how OpenAI handles personal information.
This is also in line with the FTC’s commitment to enforcing existing civil rights laws on discrimination. FTC Chair Lina Khan has specifically stated that “there is no AI exemption to the laws on the books”, suggesting that the FTC will stick to the current regulatory framework until the Biden administration creates a new one.
The FTC’s moves have put Altman on the back foot, as evidenced by his tweet thread on the matter. While decrying the fact that the FTC’s request was leaked to the press, he stated,
“It’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. Of course we will work with the FTC.”
This comment is consistent with the other messages Altman has been sending regulators, speaking about how risky AI is while claiming that OpenAI’s products are built ‘on top of years of security research’. This is a narrative we’ve seen before, backed up in this instance by a reiteration that OpenAI is not ‘incentivised to make unlimited returns’ due to its capped-profit structure.
Tricking regulators no more?
Sam Altman has been on a global charm offensive to convince regulators of the potential impact of AI algorithms. Calling it a ‘diplomatic mission’, the CEO has taken it upon himself to be AI’s champion before the world’s regulators. The strategy seems to be a leaf out of lobbyists’ books: loosening regulation for one’s own company while constraining the rest of the market with heavy-handed laws.
Hidden behind his meetings with global regulators is a sinister agenda to expand OpenAI’s products all over the world with as little regulatory oversight as possible. Reports have emerged that Altman lobbied the EU to water down its stringent AI Act to give OpenAI a freer hand in the data privacy-centric EEA. What’s worse, the strategy actually worked: the latest draft of the Act does not classify GPT as a high-risk system, in line with OpenAI’s requests.
Under the new draft, providers of foundation models need only comply with a small handful of requirements, rather than the stringent regulation they would have faced as high-risk systems. Sarah Chander, a senior policy advisor at European Digital Rights, said of the move,
“They got what they asked for…OpenAI, like many Big Tech companies, have used the argument of utility and public benefit of AI to mask their financial interest in watering down the regulation.”
While Altman has publicly asked for the AI field to be regulated as a whole, he appears to be ensuring that exceptions can be carved out for OpenAI’s financial gain. This would allow OpenAI to ‘self-regulate’ while other companies bow to regulators’ demands. Now, it seems, the FTC has caught on to this game, going after the biggest fish in the sea for its first catch.
With the inquiry into OpenAI, the FTC has indirectly shown that it has seen through Altman’s guise, striking directly at the heart of the matter. The company is already under fire over multiple copyright claims, which the FTC has used as an inroad to raise concerns over OpenAI’s handling of personal information. All in all, a storm seems to be brewing on the horizon for OpenAI, and Altman is at the centre of it.