Union Minister Rajeev Chandrasekhar recently commented on Sam Altman’s push for AI regulation, saying that the US and other governments should consider imposing regulation on products “above a crucial threshold of capabilities”.
According to Chandrasekhar, Altman is a smart man. “He has his own ideas about how AI should be regulated. We certainly think we have some smart brains in India as well and we have our own views on how AI should have guardrails,” he said in an interaction. However, he also said that if a global regulatory body on AI is formed at Altman’s behest, it would not stop India from doing what is right for its citizens.
The development comes at a time when Altman is being criticised for proposing barriers for upcoming AI startups in the name of regulation. As a Reddit user commented, “It’s naive to believe that the serial entrepreneur CEO of the most aggressively monetized and closed source AI company in the world is begging for regulations purely out of the goodness of the heart.”
Critics argue that Altman cunningly elevated OpenAI’s agenda above the welfare of the AI community, transforming the company into a star player within the Senate’s deliberations. Altman’s impeccable performance garnered him admiration and clout, leaving others like Gary Marcus in his shadow, struggling to make an impact with their questioning.
This episode ignited a fiery debate about the established powerhouses’ quest to manipulate technology through legal channels, thus securing their dominance over regulatory affairs. Altman’s smooth and strategic manoeuvres drew comparisons to the crafty tactics of a mastermind Bond villain, leaving us all shaken and stirred by the audacity of it all, as discussed earlier in our article Master Manipulator Altman Wants to be the AI Showrunner.
While Altman passionately calls for robust AI regulations worldwide, he finds himself caught between a rock and a hard place now that the moment of regulation has arrived. Expressing his concerns about the EU’s draft AI Act, the CEO remarked that even ChatGPT and its bigger sibling GPT-4 could be labelled as high-risk systems, which would force the company to jump through regulatory hoops. In a Time interview, the OpenAI CEO declared, “If we can comply, we will, and if we can’t, we’ll cease operating… We will try. But there are technical limits to what’s possible.”
However, threatening to cease operations in a country simply because the company dislikes its laws is not the way forward. Big tech companies like Meta have been practising the same tactic for a while now, and it is widely seen as disrespecting local laws.
Many netizens also believe that if a company refuses to disclose its training data, it should have zero government protections around its IP, and the AI it builds must be fully available to anyone who wants it for any commercial purpose, for free. “If you want those protections, you need to fully document what it was trained on, otherwise, you’re probably going to be stealing and laundering other IP you don’t own,” says one Reddit user.
Stole too much data?
However, some argue that even for a well-regarded nonprofit with altruistic objectives, simply providing information about data sources and collection methods can be far more challenging than non-data-savvy observers realise. This is true even for commonly understood artefacts such as the components of standard operational key performance indicators (KPIs).
Hence, for companies like OpenAI, tracing the lineage of every data point is particularly challenging. The data they use for operational purposes as well as for training their models is likely more disorganised and less well documented than even the harshest critics assume. Moreover, the conventional practices of data governance, which are somewhat established in other industries, do not easily apply to the management of machine learning training data.
In light of these complexities, experts believe it is highly probable that Altman has received responses from data advisors and analysts within the company explaining that the question at hand misunderstands this intricate context. Unless Altman is himself a data expert or has gained comprehensive knowledge of the subject, he may not fully grasp the scale of the project involved in disclosing such information. He likely understands enough, however, to realise that preparing an answer that satisfies both the data analysts and the legal team will not be quick or straightforward, and certainly cannot be accomplished within a couple of business days.
However, netizens are unlikely to buy this explanation, which, as per them, amounts to: “We [the companies] stole so much data we don’t even know where it came from. So let us off.” As one Reddit user put it, “It’d be a lot simpler to trace data provenance if they’d made any effort upfront to address these issues. But hey, profit!”