Recently, several top cryptocurrency exchanges and blockchain networks were charged in a bevy of cases filed by the United States Securities and Exchange Commission (SEC). The regulatory body has accused parties in the crypto market of fraud, the unlicensed sale of securities, and compliance avoidance. It appears that regulation has finally caught up with the so-called ‘Wild West’ of crypto-land.
While this move has created a ripple effect across the crypto market, it holds a lesson for AI as well. AI is currently at the peak of its hype cycle, boasting record investments from VCs and undertakings by multi-billion-dollar companies. However, calls for regulation have already begun to be heard, as seen in the recent petition to pause AI training beyond GPT-4.
Regulation for AI
Ever since ChatGPT showed the world what LLMs are capable of, the conversation around AI regulation has steadily been gaining ground. We first saw this when OpenAI CEO Sam Altman remarked in a tweet that we “definitely need more regulation on AI”. Indeed, Altman appears to favour AI regulation, stating in his recent interview with Lex Fridman,
“Part of the reason for deploying like this is to get the world to have time to adapt and to reflect and to think about this to pass regulation for institutions to come up with new norms.”
However, some of his peers don’t seem to share this intent, and are calling for a halt to AI innovation rather than balanced regulation. A recent open letter published on the Future of Life Institute website has called on all AI labs to pause for six months the training of AI systems more powerful than GPT-4. Signatories to the letter include Elon Musk, Steve Wozniak, Yoshua Bengio, Gary Marcus and over 1,000 others, but one name is conspicuously missing: Yann LeCun, who has taken a stance of his own.
Earlier today, Meta AI’s VP and Chief AI Scientist compared the six-month pause to the Catholic Church banning the printing press and movable type in the 15th century. He stated that while the press enabled Protestantism and religious conflicts, it also enabled the Enlightenment, thereby leading to “literacy, education, science, philosophy, secularism, and democracy”.
LeCun has also been vocal in his statements against the direction currently being taken by AI researchers, asserting that they are moving in the wrong direction. In an interview, he stated, “We see a lot of claims as to what we should do to push forward towards human-level AI, and there are ideas which I think are misdirected”.
As seen in OpenAI’s research on the subject, current AI technology is not disruptive enough to warrant regulation. However, the EU has already anticipated the emergence of more advanced AI models, as is apparent from its definition of general purpose AI: “AI systems that have a wide range of possible uses, both intended and unintended by the developers”.
While current models like GPT-4 might not fall under this definition, since they still interact only through the medium of text, continued innovation will push AI systems to the point where they do. When that moment comes, the AI field must be ready for the incoming wave of regulation, which can either mould the market or shatter it into a thousand pieces, just as it did with crypto.
What AI can learn from crypto
The cryptocurrency market was constantly rocked by bad news and announcements of lawsuits last week, as the SEC finally got around to cracking down on it.
Crypto exchanges Coinbase, Binance, Beaxy, Kraken, and Gemini Trust, along with personalities like Tronix founder Justin Sun, Luna founder Do Hyeong Kwon, and Binance CEO Changpeng Zhao, were merely some of those caught in the SEC’s wide net. In a matter of days, the crypto market was brought to its knees by regulators, undoubtedly hampering future innovation in the field and setting back current progress considerably.
US regulators have also begun to eye AI with similar intent, as seen in the Federal Trade Commission’s (FTC) views on the topic. In a blog post titled ‘Keep your AI claims in check’, the advertising regulator laid out pointers aimed at curtailing the use of AI as a buzzword. The post asks companies not to exaggerate what their AI products can do, not to promise that an AI product performs better than a non-AI product, and to accurately convey the risks associated with using AI.
Now that regulators have witnessed what a tech-fuelled, unregulated market looks like in crypto, AI might soon shape up to be the next target. The EU has already proposed the Artificial Intelligence Act, taking on challenges of data quality, transparency, and human oversight, along with the ethical questions associated with the field.
There is also a burning need for regulatory bodies made specifically for artificial intelligence and its associated technologies. Much like the SEC exists to protect investors from non-transparent financial instruments, an AI regulatory body should prevent users from being taken advantage of by unregulated AI.
To gauge the impact of AI algorithms, the AI Act proposes a risk tier system, with systems classified as minimal, limited, high, or unacceptable risk. Under this regulation, ‘unacceptable’ systems, such as real-time biometric identification and social credit systems à la China, would be banned in the EU, with ‘high risk’ systems subject to stringent regulation.
As with any regulatory move, bodies must strike a balance between innovation and regulation. However, companies must also prepare for an eventual regulatory D-Day, so as not to fall prey to overnight regulatory moves the way crypto did.