Akin to a star-studded gathering of the crème de la crème of the tech industry, the latest AI Insight Forum held on Capitol Hill was nothing short of a billionaire conclave. The closed-door session brought together tech titans with a combined net worth of approximately $550 billion to delve into the future of AI, with a particular focus on the widely debated topic of ‘AI regulation.’ Unlike past AI Senate hearings that featured testimonies from OpenAI’s Sam Altman, Anthropic CEO Dario Amodei, and others, this session — which assembled luminaries such as Elon Musk and Bill Gates — was closed to the public and the media, raising the question: what crucial or controversial discussions unfolded behind closed doors?
AI Insight Forum. Source: The Guardian
Applause and Joy: What Next?
New York Senator Chuck Schumer, who was among the 60 senators who took part, called the meeting a ‘very productive first-ever AI Insight Forum’ and described the task ahead as an arduous one. Congress is seeking to pass bipartisan AI legislation within the next year that aims to mitigate the risks associated with AI.
The tech tycoons who attended the event lauded the session for promoting an open discussion on AI regulation. Sam Altman, Meta founder Mark Zuckerberg, NVIDIA chief Jensen Huang, Microsoft CEO Satya Nadella, Alphabet CEO Sundar Pichai, IBM CEO Arvind Krishna, Palantir CEO Alex Karp, former Google CEO Eric Schmidt, Hugging Face CEO Clement Delangue, and Tristan Harris, co-founder and executive director of the Center for Humane Technology, were among the big names in attendance. Elon Musk referred to the meeting as a service to humanity, calling it a very important event for the future of civilisation. Calling AI a ‘double-edged sword’, Musk also emphasised the need for a ‘referee’ to ensure companies act safely and protect the interests of the general public.
Zuckerberg also weighed in after the meeting, saying that Congress should engage with AI to promote innovation and establish protective measures, and that it is better for the government to work with big tech companies on such issues.
Striking a familiar note from previous AI discussions, both the leaders and the senators appeared to approve of regulations that could mitigate the dangers of AI. However, the question of how they would go about it remains open.
Licensing Behind Closed Doors
Senator Schumer said the closed forum facilitated an open discussion among the attendees, free of the time and format restrictions that are part of public hearings. He also said that some future forums will be open to the public.
With a legislative bill in mind, Pedro Domingos, computer science professor at the University of Washington, hinted at the possibility of introducing a licence for running large language models.
Licensing could cut two ways. On one hand, it would push big tech companies to approach AI development with caution and could serve as a catalyst for responsible AI. On the other, if obtaining a licence is mandated, AI development at smaller tech companies may be hindered, leaving them behind in the race. The trickiness of manoeuvring through AI regulations thus cannot be applied as a blanket across companies of all sizes. Interestingly, Sam Altman has in the past said that smaller companies should not face such regulation.
Recall that a few months ago a number of leaders signed a petition to slow AI development, demanding that OpenAI not train advanced models such as GPT-5. If any regulatory bill emerges from Congress, the ones most affected will be large corporations — hence the tech honchos convened.
In the past, the US government has been indecisive about regulating new technology. For instance, autonomous vehicles are still not fully regulated in the US. Despite accidents caused by these vehicles, there is no clear picture of any legislation for self-driving cars, and efforts by Congress to enact autonomous vehicle legislation have faced years of delay. Similarly, cryptocurrency regulations came into the picture only after the technology had been in the market for a few years, accumulating frauds and misplaced funds along the way. Against that backdrop, the US government’s regulatory approach towards AI appears preventive rather than reactive.
Trying to avoid previous mistakes, where the negative aftermath of a technology or product went unregulated at its nascent stage, the government is taking a proactive approach towards AI regulation. It looks to mitigate the technology’s risks before the situation goes south. Furthermore, with the countless AI doomsday predictions by AI researchers and experts, the matter is being treated with gravity. Interestingly, in a 2016 interview, Musk told Altman about the need to democratise AI technology and how AI could be used in harmful ways.
Although the Senate talks started four months ago, and a few companies such as OpenAI have launched initiatives to democratise AI, nothing concrete has come to fruition. Meanwhile, companies — not just in the US but across the globe — continue to release advanced models. It looks like, if any regulation is to happen, it had better happen fast.