
Biden’s AI Executive Order Faces Backlash 

The swift implementation of laws around AI in the US has been met with criticism from the open source and research community.


US President Joe Biden has signed an executive order on AI, which carries the weight of law without requiring congressional approval.

Clem Delangue, Co-founder and CEO of Hugging Face, posted on X saying, “Compute or model size thresholds for AI building would be like counting the lines of code for software building.”

Richard Socher, CEO of you.com, also said that regulation shouldn’t apply to foundational models and research but to the applications of AI. It is applications that pose the real risks, including privacy, legal, and security concerns, which are not addressed in the executive order.

Andrew Ng said, “There are definitely large tech companies that would rather not have to try to compete with open source, so they’re creating fear of AI leading to human extinction.” He accused big tech companies of grossly exaggerating the risks of AI “because they want to dominate the market.”

This reasoning is echoed by Yann LeCun as well. He recently posted on X that some big tech companies were lobbying in an attempt to capture AI regulation and, with it, the AI industry. He said that it isn’t AI research or development that needs to be regulated, but its applications.

What’s in it for Big Tech

Alongside the executive order, 15 major tech companies have agreed to implement voluntary AI safety commitments. The government, however, has said “it is not enough.”

On signing the order, Biden said, “To realise the promise of AI and avoid the risk, we need to govern this technology.” He claimed that, “In the wrong hands, AI can make it easier for hackers to exploit vulnerabilities in the software that makes our society run.” The order is expected to serve only as a stopgap until Congress formulates long-term legislation on the technology.

Further, the order mandates that developers of powerful AI models must share their safety test results with the government, ensuring secure deployment. Simultaneously, the National Institute of Standards and Technology will set standardised rules to guide AI system development.

An ‘AI Bill of Rights’ will protect against potential AI-related harms, emphasising privacy, equity, and worker support. To maintain AI leadership, significant investments are being channelled into research and development, and policies are being crafted to ensure AI’s responsible and ethical use across governmental functions, aiming for societal benefit while curbing potential negative impacts, reads the order.

Currently, the order targets technologies that have already been deployed. These, “I think, are the ones that we’re really concerned about,” said Nicol Turner Lee, the director of the Center for Technology Innovation. The rigorous testing and impact assessments required by the order could set back small companies while having little effect on the larger ones.

Developers of any AI model whose training required more than 1e26 floating-point operations or 1e23 integer operations must report to the government. This threshold sits just above the largest existing model, OpenAI’s GPT-4, so for now it will affect only big tech companies like OpenAI and Google. However, there is no clear mechanism for how this will be monitored.
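To get a feel for what the 1e26-operation threshold means in practice, here is a rough sketch using the common ~6·N·D approximation for dense transformer training compute (N parameters, D training tokens). The approximation and the model sizes below are illustrative assumptions, not figures from the order or from any disclosed training run.

```python
# Back-of-envelope check against the order's reporting threshold.
# Assumes the common ~6 * parameters * tokens estimate for dense
# transformer training compute; real training runs will vary.

REPORTING_THRESHOLD = 1e26  # total operations, per the executive order


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * tokens."""
    return 6.0 * params * tokens


def must_report(params: float, tokens: float) -> bool:
    """Would a run of this size cross the reporting threshold?"""
    return training_flops(params, tokens) > REPORTING_THRESHOLD


# A hypothetical 70B-parameter model on 2T tokens: ~8.4e23 ops, well under.
print(must_report(70e9, 2e12))   # False
# A hypothetical 2T-parameter model on 10T tokens: ~1.2e26 ops, just over.
print(must_report(2e12, 10e12))  # True
```

The gap between those two numbers illustrates why, as the article notes, only the very largest frontier runs would trigger reporting for now.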

In comparison with the EU’s AI Act

The executive order will take time to implement, and even then it is unclear how its provisions will be monitored and enforced. The EU AI Act, by contrast, is a more comprehensive document formulating guidelines on AI development.

The key difference between the two is that the EU AI Act places a strong emphasis on transparency and accountability. It requires AI developers to disclose information about their AI systems, such as how they were developed and how they work. It also requires AI developers to take steps to ensure that their AI systems are accountable and that they can be held responsible for any harm that they cause.

The Biden AI executive order also places an emphasis on transparency and accountability, but it is not as prescriptive as the EU AI Act. The Biden AI executive order encourages AI developers to adopt voluntary transparency and accountability measures, but it does not require them to do so.

The EU AI Act applies to all AI systems, regardless of their size or complexity. The Biden AI executive order, on the other hand, only applies to certain types of AI systems, such as those that are used by the government or that pose a high risk to public safety.


K L Krithika

K L Krithika is a tech journalist at AIM. Apart from writing tech news, she enjoys reading sci-fi and pondering impossible technologies, trying not to confuse them with reality.