US Govt ‘Snubs’ Musk and Zuckerberg, Keeps ’em Out of AI Safety Board

The DHS said that Zuckerberg and Musk’s exclusion had to do with social media companies being left off the board, but many don’t seem convinced.


This week, bigwigs from several leading AI companies, including OpenAI’s Sam Altman, NVIDIA’s Jensen Huang, and Microsoft’s Satya Nadella, joined the freshly minted Artificial Intelligence Safety and Security Board formed by the US government.

The board was formed partly as a countermeasure to several cases of deepfakes being used against politicians, celebrities and even children.

In Florida this week, police arrested an 18-year-old for generating sexually explicit images of women without their consent.

Likewise, the use of “nudification” programmes and GenAI to create deepfakes for blackmail or harassment has been pervasive, especially in American schools, as reported by The New York Times.

So, the formation of a federal board hosting some of the biggest names within the industry was a step in the right direction. However, there already seems to be bad blood around who gets to be on the board.

Several people pointed out that Meta CEO Mark Zuckerberg and Tesla CEO Elon Musk had not been included in the list of board members released by the Department of Homeland Security (DHS).

Theories abounded as to why the mighty duo was “snubbed”. Let’s take a look at what these companies have been doing in terms of safety and why the two may have been overlooked.

Why Sans Zuck and Musk?

Secretary of Homeland Security Alejandro Mayorkas clarified that Zuckerberg and Musk’s exclusion had to do with social media companies being left off the board. However, many don’t seem convinced.

Though YouTube and LinkedIn are seen as social media sites, the two did not start out as such, which is perhaps why their parent companies made the cut. Even so, that doesn’t fully explain why the DHS would exclude Zuckerberg and Musk solely for being social media companies.

The thing is, the two giants, being social media companies first and foremost, have had run-ins with several governments in the past.

Meta is set to face an EU probe for allegedly not doing enough to prevent Russian disinformation on Facebook. Other issues include a lack of curbs on ads promoting the aforementioned nudification apps.

Similarly, Media Matters had earlier released a report alleging that advertisements on X were appearing alongside antisemitic posts. This subsequently led to major advertisers pulling their ads from the platform and Musk retaliating with a lawsuit against the watchdog.

This doesn’t bode well for the duo’s stance on AI safety, especially with the board focusing on advising the DHS and other stakeholders on potential AI disruptions.

Moreover, Zuckerberg has been a big proponent of open-source AI, which is harder to regulate or advise on in terms of safety. Much of the AI-generated child sexual abuse material (CSAM) and other inflammatory content can be traced back to crowdsourced, open datasets.

Meanwhile, some believe that Musk wasn’t a first choice due to his unpredictability, which, considering his current plight with the SEC, is unsurprising.

While the companies involved with the safety board are by no means bastions of safety, they have been considerably more open to safety talks. 

In the recent child safety consortium that Meta, Google, Microsoft, OpenAI and other major players pledged themselves to, most companies, apart from Meta, had issues with people abusing GenAI on their services to generate CSAM.

This is a far more difficult issue to tackle than Meta’s, which was a lack of moderation of the defamatory and sexually explicit ads run on its platforms.

“We understand that people get around these safeguards all the time, and so we try to design a safe product…We’re not an advertising-based model, we’re not trying to get people to use it more and more,” Altman had said last year during his Senate hearing. 

Pro-active AI Safety Measures 

Advertising aside, these companies already have their own AI safety codes in place. OpenAI, for instance, has its Approach to AI Safety blog, which states that they “work to improve the model’s behaviour with techniques like reinforcement learning with human feedback, and build broad safety and monitoring systems”.
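For context on what that technique involves, below is a minimal, purely illustrative sketch of the pairwise preference training at the heart of reinforcement learning with human feedback. The embeddings, dimensions and reward model here are placeholders, not anything drawn from OpenAI’s actual pipeline.

```python
import torch
import torch.nn as nn

# Toy reward model trained on pairwise human preferences, the core idea behind RLHF.
# The response "embeddings" are random placeholders; in practice they would come
# from the language model being aligned.
reward_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

chosen = torch.randn(256, 128)    # embeddings of responses humans preferred
rejected = torch.randn(256, 128)  # embeddings of responses humans rejected

for step in range(100):
    # Bradley-Terry style loss: the preferred response should score higher.
    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final pairwise loss: {loss.item():.3f}")
```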

Likewise, the others included on the list have their variants of the AI safety framework. But whether they work or not is another conversation altogether.

Last year, a research paper from Stanford found that the LAION-5B dataset, which was used to train several text-to-image models, including Stable Diffusion and Google’s Imagen, included CSAM.

Similarly, big tech has had an influx of bad-faith actors who attempt to circumvent existing GenAI guardrails, which is just as hard to curb.

With a rising concern about deepfakes used to generate CSAM, there has been plenty of independent research on how to mitigate these issues, including watermarking.

Companies like Adobe and Google have adopted watermarking practices in the form of Content Credentials and SynthID. OpenAI has done the same with DALL·E 3, making use of the C2PA standard.

However, while these provide context on whether an image is AI-generated, as well as tamper-evident metadata, they rectify only a minor concern in the overall deepfake debate. Essentially, slapping a band-aid on a gaping wound.
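To give a sense of how such provenance metadata becomes tamper-evident, here is a minimal sketch using only Python’s standard library. Real C2PA Content Credentials rely on signed manifests with certificate chains rather than the shared-secret HMAC used here, and every name below is illustrative.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; real C2PA manifests use certificate-based signatures,
# not a shared secret. This only illustrates the tamper-evidence idea.
SIGNING_KEY = b"issuer-private-key-placeholder"

def create_manifest(image_bytes: bytes, generator: str) -> dict:
    """Bind a provenance claim to the exact image bytes via a hash."""
    claim = {
        "generator": generator,  # e.g. which model produced the image
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Detect tampering with either the image or the claim."""
    claim = manifest["claim"]
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(expected, manifest["signature"])
    content_ok = claim["content_hash"] == hashlib.sha256(image_bytes).hexdigest()
    return signature_ok and content_ok

if __name__ == "__main__":
    image = b"\x89PNG...fake image bytes for the example"
    manifest = create_manifest(image, generator="example-image-model")
    print(verify_manifest(image, manifest))              # True
    print(verify_manifest(image + b"edited", manifest))  # False: content changed
```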

Not to mention that social media websites have systems in place to strip media of metadata to prevent access to a user’s location and other details.
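A small sketch of why that matters, assuming Pillow is installed and using hypothetical file names: simply re-encoding an upload discards the metadata that provenance schemes depend on.

```python
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-encode an image without carrying over EXIF metadata,
    roughly what happens when platforms re-process uploads."""
    with Image.open(src_path) as img:
        # Copy only the pixel data; metadata attached to the original file
        # (EXIF, and any provenance tags stored there) is dropped.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

if __name__ == "__main__":
    strip_metadata("original.jpg", "stripped.jpg")  # hypothetical file names
    with Image.open("stripped.jpg") as img:
        print(dict(img.getexif()))  # typically {} after re-encoding
```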

Another paper from 2022 suggests the creation of a CSAM database to train GenAI on how to detect potential CSAM. However, given the sensitivity of such material, the authors suggest extracting attributes from the dataset, which can then be used to train models to recognise potentially explicit content without exposing them to the raw images.
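As a rough illustration of training on derived attributes rather than raw material, here is a sketch using scikit-learn with synthetic placeholder vectors; the attribute-extraction step itself, which the paper describes, is assumed to happen elsewhere under appropriate safeguards.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data: in the approach described above, attribute vectors would be
# derived from a vetted database, never from raw images handled by the model
# developer. Random vectors here only show the shape of the training loop.
rng = np.random.default_rng(0)
attributes = rng.normal(size=(1000, 64))   # hypothetical extracted attributes
labels = rng.integers(0, 2, size=1000)     # 1 = flagged, 0 = benign (synthetic)

X_train, X_test, y_train, y_test = train_test_split(
    attributes, labels, test_size=0.2, random_state=0
)

# A simple detector trained only on the derived attributes.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")  # ~0.5 on random data
```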

That makes for an interesting concept. However, while this is another step in the right direction, it still leaves a lot to be considered.

With more emphasis on AI safety, a combination of independent research and big tech adoption, much like in the case of watermarking, seems to be a sound approach to mitigating the widespread use of deepfakes across the internet.

Donna Eva

Donna is a technology journalist at AIM, hoping to explore AI and its implications in local communities, as well as its intersections with the space, defence, education and civil sectors.