OpenAI Wants You to Fix Their Cybersecurity Problems

OpenAI announces a $1M Cybersecurity Grant Program: another route to open-sourcing.

Hot on the heels of offering $1M grants for AI democratisation, OpenAI is back with another million-dollar grant, this time for cybersecurity. The company has launched its Cybersecurity Grant Program, which aims to enable the creation and advancement of AI-powered cybersecurity tools and technologies.

With a series of announcements inviting public contributions, OpenAI is going down the open-source route without actually open-sourcing its products.

In a move to unite 'defenders' against cyber threats, the Cybersecurity Grant Program will work with individuals and organisations who can help build such a robust system. OpenAI believes this will shift the balance of power in cybersecurity from attackers to defenders. The program aims to empower defenders by giving them access to the latest and most advanced AI capabilities, and, in the process, to develop methods and metrics for assessing the capabilities of AI-based cybersecurity systems.

Reiterating Initiatives 

This is not the first time OpenAI has invited outsiders to help with its security. In April, the company ran a Bug Bounty Program inviting people to report vulnerabilities, bugs and security flaws in ChatGPT, specifically calling on security researchers, ethical hackers and technology enthusiasts to participate. The rewards ranged from $200 to $20,000, depending on the severity of the discovery.

OpenAI has also said that work under the Cybersecurity Grant Program shall be 'licensed or distributed for maximal public benefit and sharing'. This could hint at a future where the company offers these learnings to the broader community in any way it deems necessary.

It is interesting to note that instead of spending internal resources to develop mechanisms for building a secure system, OpenAI has chosen an open-source-like method. By crowdsourcing crucial elements such as AI regulation frameworks and cybersecurity defences, the company is likely to come across as compliant and acting in the public's interest. All of this follows the recent Senate hearings. OpenAI has also been rumoured to be working on its own open-source model, though no details have emerged so far.
