OpenAI announces bug bounty program to address AI security risks



OpenAI, a leading artificial intelligence (AI) research lab, announced today the launch of a bug bounty program to help address growing cybersecurity risks posed by powerful language models like its own ChatGPT.

The program, run in partnership with the crowdsourced cybersecurity company Bugcrowd, invites independent researchers to report vulnerabilities in OpenAI's systems in exchange for financial rewards ranging from $200 to $20,000, depending on severity. OpenAI said the program is part of its "commitment to developing safe and advanced AI."

Concerns have mounted in recent months over vulnerabilities in AI systems that can generate synthetic text, images and other media. Researchers found a 135% increase in AI-enabled social engineering attacks from January to February, coinciding with the adoption of ChatGPT, according to AI cybersecurity firm DarkTrace.

While OpenAI's announcement was welcomed by some experts, others said a bug bounty program is unlikely to fully address the wide range of cybersecurity risks posed by increasingly sophisticated AI technologies.


The program's scope is limited to vulnerabilities that could directly impact OpenAI's systems and partners. It does not appear to address broader concerns over malicious uses of such technologies, like impersonation, synthetic media or automated hacking tools. OpenAI did not immediately respond to a request for comment.

A bug bounty program with limited scope

The bug bounty program comes amid a spate of security concerns, with GPT-4 jailbreaks emerging that enable users to develop instructions on how to hack computers, and researchers discovering workarounds for "non-technical" users to create malware and phishing emails.

It also comes after a security researcher known as Rez0 allegedly used an exploit to hack ChatGPT's API and discover over 80 secret plugins.

Given these controversies, launching a bug bounty platform offers an opportunity for OpenAI to address vulnerabilities in its product ecosystem, while positioning itself as an organization acting in good faith to address the security risks introduced by generative AI.

Unfortunately, OpenAI's bug bounty program is very limited in the scope of threats it addresses. For instance, the bug bounty program's official page notes: "Issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded unless they have an additional directly verifiable security impact on an in-scope service."

Examples of safety issues that are considered out of scope include jailbreaks and safety bypasses, getting the model to "say bad things," getting the model to write malicious code, or getting the model to tell you how to do bad things.

In this sense, OpenAI's bug bounty program may be good for helping the organization improve its own security posture, but it does little to address the security risks introduced by generative AI and GPT-4 for society at large.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.


