OpenAI supplements bug bounty program with data security

Security researchers can now probe the ChatGPT agent for data-protection flaws in exchange for monetary rewards.

ChatGPT app on a smartphone (Image: Tada Images/Shutterstock.com)


To prevent attackers from exploiting OpenAI's ChatGPT agent through prompt injection, the software company has now publicly launched another bug bounty program to enhance data security. Previously, security researchers could only participate by invitation.

According to an OpenAI post, the new Safety Bug Bounty Program runs in parallel with, and supplements, the existing Security Bug Bounty Program for finding security vulnerabilities.

OpenAI's primary concern here is the safety of AI agents: if participants in the program find ways to extract sensitive user data from ChatGPT via prompts, the report qualifies as a valid case and is rewarded with a monetary prize.

However, security researchers must adhere to certain rules. For example, the behavior that triggers a data leak must be reproducible in at least 50% of cases; to demonstrate this, participants must provide detailed step-by-step instructions, among other requirements. Naturally, no laws may be broken in the process.
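To illustrate the reproducibility bar, a minimal sketch of how one might tally replay attempts against the 50% threshold. This is purely illustrative: `run_attempt` results are simulated booleans here, standing in for actually replaying the reporter's step-by-step instructions against the agent; none of these names come from OpenAI or Bugcrowd.

```python
# Hypothetical sketch: checking whether a reported prompt-injection
# behavior reproduces in at least 50% of replay attempts.

def reproduction_rate(results):
    """Fraction of attempts in which the data leak was observed."""
    if not results:
        return 0.0
    return sum(1 for leaked in results if leaked) / len(results)

def meets_bounty_bar(results, threshold=0.5):
    """True if the behavior reproduced in at least `threshold` of attempts."""
    return reproduction_rate(results) >= threshold

# Simulated outcomes of ten replay attempts (True = leak reproduced).
attempts = [True, False, True, True, False, True, False, True, True, False]

print(reproduction_rate(attempts))  # 0.6
print(meets_bounty_bar(attempts))   # True
```

With six of ten simulated attempts reproducing the leak, the 0.6 rate clears the 50% bar; four of ten would not.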


Depending on the severity of the vulnerability, prizes range from 250 to 5,500 US dollars. According to the project page on the bug bounty platform Bugcrowd, two cases have been rewarded so far; the program has only recently become publicly available. The bug bounty program for security vulnerabilities, by contrast, already lists 416 documented cases, with an average payout of around 590 US dollars and a maximum of up to 100,000 US dollars. Here, too, security researchers must satisfy many rules and prerequisites to participate successfully.

(des)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.