ChatGPT creator launches bug bounty program with cash rewards

ChatGPT isn’t quite so clever yet that it can find its own flaws, so its creator is turning to humans for help.

OpenAI unveiled a bug bounty program on Tuesday, encouraging people to locate and report vulnerabilities and bugs in its artificial intelligence systems, such as ChatGPT and GPT-4.

In a post on its website outlining details of the program, OpenAI said that rewards will range from $200 for low-severity findings up to $20,000 for what it called “exceptional discoveries.”

The Microsoft-backed company said that its ambition is to create AI systems that “benefit everyone,” adding: “To that end, we invest heavily in research and engineering to ensure our AI systems are safe and secure. However, as with any complex technology, we understand that vulnerabilities and flaws can emerge.”

Addressing security researchers interested in the program, OpenAI said it recognizes “the critical importance of security and view[s] it as a collaborative effort. By sharing your findings, you will play a crucial role in making our technology safer for everyone.”

With more and more people taking ChatGPT and other OpenAI products for a spin, the company is keen to quickly track down any potential issues to ensure the systems run smoothly and to prevent any weaknesses from being exploited for nefarious purposes. OpenAI therefore hopes that by engaging with the tech community it can resolve any issues before they become more serious problems.

The California-based company has already had one scare, when a flaw exposed the titles of some users’ conversations that should have stayed private.

Sam Altman, CEO of OpenAI, said after the incident last month that he considered the privacy mishap a “significant issue,” adding: “We feel awful about this.” The flaw has since been fixed.

The blunder became a bigger problem for OpenAI when Italy expressed serious concerns over the privacy breach and decided to ban ChatGPT while it carries out a thorough investigation. The Italian authorities are also demanding details of measures OpenAI intends to take to prevent it from happening again.

Trevor Mogg
Contributing Editor