ChatGPT is violating your privacy, says major GDPR complaint

Ever since the first generative artificial intelligence (AI) tools exploded onto the tech scene, there have been questions over where they’re getting their data and whether they’re harvesting your private data to train their products. Now, ChatGPT maker OpenAI could be in hot water for exactly these reasons.

According to TechCrunch, a complaint has been filed with the Polish Office for Personal Data Protection alleging that ChatGPT violates a large number of rules found in the European Union’s General Data Protection Regulation (GDPR). It suggests that OpenAI’s tool has been scooping up user data in all sorts of questionable ways.

The complaint says that OpenAI has broken the GDPR’s rules on lawful basis, transparency, fairness, data access rights, and privacy by design.

These are serious charges. After all, the complainant is not alleging that OpenAI breached just one or two rules, but that it contravened a multitude of protections designed to stop people’s private data from being used and abused without their permission. Seen one way, it amounts to an almost systematic flouting of the rules protecting the privacy of millions of users.

Chatbots in the firing line

It’s not the first time OpenAI has found itself in the crosshairs. In March 2023, it ran afoul of Italian regulators, leading to ChatGPT getting banned in Italy for violating user privacy. It’s another headache for the viral generative AI chatbot at a time when rivals like Google Bard are rearing their heads.

And OpenAI is not the only chatbot maker raising privacy concerns. Earlier in August 2023, Facebook owner Meta announced that it would start making its own chatbots, leading to fears among privacy advocates over what private data would be harvested by the notoriously privacy-averse company.

Breaches of the GDPR can lead to fines of up to 4% of a company’s global annual turnover, meaning OpenAI could face a massive penalty if the complaint is upheld. If regulators find against OpenAI, it may also have to amend ChatGPT until it complies with the rules, as happened to the tool in Italy.

Huge fines could be coming

The Polish complaint was filed by security and privacy researcher Lukasz Olejnik, who first became concerned when he used ChatGPT to generate a biography of himself and found it riddled with factually inaccurate claims and information.

He then contacted OpenAI, asking for the inaccuracies to be corrected, and also requested the information OpenAI had collected on him. However, he states that OpenAI failed to deliver all the information it is required to provide under the GDPR, suggesting that it was being neither transparent nor fair.

The GDPR also states that people must be allowed to correct the information that a company holds on them if it is inaccurate. Yet when Olejnik asked OpenAI to rectify the erroneous biography ChatGPT wrote about him, he says OpenAI claimed it was unable to do so. The complaint argues that this suggests the GDPR’s rule “is completely ignored in practice” by OpenAI.

It’s not a good look for OpenAI, as it appears to be infringing numerous provisions of an important piece of EU legislation. Since it could potentially affect millions of people, the penalties could be very steep indeed. Keep an eye on how this plays out, as it could lead to massive changes not just for ChatGPT, but for AI chatbots in general.

Alex Blake