ChatGPT accused of breaking data protection rules

An Italian investigation into privacy concerns has given ChatGPT 30 days to defend itself.


Italy’s Data Protection Authority (GPDP) says it has found data privacy violations related to the collection of personal data and to age protections, following an inquiry into OpenAI’s ChatGPT. OpenAI has 30 days to respond with a defense.

ChatGPT is an artificial intelligence (AI) chatbot that can engage in conversations with users and answer their questions. It does this using natural, human-like language, a trick accomplished by training the underlying model on large amounts of data from the internet.

Italy and ChatGPT have had a troubled relationship. In April 2023, OpenAI was happy to announce that its artificial intelligence chatbot was available again in Italy after a month-long block. To accomplish this, the company had to meet the demands of Italy’s regulators, who had temporarily blocked ChatGPT over privacy concerns.

But Italy didn’t rest its case there. The GPDP started an investigation into potential data privacy violations. Around the same time, the European Union’s European Data Protection Board (EDPB) set up a special task force to monitor ChatGPT.

The Italian authority has since concluded that the available evidence points to one or more potential data privacy violations, though it did not provide specific details. It added that its investigation would take into account the work done by the European task force.

In December 2023, the European Union (EU) reached a provisional deal on landmark EU rules governing the use of artificial intelligence. This agreement requires Large Language Models (LLMs) such as ChatGPT, as well as general purpose AI (GPAI) systems, to comply with transparency obligations before they are put on the market.

The GPDP noted two major concerns, which it voiced only in general terms:

  • The available evidence pointed to the existence of breaches of the provisions contained in the EU GDPR.
  • It is concerned that younger users may be exposed to inappropriate content generated by the chatbot.

For those reasons, Italy’s watchdog said it would like to see OpenAI:

“implementing an age verification system and planning and conducting an information campaign to inform Italians of what happened as well as of their right to opt-out from the processing of their personal data for training algorithms.”

ChatGPT is already blocked in a number of other countries, including China, Iran, North Korea, and Russia. That might turn out to be their loss; time will tell.

We shouldn’t forget that we are talking about a revolutionary technology that is only just scratching the surface of its potential and going through some growing pains like a child would—a child with the knowledge and vocabulary of a library, but no idea what a secret is, let alone how to keep one.

To demonstrate the point, Ars Technica found that ChatGPT was leaking private conversations that included login credentials and other personal details of unrelated users.

In November 2023, researchers published a paper reporting how ChatGPT could be prompted into revealing email addresses and other private data included in its training material. Those researchers warned that ChatGPT was the least private model they studied.

So, yes, it’s good to keep a keen eye on developments and pace them where needed. But we should not do this to the extent that we push the technology out of the public eye. A child needs to be supervised, not locked in a basement.

