EU data protection poses major challenges for OpenAI and ChatGPT



Summary

Italy's data protection authority has set a deadline and conditions for the reinstatement of ChatGPT in the country. Some of these conditions could pose significant challenges for OpenAI.

OpenAI has until April 30 to meet the conditions set by the Italian data protection authority, the “Garante”, or at least credibly commit to meeting them.

These include implementing an age check at registration by the end of May and a full age verification system by September 30, filtering out users under 13 as well as users between 13 and 18 who lack parental consent.

In addition, OpenAI must transparently explain how and for what purpose data will be processed and obtain users’ consent to do so. OpenAI should be able to meet both of these requirements. Two additional requirements are much more challenging.


OpenAI must run an awareness campaign about the use of personal data for AI, and correct or remove inaccurate information

The Italian Data Protection Authority is requiring OpenAI to run an awareness campaign on TV, in newspapers, and online about the use of Italian citizens’ personal data to train its algorithms. The campaign is scheduled for May 15 and must be approved by the Garante.

The biggest headache for OpenAI is likely to be the Garante’s requirement that generated personal data containing false information must be corrected or deleted at the data subject’s request. This applies even if that person does not use ChatGPT. Deletion is required when correction is “technically unfeasible”.

A set of additional measures concern the availability of tools to enable data subjects, including non-users, to obtain rectification of their personal data as generated incorrectly by the service, or else to have those data erased if rectification was found to be technically unfeasible.

OpenAI will have to make available easily accessible tools to allow non-users to exercise their right to object to the processing of their personal data as relied upon for the operation of the algorithms. The same right will have to be afforded to users if legitimate interest is chosen as the legal basis for processing their data.

Garante

Currently, if you ask ChatGPT for the biography of a lesser-known person (when I ask for my own, for example), the output is full of incorrect information, and it is different every time it is generated. Because of the way the model works, predicting text one token (a word or subword fragment) at a time, it will probably be easier for OpenAI to filter out such requests than to correct their output.
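What such a filter might look like can be sketched in a few lines. The snippet below is purely illustrative and not OpenAI's actual mechanism: it assumes a hypothetical blocklist of people who have objected to the processing of their data and simply refuses biography-style requests that mention them, instead of trying to correct the model's output.

```python
import re

# Hypothetical blocklist of data subjects who have objected to the
# processing of their personal data. In practice this would be fed by
# a GDPR request-handling system, not hard-coded.
OBJECTIONS = {"Jane Doe", "Max Mustermann"}

REFUSAL = ("I can't provide personal information about this person "
           "because they have objected to the processing of their data.")

def screen_prompt(prompt: str) -> str | None:
    """Return a refusal if the prompt names a protected person, else None."""
    for name in OBJECTIONS:
        # Word boundaries so "Jane Doe" does not match "Jane Doeberg".
        if re.search(rf"\b{re.escape(name)}\b", prompt, re.IGNORECASE):
            return REFUSAL
    return None  # prompt may proceed to the model

print(screen_prompt("Write a short biography of Jane Doe."))  # refusal text
print(screen_prompt("Explain how transformers work."))        # None
```

A refusal of this kind sidesteps the generation problem entirely; actually correcting what the model has learned about a person is the much harder problem discussed next.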

If the Italian DPA decides that personal data cannot be used to train algorithms, OpenAI would face a Herculean task. Personal data from the training material exists only as abstract representations in the model’s weights; it is unlikely that OpenAI could detect and remove it with reasonable effort. Presumably, the company would have to train new models on special EU-compliant datasets or develop new methods for handling personal data within large, complex AI models.

Italy could become an AI privacy role model for Europe, and that could be bad for OpenAI

If other European countries follow Italy’s lead and demand similar measures from OpenAI, as already appears likely, the company faces a turbulent few weeks with many uncertainties. A (temporary) withdrawal from Europe seems possible.
