On 25 June 2020, the European Parliament published a study on the relationship between the General Data Protection Regulation (GDPR) and artificial intelligence (AI). Authored by the Panel for the Future of Science and Technology, the study considers the opportunities and challenges for individuals and society, and the ways in which risks can be countered and opportunities enabled through law and technology.
The report urges controllers engaging in AI-based processing to endorse the values of the GDPR and to adopt a responsible, risk-oriented approach. Controllers should be able to do so in a way that is compatible with the available technologies and with economic profitability, or with the sustainable pursuit of the public interest.
Furthermore, the study calls on data protection authorities to promote a broad social debate on AI applications and to provide high-level guidance. In this regard, data protection authorities need to actively engage in a dialogue with all stakeholders, including controllers, processors and civil society, to develop appropriate responses based on shared values and effective technologies.
The study includes the following recommendations:
- Controllers and data subjects should be provided with guidance on how AI can be applied to personal data in a manner consistent with the GDPR, and on the available technologies for doing so. This can prevent costs linked to legal uncertainty while enhancing compliance.
- A broad debate is needed, involving not only political and administrative authorities but also civil society and academia. This debate needs to address the issues of determining what standards should apply to AI processing of personal data, particularly to ensure the acceptability, fairness and reasonableness of decisions on individuals.
- Data protection authorities should provide controllers with guidance on the many issues for which no precise answer can be found in the GDPR. Such guidance could take the form of soft law instruments drafted with dual legal and technical competence.
- The fundamental data protection principles – especially purpose limitation and minimisation – should be interpreted in such a way that they do not exclude the use of personal data for machine learning purposes.
- Guidance is needed on profiling and automated decision-making. Controllers should also be under an obligation to provide individual explanations, to the extent that this is possible given the AI technology adopted and reasonable in light of costs and benefits.
- The content of controllers’ obligation to provide information about the ‘logic’ of an AI system (and of data subjects’ corresponding rights) needs to be specified, with appropriate examples, for different technologies.
- Strong measures need to be adopted against companies and public authorities that intentionally abuse the trust of data subjects by misusing their personal data.
- Collective enforcement in the data protection domain should be enabled and facilitated.
The study concludes that the consistent application of data protection principles, when combined with the ability to use AI technology efficiently, can contribute to the success of AI applications by generating trust and preventing risks.
The study is accessible here.
Please note: The information contained in this note is for general guidance on matters of interest, and does not constitute legal advice. For any enquiries, please contact us at [email protected].