France Update: AI and Health: What About the Protection of Personal Data?
08/09/2023

New technologies and artificial intelligence (AI) have the power to significantly improve doctors’ daily work and patients’ prognoses when they draw on patients’ personal health data. The rise of assisted surgical procedures, companion robots, smart prostheses and personalized treatments made possible by cross-referencing this personal data attests to this.
However, the use of AI in healthcare also raises important legal and ethical issues, including the management and confidentiality of patients’ personal data and the transparency of algorithms. The challenge is therefore to combine the use of AI with a responsible and ethical approach.
ARTIFICIAL INTELLIGENCE AT THE SERVICE OF MEDICAL DIAGNOSIS
One of the most important applications of AI is diagnostic support. AI can be trained to recognize the warning signs of diseases. In medical imaging in particular, healthcare professionals can use machine learning algorithms to analyze medical images, detect abnormalities and diagnose pathologies early.
AI can also be used to analyze patients’ biological data and medical history, comparing them against a knowledge base to provide recommendations or treatment suggestions, and to analyze large volumes of clinical data to help predict the outcomes of treatments or interventions.
Finally, it can automatically extract relevant information from electronic medical records, speeding up analysis and allowing healthcare professionals to make informed decisions more quickly.
In this context, several projects have emerged, such as the French project “Automatic processing of emergency room summaries”, known as “TARPON”, which aims to analyze the origin of the trauma suffered by patients presenting to the emergency room in order to shed light on possible risks, such as those related to taking certain medications. The AI analyzes annotated information from patients’ clinical reports to classify trauma-related emergency room visits and, ultimately, to build a near-comprehensive trauma surveillance system.
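By way of illustration only, the snippet below sketches the kind of text-classification step such a project might rely on: a model trained on annotated emergency room summaries that assigns new summaries to a trauma category. It is a minimal, hypothetical example; the texts, labels and model choice are invented for illustration and do not reflect TARPON’s actual pipeline.

```python
# Hypothetical sketch: classifying emergency-room summaries by trauma cause.
# This is NOT the TARPON pipeline; the texts, labels and model choice are
# illustrative assumptions only. Real projects rely on much larger annotated
# corpora and must comply with the GDPR safeguards discussed below.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: (free-text summary, trauma category)
summaries = [
    "fall from ladder at home, wrist fracture",
    "road traffic accident, motorcycle, head injury",
    "accidental ingestion of sleeping pills by elderly patient",
    "fall in bathroom after dizziness, hip pain",
]
labels = ["domestic_fall", "road_accident", "medication_related", "domestic_fall"]

# Bag-of-words features feeding a simple linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(summaries, labels)

# Classify a new, unseen summary
print(model.predict(["patient fell down stairs at home, shoulder trauma"]))
```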
In the United States, a predictive model, NYUTron, was developed using millions of medical observations drawn from the records of patients treated in hospitals affiliated with New York University (medical reports, notes on the evolution of the patient’s condition, radiological images, etc.) between January 2011 and May 2020. NYUTron was able to identify in advance 95% of patients who died in hospital, as well as 80% of those who were readmitted within a month of discharge.
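For readers wondering how such percentages are measured, figures like “95% of patients who died” or “80% of those readmitted” correspond to the model’s recall (sensitivity): the share of actual cases that the model flagged in advance. The toy example below illustrates the calculation with invented data, entirely unrelated to the actual NYUTron study.

```python
# Illustrative only: how a "percentage of patients identified in advance"
# figure is typically computed as recall (sensitivity). The labels and
# predictions below are invented, not NYUTron data.
from sklearn.metrics import recall_score

# 1 = patient was readmitted within a month, 0 = not readmitted
actual    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
predicted = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]   # model's advance predictions

# recall = true positives / (true positives + false negatives)
print(recall_score(actual, predicted))  # 5 of 6 actual cases flagged -> 0.83
```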
THE LEGAL CHALLENGE OF ARTIFICIAL INTELLIGENCE IN THE FIELD OF HEALTH: THE PROTECTION AND SECURITY OF PATIENTS’ PERSONAL DATA
One of the main challenges of AI in health concerns the management of the massive volumes of health data it uses.
According to the General Data Protection Regulation (GDPR)¹, personal data concerning health is ‘all data relating to the state of health of a data subject which reveal information about the past, present or future state of physical or mental health of the data subject’.
Given the mass of health data processed by AI as part of the various existing projects, it is essential to ensure the application of the GDPR.
When AI involves the collection and use of personal health data for research or algorithm-improvement purposes, it is crucial to obtain patients’ informed consent to the use of their data in such projects. In addition, in accordance with Article 32 of the GDPR, the controller (e.g. the healthcare institution) is required to implement, from the design phase of the AI system onwards, all appropriate technical and organisational measures to ensure a level of security of health data appropriate to the risk. Furthermore, a Data Protection Impact Assessment (DPIA) is mandatory when the processing of personal data is likely to result in a high risk to the rights and freedoms of data subjects. This involves assessing the risks to data security (confidentiality, integrity and availability) as well as the potential impact on the persons concerned, in order to determine the appropriate protection and risk-mitigation measures.
The processing of health data by AI involves an indisputably high risk: the data is sensitive, it is collected on a large scale and used by algorithms whose reliability is not always known. Impact assessment is therefore not only mandatory but indispensable.
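As a purely illustrative example of one technical measure a controller might adopt under Article 32, the sketch below pseudonymises the direct identifiers in a patient record with a keyed hash before the health data is passed to an analysis pipeline. The field names and key handling are assumptions for illustration only; a real deployment would require documented key management, a lawful basis for processing and the impact assessment discussed above.

```python
# Minimal sketch of one possible technical measure under GDPR Article 32:
# pseudonymising direct identifiers with a keyed hash before health data is
# fed to an AI pipeline. Field names and key handling are illustrative
# assumptions; a real deployment needs documented key management and a DPIA.
import hashlib
import hmac
import os

# In practice the key comes from a secrets manager, never from source code.
PSEUDONYMISATION_KEY = os.environ.get("PSEUDO_KEY", "demo-key-only").encode()

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hmac.new(PSEUDONYMISATION_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {
    "patient_name": "Jane Doe",                   # direct identifier
    "social_security_no": "2 85 03 75 123 456",   # direct identifier
    "diagnosis": "type 2 diabetes",               # health data kept for analysis
}

# Only the pseudonymised token and the clinical content leave this step.
safe_record = {
    "patient_token": pseudonymise(record["patient_name"] + record["social_security_no"]),
    "diagnosis": record["diagnosis"],
}
print(safe_record)
```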
Aware of the importance of these issues, the CNIL recalled, in a communication entitled “AI: how to comply with the GDPR?” of April 5, 2022², the main principles of the French Data Protection Act and the GDPR to be followed, as well as its positions on certain more specific aspects. A joint declaration and action plan on generative AI were recently adopted by the data protection authorities of the G7 countries, meeting in Tokyo from 19 to 21 June 2023, to contribute to the development of AI while respecting fundamental rights³.
The French National Assembly also set up a fact-finding mission on AI and the protection of personal data in May 2023, led by the rapporteurs Philippe Pradal and Stéphane Rambaud⁴.
While AI offers extremely promising prospects for improving health services and patients’ daily lives, it remains crucial to combine its benefits with a responsible and ethical approach, in order to guarantee the protection and security of data, ensure the transparency of algorithms and guard against the discriminatory effects they may generate.
1 – Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)
2 – CNIL, AI: how to comply with the GDPR?, 5 April 2022
3 – CNIL, Generative AI: the G7 of data protection authorities adopts a joint declaration, 23 June 2023
4 – National Assembly, Committee fact-finding missions, Artificial intelligence and data protection, Mr Philippe Pradal, Mr Stéphane Rambaud
Ginestié Magellan Paley-Vincent, France, a Transatlantic Law International Affiliated Firm.
For further information or for any assistance please contact france@transatlanticlaw.com
Disclaimer: Transatlantic Law International Limited is a UK registered limited liability company providing international business and legal solutions through its own resources and the expertise of over 105 affiliated independent law firms in over 95 countries worldwide. This article is for background information only and provided in the context of the applicable law when published and does not constitute legal advice and cannot be relied on as such for any matter. Legal advice may be provided subject to the retention of Transatlantic Law International Limited’s services and its governing terms and conditions of service. Transatlantic Law International Limited, based at 42 Brook Street, London W1K 5DB, United Kingdom, is registered with Companies House, Reg Nr. 361484, with its registered address at 83 Cambridge Street, London SW1V 4PS, United Kingdom.