Introduction
The introduction of Artificial Intelligence (“AI”) technology creates new challenges and concerns, particularly in relation to data protection. It is important to understand the intersection between the use of AI and data protection, as there are risks that must be mitigated to comply with relevant data protection legislation.
In March, the Information Commissioner’s Office (“ICO”) in the United Kingdom (“UK”) released an updated guidance note on AI and Data Protection to clarify the requirements for fairness in AI. The update covers accountability and governance of AI, transparency, lawfulness, and fairness across the AI lifecycle. The guidance note will be useful to your organisation if you intend to apply AI tools to data sets.
Accountability and Governance Implications of AI
The guidance note points out that organisations must demonstrate, on an ongoing basis, how the risks arising from processing personal data in AI systems are managed. Organisations must consider undertaking a Data Protection Impact Assessment (“DPIA”) when incorporating AI tools, as some uses of AI are likely to result in a high risk to individuals’ rights and freedoms. The assessment must be made on a case-by-case basis. The outcomes of the DPIA must be comprehensively recorded, outlining the approach taken (a minimal record structure is sketched after the list below), including:
- a description of the nature, scope, context, and purposes of processing personal data;
- the impact of the processing on individuals;
- an assessment of necessity and proportionality;
- an assessment of the risks to individuals; and
- identification of mitigating measures.
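For teams building an internal register, the outcomes above can be captured in a structured record. The following is a minimal sketch only; the `DPIARecord` class and its field names are illustrative assumptions, not terminology from the guidance note.

```python
from dataclasses import dataclass, field

@dataclass
class DPIARecord:
    """Illustrative record of a DPIA outcome for an AI system."""
    # A description of the nature, scope, context and purposes of processing
    processing_description: str
    # The impact of the processing on individuals
    impact_on_individuals: str
    # An assessment of necessity and proportionality
    necessity_and_proportionality: str
    # An assessment of the risks to individuals
    risks_to_individuals: list[str] = field(default_factory=list)
    # Identified mitigating measures, mapped to the risks above
    mitigating_measures: list[str] = field(default_factory=list)

# Example usage: documenting a DPIA for a hypothetical AI-assisted screening tool
record = DPIARecord(
    processing_description="CV-screening model ranking job applicants",
    impact_on_individuals="Automated ranking may affect access to employment",
    necessity_and_proportionality="Manual review retained; only relevant fields used",
    risks_to_individuals=["Discriminatory ranking", "Excessive data collection"],
    mitigating_measures=["Bias testing before deployment", "Data minimisation review"],
)
```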
Transparency in AI
To comply with the principle of transparency, the guidance note requires organisations to be open about the personal data processed in the AI system. The controller must inform individuals of:
- the purposes of processing their personal data;
- the retention periods for that personal data; and
- with whom the personal data is shared.
Lawfulness in AI
The updated content in this chapter relates to the use of AI to make inferences about people or affinity groups[1], and to the use of AI in the processing of special category data. If AI systems are used to guess, make predictions, analyse, or find correlations between datasets, the outcomes may themselves constitute personal data. The same applies to affinity groups where such inferences reveal details that would be classified as special category data. For example, AI software may be used to assist doctors in making a medical diagnosis. During this process, the AI system may make inferences by recognising patterns in a data set, such as identifying that a combination of symptoms is associated with a particular medical condition. The inference the AI software makes to produce a medical condition may itself be personal data, and even special category data.
Essentially, lawfulness requires that appropriate steps be taken to implement the data protection principles effectively and to integrate safeguards into the processing, both at the design stage and throughout the lifecycle of the processing. It is therefore important to apply the data protection principles to the output data as well as the input data.
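In practice, that means a system’s outputs need the same handling as its inputs. The sketch below illustrates one way this could look in code: wrapping a model’s inferred diagnosis so that retention and special category safeguards attach to the output itself. The `Inference` class, the `require_additional_lawful_condition` check, and the category labels are hypothetical assumptions for illustration; whether an output is special category data remains a legal judgement.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical labels; classifying data as special category is a legal judgement.
SPECIAL_CATEGORIES = {"health", "ethnicity", "religion", "political_opinion"}

@dataclass
class Inference:
    """An AI system's output, treated as personal data in its own right."""
    subject_id: str
    value: str             # e.g. the predicted medical condition
    category: str          # e.g. "health"
    retention_until: date  # retention period applied to the output, not just the input

    @property
    def is_special_category(self) -> bool:
        return self.category in SPECIAL_CATEGORIES

def require_additional_lawful_condition(inference: Inference) -> None:
    # Placeholder: a real system would verify, e.g., explicit consent or a
    # medical purpose before allowing the inference to be stored.
    print(f"Additional condition required before storing {inference.category} data")

def record_diagnosis_inference(subject_id: str, predicted_condition: str) -> Inference:
    """Wrap a model prediction so data protection safeguards attach to the output."""
    inference = Inference(
        subject_id=subject_id,
        value=predicted_condition,
        category="health",
        retention_until=date.today() + timedelta(days=365),
    )
    if inference.is_special_category:
        require_additional_lawful_condition(inference)
    return inference
```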
Fairness in AI
The guidance note emphasises that fairness is a key principle of data protection when processing personal data. Fairness requires that organisations process personal data only in ways data subjects would reasonably expect. AI models should also be trained in a manner that does not produce discriminatory outcomes. As an example, facial recognition systems used for surveillance often contain racial biases because the datasets used to train the algorithms lack diversity, which may result in the misidentification of Black people or members of ethnic minorities and lead to false positive matches.
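One way to surface the kind of bias described above is to compare error rates across demographic groups on labelled evaluation data. The sketch below is an illustrative assumption on our part, not a test prescribed by the guidance note: it computes the false positive rate per group, where a marked disparity would flag the training data for review.

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Compute the false positive rate per demographic group.

    Each record is (group, predicted_match, actual_match); a false positive
    is a predicted match where no true match exists.
    """
    false_positives = defaultdict(int)  # false positives per group
    negatives = defaultdict(int)        # actual non-matches per group
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                false_positives[group] += 1
    return {g: false_positives[g] / negatives[g] for g in negatives if negatives[g]}

# Example: a disparity like this would warrant investigating dataset diversity.
results = false_positive_rate_by_group([
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
])
print(results)  # {'group_a': 0.5, 'group_b': 0.0}
```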
The above is a brief summary of the recent updates to the guidance. The full guidance note is available on the ICO’s website for further reading.
Contact us for more good, clear, precise advice.
[1] Affinity groups are groups of people linked by a common interest or purpose.