Updates to the ICO’s guidance note on AI and Data Protection – a summary

This article summarises the March 2023 updates to the Information Commissioner’s Office guidance note on Artificial Intelligence and Data Protection.

Introduction

The introduction of new Artificial Intelligence (“AI”) technology creates new challenges and concerns, especially in relation to data protection. It is important to understand how the use of AI intersects with data protection, as there are risks that must be mitigated in order to comply with the relevant data protection legislation.

In March 2023, the Information Commissioner’s Office (“ICO”) in the United Kingdom (“UK”) released an updated guidance note on AI and Data Protection to clarify the requirements for fairness in AI. The update covers accountability and governance of AI, transparency, lawfulness, and fairness in AI, as well as the AI lifecycle. The guidance note is useful to your organisation if you intend to incorporate AI tools into your data sets.

Accountability and Governance Implications of AI

The guidance note points out that it is important to demonstrate, on an ongoing basis, how the risks arising from processing personal data in AI systems are managed. Organisations must consider undertaking a Data Protection Impact Assessment (“DPIA”) when incorporating AI tools, as some uses of AI are likely to result in a high risk to individuals’ rights and freedoms. The assessment must be made on a case-by-case basis. The outcomes of the DPIA must be comprehensively recorded, outlining the approach taken, including:

  1. a description of the nature, scope, context, and purposes of processing personal data;
  2. the impact of the processing on individuals;
  3. an assessment of necessity and proportionality;
  4. an assessment of the risks to individuals; and
  5. identification of mitigating measures.

Transparency in AI

The guidance note requires that organisations be transparent about the personal data processed in an AI system in order to comply with the principle of transparency. The controller must inform individuals about:

  1. the purposes of processing personal data;
  2. retention periods for personal data; and
  3. with whom the personal data is shared.

Lawfulness in AI

The updated content in this chapter relates to the use of AI to make inferences about people or affinity groups[1] and the use of AI in the processing of special category data. If AI systems are used to guess, make predictions, analyse or find correlations within datasets, the outcomes may constitute personal data. The same applies to affinity groups where such inferences reveal details that would be classified as special category data. For example, AI software may be used to assist doctors in making a medical diagnosis. During this process, the AI system may make inferences by recognising patterns in a data set, such as identifying that a combination of symptoms is associated with a particular medical condition. The inference the AI software makes in producing a diagnosis may itself be considered personal data, and even special category data.

Essentially, lawfulness requires that appropriate steps be taken to effectively implement data protection principles and integrate safeguards into the processing both at the design stage and throughout the lifecycle of the processing of the data.  It is therefore important to consider applying the data protection principles to the output data.

Fairness in AI

The guidance note emphasises that fairness is a key principle of data protection when processing personal data. Fairness requires that organisations process personal data only in ways data subjects would reasonably expect, and not in any other manner. AI models should also be trained in a manner that does not discriminate between people. As an example, facial recognition systems used for surveillance often exhibit racial bias because the datasets used to train the algorithms lack diversity, which may result in the misidentification of black people or other ethnic minorities and lead to false positive matches.

The above is a brief summary of the recent updates to the guidance. The full guidance note is available on the ICO’s website.

Contact us for more good, clear, precise advice.


[1] Affinity groups are groups of people linked by a common interest or purpose.
