[First published in Business Day]
The Council for Medical Schemes (CMS) investigation panel released its final report last week. The investigation, initiated by Black healthcare providers’ complaints of discrimination and unfair treatment, touched on medical schemes’ and administrators’ use of artificial intelligence (AI) tools in conducting fraud, waste and abuse (FWA) investigations.
The CMS inquiry revealed troubling patterns of racial bias in how medical aid fraud detection systems were applied. Black[1] healthcare providers were disproportionately investigated and penalised with no transparent explanation as to why. Despite arguments from the Schemes that decisions were data-driven, the CMS Report concluded that procedural fairness had been undermined, and the outcomes suggested discriminatory profiling.[2]
In making this finding, the Panel considered evidence from its expert statistician and found, as a matter of fact, that his report showed racial discrimination against Black healthcare providers. Some of the Schemes argued that their FWA investigations were initiated by whistleblowers and by neutral software and algorithms that flagged providers. Because of the software’s “neutrality”, they argued, there could be no discrimination.
But can we always confidently claim that software, algorithms and AI are neutral? Across industries, organisations are increasingly turning to AI and machine learning (ML) tools to enhance efficiency, accelerate innovation, and drive decision-making. Integrating AI has become central to many strategic agendas, promising objectivity and speed.
Yet, in our pursuit of digital transformation, we must remain clear-eyed about the social context in which these technologies operate. South Africa’s constitutional imperative to “improve the quality of life of all citizens and free the potential of each person”[3] cannot be fulfilled by technology alone – especially when that technology mirrors the biases of the data it learns from.
AI systems do not exist in a vacuum. While their decisions may appear neutral, they are fundamentally shaped by the data they are trained on – and that data reflects the inequalities, assumptions, and blind spots of the societies from which it originates. Algorithmic outputs are only as fair as the systems that generate them, and those systems are not immune to systemic discrimination.[4] Consider the first international beauty pageant judged by machines – Beauty.AI – where, of the 44 winners, nearly all were white. Or how researcher Ziad Obermeyer and his colleagues found that a widely used algorithm in the United States healthcare sector exhibited significant racial bias against Black patients.
While there is significant potential in using AI to streamline complex decisions, the opacity of these systems presents a serious challenge – particularly for individuals who may have been unfairly profiled or disadvantaged, with little recourse to challenge or understand the outcomes.
The CMS Report is a case study in the dangers of the so-called “black box” effect: the inability to see or understand how an algorithm arrived at a decision.[5] When individuals or groups are negatively affected by these decisions, they are often left without the information needed to challenge or contextualise them. This lack of transparency not only undermines accountability but also erodes public trust in digital systems.
Transparency in AI is not just a technical issue; it is a human rights imperative. Individuals should have the right to know how an algorithm made a decision about them, especially when that decision affects their livelihood, access to services, or legal rights. Making AI decision-making more “visible” could involve conducting algorithmic impact assessments, disclosing the variables used for profiling, and building in pathways for explanation and appeal.[6]
Internationally, the dangers of opaque profiling have already been identified and successfully litigated against. In the Netherlands, the Tax and Customs Administration (the equivalent of SARS) used an AI-powered tool called System Risk Indicator (SyRI) to detect tax and welfare fraud.[7] The Hague District Court found that the system disproportionately targeted low-income and immigrant communities, violating the rights to privacy and non-discrimination. The case highlighted how algorithmic systems can reinforce existing social prejudices under the guise of neutrality.
In South Africa, the financial sector has begun experimenting with AI in ways that raise similar concerns. One bank has developed AI tools to predict a customer’s creditworthiness.[8] Another is using generative AI tools in its internal operating processes and AI assistants for client services.[9] While these uses can broaden access to financial services, they also demand heightened scrutiny. Who designs these models? What data are they trained on? Can customers dispute incorrect or unfair outcomes?
As South Africa continues to integrate AI into sectors like finance, healthcare, and public administration, we must resist the urge to blindly import models trained in entirely different socio-economic contexts. For example, two of the Schemes that were subjects of the CMS inquiry used AI-powered tools from the United States. There is a very real possibility that these tools carry embedded assumptions that do not reflect the realities of South Africa’s population. To avoid perpetuating inequality, we need to localise data inputs, ensure demographic representation in training data, and involve diverse voices in model development and oversight.
Our Constitutional commitment to equality, dignity, and freedom requires more than passive adoption of emerging technologies. It requires deliberate alignment of AI tools with our legal and ethical frameworks. If used responsibly, AI can be a powerful force for equity and innovation. But that requires a clear-eyed view of its risks, a commitment to transparency, and a refusal to let complex code mask the simplicity of fairness.
The future of AI in South Africa depends not only on technological capability but on moral clarity. We must design and regulate our systems with justice in mind.
Lucien Pierce is a director at PPM Attorneys, a specialist information technology-focused law firm in Johannesburg. Luyanda Maema is a legal intern at the firm.
[1] “Black” refers to individuals who identify as Black, Indian or coloured.
[2] CMS report at 110-112.
[3] Constitution of the Republic of South Africa, 1996, Preamble.
[4] Stephen Rea “A Survey of Fair and Responsible Machine Learning and Artificial Intelligence: Implications of Consumer Financial Services” (2020). Available at SSRN: https://ssrn.com/abstract=3527034 at 3 and 18-19.
[5] Charlotte Tschider “Beyond the Black Box” 98 Denver Law Review 683 (2021). Available at SSRN: https://ssrn.com/abstract=3855782 at 698-691 and 707.
[6] Rea op cit note 4 at 26-30; CMS report at 110-112.
[7] ECLI:NL:RBDHA:2020:1878, Available at https://uitspraken.rechtspraak.nl/details?id=ECLI:NL:RBDHA:2020:1878.
[8] TymeBank “TymeBank reaches 10 million customers as it attracts record deposits for 2024” Available at https://www.tymebank.co.za/press/tymebank-reaches-10-million-customers/, Accessed on 9 July 2025.
[9] Michael O’Carroll “Empowering people with Generative AI” Available at https://www.capitecbank.co.za/blog/technology/2024/empowering-people-with-generative-ai/, Accessed on 9 July 2025.
