Inside the Black Box: AI, Fraud Flags, and Legal Fog

Digital banking in South Africa has grown rapidly. More people are using mobile apps and online platforms to manage their money. While this brings convenience, it also increases the risk of financial fraud, which is a serious concern for banks, businesses, and customers.[1] To fight this growing problem, banks are turning to advanced tools. One of the most powerful is artificial intelligence (“AI”). AI helps detect fraud faster and more accurately than traditional methods.[2] It can monitor transactions in real time, predict suspicious behaviour, and even automate reporting tasks.[3]
AI plays a crucial role in preventing fraud by combining speed, intelligence, and automation. Through real-time monitoring, AI systems powered by machine learning can scan thousands of transactions per second, instantly detecting unusual activity such as large purchases in foreign countries or logins from unfamiliar devices.[4] When suspicious behaviour is identified, the system can block transactions or alert the bank immediately, stopping fraud before it escalates. Beyond reacting to threats, AI also uses predictive analytics to anticipate them. By analysing historical data, it uncovers patterns like repeated failed login attempts or account access at odd hours – signals that often precede fraud. This allows banks to intervene before a fraudulent transaction is completed.[5] Additionally, generative AI streamlines compliance and reporting by drafting suspicious activity reports from complex datasets and automatically generating detailed reports for regulators such as the Financial Sector Conduct Authority (“FSCA”) or the South African Reserve Bank (“SARB”).[6] This enhances accuracy and efficiency, helping financial institutions maintain regulatory compliance with greater effectiveness.[7]
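The real-time checks described above can be reduced to a simple idea: compare each new transaction against the account's own history. The sketch below is a deliberately minimal illustration, not any bank's actual system; the `Transaction` fields, the z-score cut-off, and the set-membership checks for country and device are all assumptions made for the example.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    account: str
    amount: float
    country: str
    device_id: str

def flag_suspicious(history, txn, z_threshold=3.0):
    """Return the reasons (if any) a transaction looks unusual
    relative to this account's history: an amount far outside the
    account's normal range, or a country/device never seen before."""
    reasons = []
    amounts = [t.amount for t in history]
    if len(amounts) >= 2:
        mu, sigma = mean(amounts), stdev(amounts)
        # Flag amounts more than z_threshold standard deviations above normal.
        if sigma > 0 and (txn.amount - mu) / sigma > z_threshold:
            reasons.append("amount outlier")
    if txn.country not in {t.country for t in history}:
        reasons.append("unfamiliar country")
    if txn.device_id not in {t.device_id for t in history}:
        reasons.append("unfamiliar device")
    return reasons
```

A production system would replace the z-score with a trained model and score many more signals, but the shape is the same: each incoming transaction is checked against a learned profile, and any reasons returned can trigger a block or an alert.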
Artificial intelligence offers several key benefits in fraud prevention, making it an essential tool for modern banking. One major advantage is its speed and scalability. AI systems can process millions of transactions across mobile apps, ATMs, and online platforms in real time, adapting seamlessly as volumes increase.[8] This allows financial institutions to maintain robust fraud monitoring without compromising performance. Another benefit is the reduction in false positives. By learning from customer behaviour, AI can more accurately distinguish between legitimate and suspicious activity, which means fewer unnecessary transaction blocks and improved customer trust.[9] Additionally, AI enhances risk management by identifying emerging fraud trends early and implementing smarter decision-making across departments, ultimately boosting operational efficiency and resilience.[10]
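In practice, reducing false positives is largely a thresholding problem: the model assigns each transaction a fraud score, and the bank must choose how high a score triggers a block. The hypothetical sketch below picks the lowest alert threshold whose false-positive rate on known-legitimate transactions stays within a budget; the function names and data are illustrative, not drawn from any real system.

```python
def false_positive_rate(scored, threshold):
    """Fraction of legitimate transactions that would be flagged.
    `scored` is a list of (fraud_score, is_fraud) pairs."""
    legit = [s for s, is_fraud in scored if not is_fraud]
    if not legit:
        return 0.0
    return sum(1 for s in legit if s >= threshold) / len(legit)

def pick_threshold(scored, max_fpr=0.01):
    """Lowest threshold whose false-positive rate fits the budget,
    so legitimate customers are rarely blocked unnecessarily."""
    candidates = sorted({s for s, _ in scored})
    for t in candidates:
        if false_positive_rate(scored, t) <= max_fpr:
            return t
    return max(candidates)
```

The better the model separates legitimate from fraudulent behaviour, the lower this threshold can sit without inconveniencing genuine customers, which is exactly the customer-trust benefit described above.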
While AI significantly enhances fraud prevention, it also introduces complex legal and regulatory challenges that financial institutions must navigate carefully. In South Africa, compliance with the Protection of Personal Information Act (“POPIA”) is paramount. Banks are legally obligated to ensure that customer data is processed lawfully, fairly, and transparently – especially when AI is used for automated decision-making.[11] This means institutions must obtain proper consent, limit data usage to specific purposes, and safeguard personal information against misuse. Another major concern is transparency. AI systems often operate as “black boxes,” making decisions that are difficult to interpret or justify.[12]
In artificial intelligence, a black box refers to a system whose internal decision-making processes are hidden or too complex to interpret, even by its creators. This lack of transparency becomes especially concerning in banking, where AI is increasingly used for credit scoring, fraud detection, and loan approvals. When a bank’s AI model denies a loan or flags a transaction as suspicious, customers and regulators may demand to know why, but if the system is a black box, there’s no clear explanation. This raises issues of reliability, accountability, and fairness, particularly if biased data leads to discriminatory outcomes. To address these concerns, the financial industry is exploring Explainable AI, which aims to make models more interpretable by highlighting which factors influenced a decision. As banking becomes more reliant on AI, balancing performance with transparency is critical to maintaining ethical standards and regulatory compliance.[13]
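One concrete form of explainability is worth illustrating: for a linear (or locally linearised) scoring model, the score decomposes exactly into per-feature contributions, so the bank can report which factors drove a decision. The weights and feature names below are invented for the example; real explainability tooling applies the same idea to far more complex models.

```python
def explain_score(weights, features, baseline=0.0):
    """Decompose a linear fraud score into per-feature contributions,
    returning the total score and the factors ranked by influence."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = baseline + sum(contributions.values())
    # Rank factors by the absolute size of their contribution.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked
```

Given hypothetical weights such as `{"foreign": 2.0, "night": 0.5, "amount_z": 1.0}`, a flagged transaction can be explained as, say, "driven mainly by an unusually large amount, secondarily by a foreign location" – precisely the kind of reason a customer or regulator can interrogate.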
Regulators such as the FSCA and SARB require financial institutions to maintain explainability in their AI models to ensure accountability and build public trust.[14] Additionally, there is a legal risk of bias. If AI algorithms unintentionally discriminate against certain demographic groups, it could lead to violations of consumer protection laws and expose banks to litigation or reputational damage.[15] Regulatory oversight is tightening, with expectations that AI systems be fair, auditable, and aligned with governance standards in the financial sector.[16] As AI adoption grows, banks must balance innovation with rigorous legal compliance to avoid ethical pitfalls and regulatory breaches.
South Africa’s regulatory landscape is evolving rapidly to address the growing use of artificial intelligence in the financial sector. The FSCA is actively revising its regulatory framework to incorporate AI-related risks, with a strong focus on consumer protection, fairness, and transparency.[17] These updates aim to ensure that financial institutions deploying AI systems do so in a manner that respects legal obligations under statutes like POPIA and the Financial Sector Regulation Act.[18] Meanwhile, the SARB, through its dedicated Fintech Unit, is examining the broader implications of AI on financial stability, systemic risk, and innovation.[19] SARB’s approach emphasises responsible AI adoption – encouraging institutions to align with ethical standards while maintaining operational resilience. Notably, the 2025 Budget Review includes a comprehensive market study on AI adoption in finance, signalling the government’s intent to develop informed, forward-looking policies.[20] This study is expected to guide future legislation and supervisory practices, ensuring that AI technologies are integrated into the financial system in a way that promotes trust, accountability, and legal compliance. Together, these regulatory efforts reflect a proactive stance by South African authorities to balance technological advancement with robust governance.
Looking ahead, AI is poised to become a powerful tool in combating financial crimes such as money laundering, terrorist financing, and illicit fund transfers. Legally, this potential hinges on the development of robust partnerships between financial institutions, regulatory bodies like the FSCA and SARB, and technology providers.[21] These collaborations must be grounded in shared compliance objectives and a commitment to ethical AI deployment. To ensure lawful use, AI systems must be designed within clear ethical frameworks that align with South Africa’s existing legal standards, including the Financial Intelligence Centre Act (“FICA”), which governs anti-money laundering efforts.[22] These frameworks should promote transparency, accountability, and non-discrimination, while also enabling rapid innovation. Importantly, consumer protection must remain central. AI tools used in anti-money laundering must respect data privacy under POPIA, avoid profiling that could lead to unfair treatment, and provide mechanisms for human oversight and redress.[23] As regulators continue to study and refine their approach, such as through the upcoming AI market study in the 2025 Budget Review, the future of AI in finance will depend on striking a careful balance between technological advancement and legal responsibility.[24] If done right, AI could significantly enhance South Africa’s financial integrity and global competitiveness.
In conclusion, AI is transforming the way South African banks detect and prevent fraud, offering unprecedented speed, accuracy, and operational efficiency. However, this technological leap comes with significant legal and ethical responsibilities. To harness AI responsibly, banks must navigate a complex regulatory landscape that includes compliance with POPIA, FICA and other relevant financial and consumer protection laws. Legally, this means ensuring that AI systems do not infringe on individuals’ privacy rights, that data used for fraud detection is lawfully obtained and processed, and that algorithmic decisions are subject to human oversight to prevent bias or unfair outcomes. Transparency is also critical. Banks must be clear about how AI is used, what data it relies on, and how customers can challenge or understand decisions that affect them. Ultimately, responsible AI use requires a balance between innovation and accountability, where technological advancement serves the public interest and reinforces trust in the financial system. By putting customers first and adhering to legal standards, South African banks can lead the way in ethical AI adoption while strengthening fraud prevention.[25]
[1] PwC South Africa. (2023). Digital Banking Survey Report. Johannesburg: PwC.
[2] McKinsey & Company. (2022). AI in Banking: Fraud Detection Use Cases. McKinsey Global Institute.
[3] Accenture. (2023). Reinventing Financial Compliance with AI. Accenture Financial Services.
[4] Deloitte. (2021). The Future of Fraud Detection: AI & Analytics in Financial Services. Deloitte Insights.
[5] IBM. (2022). AI and Predictive Analytics in Financial Crime Management. IBM Research.
[6] Financial Sector Conduct Authority & South African Reserve Bank. (2024). Regulatory Frameworks for AI-Driven Compliance in Financial Services. Pretoria: FSCA and SARB.
[7] Deloitte. (2023). AI and Compliance: Transforming Financial Services Through Intelligent Automation. [online] Available at: https://www2.deloitte.com [Accessed 15 Sep. 2025].
[8] World Economic Forum. (2023). AI in Financial Services: Scaling Innovation Responsibly. Geneva: WEF.
[9] SAS Institute. (2022). Reducing False Positives in Fraud Detection. SAS White Paper.
[10] KPMG South Africa. (2023). AI-Driven Risk Management in Banking. KPMG Insights.
[11] Republic of South Africa. (2013). Protection of Personal Information Act 4 of 2013 (POPIA). Pretoria: Government Gazette.
[12] Binns, R. (2018). Algorithmic accountability and the right to explanation. Philosophical Transactions of the Royal Society A, 376(2133), 1–17.
[13] Kgoale, T.J. & Odeku, K. (2023). An analysis of legal accountability for artificial intelligence systems in the South African financial sector. De Jure Law Journal, 56(1). Available on SciELO South Africa.
[14] FSCA. (2024). AI Model Governance Framework. Pretoria: FSCA.
[15] Bank for International Settlements. (2021). AI and Bias in Financial Services. Basel: BIS Reports.
[16] SARB. (2023). Fintech and AI Supervision in South Africa. Johannesburg: South African Reserve Bank.
[17] FSCA. (2025). Draft Regulation on Ethical AI in Financial Services. Pretoria: FSCA.
[18] Republic of South Africa. (2017). Financial Sector Regulation Act 9 of 2017. Government Gazette.
[19] SARB Fintech Unit. (2024). AI and Financial Stability in South Africa. Johannesburg: SARB.
[20] National Treasury. (2025). Budget Review 2025: Market Study on AI in Finance. Pretoria: NT.
[21] OECD. (2023). AI and the Future of Financial Regulation. Paris: OECD Publishing.
[22] Republic of South Africa. (2001). Financial Intelligence Centre Act 38 of 2001 (FICA). Government Gazette.
[23] Information Regulator. (2023). Guidance Note on AI and POPIA. Johannesburg: Information Regulator.
[24] World Bank. (2024). Enhancing Financial Integrity Through AI. Washington, DC: World Bank.
[25] FSCA & SARB. (2024). Joint Statement on AI and Consumer Rights in Financial Services. Pretoria: FSCA/SARB.