Financial Authorities Urged to Integrate Ethics in AI Systems

Financial regulators around the world are facing increasing pressure to embed ethical considerations into their artificial intelligence (AI) systems. A recent report highlights that while over two-thirds of financial authorities are employing AI technologies, a significant number lack proper governance and ethical frameworks. This raises concerns about market integrity, financial inclusion, and public trust.

According to the State of SupTech Report 2025, 67% of supervisory agencies are currently deploying or exploring AI for various applications, yet 37% report no formal governance or ethical guidelines. The implications of this oversight could be severe, potentially leading to diminished trust in financial systems and unintended negative consequences for consumers.

The reliance on AI spans multiple uses, including detecting complex money-laundering activities and predicting systemic banking issues. Central banks, securities commissions, and market conduct regulators are increasingly integrating AI into their operations. When functioning effectively, these tools can enhance financial inclusion and sustainability by identifying gaps in access to finance and monitoring risks associated with climate change.

Despite the potential benefits, the report reveals a troubling disconnect between the rapid advancement of technology and the governance structures designed to oversee it. For example, only 4% of agencies explicitly align their practices with international standards, such as the OECD AI Principles or the EU AI Act. Furthermore, a mere 5% of authorities release transparency reports detailing the impact of AI on supervisory decisions.

The lack of recognition of ethical risks in AI applications is particularly alarming. Only 8.8% of surveyed authorities view ethical concerns as a significant challenge, while 8.1% acknowledge the risk of algorithmic bias. This minimal awareness poses a considerable risk, given that AI systems can exacerbate existing inequalities if not properly managed.

As highlighted by Marlene Amstad, chair of the Swiss Financial Market Supervisory Authority (FINMA), “Supervisory decisions must remain explainable and accountable.” Her emphasis on requiring a “human in the loop” for significant interventions aims to prevent the unchecked delegation of responsibility to algorithms.
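The "human in the loop" requirement Amstad describes can be illustrated with a minimal sketch. The names and thresholds below (`Recommendation`, `requires_human_review`, the 0.9 confidence cutoff) are hypothetical, not drawn from any regulator's actual system; the point is only that significant interventions are always escalated to a person rather than executed by the algorithm alone.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A hypothetical AI-generated supervisory recommendation."""
    action: str
    confidence: float
    severity: str  # "routine" or "significant"

def requires_human_review(rec: Recommendation) -> bool:
    # Significant interventions always escalate to a human reviewer,
    # regardless of model confidence; only routine, high-confidence
    # outputs may proceed without sign-off.
    return rec.severity == "significant" or rec.confidence < 0.9

# A significant action is flagged for review even at 97% confidence.
rec = Recommendation(action="freeze_account", confidence=0.97, severity="significant")
print(requires_human_review(rec))  # True
```

A gate like this keeps accountability with the supervisor: the model proposes, but a person disposes whenever the stakes are high.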

Data quality is among the most widespread challenges: 64% of financial authorities report fragmented or inconsistent data as a critical issue. Such weaknesses can lead to biased or misleading outputs, particularly in areas like consumer protection and financial inclusion. This underscores the essential role of strong data governance, which includes clear ownership, documentation of data sources, and ongoing quality controls.
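The "ongoing quality controls" the report calls for can start very simply. The sketch below is a hypothetical illustration, not any authority's actual tooling: it screens a batch of supervisory records for incomplete fields and duplicates, two common symptoms of the fragmented data the report describes.

```python
def data_quality_report(records, required_fields):
    """Flag incomplete and duplicated records before they feed an AI model.

    `records` is a list of dicts; `required_fields` lists the keys every
    record must populate. Returns simple counts a data steward can track.
    """
    incomplete = sum(
        1 for r in records
        if any(r.get(f) in (None, "") for f in required_fields)
    )
    seen, duplicates = set(), 0
    for r in records:
        key = tuple(r.get(f) for f in required_fields)
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"total": len(records), "incomplete": incomplete, "duplicates": duplicates}

# Example batch: one duplicate filing and one with a blank name field.
batch = [
    {"id": 1, "name": "Bank A"},
    {"id": 1, "name": "Bank A"},
    {"id": 2, "name": ""},
]
print(data_quality_report(batch, ["id", "name"]))
# {'total': 3, 'incomplete': 1, 'duplicates': 1}
```

Checks of this kind only matter if someone owns the numbers they produce, which is why the report pairs quality controls with clear data ownership.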

The urgency for robust governance becomes even more pronounced as authorities shift toward more autonomous AI systems. These agentic AI systems, designed to operate with limited human oversight, present new risks if not implemented with adequate safeguards. Ensuring that supervisors possess the necessary algorithmic literacy is therefore fundamental: they must be able to interrogate model behavior and understand its limitations.

Some agencies are taking proactive steps to address these challenges. The U.K.’s Financial Conduct Authority (FCA) has established a data and AI risk hub along with an ethics framework that mandates independent evaluations for each AI use case before deployment. This initiative aims to foster an assurance mindset among supervisors regarding ethical practices.

Similarly, the Bank of Tanzania has created an AI and data innovation hub focused on developing guidelines that prioritize transparency, fairness, and accountability. These efforts illustrate that embedding ethics within AI systems is not only possible but essential for maintaining public confidence.

To effectively close the accountability gap, financial authorities must prioritize operational policies that promote transparency regarding AI usage in supervision. This includes translating broad principles like security and fairness into measurable standards, alongside conducting ethical impact assessments to evaluate real-world supervisory effects. Currently, only 12% of authorities mandate training on ethical AI principles for their teams.

The landscape of financial oversight is changing rapidly, yet the governance structures that support it lag behind. With over 60% of authorities advancing toward an AI-driven future without solid accountability measures, the risk of discriminatory outcomes and market vulnerabilities looms large. For financial authorities to retain their role as trusted guardians of economic stability, ethical governance must become an integral component of supervisory frameworks.