

AI governance in the financial sector: from experimentation to effective oversight
In recent months, an uncomfortable realisation has begun to take hold in the executive committees of financial institutions: technological regulation rarely arrives before the problem. It almost always comes afterwards, once the market has already moved or an incident has forced action. We saw this after the financial crisis, with Basel III. And we saw it again with data protection. With artificial intelligence, the pattern is no different.
The difference is that AI is no longer “coming”. It is already here. It is in production and actively involved in critical processes within financial institutions: credit scoring and risk analysis, fraud detection, customer segmentation and prioritisation, automated customer service, predictive analytics in operations.
However, in many organisations a significant gap remains: these systems are not fully integrated into formal corporate risk frameworks. When an automated system influences material decisions, it ceases to be merely a technological tool and becomes operational risk, regulatory risk, and reputational risk, and consequently a supervisable element.
From voluntary ethics to auditable obligation
For years, the conversation around responsible AI has been dominated by principles and conceptual frameworks: AI ethics, best-practice guidelines, non-binding recommendations. That paradigm is now shifting rapidly.
Today, these principles are being translated into verifiable operational requirements: fairness requires systematic testing; transparency demands traceable documentation; governance entails clearly defined roles and responsibilities.
This shift marks a critical transition: ethics move from being declarative to becoming auditable.
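To make "auditable" concrete, here is a minimal sketch of one such fairness check: the disparate impact ratio against the conventional four-fifths (0.8) threshold, run over a toy log of credit decisions. The group labels, data, and pass/fail rule are illustrative assumptions, not taken from any regulation.

```python
from collections import defaultdict

def disparate_impact_ratio(decisions):
    """Approval rate per group, plus the ratio of the lowest
    group rate to the highest (the disparate impact ratio)."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, is_approved in decisions:
        total[group] += 1
        approved[group] += int(is_approved)
    rates = {g: approved[g] / total[g] for g in total}
    return rates, min(rates.values()) / max(rates.values())

# Toy audit log of (group label, credit decision) pairs
log = ([("A", True)] * 80 + [("A", False)] * 20
       + [("B", True)] * 55 + [("B", False)] * 45)

rates, ratio = disparate_impact_ratio(log)
print(rates)            # {'A': 0.8, 'B': 0.55}
print(round(ratio, 2))  # 0.69, below the 0.8 four-fifths threshold
if ratio < 0.8:
    print("FAIL: flag for review and document the remediation")
```

The point is not this specific metric; it is that the test runs systematically, produces a recorded result, and leaves an audit trail.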
Europe: trust and rights
Europe has set a clear priority: trust. Where the United States prioritises innovation and Asia prioritises efficiency, the European approach is trust-first. This translates into a core premise: artificial intelligence is not merely a technological issue; it is also a matter of fundamental rights and of the people affected by automated decisions.
The Artificial Intelligence Act operationalises this approach through the classification of systems by risk level, risk management requirements, traceability obligations, and mandatory human oversight in certain cases. However, the real impact lies not only in the regulation itself, but in its integration with existing frameworks.
What matters for banking and insurance is that AI does not arrive in a vacuum. It intersects with already demanding regulatory frameworks. And this is where the fundamental shift lies.
AI enters the supervisory perimeter
One of the most relevant transformations in 2026 is the stance of supervisors: AI is moving from experimental to supervised and entering the prudential perimeter. Artificial intelligence is now considered material for prudential risk, operational resilience, and financial stability.
This implies a substantial shift: AI is not regulated in isolation; it is integrated into frameworks such as CRD, DORA, and ICT risk management.
From 'preparing' to 'being audited'
The regulatory timeline sets a tangible turning point: August 2026. High-risk systems must fully comply with the AI Act's requirements, and supervisors will begin actively assessing their implementation. This marks a phase shift from planning to execution and verification. In other words, AI in banking ceases to be a capability and becomes an object of audit.
In this context, the new governance standard is to integrate AI within the existing risk framework.
The most advanced institutions are not waiting for regulatory pressure. They are building structured governance models based on five pillars (a minimal code sketch of the first two follows the list):
- Model inventory (AI Registry): Full visibility of models in use and their purpose.
- Risk classification: Aligned with the AI Act and the impact on business and customers.
- Integration with Model Risk Management: Embedding AI within the existing risk framework.
- Ethical and bias assessment: Systematic analysis of fairness, bias, and explainability.
- Continuous production oversight: Monitoring, periodic validation, and response mechanisms.
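As a sketch of how pillars one and two can be made concrete, the record below combines a model inventory entry with a risk tier loosely mirroring the AI Act's risk-based classification. Every field name, tier, and entry here is an illustrative assumption, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskTier(Enum):
    # Tiers loosely mirroring the AI Act's risk-based classification
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class ModelRecord:
    """One entry in a hypothetical AI Registry (pillar one),
    carrying its risk classification (pillar two)."""
    model_id: str
    purpose: str                 # business purpose, in plain language
    owner: str                   # accountable business function
    risk_tier: RiskTier
    in_production: bool
    last_validated: date         # feeds pillar five (oversight)
    known_limitations: list[str] = field(default_factory=list)

registry = [
    ModelRecord("scoring-v4", "retail credit scoring", "Retail Risk",
                RiskTier.HIGH, True, date(2025, 11, 3),
                ["limited data on thin-file applicants"]),
]

# A supervisory-style query: which high-risk models are live?
high_risk_live = [m.model_id for m in registry
                  if m.risk_tier is RiskTier.HIGH and m.in_production]
print(high_risk_live)  # ['scoring-v4']
```

In practice such a registry would live inside the institution's model risk management tooling; the point is that inventory and classification become queryable facts rather than slide-ware.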
Insurance: when accuracy is not enough
In the insurance sector, the challenge becomes even more complex. The structural issue is indirect discrimination. Even when sensitive variables are excluded, models may generate bias through correlations (postcode → socio-economic level, behaviour → health status). This creates a critical tension between actuarial accuracy and fairness.
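To illustrate the proxy problem, here is a minimal sketch, assuming the sensitive attribute is retained for testing even though the model never sees it: measure how strongly each candidate input correlates with the excluded attribute. The data, variable names, and the 0.5 threshold are all illustrative.

```python
def pearson(xs, ys):
    """Plain Pearson correlation, standard library only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy data: the sensitive attribute is excluded from the model but
# kept aside for testing; the income index derived from postcode is
# a feature the model actually uses.
sensitive_attribute = [0, 0, 0, 1, 1, 1, 0, 1]
postcode_income_index = [0.9, 0.8, 0.85, 0.2, 0.3, 0.25, 0.7, 0.35]

r = pearson(sensitive_attribute, postcode_income_index)
print(round(r, 2))  # strong correlation despite the exclusion
if abs(r) > 0.5:    # illustrative threshold
    print("Potential proxy: document, justify, or mitigate")
```

Real proxy analysis is more involved (conditional dependence, interactions between features), but even this simple check shows why excluding a variable does not remove its influence.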
European regulators are raising the bar: models must not only be accurate; they must also be justifiable.
Artificial intelligence does not create new risks; it amplifies them
A key element in understanding this shift is the following: AI does not introduce entirely new categories of risk; it intensifies existing risks, increases the speed of impact, amplifies the reach of decisions, and reduces visibility if not properly governed.
For that reason, the response is not to create parallel frameworks, but to strengthen existing ones.
At a strategic level, the financial sector already understands the impact of artificial intelligence. The current challenge is operational: identifying which models are actually in production, assigning clear responsibilities, ensuring explainability in critical decisions, and establishing protocols for failures or deviations. The difference between organisations will not lie in their vision of AI, but in their ability to operationalise its governance.
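As one concrete example of a "protocol for deviations", the sketch below computes the Population Stability Index (PSI), a drift metric long used in bank model monitoring, and maps it to rule-of-thumb alert thresholds. The bins, distributions, and thresholds are illustrative assumptions.

```python
import math

def psi(expected_props, actual_props, eps=1e-6):
    """Population Stability Index over pre-binned score distributions.
    expected_props: baseline share of scores per bin (sums to 1)
    actual_props:   current share of scores per bin (sums to 1)"""
    total = 0.0
    for e, a in zip(expected_props, actual_props):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

# Illustrative score distributions across five bins
baseline = [0.10, 0.20, 0.40, 0.20, 0.10]   # at validation time
current  = [0.05, 0.15, 0.35, 0.25, 0.20]   # observed in production

value = psi(baseline, current)
print(round(value, 3))
# Rule-of-thumb thresholds commonly cited in model monitoring:
if value > 0.25:
    print("ALERT: significant drift, trigger revalidation protocol")
elif value > 0.10:
    print("WARN: moderate drift, investigate")
```

A check like this, scheduled and logged, is what turns "continuous production oversight" from a pillar on a slide into evidence a supervisor can inspect.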
Conclusion: govern before being required to
Artificial intelligence is already part of the core of the financial business. Regulation is arriving. Supervision is beginning. Requirements will increase. The question is no longer whether this shift will occur. The question is: what level of maturity will each organisation exhibit when supervision arrives?
In this new context, the institutions that will lead the sector will not necessarily be those that invest the most in AI. They will be those capable of understanding where it truly has impact, managing its risks in a structured way, and demonstrating control, traceability, and accountability.
Because in the financial sector, trust has never been optional. And in the age of artificial intelligence, neither will its governance be.
Competitive advantage does not lie in adopting AI earlier, but in governing it better.