Regulators in Europe and OECD member states are increasing oversight of multimodal artificial intelligence (AI) systems in financial services amid heightened concerns about governance, transparency, and auditability. Multimodal AI, which combines text, images, speech, and other data types in decision-making workflows, is increasingly used in loan origination, risk assessment, fraud detection, and end-to-end financial processes. That complexity makes such systems harder for policymakers and auditors to govern and verify.
Background
Multimodal AI is gaining adoption in financial services for automating document validation, improving risk modeling, and streamlining compliance by unifying varied data sources such as scanned documents, charts, and handwriting. While these capabilities can increase efficiency in regulated sectors, including banking, insurance, and healthcare, they also amplify risks related to data privacy, algorithmic bias, and potential errors across integrated data streams.
Regulatory frameworks such as the EU AI Act classify many AI-driven financial practices, including credit scoring, fraud detection, and risk management, as high-risk, subjecting them to rigorous fairness, transparency, and auditability requirements. In Germany, MaRisk mandates risk-based model governance aligned with system complexity and business impact. The OECD has cautioned that the limited explainability of generative and multimodal AI models poses challenges for regulatory oversight and compliance.
Details
A recent study of finance professionals and regulators found that explainability and compliance are treated as foundational requirements, or "hygiene factors," while cost and speed are secondary, and that ease of understanding remains key to enterprise adoption. One proposal suggests a four-layer governance architecture, spanning model-level self-regulation, firm-level oversight, regulator-hosted monitoring agents, and independent audits, to address risks from decentralized, adaptive AI systems (see the sketch below).
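To make the layering concrete, the following sketch models the proposed architecture as an escalation chain in which each layer records a finding on every decision. This is a minimal illustration, not the proposal's own specification: the class names, layer labels, and check interface are all assumptions.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Dict, List

class GovernanceLayer(Enum):
    """The four proposed layers; names are paraphrased, not official."""
    MODEL_SELF_REGULATION = 1   # built-in self-checks and output filters
    FIRM_OVERSIGHT = 2          # internal model-risk and compliance review
    REGULATOR_MONITORING = 3    # regulator-hosted monitoring agents
    INDEPENDENT_AUDIT = 4      # periodic third-party audits

@dataclass
class Finding:
    layer: GovernanceLayer
    passed: bool
    note: str

def run_governance_stack(
    decision: dict,
    checks: Dict[GovernanceLayer, Callable[[dict], bool]],
) -> List[Finding]:
    """Apply each layer's check in definition order and record the outcome,
    so later layers and auditors can see what earlier layers concluded."""
    findings = []
    for layer in GovernanceLayer:
        passed = checks[layer](decision)
        findings.append(Finding(layer, passed, "ok" if passed else "escalate"))
    return findings

# Example: a credit decision evaluated by placeholder rules at every layer.
findings = run_governance_stack(
    {"applicant": "A-123", "score": 0.72},
    {layer: (lambda d: d["score"] > 0.5) for layer in GovernanceLayer},
)
```

In this shape, a decision that clears model-level self-checks still accumulates findings from firm oversight and regulator monitoring, giving independent auditors a per-decision trail across all four layers.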
Contemporary enterprise platforms incorporate confidence-score thresholds to route uncertain decisions for human review, and they support traceability and audit-readiness requirements through immutable audit logs and structured activity tracking. In compliance workflows, agentic AI systems are being developed with artifact-centric modeling, clearly delineated roles, and mechanisms that ensure traceability and accountability.
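A minimal sketch of those two mechanisms appears below: a single confidence threshold decides whether a decision is auto-approved or escalated, and every routing event is appended to a hash-chained log so that retroactive edits are detectable. The threshold value, field names, and class interfaces are illustrative assumptions, not any specific vendor's API.

```python
import hashlib
import json
import time

REVIEW_THRESHOLD = 0.85  # illustrative; real platforms tune this per use case

class AuditLog:
    """Append-only log where each entry hashes its predecessor, so any
    retroactive edit breaks the chain and fails verification."""
    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev_hash, "hash": entry_hash})

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

def route_decision(model_output: dict, log: AuditLog) -> str:
    """Auto-approve confident outputs; escalate the rest to a human reviewer.
    Every routing decision is logged either way, for audit readiness."""
    route = "auto" if model_output["confidence"] >= REVIEW_THRESHOLD else "human_review"
    log.append({"ts": time.time(), "decision_id": model_output["id"],
                "confidence": model_output["confidence"], "route": route})
    return route

# Usage: one confident and one uncertain decision, then integrity check.
log = AuditLog()
route_decision({"id": "loan-42", "confidence": 0.91}, log)  # -> "auto"
route_decision({"id": "loan-43", "confidence": 0.60}, log)  # -> "human_review"
assert log.verify()
```

Hash-chaining here is a lightweight stand-in for the append-only storage production platforms use; the point is that verification fails if any logged record is altered after the fact.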
Despite these measures, Deloitte's 2025 Financial AI Index indicates that 63% of global banks face delays in AI deployment due to regulatory fragmentation and divergent compliance requirements across jurisdictions. The OECD reports that many financial institutions lack version-controlled model histories, complicating post-deployment investigations of problematic outcomes.
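The version-history gap the OECD describes can be closed with a registry that stores one immutable record per deployment and can answer which model was live when a given outcome occurred. The schema below is a hypothetical illustration under that assumption, not a reference to any particular registry product.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)  # frozen: entries are never mutated after creation
class ModelVersion:
    model_id: str
    version: str                # e.g. semantic version of the deployed model
    training_data_hash: str     # fingerprint of the training-data snapshot
    weights_hash: str           # fingerprint of the deployed weights
    deployed_at: datetime
    approved_by: str            # firm-level sign-off, per governance policy

class ModelRegistry:
    """Keeps every version ever deployed, so a post-deployment investigation
    can reconstruct exactly which model produced a given outcome."""
    def __init__(self):
        self._history: dict[str, list[ModelVersion]] = {}

    def register(self, mv: ModelVersion) -> None:
        self._history.setdefault(mv.model_id, []).append(mv)

    def version_at(self, model_id: str, when: datetime) -> ModelVersion | None:
        """Return the version that was live at the given timestamp."""
        candidates = [v for v in self._history.get(model_id, [])
                      if v.deployed_at <= when]
        return max(candidates, key=lambda v: v.deployed_at) if candidates else None
```

With such a registry, an investigator holding only a decision timestamp can retrieve the exact weights and training-data fingerprints in scope, which is precisely what post-deployment reviews need.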
Outlook
As regulators enforce the EU AI Act and international supervisory standards evolve, financial institutions must bolster multimodal AI governance with stronger controls, enhanced transparency, and robust auditability. Sector-wide efforts are expected to advance regulatory harmonization and develop frameworks that scale governance alongside AI adoption.
