
Regulators Tighten Data Provenance Rules as Banks Deploy AI Workflow Agents

Regulators are expanding data provenance and lineage rules to cover AI workflow agents in banking as enforcement escalates under BCBS 239, the EU AI Act, and the US FS AI RMF.


Financial regulators across the US and Europe are expanding data provenance and governance requirements to address the rise of autonomous AI workflow agents in banking, imposing new documentation and auditability obligations on institutions and their technology vendors alike.

The shift stems from a convergence of enforcement actions, new regulatory guidance, and rapid commercial deployment of agentic AI - systems that observe, reason, and execute decisions across multiple banking workflows with limited human intervention. Agentic AI has emerged as one of 2025's defining technology trends, with financial institutions already adopting these autonomous systems to improve efficiency, scalability, and innovation. Regulators now contend that existing governance frameworks, designed for static models, cannot adequately govern systems that take independent, multi-step actions on live financial data.

Background

The regulatory pressure builds on a decade of incomplete compliance with the Basel Committee on Banking Supervision's Principles for Effective Risk Data Aggregation and Risk Reporting, known as BCBS 239. Since January 2016, the Basel Committee has required global systemically important banks (G-SIBs) to maintain complete, accurate, and timely risk data, yet enforcement was inconsistent, implementation timelines were routinely delayed, and for nearly a decade banks treated compliance as aspirational rather than mandatory. The results show in the numbers: as of November 2023, only 2 of the 31 assessed G-SIBs were fully compliant with all BCBS 239 principles, with no single principle achieved universally across the group, according to PwC. That era of waiting is ending.

The European Central Bank formalized its harder stance with the May 2024 Risk Data Aggregation and Risk Reporting (RDARR) Guide, which explicitly extended data lineage requirements to AI models used for critical decision-making. The Guide expanded the scope of BCBS 239 programs to include supervisory reports such as COREP and FINREP, requiring complete, up-to-date data lineage at the attribute level - from data capture through final reporting - along with a concrete register of data quality issues. The ECB has made BCBS 239 deficiencies a top supervisory priority for 2025-2027, explicitly warning that non-compliance could trigger "enforcement actions, capital add-ons, and removal of responsible executives."
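To make the ECB's attribute-level lineage requirement concrete, the sketch below models a single reported attribute's path from data capture through final reporting. It is an illustrative, minimal in-memory representation, not a reference to any real lineage tool; the class names, system names, and the COREP template reference are hypothetical examples.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class LineageHop:
    """One transformation step an attribute passes through."""
    system: str          # e.g. an ETL job, risk engine, or reporting layer
    operation: str       # what was done to the value at this step

@dataclass
class AttributeLineage:
    """End-to-end lineage for a single reported attribute."""
    attribute: str       # e.g. a COREP/FINREP field (hypothetical name below)
    source: str          # system of record where the value originates
    hops: list = field(default_factory=list)

    def record(self, system: str, operation: str) -> None:
        self.hops.append(LineageHop(system, operation))

    def trace(self) -> list:
        """Full path from capture to final report, in order."""
        return [self.source] + [h.system for h in self.hops]

# Hypothetical example: an exposure value flowing into a COREP template
lin = AttributeLineage(attribute="exposure_at_default", source="loan_ledger")
lin.record("etl_daily", "currency conversion to EUR")
lin.record("risk_engine", "EAD calculation")
lin.record("corep_reporting", "aggregation into template C 07.00")
print(lin.trace())  # ['loan_ledger', 'etl_daily', 'risk_engine', 'corep_reporting']
```

In practice this register would live in a metadata management platform rather than application code, but the shape of the record — one traceable path per attribute, from origin to supervisory report — is what the Guide asks banks to be able to produce on demand.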

In the US, the Treasury Department released the Financial Services AI Risk Management Framework (FS AI RMF) in February 2026, translating policy principles into 230 actionable control objectives covering data quality, vendor risk, lifecycle governance, and human oversight. Meanwhile, the Financial Stability Oversight Council (FSOC) elevated AI as a significant area of regulatory focus in its December 2024 Annual Report, identifying increasing AI reliance as both an opportunity and a mounting risk requiring enhanced oversight.

Details

The central compliance challenge for banks deploying AI workflow agents is "permission creep" - where an agent's effective access to data and decision authority expands beyond its original configuration during operation. Agentic AI systems in financial services operate with significantly more autonomy than traditional implementations, capable of making decisions and executing actions through various APIs and tools. This autonomy creates a critical risk: agents performing operations outside their intended authorization boundaries (Grant Thornton). Circumventing required approval workflows or compliance checks may breach financial regulations and guidance including FFIEC requirements, MiFID II, and Dodd-Frank (Bank Policy Institute).
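One common mitigation for permission creep is to pin the agent's tool allowlist at deployment time and check every attempted call against it at runtime, so effective access cannot silently expand. The sketch below is a minimal illustration of that pattern; the agent name, tool names, and class are hypothetical, not drawn from any vendor's API.

```python
class PermissionCreepError(RuntimeError):
    """Raised when an agent attempts a tool outside its deployed scope."""

class AgentToolGate:
    """Enforces an authorization boundary fixed at deployment time.

    Every tool/API call the agent attempts is checked against the
    original allowlist; anything outside it is blocked and logged
    for review rather than silently permitted.
    """
    def __init__(self, agent_id: str, allowed_tools: set):
        self.agent_id = agent_id
        self.allowed_tools = frozenset(allowed_tools)  # immutable boundary
        self.violations = []                           # for audit review

    def invoke(self, tool: str, call, *args, **kwargs):
        if tool not in self.allowed_tools:
            self.violations.append(tool)
            raise PermissionCreepError(
                f"{self.agent_id} attempted unauthorized tool: {tool}")
        return call(*args, **kwargs)

# Hypothetical KYC-review agent scoped to two read/flag tools only
gate = AgentToolGate("kyc_review_agent", {"fetch_customer_record", "flag_case"})
result = gate.invoke("flag_case", lambda case_id: f"flagged {case_id}", "C-123")
try:
    gate.invoke("approve_payment", lambda: None)  # outside original scope
except PermissionCreepError:
    pass
print(gate.violations)  # ['approve_payment']
```

The key design choice is that the boundary is immutable (`frozenset`) and violations are recorded rather than discarded, giving examiners the audit trail regulators now expect when an agent drifts beyond its configured authority.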

Regulators now require that every data point feeding an agent-driven decision be traceable to its origin. The principles underpinning BCBS 239 describe exactly what AI models demand today - traceable data provenance, validated quality, clear ownership, and systematic guardrails. Their application has extended beyond risk reporting to regulatory reporting, financial reporting, and any critical AI model making decisions based on firm data.

AI data governance frameworks must establish the policies, controls, and oversight mechanisms that make AI deployable within regulated environments. These frameworks define how organizations validate model accuracy, document training data provenance, monitor for bias and drift, and maintain audit trails that satisfy regulatory scrutiny. Deploying AI without structured governance creates compliance failures that typically surface during audits, regulatory examinations, and customer disputes - often stemming from undocumented model development, inadequate testing, and missing audit trails rather than malicious intent.
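Audit trails that "satisfy regulatory scrutiny" generally need to be tamper-evident, not just append-only. One simple way to achieve that is hash chaining, where each entry embeds the hash of its predecessor, so any after-the-fact edit breaks the chain. The sketch below is an illustrative stand-alone implementation; the model name and decision fields are hypothetical.

```python
import hashlib
import json

GENESIS = "0" * 64

class AuditTrail:
    """Append-only, hash-chained audit log for model decisions."""
    def __init__(self):
        self.entries = []
        self._last_hash = GENESIS

    def append(self, event: dict) -> str:
        record = {"event": event, "prev": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append({**record, "hash": digest})
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute every hash; any tampered entry breaks the chain."""
        prev = GENESIS
        for e in self.entries:
            record = {"event": e["event"], "prev": e["prev"]}
            expected = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.append({"model": "credit_scorer_v3", "decision": "decline", "score": 412})
trail.append({"model": "credit_scorer_v3", "decision": "approve", "score": 731})
print(trail.verify())                       # True
trail.entries[0]["event"]["score"] = 999    # simulated tampering
print(trail.verify())                       # False
```

Production systems would typically anchor such chains in write-once storage or a dedicated ledger, but the property examiners look for is the same: undocumented changes to decision records become detectable.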

A 2025 EY survey found that 72% of financial organizations had already integrated AI into their initiatives, yet governance was lagging, with only half making significant investments in governance frameworks to mitigate AI risks. Organizations that built robust data governance capabilities to satisfy BCBS 239 and similar regulations are now better positioned for AI deployment, having already established the lineage, quality controls, and metadata management that AI projects require. Institutions that took shortcuts on compliance fundamentals must now backfill governance capabilities before advancing their AI strategies.

In the UK, FCA Chief Executive Nikhil Rathi reaffirmed in December 2025 that the regulator would not introduce AI-specific rules, citing the technology's rapid evolution. Instead, the FCA doubled down on its principles-based, outcomes-focused approach, committing to intervene only in cases of "egregious failures." However, boards and senior managers must ensure AI-driven decisions are transparent, auditable, and aligned with Consumer Duty obligations, with accountability under the Senior Managers and Certification Regime (SM&CR) remaining non-negotiable.

For vendors and system integrators, requirements now extend to third-party oversight. Firms must understand how AI features are embedded in external platforms and ensure contracts prohibit unauthorized use of client data. UK regulators have consistently warned that AI could become a critical dependency; firms are expected to strengthen vendor due diligence, contractual safeguards, and migration strategies to avoid vendor lock-in and operational risk.

Outlook

Both the FCA and Bank of England anticipate a shift toward agentic AI in core banking decision-making functions, including credit underwriting, portfolio optimization, and risk modelling - areas where provenance failures carry the highest regulatory consequences. AI agents in financial services are projected to grow from $7.38 billion in 2025 to $47.1 billion by 2030, representing a compound annual growth rate of 44.8%, according to industry research. Institutions and vendors that have not yet established attribute-level data lineage infrastructure face mounting pressure to act before EU AI Act high-risk compliance requirements take full effect in August 2026 and as US regulators begin applying the FS AI RMF's 230 control objectives to examination cycles.
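The cited growth rate can be sanity-checked from the two market-size figures, since compound annual growth over the five years from 2025 to 2030 is just the fifth root of the ratio:

```python
# Cross-check of the cited figures: $7.38B (2025) -> $47.1B (2030)
start, end, years = 7.38, 47.1, 5
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~44.9%, consistent with the cited 44.8% CAGR
```

The small difference from the published 44.8% is rounding in the reported market sizes.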