Financial regulators across the United States and Europe are accelerating requirements for data provenance, audit trails, and model governance as banks rapidly scale AI-powered workflow agents across loan origination, fraud detection, and customer operations.
The pressure intensified in February 2026 when the U.S. Department of the Treasury released the Financial Services AI Risk Management Framework (FS AI RMF) alongside a standardized AI Lexicon - the first two of six planned resources developed through a public-private partnership between Treasury's Financial and Banking Information Infrastructure Committee and the Financial Services Sector Coordinating Council. The guidance arrives as institutions grapple with governing AI systems that no longer merely assist human decisions but act on them autonomously.
Background
For over a decade, U.S. banking supervisors relied on model risk guidance rooted in the Federal Reserve and OCC's SR 11-7 framework, originally issued in 2011. At its core, SR 11-7 requires three safeguards now becoming global standards: independent validation by objective parties, ongoing monitoring comparing outputs to actual outcomes, and documentation detailed enough for unfamiliar parties to understand a model's operation. That guidance, designed for statistical models, now applies by extension to machine-learning and generative AI systems deployed across complex banking workflows.
The compliance landscape grew significantly more complex as agentic AI expanded in scope. Existing model risk management frameworks, access controls, and audit trail requirements assumed that a human sits between the recommendation and the action. When AI agents operate end-to-end - reading data, drafting decisions, routing approvals, and filing outputs - that assumption breaks down, demanding a different governance architecture.
The Financial Stability Oversight Council (FSOC) elevated AI as a major focus area in its December 2024 Annual Report, explicitly identifying growing reliance on AI as both an opportunity and a mounting risk requiring enhanced oversight.
Details
On February 19, 2026, the U.S. Department of the Treasury released the FS AI RMF and AI Lexicon as part of the President's AI Action Plan, calling for risk-based governance to ensure safe AI deployment across the financial sector. Developed in coordination with more than 100 financial institutions, the Financial Services Sector Coordinating Council (FSSCC), and the Cyber Risk Institute (CRI), the framework introduces 230 control objectives spanning governance, data, model development, validation, monitoring, third-party risk, and consumer protection.
Data provenance sits at the center of the framework. It calls on organizations to build a centralized AI inventory, establish data quality and provenance standards, and define strict human oversight roles to prevent unchecked autonomous decisions. The FS AI RMF devotes substantial attention to third-party AI risk management - including due diligence on data provenance, secondary data use, and intellectual property - and emphasizes contractual provisions to protect against data misuse and secure data rights.
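In practice, a centralized AI inventory of the kind the framework describes pairs each model with an accountable owner, its data sources and their provenance, and flags for third-party or autonomous use. The sketch below is illustrative only - the field names, autonomy levels, and review policy are hypothetical, not drawn from the FS AI RMF's control objectives:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DataSource:
    """Provenance record for one dataset feeding a model."""
    name: str
    origin: str           # internal system or third-party vendor
    license_terms: str    # contractual data-use rights (secondary use, IP)
    last_verified: date   # when provenance was last re-attested

@dataclass
class AIInventoryEntry:
    """One row in a centralized AI inventory (hypothetical schema)."""
    model_id: str
    owner: str                 # accountable human role, e.g. business-line head
    autonomy_level: str        # "advisory" | "human-in-loop" | "autonomous"
    third_party: bool = False  # sourced from an external vendor
    data_sources: list[DataSource] = field(default_factory=list)

    def requires_enhanced_review(self) -> bool:
        # Illustrative policy: autonomous or third-party models trigger
        # the extra due diligence the framework emphasizes.
        return self.autonomy_level == "autonomous" or self.third_party
```

An inventory like this gives compliance teams a single place to answer the provenance questions the framework raises: where each dataset came from, under what terms, and who is accountable when the model acts.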
FINRA reinforced these expectations in its 2026 Annual Regulatory Oversight Report, published in December 2025. FINRA dedicated an entire section of the 2026 Report to generative AI, treating it as a supervised technology that demands the same compliance rigor as any critical system. The report highlights the importance of prompt and output logging, version tracking, and access controls for both human and non-human accounts. For AI agents that can act or transact, FINRA recommends narrowly scoped permissions, audit trails of actions, and explicit human checkpoints before execution.
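The combination FINRA describes - prompt and output logging, version tracking, narrowly scoped permissions, and a human checkpoint before execution - can be sketched in a few lines. The following Python illustration is a simplified assumption of how such controls might fit together; the action names and permission set are hypothetical:

```python
import time
import uuid

# Narrowly scoped permission set: the agent may draft and route,
# but anything else (e.g. executing a trade) needs human sign-off.
ALLOWED_ACTIONS = {"draft_decision", "route_approval"}

def log_event(log, agent_id, action, prompt, output, model_version):
    """Append an audit record capturing prompt, output, and model version."""
    log.append({
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,          # non-human account identity
        "model_version": model_version,  # version tracking
        "action": action,
        "prompt": prompt,
        "output": output,
    })

def perform_action(log, agent_id, action, prompt, output, model_version,
                   human_approved=False):
    """Gate an agent action behind scoped permissions and a human checkpoint."""
    if action not in ALLOWED_ACTIONS and not human_approved:
        # Out-of-scope action without explicit human approval: block and record.
        log_event(log, agent_id, f"BLOCKED:{action}", prompt, output, model_version)
        return False
    log_event(log, agent_id, action, prompt, output, model_version)
    return True
```

The key design point is that every path - permitted, blocked, or human-approved - lands in the same append-only log, so the audit trail reconstructs not just what the agent did but what it was prevented from doing.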
The European dimension adds binding obligations. The EU AI Act, effective August 2026, requires institutions deploying high-risk AI systems to maintain comprehensive traceability documentation, including training data, testing protocols, and decision logs. In banking and payments, AI systems used to evaluate creditworthiness or establish credit scores for natural persons are classified as "high-risk" under the Act. The EU's Digital Operational Resilience Act (DORA) makes AI activity logging a direct legal obligation for financial institutions operating in European markets.
The cost of non-compliance is rising. Financial institutions deploying AI without audit infrastructure face regulatory fines averaging $5-10 million for AI governance failures in 2024-2025, alongside reputational damage from unexplained decisions and an inability to defend them in customer disputes or litigation. Conversely, firms with robust audit architectures report 40-60% faster regulatory response cycles and a 30% reduction in model risk exceptions.
For cross-border institutions, compliance complexity compounds further. Operating across jurisdictions often requires designing products that meet the highest regulatory standards of each market. The Monetary Authority of Singapore has issued the FEAT principles (Fairness, Ethics, Accountability, and Transparency) as non-binding guidance, frequently referenced in global compliance frameworks.
Outlook
Treasury plans to release four additional resources covering governance, transparency, and data practices. These publications are expected to sharpen guidance on data lineage, model versioning, and consumer-facing AI decisioning. In the UK, firms should prepare for forthcoming guidance on audit trails and explainability, expected by the end of 2026. For enterprise AI teams, the window to retrofit governance onto deployed systems is narrowing. With both the EU AI Act and FINRA's supervisory expectations crystallizing before year-end, institutions building provenance infrastructure now are better positioned than those treating it as a compliance afterthought.
