Major financial institutions are racing to build tiered AI data provenance frameworks as live deployments of autonomous banking workflow agents accelerate, and as regulators signal that data lineage, traceability, and access controls are no longer optional. The convergence of agentic AI entering production and tightening supervisory expectations has elevated data provenance from a technical concern to a board-level risk management imperative.
Background
The push for formal AI data provenance controls follows years of regulatory groundwork. In April 2011, the Federal Reserve and OCC jointly issued Supervisory Guidance on Model Risk Management (SR 11-7), later adopted by the FDIC in 2017, establishing standards for model documentation, independent validation, and ongoing monitoring. Those requirements are now being reinterpreted for AI systems that operate across multiple data sources and institutional boundaries.
The pace has since accelerated sharply. In February 2026, the U.S. Treasury released the Financial Services AI Risk Management Framework (FS AI RMF), translating policy principles into 230 actionable control objectives covering data quality, vendor risk, lifecycle governance, and human oversight. Then, on April 17, 2026, the Federal Reserve, FDIC, and OCC jointly replaced SR 11-7 and related issuances with a revised, risk-based Model Risk Management (MRM) framework, the most significant overhaul of those standards in 15 years.
The revised guidance introduces explicit risk-tiering requirements. Under the new MRM framework, every model must be assigned to a tier reflecting its inherent risk, exposure, and purpose, with Tier-1 material models subject to full lifecycle oversight and lower tiers receiving proportionate, lighter controls. Critically for data provenance, supervisors now expect lineage documentation across every link in the model lifecycle, from development and validation through deployment, monitoring, and retirement, rather than snapshots at handoff points.
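To make the continuous-lineage expectation concrete, the Python sketch below is a hypothetical illustration, not drawn from the guidance itself: the names (ModelRecord, LineageEvent, Stage) are invented. It appends a lineage event at each lifecycle stage and checks whether a model's record spans development through retirement rather than isolated handoff snapshots.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

# Lifecycle stages the revised guidance expects lineage to span
# (development through retirement, per the passage above).
class Stage(Enum):
    DEVELOPMENT = "development"
    VALIDATION = "validation"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"
    RETIREMENT = "retirement"

@dataclass
class LineageEvent:
    stage: Stage
    dataset_id: str   # upstream data source consumed at this stage
    actor: str        # who performed the step
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

@dataclass
class ModelRecord:
    model_id: str
    tier: int         # 1 = material model, full lifecycle oversight
    lineage: list[LineageEvent] = field(default_factory=list)

    def log(self, stage: Stage, dataset_id: str, actor: str) -> None:
        """Append a lineage event: a running record, not a snapshot."""
        self.lineage.append(LineageEvent(stage, dataset_id, actor))

    def covers_full_lifecycle(self) -> bool:
        """True only if every lifecycle stage has at least one event."""
        return {event.stage for event in self.lineage} == set(Stage)
```

Under this sketch, a Tier-1 model would be expected to pass covers_full_lifecycle() before review, while lower tiers could apply a looser check, consistent with the proportionate controls the framework describes.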
In parallel, NIST released a preliminary draft AI Cybersecurity Risk Profile in December 2025, harmonizing more than 2,500 regulatory expectations from the Federal Reserve, OCC, and FDIC into a common set of diagnostic controls, with a full public draft expected later in 2026.
Details
The urgency intensifies as AI workflow agents go live across banking operations. Institutions including JPMorgan Chase are exploring AI agents to detect fraud, automate loan approvals, and streamline legal and compliance processes. According to American Banker, the banking industry is entering "the age of agentic AI," with autonomous digital agents that access financial data, transfer funds, and make decisions on customers' behalf, yet regulators have not established dedicated agent-specific oversight frameworks.
Industry analysts are calling for a "know your agent" standard analogous to know your customer (KYC) rules. Writing in American Banker, Zor Gorelov of the Klaros Group argued that every AI agent accessing banking systems must be authenticated ("its identity, provenance and ownership are verified") as well as authorized, auditable, and aligned with fiduciary standards. Without such controls, data flowing into multi-agent orchestration pipelines may lack the traceability regulators expect under revised MRM guidance.
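As a rough sketch of how such a gate might be enforced at a system boundary (hypothetical Python; admit_agent, AgentCredential, and AUTHORIZED_SCOPES are invented names, and Gorelov's article proposes the principle, not this code), the check below refuses any agent whose identity, provenance, or ownership is unverified, confines admitted agents to an authorized scope, and writes every decision to an audit trail:

```python
from dataclasses import dataclass

@dataclass
class AgentCredential:
    agent_id: str
    identity_verified: bool    # who the agent is
    provenance_verified: bool  # where it came from / who built it
    ownership_verified: bool   # which institution answers for it

# Hypothetical scopes an agent may be authorized to exercise.
AUTHORIZED_SCOPES = {"read_transactions", "flag_fraud"}

def admit_agent(cred: AgentCredential, requested_scope: str,
                audit_log: list[str]) -> bool:
    """Gate in the spirit of a 'know your agent' check: authenticated
    (identity, provenance, ownership all verified), authorized for the
    requested scope, and auditable via the appended log entry."""
    authenticated = (cred.identity_verified
                     and cred.provenance_verified
                     and cred.ownership_verified)
    authorized = requested_scope in AUTHORIZED_SCOPES
    decision = authenticated and authorized
    audit_log.append(
        f"agent={cred.agent_id} scope={requested_scope} admitted={decision}"
    )
    return decision
```

The audit log entry written on every call, admitted or refused, is what would give examiners the traceability the revised MRM guidance expects.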
Legacy infrastructure compounds the challenge. According to Deloitte's 2026 Banking and Capital Markets Outlook, AI implementation across banks remains "throttled by brittle and fragmented data foundations, mounting compliance demands, outdated legacy systems, and internal resistance to change," with many AI initiatives stuck in isolated proofs of concept marked by "weak governance, duplication, and uneven impact." A Q1 2026 Wolters Kluwer Banking Compliance AI Trend Report found that explainability and transparency (cited by 28.4% of respondents) and bias and discrimination were the most acute regulatory concerns among financial institutions.
Compliance costs are escalating. AI compliance and implementation spending in banking is projected to rise from $6 billion in 2024 to $9 billion in 2025, and potentially reach $85 billion by 2030, according to Statista. Leading banks are responding by building integrated AI control frameworks that align governance, compliance, and data management, according to FTI Consulting. Many institutions are establishing AI Centers of Excellence that bring together leaders from IT, risk, and finance divisions to ensure models are deployed responsibly and in alignment with regulatory expectations.
Vendor governance is also emerging as a flashpoint. Under interagency guidance on third-party relationships, banks remain fully accountable for activities conducted through vendors, including AI-driven services. The New York Department of Financial Services, in an October 2025 industry letter, directed supervised entities to manage cybersecurity and operational risks across the full lifecycle of third-party service providers that use advanced technologies such as AI in ways that affect customers or core operations.
Outlook
The regulatory trajectory points toward further tightening. The Colorado AI Act, signed in 2024 and set to take effect on June 30, 2026, requires developers of high-risk AI systems, including those making consequential decisions about customers, to meet documentation and disclosure duties, while deployers must implement risk management programs and impact assessments. More broadly, regulators are signaling that institutions separating AI deployment from AI governance should expect scrutiny, likely triggered by fair lending findings or model risk management exam failures, according to Wolters Kluwer. As GenAI and agentic systems formally inherit the principles of the revised MRM framework, institutions that cannot demonstrate end-to-end data lineage across live workflow agents face the prospect of supervisory action before year-end.
