The U.S. Department of the Treasury has released the most concrete federal guidance to date governing how artificial intelligence must be managed, evaluated, and deployed within financial institutions, a move that directly affects banks running AI-powered workflow agents across back-office and customer-facing operations. Released on February 19, 2026, the Financial Services AI Risk Management Framework (FS AI RMF), together with a companion AI Lexicon, establishes unified data-provenance, audit-trail, and risk-control requirements for AI systems operating in the financial sector. Developed through a public-private partnership involving more than 100 financial institutions and government agencies, the package sets a cross-agency baseline that compliance teams, enterprise architects, and fintech vendors can no longer treat as optional reading.
Background
Prior to this release, AI governance in U.S. banking rested on a patchwork of agency-specific guidance. The Federal Reserve and OCC jointly issued the foundational Model Risk Management supervisory guidance, known as SR 11-7, in April 2011, and the FDIC adopted it in 2017. That framework established requirements for independent validation, ongoing monitoring, and detailed documentation, but it was designed for statistical models, not autonomous AI workflow agents capable of chaining decisions across enterprise processes.
As financial institutions increasingly relied on AI for decision-making, customer engagement, and operations, inconsistent terminology and uneven risk management practices created governance and oversight challenges. The problem became acute as accelerated AI adoption introduced complex risks not clearly addressed by traditional frameworks, particularly for emerging capabilities such as agentic systems.
The FS AI RMF was developed to close those gaps. The resources were produced by the Artificial Intelligence Executive Oversight Group (AIEOG), a public-private partnership between Treasury's Financial and Banking Information Infrastructure Committee (FBIIC) and the nonprofit Financial Services Sector Coordinating Council. The FBIIC, which consists of 18 federal and state regulatory organizations, has operated since 9/11 under the President's Working Group on Financial Markets and includes the Federal Deposit Insurance Corp. and the Federal Reserve Board.
Details
Developed in coordination with more than 100 financial institutions, the Financial Services Sector Coordinating Council, and the Cyber Risk Institute, the framework introduces 230 control objectives across governance, data, model development, validation, monitoring, third-party risk, and consumer protection. These controls are not abstract principles. Many map to specific system behaviors, ownership assignments, and evidence artifacts expected to withstand audit and supervisory review.
On data provenance specifically, the framework calls for organizations to build a centralized AI inventory, establish data quality and provenance standards, and define strict human oversight roles to prevent unchecked autonomous decisions. AI risk often cuts across organizational functions such as vendor procurement, data licensing and provenance, subcontractor access, model updates, and incident communications, and the FS AI RMF devotes substantial attention to third-party AI risk management, including due diligence covering data provenance, secondary data use, and intellectual property considerations.
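The framework describes the inventory requirement at the control level and does not prescribe a data schema. As a rough sketch only, the following Python snippet models the kind of record an institution might keep for one inventoried AI system, pairing provenance fields with an accountable human oversight role; every class and field name here is an assumption for illustration, not terminology drawn from the FS AI RMF.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical inventory record; the FS AI RMF does not specify field names.
@dataclass
class AIInventoryEntry:
    system_id: str                  # unique identifier in the centralized AI inventory
    business_owner: str             # accountable internal owner, not the vendor
    vendor: Optional[str]           # third-party provider, if any
    data_sources: list[str]         # upstream datasets with documented provenance
    provenance_reviewed: date       # last date lineage evidence was independently verified
    human_oversight_role: str       # who can intervene in or halt autonomous decisions
    audit_artifacts: list[str] = field(default_factory=list)  # evidence retained for examiners

entry = AIInventoryEntry(
    system_id="loan-triage-agent-01",
    business_owner="consumer-lending-risk",
    vendor="example-fintech",
    data_sources=["core-banking.applications", "bureau-feed.tradelines"],
    provenance_reviewed=date(2026, 3, 1),
    human_oversight_role="credit-officer-on-call",
)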
Industry experts have called for the guidance to "mandate real-time behavioral guardrails and automated circuit breakers that disconnect AI agents when their outputs deviate from defined ethical or financial logic boundaries." The framework addresses this by explicitly requiring financial institutions to design system shutdown and deactivation mechanisms for the rapid, controlled disengagement of systems exhibiting inconsistent performance.
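The framework states that requirement as a control objective rather than an implementation. One possible shape for such a guardrail, sketched in Python purely as an assumption about how an institution might build it, is a circuit breaker that blocks out-of-bounds actions and disengages the agent after repeated anomalies, routing it to human oversight; the thresholds, names, and anomaly-counting logic below are all illustrative.

# Illustrative circuit-breaker pattern; thresholds and names are assumed, not prescribed.
class AgentCircuitBreaker:
    def __init__(self, max_exposure_usd: float, max_consecutive_anomalies: int = 3):
        self.max_exposure_usd = max_exposure_usd
        self.max_consecutive_anomalies = max_consecutive_anomalies
        self._anomaly_streak = 0
        self.tripped = False

    def review(self, proposed_action: dict) -> bool:
        """Return True if the agent's proposed action may proceed; otherwise block it."""
        if self.tripped:
            return False
        within_bounds = proposed_action.get("exposure_usd", 0.0) <= self.max_exposure_usd
        if within_bounds:
            self._anomaly_streak = 0
            return True
        self._anomaly_streak += 1
        if self._anomaly_streak >= self.max_consecutive_anomalies:
            self.tripped = True  # controlled disengagement: hand the workflow to a human reviewer
        return False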
For banks relying on third-party fintech platforms to power workflow automation, the regulatory message is direct: regulators will not accept "the fintech partner owns the model" as a sufficient answer. The data quality discipline required for AI governance is more demanding than many compliance teams anticipated, and internal audit teams now need to independently test data lineage rather than simply accept business-unit attestations.
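How that independent testing is performed is left to institutions. A minimal, purely illustrative Python sketch of one approach, with the fingerprinting technique and all names assumed rather than taken from the framework, recomputes a hash over the attested source extract and compares it to the fingerprint recorded when the training or decision data was assembled, rather than relying on a business-unit attestation alone.

import hashlib

# Hypothetical lineage spot-check; the framework does not specify a testing procedure.
def lineage_matches(source_rows: list[str], recorded_fingerprint: str) -> bool:
    digest = hashlib.sha256("\n".join(sorted(source_rows)).encode("utf-8")).hexdigest()
    return digest == recorded_fingerprint

source_rows = ["acct=1001,limit=5000", "acct=1002,limit=12000"]
attested = hashlib.sha256("\n".join(sorted(source_rows)).encode("utf-8")).hexdigest()
assert lineage_matches(source_rows, attested)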
On terminology alignment, a persistent obstacle to cross-functional governance, the AI Lexicon establishes common definitions for key AI concepts, capabilities, and risk management terms. Its stated goal is to address inconsistencies in terminology and facilitate clear, consistent communication across legal, technical, and business functions.
The two resources are non-binding and fall into the category of "soft law" for risk management: they do not create new legal obligations but aim to standardize how financial institutions address and document AI risk governance. However, the FS AI RMF is not expected to remain theoretical. In supervisory environments, frameworks become examination scaffolding, much as Federal Financial Institutions Examination Council (FFIEC) standards shape security and IT examinations in financial services.
Outlook
For companies deploying AI in regulated financial services, these resources are likely to become an important reference in examinations, internal audit expectations, third-party oversight, and contract negotiations, even where no regulator expressly incorporates them. The remaining four resources in the series will cover governance and accountability, data integrity and security, fraud and digital identity, and operational resilience, indicating that the current release is the opening installment of a phased regulatory build-out. Institutions that proactively align architecture, ownership, and evidence artifacts to this framework will be materially better positioned than those encountering the mapping exercise for the first time under supervisory pressure.
