Federal Banking Agencies Overhaul Model Risk Rules, Target AI Governance Gap

OCC, Federal Reserve, and FDIC replace 15-year-old SR 11-7 model risk framework with SR 26-2, a principles-based approach targeting vendor AI governance.


The Office of the Comptroller of the Currency (OCC), the Federal Reserve, and the Federal Deposit Insurance Corporation (FDIC) jointly issued revised model risk management (MRM) guidance on April 17, 2026, replacing a framework that had governed U.S. banking institutions for 15 years. The revised guidance supersedes the agencies' previous supervisory guidance on MRM, the 2021 Interagency Statement on Model Risk Management for Bank Systems Supporting Bank Secrecy Act/Anti-Money Laundering Compliance, and certain additional OCC-specific guidance. The action marks the most significant recalibration of MRM supervision since the post-financial-crisis era.

Background

The OCC and the Federal Reserve Board jointly issued the original guidance in April 2011 in response to banks' increasing use of models. The FDIC adopted the guidance in June 2017 with technical conforming changes. The original framework, known as SR 11-7, established that model risk increases with complexity, uncertainty, breadth of use, and potential impact. It required three fundamental safeguards: independent validation by objective parties, ongoing monitoring comparing outputs to actual outcomes, and documentation detailed enough for unfamiliar parties to understand a model's operation.

Model use within banking and financial services has continued to grow in complexity and scope. Advances in technology and increased competition have driven banking organizations to adopt innovative approaches to improve efficiencies, mitigate risks, and maximize profits. Against that backdrop, the 2011 rules were broadly viewed as ill-suited to the AI-driven decision systems now embedded in lending, risk assessment, and fraud detection. American Bankers Association President and CEO Rob Nichols acknowledged that "in many cases, banks' use of artificial intelligence is evaluated under this framework, and prior guidance predated significant technological developments, including generative AI."

Key Provisions of the Revised Guidance

The revised guidance, designated SR 26-2 by the Federal Reserve, represents a shift toward a more flexible, principles-based approach to MRM. It clarifies that MRM should be tailored to the size, complexity, and model risk profile of a banking organization. The guidance highlights sound principles for effective model development and use, model validation and monitoring, and governance and controls. It also addresses considerations specific to vendor and other third-party products, including validation of those products.

The updated guidance is primarily directed at banking organizations with over $30 billion in total assets, though it may apply to smaller institutions with significant or complex model use. The guidance is designed to scale based on actual model risk exposure rather than asset size alone.

On vendor oversight, a critical concern as banks increasingly source AI-driven models from external providers, the guidance maintains that accountability cannot be delegated. Because certain components may be proprietary, banking organizations may not receive the underlying code, data, or methodology they would have if a model were developed internally. Nevertheless, MRM principles remain applicable, and validation of vendor products, whether performed by internal or outside parties, is an important element of sound model risk management.

The guidance also introduces greater flexibility around validation independence. It relaxes the structural independence requirement for validation teams, shifting the standard from organizational separation to quality of review, with emphasis on "rigorous and effective" challenge rather than reporting-line separation.

Notably, the revised guidance explicitly excludes generative AI and agentic AI models from its scope, designating these technologies as "novel and rapidly evolving," while confirming that it applies to traditional statistical and quantitative models as well as non-generative, non-agentic AI models. For tools and systems outside its scope, including generative and agentic AI, the guidance recommends that banking organizations apply their broader risk management and governance practices to determine appropriate controls; such tools are not exempt from supervisory expectations but remain subject to the general risk management and governance standards that apply to banking organizations.

The guidance does not set forth enforceable standards or prescriptive requirements, and the agencies have stated that non-compliance will not result in supervisory criticism. However, supervisory action may result from any violations of law or unsafe or unsound practices stemming from insufficient management of model risk.

A parallel concern flagged by international supervisors is vendor concentration. Regulators have warned of potential systemic concentration risk, where financial institutions become overly dependent on a small number of third-party AI service providers, creating a single point of failure for the financial system.

Outlook

The agencies have indicated they will continue to consider additional measures to address MRM consistent with broader supervisory goals. They plan to issue a request for information (RFI) in the near future addressing MRM generally and considering, in particular, banks' use of AI-including generative AI, agentic AI, and AI-based models. That forthcoming RFI is expected to shape the next phase of AI-specific regulatory standards for the sector.

While the guidance is nonbinding, it will inform examiner expectations around model inventories, materiality assessments, validation and monitoring practices, and governance for both internal and vendor models. Banking organizations, particularly those above the $30 billion threshold, should review their current frameworks against the guidance and consider where enhancements to proportionality, documentation, governance, and oversight may be warranted. For institutions with significant third-party AI deployments, the central regulatory message remains unchanged: accountability cannot be outsourced. If a vendor's model is too opaque to validate, the institution must demand transparency or pursue alternatives.