
U.S. Regulators Replace 15-Year-Old Model Risk Rules as EU Deadline Looms

The Fed, OCC, and FDIC replaced SR 11-7 with new model risk guidance on April 17, 2026, months before the EU AI Act's August deadline for banks.


On April 17, 2026, the OCC, FDIC, and Federal Reserve jointly issued revised interagency guidance on model risk management (MRM), replacing frameworks that had governed AI and quantitative model oversight since 2011. Designated SR 26-2, the action arrives as financial institutions simultaneously face the August 2, 2026 full-compliance deadline under the EU AI Act's high-risk provisions, compressing the window for banks to build auditable AI data provenance controls.

Background

The Federal Reserve and OCC originally issued the Supervisory Guidance on Model Risk Management (SR 11-7) in April 2011; the FDIC adopted it in 2017. These standards became the global benchmark for AI governance in banking. The guidance established that model risk increases with complexity, uncertainty, breadth of use, and potential impact: characteristics inherent in modern AI systems.

The EU AI Act's high-risk provisions, covering credit scoring models, AML monitoring, and automated lending tools, take effect in August 2026 and carry extraterritorial reach: any institution serving the EU market falls within scope regardless of where it is headquartered. At the state level in the U.S., the Colorado AI Act, targeting high-risk AI systems in loan approvals and credit scoring, is set to take effect June 30, 2026, with penalties reaching $20,000 per violation. Texas enacted its own Responsible Artificial Intelligence Governance Act, effective January 1, 2026, and states including California and Utah have likewise stepped in amid the absence of comprehensive federal AI legislation.

Details

SR 26-2 supersedes SR 11-7 and the 2021 interagency statement on BSA/AML model risk management, reflecting supervisory experience and industry feedback accumulated over fifteen years as well as significant advances in modeling practices. The revised guidance reinforces that model risk management should be tailored to a bank's size, complexity, and risk profile, with regulators noting that practices effective for large institutions may not suit smaller banks. The guidance applies most directly to organizations with over $30 billion in total assets but is designed to scale based on actual model risk exposure rather than asset size alone.

The 2026 revision introduces risk-based tailoring: every model must occupy a tier reflecting inherent risk, exposure, and purpose. Tier-1 material models carry full lifecycle oversight, while lower tiers receive proportionate, lighter controls-but only if the tiering itself can be evidenced. Development, validation, deployment, monitoring, and retirement are now treated as a single governed chain, with supervisors expecting lineage across every link rather than snapshots at hand-off points.
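The evidenced tiering described above can be sketched as a minimal model inventory record. All tier labels, thresholds, field names, and the toy assignment rule below are illustrative assumptions for exposition; SR 26-2 does not prescribe any of them:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Tier(Enum):
    """Hypothetical tier labels; the guidance prescribes no specific scheme."""
    TIER_1 = 1  # material models: full lifecycle oversight
    TIER_2 = 2  # intermediate: proportionate controls
    TIER_3 = 3  # lower risk: lighter controls

@dataclass
class ModelRecord:
    """One inventory entry pairing a tier with written evidence for it."""
    model_id: str
    purpose: str
    inherent_risk: str    # e.g. "high" / "medium" / "low"
    exposure_usd: float   # approximate exposure the model influences
    tier: Tier
    tier_rationale: str   # the evidence supervisors would expect to see
    assigned_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def assign_tier(inherent_risk: str, exposure_usd: float) -> Tier:
    """Toy rule: high inherent risk or large exposure means Tier 1."""
    if inherent_risk == "high" or exposure_usd >= 1_000_000_000:
        return Tier.TIER_1
    if inherent_risk == "medium":
        return Tier.TIER_2
    return Tier.TIER_3

record = ModelRecord(
    model_id="credit-scoring-v4",
    purpose="retail credit underwriting",
    inherent_risk="high",
    exposure_usd=2_500_000_000,
    tier=assign_tier("high", 2_500_000_000),
    tier_rationale="Drives lending decisions; high inherent risk and exposure.",
)
```

The point of the `tier_rationale` field is the "only if the tiering itself can be evidenced" condition: a lighter control regime is defensible only when the record documents why the model qualified for it.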

On data provenance, controls must address lineage tracking, feature store governance, training data documentation, de-identification, and cross-border data flows. Effective implementation requires automated lineage logging, dataset version control, and rights-signal propagation at ingestion, with data sensitivity tagging at intake so training sets are privacy-aware by design. Regulators require precise documentation of where AI training data originated, how it was transformed, and who validated it, along with the ability to prove those facts as of any point in time. The European Central Bank's BCBS 239 interpretation demands "complete and up-to-date lineage at the data attribute level."
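The ingestion-time controls listed above (lineage logging, version control, rights-signal propagation, sensitivity tagging) can be sketched as a minimal provenance record. Every class, field, and identifier below is hypothetical, not drawn from any regulatory text or real product:

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageEvent:
    """One transformation applied to a dataset: what happened, by whom, when."""
    step: str       # e.g. "de-identification", "feature derivation"
    actor: str      # person or system that performed the step
    timestamp: str

@dataclass
class DatasetRecord:
    """Ingestion-time provenance: source, version, rights, sensitivity."""
    dataset_id: str
    source: str
    version: str
    rights_signals: list     # licensing/consent markers propagated at ingestion
    sensitivity_tags: list   # e.g. ["PII", "EU-resident"]
    lineage: list = field(default_factory=list)

    def add_event(self, step: str, actor: str) -> None:
        """Append a lineage entry rather than overwriting prior state."""
        self.lineage.append(LineageEvent(
            step=step, actor=actor,
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))

    def fingerprint(self) -> str:
        """Content hash so the record's state can be evidenced later."""
        payload = json.dumps(asdict(self), sort_keys=True, default=str)
        return hashlib.sha256(payload.encode()).hexdigest()

rec = DatasetRecord(
    dataset_id="loans-2026-q1",
    source="core-banking-export",
    version="v3",
    rights_signals=["internal-use-ok"],
    sensitivity_tags=["PII"],
)
rec.add_event("de-identification", "privacy-pipeline")
rec.add_event("feature derivation", "feature-store-job-118")
```

Appending events instead of mutating fields keeps the chain auditable end to end, matching the expectation of lineage across every link rather than snapshots at hand-off points; the content hash gives a checkable anchor for "proof of those facts as of any point in time."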

In practice, financial services firms must demonstrate not just who touched data but what transformations occurred, why decisions used specific fields, and how controls were applied, especially under BCBS 239 guidance and evolving supervisory expectations. Regulators may order algorithmic disgorgement where models are trained on improperly sourced data; for institutions relying on AI-driven underwriting or fraud detection, loss of a core model presents material operational disruption.

SR 26-2 notably places generative AI and agentic AI outside its explicit scope, requiring institutions to apply existing risk management practices to govern them. The OCC, Federal Reserve, and FDIC have indicated they plan to issue a separate request for information addressing banks' use of AI, including generative AI and agentic AI, in the near future.

A Q1 2026 Wolters Kluwer Banking Compliance AI Trend Report found that explainability and transparency (cited by 28.4% of respondents) and bias and discrimination were the most acute regulatory concerns among financial institutions. Over half of organizations still lack a systematic inventory of the AI systems they operate, a fundamental prerequisite for compliance.

Outlook

Canada's OSFI guideline on enterprise-wide model risk management, coming into full effect in May 2027, establishes a principles-based, risk-proportional approach mandating that financial institutions manage model risk across the entire enterprise regardless of a model's complexity or application. The April 2026 guidance is unlikely to be the last supervisory shift in this cycle, with agentic AI principles, third-party model oversight, and climate risk modeling all in motion across global regulators. Institutions still running proofs of concept risk falling behind, while those moving fast without governance face significant regulatory and operational challenges.