
U.S. Banking Regulators Overhaul Model Risk Rules, Signal AI Data Governance Reckoning

OCC, Fed, and FDIC overhaul model risk guidance in April 2026, explicitly excluding agentic AI and signaling imminent rules on AI data provenance and lineage.

Federal banking regulators have issued the first major overhaul of model risk management standards in fifteen years, replacing the framework that has governed how banks document, validate, and audit quantitative models since 2011, while signaling that separate, dedicated rules for generative and agentic AI, including requirements around data provenance and lineage, are imminent.

On April 17, 2026, the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System, and the Federal Deposit Insurance Corporation (FDIC) jointly issued Bulletin 2026-13, "Model Risk Management: Revised Guidance," rescinding three prior OCC bulletins and the Model Risk Management booklet of the Comptroller's Handbook. The update establishes a risk-based, proportionate framework aligned to each institution's size and model complexity, with a significant caveat: generative AI and agentic AI models are explicitly excluded from the guidance's scope, with the agencies committing to issue a forthcoming request for information (RFI) that will address AI, including generative and agentic AI, directly.

Background

The prior framework, known as SR 11-7, was issued jointly by the OCC and Federal Reserve in April 2011 and adopted by the FDIC in 2017. The new interagency guidance clarifies model risk management principles, establishes a risk-based approach, and rescinds that prior framework. Industry groups had long criticized SR 11-7 as prescriptive and poorly calibrated for modern AI, arguing it disrupted bank operations, created a compliance bureaucracy, hampered effective AI use, and impeded innovation in areas from lending to anti-money laundering monitoring to cybersecurity.

The revision comes as AI deployment in banking has accelerated sharply. AI is increasingly embedded in financial services operations, transforming consumer interactions through chatbots and targeted marketing as well as core functions like underwriting, credit decisions, fraud detection, and fair lending. Financial institutions also rely on AI to analyze consumer complaints, manage customer relationships, and inform business strategies. At the same time, regulatory compliance concerns persist globally, with institutions cautious about generative AI due to questions around customer acceptance, overreliance on third-party model providers, and regulatory uncertainty.

Internationally, the EU AI Act's obligations for general-purpose AI models took effect in August 2025, requiring providers of foundation models to publish detailed summaries of training data while downstream users must ensure their systems do not fall into prohibited categories. The U.S. approach, by contrast, has remained principles-based and non-prescriptive.

Details

Bulletin 2026-13 applies most directly to banking organizations with over $30 billion in total assets but may also be relevant to smaller institutions with significant model risk exposure. Critically, the guidance does not set forth enforceable standards or prescriptive requirements; non-compliance will not result in supervisory criticism against a banking organization.

The new guidance retains the three core validation components (conceptual soundness, outcomes analysis, and ongoing monitoring) but treats each more concisely. On governance structure, the guidance states that the quality of the validation process "depends on the rigor and effectiveness of the review rather than on organizational structure," a notable shift from the prior guidance's detailed treatment of reporting-line separation and explicit authority to challenge developers.

The explicit carve-out of agentic and generative AI is the most consequential element for enterprise AI deployment. Because generative AI and agentic AI models are novel and rapidly evolving, they fall outside the guidance's scope. Nonetheless, a banking organization's risk management and governance practices should guide the determination of appropriate controls for any tools, processes, or systems not covered. The OCC, Federal Reserve Board, and FDIC plan to issue a request for information in the near future addressing model risk management generally and considering, in particular, banks' use of AI, including generative AI, agentic AI, and AI-based models.

On data lineage (the ability to trace where AI training and inference data originates, how it is transformed, and how it influences model outputs), existing supervisory expectations remain operative but are not yet codified for AI workflows. Regulators emphasize demonstrable lineage, timely reconciliation, and governance evidence. In practice, financial services firms must show not just who accessed data, but what transformations occurred, why decisions used specific fields, and how controls were applied, especially under BCBS 239 guidance and evolving supervisory expectations. BCBS 239 requires accuracy of data and demonstrable control over its origins and transformations before use; the European Central Bank further clarifies that lineage must be complete, covering all data flows across all systems, and must reach the attribute level.
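To make the attribute-level requirement concrete, lineage of the kind BCBS 239 and the ECB describe can be captured as a record that names, for each derived field, its source system, the upstream fields it depends on, and every transformation applied. This is a minimal illustrative sketch; the class and field names are hypothetical and not drawn from any regulatory text or vendor product.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal representation of attribute-level lineage:
# each derived attribute records the system of record it came from,
# the upstream fields feeding it, and the ordered transformations
# applied before use.
@dataclass
class AttributeLineage:
    attribute: str               # derived field, e.g. a model input
    source_system: str           # system of record for the raw data
    source_fields: list          # upstream attributes feeding this one
    transformations: list = field(default_factory=list)  # ordered steps

    def is_complete(self) -> bool:
        # ECB-style completeness check: lineage must name a source
        # system and at least one source field for every attribute.
        return bool(self.source_system and self.source_fields)

lineage = AttributeLineage(
    attribute="pd_estimate",
    source_system="loan_origination",
    source_fields=["fico_score", "ltv_ratio"],
    transformations=["null-imputation", "logistic-model-v3"],
)
print(lineage.is_complete())  # True
```

In practice such records would be generated by a lineage or catalog tool rather than by hand, but the completeness check, that every attribute can be traced to a named source, is the property supervisors are asking firms to demonstrate.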

SEC registrants should expect continued government scrutiny of AI-related disclosures, governance, and controls. Firms must provide transparent and accurate disclosures on their use of AI, including its impact on financial performance and potential risks. Registrants should document AI-related governance and controls and conduct periodic monitoring and testing of key processes.

For vendor selection, the updated framework signals that institutions deploying AI through third-party providers must govern those relationships under existing risk management principles. When evaluating vendors, institutions are advised to demand field-level lineage, semantic parsing, auditable diagram exports, and policy enforcement hooks that integrate with data protection tools. Hybrid architectures that keep sensitive metadata on-premises while leveraging cloud analytics often strike a pragmatic balance in regulated banking environments.
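A "policy enforcement hook" of the kind described above can be sketched as a callback that a lineage or data-protection tool invokes before a field leaves a governed boundary. The interface below is a hypothetical illustration, assuming a simple callback registry, and does not represent any specific vendor's API.

```python
# Hypothetical sketch of a policy enforcement hook: a lineage/catalog
# tool calls check_policies() before exporting or sharing a field,
# and any registered policy can veto the transfer.
from typing import Callable

Policy = Callable[[str, str], bool]  # (field_name, destination) -> allowed?

_policies: list[Policy] = []

def register_policy(policy: Policy) -> None:
    _policies.append(policy)

def check_policies(field_name: str, destination: str) -> bool:
    # Every registered policy must allow the transfer.
    return all(policy(field_name, destination) for policy in _policies)

# Example policy: sensitive fields may not leave on-premises systems,
# mirroring the hybrid architecture mentioned above.
SENSITIVE = {"ssn", "account_number"}
register_policy(lambda f, dest: not (f in SENSITIVE and dest == "cloud"))

print(check_policies("ssn", "cloud"))        # False
print(check_policies("ltv_ratio", "cloud"))  # True
```

The design point is that policy decisions are evaluated at the boundary where data moves, which is what lets a hybrid architecture keep sensitive metadata on-premises while still feeding cloud analytics.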

Outlook

A future interagency request for information on AI, including generative AI and agentic AI, is planned. That RFI is expected to address how agentic workflow systems (increasingly deployed across core banking, risk analytics, and customer engagement) should document data provenance, maintain audit trails, and integrate lineage controls into incident response. Agentic AI systems that act rather than merely answer will stress-test human oversight rules, and privacy risks will grow as more sensitive work flows into AI tools. Institutions that begin mapping data sources and validating lineage accuracy now will be better positioned when prescriptive AI-specific standards arrive, whether from U.S. prudential regulators or as part of the broader international convergence already under way.
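The preparatory step recommended above, mapping data sources and validating lineage accuracy, amounts to reconciling recorded lineage metadata against the institution's inventory of approved systems of record. A minimal sketch, assuming lineage is recorded as a simple attribute-to-source mapping (the data shapes here are illustrative, not from any regulatory text):

```python
# Hypothetical reconciliation check: compare the source systems named
# in recorded lineage metadata against an inventory of approved systems
# of record, flagging unapproved (e.g. retired) sources and approved
# sources that no lineage record references.
def reconcile(lineage_records: dict, approved_sources: set) -> dict:
    unapproved = {attr: src for attr, src in lineage_records.items()
                  if src not in approved_sources}
    unused = approved_sources - set(lineage_records.values())
    return {"unapproved_sources": unapproved, "unused_sources": unused}

recorded = {"pd_estimate": "loan_origination",
            "fraud_flag": "legacy_crm"}          # legacy_crm was retired
approved = {"loan_origination", "core_banking"}

report = reconcile(recorded, approved)
print(report["unapproved_sources"])  # {'fraud_flag': 'legacy_crm'}
```

Running this kind of reconciliation on a schedule, and keeping the resulting reports, is one way to produce the "governance evidence" and "timely reconciliation" that supervisors already emphasize, ahead of any AI-specific codification.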