U.S. Regulators Overhaul AI Model Risk and Data Provenance Rules for Banks and Health Systems

U.S. banking and health regulators overhaul AI model risk and data provenance rules in 2025-2026, reshaping governance, vendor contracts, and compliance.


Federal banking regulators have issued the most significant update to model risk management (MRM) standards in 15 years, while health agencies advance parallel data provenance and lifecycle requirements, collectively reshaping how AI-driven workflows must be governed across two of the most regulated sectors in the U.S. economy.

Background

The dual-sector regulatory shift reflects the scale of AI deployment now underway. AI adoption in financial services has surged, with spending projected to reach an estimated $97 billion by 2027. In 2025, over 85% of financial firms were actively applying AI in fraud detection, IT operations, digital marketing, and advanced risk modeling. In healthcare, 46% of U.S. healthcare organizations are currently implementing generative AI technologies. That acceleration has outpaced governance frameworks written for a pre-AI era, prompting regulators to act on both fronts.

On the banking side, the Federal Reserve's SR 11-7, issued in 2011, had remained the definitive statement on MRM regulation, even as the current administration pursued a broader deregulatory pivot. On April 17, 2026, the Federal Reserve, FDIC, and OCC replaced SR 11-7, OCC 2011-12, FIL-22-2017, and related BSA/AML issuances with a more risk-based, principles-driven MRM framework. In healthcare, the FDA published a draft guidance on AI-enabled medical devices on January 6, 2025, marking the agency's most comprehensive statement to date on AI device lifecycle obligations, while HHS released its Artificial Intelligence Strategy on December 4, 2025, outlining governance and responsible AI use across departmental operations.

Details

Banking: Revised MRM Guidance

The OCC, Federal Reserve, and FDIC issued updated interagency guidance to clarify MRM principles, establish a risk-based approach, and rescind prior issuances. The updated guidance clarifies that MRM practices should be risk-based, tailored, and commensurate with a banking organization's size, complexity, and extent of model use.

The revised guidance explicitly excludes generative and agentic AI models from its scope, on the basis that these technologies are "novel and rapidly evolving," but confirms that it applies both to traditional statistical and quantitative models and to non-generative, non-agentic AI models. Critically, the guidance recommends that banking organizations apply their broader risk management and governance practices to determine appropriate controls for tools not covered, including generative and agentic AI models, which remain subject to risk management expectations.

On data lineage, the guidance treats development, validation, deployment, monitoring, and retirement as one governed chain. Supervisors expect lineage across every link, not snapshots at hand-off points. Performance drift, data drift, and stability must be tracked continuously, with thresholds mapped to materiality. For vendor-sourced models, banking organizations may not receive the underlying code, data, or methodology they would have if a model were developed internally. Nevertheless, MRM principles remain applicable, and validation of vendor products is an important element.
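The expectation of continuous tracking of performance drift and data drift, with thresholds mapped to materiality, can be sketched in code. The population stability index (PSI) below is one common drift metric; the threshold values and the function names are illustrative assumptions, not supervisory standards.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a baseline (development) distribution to production data.

    Returns the PSI; higher values indicate greater data drift.
    """
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover out-of-range production values
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0) for empty bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

def drift_severity(psi):
    """Map a PSI value to a materiality tier (cutoffs are assumptions)."""
    if psi < 0.10:
        return "stable"
    elif psi < 0.25:
        return "monitor"   # moderate drift: investigate
    return "material"      # significant drift: escalate per MRM policy
```

In practice, a bank would run such a check on a schedule against each monitored input feature and model score, with the materiality cutoffs set by its own MRM policy rather than the illustrative values shown here.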

The OCC, Federal Reserve Board, and FDIC plan to issue a request for information in the near future addressing MRM generally and considering banks' use of AI, including generative AI, agentic AI, and AI-based models.

Healthcare: FDA and HHS Expectations

The FDA published a draft guidance on January 6, 2025, entitled "Artificial Intelligence-Enabled Medical Devices: Total Product Lifecycle." If finalized, it will be the first comprehensive FDA guidance specifically for AI devices, providing recommendations across the entire lifecycle, from early design through validation, premarket submission, and postmarket monitoring. Key themes include transparency (such as disclosing algorithm design and data sources), bias mitigation using representative training data, and performance monitoring to track real-world safety and effectiveness.

Regulators seek to trace the origin of training data: how it was labeled, validated, and updated, and whether it represents the full range of patients it is meant to serve. Rigorous standards for training data quality, provenance, and representativeness apply, with the guidance acknowledging that AI system performance depends fundamentally on data quality.
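As a concrete illustration of the provenance questions regulators are asking, a minimal record structure might look like the following. The schema and field names are assumptions for illustration; neither the FDA draft guidance nor HHS prescribes a specific format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrainingDataProvenance:
    """Illustrative provenance record for an AI device's training data.

    All field names are hypothetical; they mirror the questions in the
    guidance (origin, labeling, validation, updates, representativeness).
    """
    source: str            # where the data originated (site, registry, EHR export)
    collected: tuple       # (start, end) of the collection period
    labeling_method: str   # e.g. expert annotation, consensus read
    validation: str        # how labels and data quality were verified
    last_updated: date
    demographics: dict = field(default_factory=dict)  # subgroup -> record count

    def coverage_gaps(self, intended_population):
        """Return intended subgroups absent from the training data."""
        return [g for g in intended_population if self.demographics.get(g, 0) == 0]
```

A record like this makes the representativeness question answerable mechanically: comparing the documented demographics against the device's intended patient population surfaces gaps before submission.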

On third-party risk, when protected health information is involved, healthcare entities must establish appropriate business associate agreements with AI vendors and implement robust data protection protocols, including encryption, strict access controls, regular security assessments, and incident response plans. Vendor-supplied AI features now carry regulatory implications; model lifecycle management, drift monitoring, and bias detection all require discipline.
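Bias detection across patient subgroups can start with something as simple as comparing positive-prediction rates between groups. The sketch below uses a demographic-parity gap with an assumed 10-percentage-point flag threshold; both the metric choice and the threshold are illustrative, not regulatory requirements.

```python
def subgroup_rates(outcomes):
    """outcomes: dict mapping subgroup name -> list of binary predictions."""
    return {g: sum(preds) / len(preds) for g, preds in outcomes.items()}

def parity_gap(outcomes):
    """Max difference in positive-prediction rate between any two subgroups."""
    rates = subgroup_rates(outcomes).values()
    return max(rates) - min(rates)

def flag_bias(outcomes, threshold=0.10):
    """Flag when the gap exceeds the threshold (0.10 is an assumption)."""
    return parity_gap(outcomes) > threshold
```

Production systems would use richer fairness metrics (e.g., equalized odds) and confidence intervals, but even this minimal check demonstrates the discipline the guidance calls for: defined subgroups, a defined metric, and a defined escalation trigger.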

These actions reflect a coordinated approach reinforced by HHS' Artificial Intelligence Strategy, released on December 4, 2025. The strategy outlines how HHS intends to integrate AI across departmental operations, including research and public health programs, with an emphasis on governance, responsible use, workforce readiness, and operational capability.

Outlook

The cross-sector governance push carries direct implications for AI vendor contracts. The new banking guidance represents a meaningful recalibration of prudential regulators' expectations, with a more concise, principles-based approach that offers less specificity on how banks should execute their risk management responsibilities.

However, federal and state policymakers are approaching AI regulation from different perspectives. States focus on establishing guardrails and promoting transparency, while the federal government maintains a broadly deregulatory posture, creating compliance complexity for institutions operating across jurisdictions. Stakeholders should anticipate continued incremental federal action alongside sustained state-level activity, with healthcare AI policy in particular likely to evolve through agency action and congressional oversight rather than comprehensive, sector-specific legislation.