Federal banking regulators issued sweeping revisions to model risk management governance for US banks on April 17, 2026, while healthcare agencies and state governments moved simultaneously to formalize AI data provenance requirements for health systems. Together, these actions mark a convergent shift toward enforceable AI data governance across two of the most heavily regulated sectors in the US economy.
Background
The regulatory action follows years of escalating AI adoption with limited sector-specific oversight. According to a 2025 report from the US Government Accountability Office, the number of generative AI use cases submitted to the Office of Management and Budget by selected federal agencies increased nine-fold from 2023 to 2024, with 41% of those cases submitted by the Department of Health and Human Services. In parallel, 46% of US healthcare organizations are currently implementing generative AI technologies, according to legal analysis published in early 2026. Yet legal experts have noted that the vast majority of medical AI in use has not been reviewed by a federal or state regulator, creating significant liability gaps for institutions deploying AI without formal oversight frameworks.
For banks, the prior supervisory baseline - the 2011 interagency guidance on model risk management - had not been substantively updated in over a decade, despite profound changes in the modeling technologies banks now rely on for credit decisions, fraud detection, and regulatory compliance.
Details
On April 17, 2026, the Office of the Comptroller of the Currency (OCC), the Federal Reserve Board, and the Federal Deposit Insurance Corporation (FDIC) jointly issued revised interagency guidance on model risk management, superseding the agencies' original 2011 framework and the 2021 Interagency Statement on Model Risk Management for Bank Secrecy Act and Anti-Money Laundering compliance. According to Sullivan & Cromwell's analysis, the update "represents a shift toward a more flexible, principles-based approach to model risk management." The revised framework maintains the core pillars - model development and use, validation, monitoring, and governance controls - while expanding requirements around vendor and third-party model oversight.
The updated guidance is expected to be most relevant to banking organizations with over $30 billion in total assets, though regulators noted it scales to smaller institutions with significant or complex model use. Critically, the agencies explicitly stated that generative AI and agentic AI models are not within the scope of the April 2026 guidance, and that a separate request for information addressing banks' use of AI - including generative and agentic AI - will be issued in the near future. The guidance does not establish enforceable or prescriptive standards; however, according to the Federal Reserve's SR 26-2 letter, supervisory action may still result from violations of law or unsafe and unsound practices stemming from insufficient model risk management.
On the healthcare side, HHS published its AI Strategy on December 4, 2025, outlining a framework for integrating AI across internal operations, research, and public health, with governance and risk management identified as a core pillar. State-level action has outpaced federal rulemaking. The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), effective January 1, 2026, requires disclosure when government agencies and healthcare providers use AI systems that interact with consumers. California's AI Transparency Act (SB 942), also effective January 1, 2026, requires covered providers with one million or more monthly users to offer tools allowing users to determine whether content was AI-generated.
Data provenance requirements are a unifying thread across both sectors. Healthcare organizations must ensure responsible management of patient data used in AI systems, with attention to provenance, quality, and security, according to legal analysis from early 2026. For banks, the revised guidance expands its discussion of vendor model oversight - reflecting growing reliance on externally developed AI tools and the need for robust data lineage controls even when models are not built in-house. Organizations commonly struggle with incomplete or fragmented metadata, manual lineage mapping, poor integration across systems, and scalability issues when implementing regulatory data lineage tracking.
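To make the lineage-tracking problem concrete, the sketch below shows one way an institution might record provenance for a dataset feeding an AI system: each processing hop gets an immutable record with a content hash, so auditors can later verify what data a model actually consumed. This is a minimal illustration, not any regulator's prescribed schema; the record fields and names (`LineageRecord`, `source_system`, etc.) are hypothetical.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class LineageRecord:
    """Hypothetical provenance record for one hop in a model's data pipeline."""
    dataset_id: str       # identifier for the dataset version
    source_system: str    # upstream system of record (e.g. EHR extract, core banking)
    transformation: str   # description of the processing step applied
    content_hash: str     # fingerprint of the data as it left this step
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def fingerprint(payload: bytes) -> str:
    """Content hash so downstream auditors can verify data integrity."""
    return hashlib.sha256(payload).hexdigest()

# Example: record a single de-identification step (all values hypothetical).
raw = b'{"record_id": "example", "fields": ["..."]}'
record = LineageRecord(
    dataset_id="extract-2026-04",
    source_system="upstream-ehr",
    transformation="de-identification pass",
    content_hash=fingerprint(raw),
)
audit_entry = json.dumps(asdict(record))
```

Appending such entries to a tamper-evident log addresses two of the common failure modes noted above: the metadata is captured automatically at each step rather than mapped manually, and the hash chain survives even when the underlying systems are fragmented.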
Outlook
The federal banking agencies' planned request for information on generative and agentic AI will be closely watched by large financial institutions already navigating dual compliance workloads - governing both traditional statistical models and newer large language model-based systems under a single risk framework. In healthcare, the Joint Commission and CHAI plan to release additional AI governance playbooks and launch a voluntary AI certification program in 2026, signaling that accreditation bodies are moving in step with regulators. For enterprise data governance platforms serving both sectors, the convergence of lineage, access control, and auditability requirements across banking and healthcare will likely accelerate demand for unified metadata management and automated compliance capabilities through the remainder of 2026.
