A wave of coordinated regulatory action is compelling U.S. financial institutions and health systems to build formal data provenance and model governance infrastructures as federal agencies move from broad AI principles to concrete, audit-ready frameworks in 2026.
Background
The regulatory push marks a significant shift in how U.S. authorities expect AI to be governed in high-stakes sectors. Although the U.S. lacks a single national AI law, agencies are stepping in to address AI risks in finance, healthcare, and child safety. That piecemeal approach is now crystallizing into coordinated action.
On February 19, 2026, the U.S. Department of the Treasury released two non-binding financial-sector AI resources: the Artificial Intelligence Lexicon and the Financial Services AI Risk Management Framework (FS AI RMF). Developed with more than 100 financial institutions, the Financial Services Sector Coordinating Council, and the Cyber Risk Institute, the framework introduces 230 control objectives across governance, data, model development, validation, monitoring, third-party risk, and consumer protection.
Shortly thereafter, banking regulators formalized their own expectations. The Office of the Comptroller of the Currency (OCC), in coordination with the Board of Governors of the Federal Reserve System and the Federal Deposit Insurance Corporation (FDIC), issued updated model risk management guidance for OCC-supervised institutions. The OCC rescinded prior issuances, including OCC Bulletin 2011-12, "Supervisory Guidance on Model Risk Management," which had been in force since 2011. The OCC, Federal Reserve Board, and FDIC plan to issue a request for information in the near future addressing model risk management broadly and banks' use of AI, including generative AI, agentic AI, and AI-based models.
On the healthcare side, HHS released a 20-page AI strategy and companion implementation plan on December 4, 2025, signaling formal governance expectations for health data used in AI systems. The strategy is organized around five pillars: governance and risk management, shared infrastructure and platforms, workforce capability and burden reduction, research, and modernized service delivery. HHS projects roughly a 70 percent increase in AI projects in fiscal year 2025.
Details
Data provenance, the documented chain of custody for data from collection through model training to in-production decisioning, sits at the heart of these frameworks. The guidelines define data governance to encompass data provenance and quality across the full AI lifecycle, lineage and traceability from source to model input, protections for personal and sensitive data used in training and inference, and bias detection rooted in data representativeness and completeness.
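In practice, a lineage record of this kind can be as simple as a hash-linked chain of processing steps, so that any dataset used for training can be traced back to its source and any break in the chain is detectable. The sketch below is a minimal illustration of that idea, not any framework's prescribed format; all names (`ProvenanceRecord`, `loans_2026`, the transformation labels) are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class ProvenanceRecord:
    """One link in a dataset's chain of custody (illustrative only)."""
    dataset_id: str
    source: str          # where the data was originally collected
    transformation: str  # what was done to the data at this step
    parent_hash: str     # digest of the previous record; "" for the origin
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def digest(self) -> str:
        """Deterministic content hash that the next record links to."""
        payload = json.dumps(
            {"dataset_id": self.dataset_id, "source": self.source,
             "transformation": self.transformation,
             "parent_hash": self.parent_hash},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

# A three-step lineage: collection -> cleaning -> training input.
origin = ProvenanceRecord("loans_2026", "core_banking_export", "collected", "")
cleaned = ProvenanceRecord("loans_2026", "core_banking_export",
                           "deduplicated_and_masked_pii", origin.digest())
training = ProvenanceRecord("loans_2026", "core_banking_export",
                            "split_for_model_training", cleaned.digest())

# Auditability check: each record must point at its parent's digest.
assert cleaned.parent_hash == origin.digest()
assert training.parent_hash == cleaned.digest()
```

The timestamp is deliberately excluded from the digest so the same processing steps always produce the same chain, which makes independent re-verification possible.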
For financial institutions, the compliance gap is significant. Grant Thornton's 2026 AI Impact Survey of banking leaders found that the lack of centralized, tested governance is holding banks back from measurable AI performance. Just 18% said they were fully confident in their ability to pass an independent review of their AI controls in the next 90 days, and 50% said governance and compliance barriers limit AI performance.
The Treasury initiative reflects a shift in how regulators expect financial institutions to govern AI. Through six coordinated deliverables, regulators are signaling that AI risk should be embedded in existing risk and compliance frameworks rather than treated as a standalone technology issue. As financial institutions increasingly rely on AI to support decision-making, customer engagement, and operations, inconsistent terminology and uneven risk management practices have created governance and oversight challenges.
Treasury plans to release four additional resources covering priority areas including governance, transparency, data practices, fraud, and digital identity. Cross-border pressure is also intensifying: the EU AI Act designates healthcare AI systems as high-risk, mandating compliance with transparency, risk management, and data governance standards.
Industry observers note that data quality thresholds are a particular concern in healthcare. Organizations must establish policies for documenting data provenance and verifying the accuracy, completeness, and diversity of AI training data to prevent biased outcomes. A 2025 AuditBoard study found that only one in four organizations has fully operational AI governance, despite widespread awareness of new regulations.
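A concrete, if simplified, form of such a policy is an automated quality gate that measures missing-value rates and flags underrepresented groups before data reaches training. The sketch below assumes hypothetical field names (`income`, `region`) and an arbitrary 20 percent representation floor; real thresholds would come from an institution's own governance policy.

```python
# Hypothetical training-data quality gate: report missing-value rates for
# required fields and flag groups below a minimum share of the dataset.
def quality_report(rows, required_fields, group_field, min_group_share=0.10):
    n = len(rows)
    # Completeness: fraction of rows where each required field is absent.
    missing = {
        f: sum(1 for r in rows if r.get(f) in (None, "")) / n
        for f in required_fields
    }
    # Representativeness: count rows per group and compare to the floor.
    counts = {}
    for r in rows:
        counts[r[group_field]] = counts.get(r[group_field], 0) + 1
    underrepresented = [g for g, c in counts.items() if c / n < min_group_share]
    return {"missing_rates": missing, "underrepresented": underrepresented}

rows = [
    {"income": 50000, "region": "east"},
    {"income": None,  "region": "east"},
    {"income": 72000, "region": "east"},
    {"income": 61000, "region": "east"},
    {"income": 58000, "region": "east"},
    {"income": 45000, "region": "east"},
    {"income": 67000, "region": "east"},
    {"income": 52000, "region": "east"},
    {"income": 49000, "region": "east"},
    {"income": 55000, "region": "west"},
]

report = quality_report(rows, ["income"], "region", min_group_share=0.2)
# "west" holds 1 of 10 rows (10%), below the 20% floor, so it is flagged;
# "income" is missing in 1 of 10 rows, a 10% missing rate.
```

In a governance program, a report like this would be logged alongside the provenance record for the dataset, giving auditors a documented basis for the bias and completeness claims regulators expect.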
Outlook
The FS AI RMF is currently voluntary guidance rather than binding regulation; however, it is expected to shape auditor standards as AI adoption in financial services accelerates, and organizations that align early will be better positioned when regulatory expectations harden. For CIOs and risk officers in both sectors, the operational implication is clear: AI risk is expected to be governed through existing frameworks, integrated into enterprise risk assessments, and supported by documented oversight and controls. Institutions that lack this level of governance face heightened regulatory and reputational risk.
