Federal banking and healthcare regulators are advancing parallel and increasingly aligned requirements for AI data lineage, model governance, and audit readiness. The result is a compound compliance burden for financial institutions, health systems, and the technology vendors serving both sectors.
Background
The regulatory convergence follows years of fragmented, sector-specific oversight. In the U.S., although no national AI law exists yet, agencies are stepping in to address AI risks in domains like finance and healthcare. In 2024 alone, U.S. federal agencies introduced 59 AI-related regulations, more than double the year before.
The White House set the political direction in December 2025. On December 11, 2025, the White House issued an Executive Order titled "Ensuring a National Policy Framework for Artificial Intelligence," stating a federal policy to sustain and enhance U.S. AI leadership through a minimally burdensome national policy framework and to limit conflicting state requirements. The EO directed rapid actions by multiple federal entities to evaluate, challenge, or preempt state AI laws viewed as inconsistent with that policy, and to use federal funding and standard-setting to influence state approaches. For financial institutions operating across multiple states, the order signals that a uniform federal standard on AI governance, including data provenance, is the administration's stated objective.
Details
The most concrete banking action came on April 17, 2026. The OCC, Federal Reserve, and FDIC jointly issued revised supervisory guidance on model risk management (SR 26-2), superseding the 2011 interagency framework that had governed bank model risk management for 15 years. The revised guidance clarifies that model risk management practices should be risk-based, tailored, and commensurate with a banking organization's size, complexity, and extent of model use, and is expected to be most relevant to banking organizations with over $30 billion in total assets. It covers model development, validation, monitoring, and governance, and explicitly addresses vendor and third-party model products, a point of direct relevance to enterprise software providers.
A significant regulatory gap remains, however. Generative AI and agentic AI models are explicitly excluded from the revised guidance, with regulators acknowledging that these technologies are "novel and rapidly evolving." The agencies announced they plan to issue a separate request for information addressing banks' use of AI, including generative AI, agentic AI, and AI-based models, in the near future. Until that RFI produces binding rules, banks deploying large language models for credit decisioning, fraud detection, or customer service operate without specific supervisory data provenance expectations.
On data lineage and documentation, the revised banking guidance is direct: adequate documentation supports effective model risk management, including the tracking of recommendations, responses, and exceptions. The guidance also confirms that every component of the model lifecycle benefits from governance supported by clear policies and effective controls, with policies establishing a framework for assessing the magnitude of model risk and applying controls commensurate with that risk.
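To make that documentation expectation concrete, the following is a minimal sketch of how a model inventory entry might track lineage and validation findings. SR 26-2 does not prescribe any schema; every field name, status value, and structure below is an illustrative assumption, not a regulatory requirement.

```python
# Illustrative only: the guidance does not define a data model. All fields
# and status values are assumptions about what "adequate documentation"
# could capture in a model inventory.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ValidationFinding:
    """A recommendation raised during validation, the model owner's response,
    and any approved exception, so all three can be tracked over time."""
    finding_id: str
    recommendation: str
    response: Optional[str] = None               # model owner's remediation plan
    exception_approved_by: Optional[str] = None  # set only if an exception was granted
    status: str = "open"                         # e.g. "open", "remediated", "exception"

@dataclass
class ModelRecord:
    """One inventory entry spanning the lifecycle stages the guidance names:
    development, validation, monitoring, and governance."""
    model_id: str
    owner: str
    data_sources: List[str]          # upstream lineage: where training data came from
    last_validated: date
    monitoring_metrics: List[str]    # e.g. population stability, drift indicators
    findings: List[ValidationFinding] = field(default_factory=list)

    def open_findings(self) -> List[ValidationFinding]:
        """Surface unresolved recommendations for governance reporting."""
        return [f for f in self.findings if f.status == "open"]
```

In practice, a model risk team could query such records to show examiners which recommendations remain open and which models carry approved exceptions, which is the tracking behavior the guidance describes.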
Healthcare regulators have moved on a parallel track. In January 2026, the FDA and the European Medicines Agency jointly released the "Guiding Principles of Good AI Practice in Drug Development," a set of 10 high-level principles intended to steer the safe and responsible use of AI across the product lifecycle. The agencies framed the principles as a foundation for future guidance, standards, and harmonized regulatory expectations from international regulators and standards organizations. Among the recommended steps for companies: ensuring robust documentation across the AI lifecycle, including data provenance, model selection, and validation reports.
Separately, the FDA issued draft guidance for AI-enabled devices focused on documentation, transparency, bias prevention, and post-market monitoring in 2025, with final guidance expected in 2026. The agency's "total product lifecycle" approach subjects AI systems embedded in clinical decision support software to continuous oversight rather than one-time approval. Regulators want to be able to trace the origin of training data, understand how it was labeled, validated, and updated, and confirm that it represents the full range of patients the system is meant to serve.
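A minimal sketch of a training-data provenance record follows, reflecting the traceability interests described above. The FDA draft guidance does not define a data model; the fields, names, and the representativeness check are assumptions made for illustration.

```python
# Illustrative sketch: no regulator prescribes this schema. Field names and
# the subgroup-coverage check are assumptions about what a traceable
# provenance record for a training dataset might include.
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List

@dataclass
class DatasetRevision:
    """One update to the training set, so changes stay auditable post-market."""
    revision_date: date
    description: str   # what changed and why (e.g. new sites added, labels corrected)
    validated_by: str  # who signed off on the revised data

@dataclass
class TrainingDataProvenance:
    """Traces where a training dataset came from, how it was labeled and
    validated, and which patient populations it covers."""
    dataset_id: str
    source_systems: List[str]               # originating EHRs, registries, imaging archives
    labeling_method: str                    # e.g. "dual clinician review with adjudication"
    validation_summary: str                 # how label quality and completeness were checked
    demographic_coverage: Dict[str, float]  # share of records per population subgroup
    revisions: List[DatasetRevision] = field(default_factory=list)

    def covers_subgroup(self, subgroup: str, minimum_share: float) -> bool:
        """Simple representativeness check against an internally chosen threshold."""
        return self.demographic_coverage.get(subgroup, 0.0) >= minimum_share
```

A record like this would let a sponsor answer the lifecycle questions regulators are signaling: where the data originated, who labeled and validated it, how it has changed since deployment, and whether coverage of a given patient subgroup meets the organization's own threshold.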
For health systems procuring AI tools, the vendor contracting implications are significant. Healthcare organizations must ensure responsible management of patient data used in AI systems, with particular attention to data provenance, quality, and security. They should review existing contracts with AI tool providers and establish procurement standards requiring vendors to provide comprehensive information about tool development, testing, validation, bias mitigation, and ongoing support for local validation efforts. The ONC Cures Act Final Rule requires certified health IT, including some clinical decision support tools that use AI, to meet specific transparency criteria covering a model's purpose, limitations, and provenance.
The NIST AI Risk Management Framework (AI RMF) underpins both sectors. A March 2025 update emphasizes model provenance, data integrity, and third-party model assessment, recognizing that most organizations rely on external or open-source AI components. The framework goes beyond cybersecurity frameworks by addressing model robustness, data provenance, explainability, fairness, and broader socio-technical impacts.
Outlook
The forthcoming banking RFI on generative and agentic AI is the next major inflection point for enterprise technology vendors. Its scope, and whether it incorporates explicit data lineage standards akin to those taking shape in healthcare, will determine whether a genuinely unified cross-sector provenance framework emerges. There is growing pressure for Congress to act, especially as state laws begin to diverge significantly in scope and definitions, with a federal baseline potentially streamlining compliance and reducing risk. Organizations in both sectors are advised to build data lineage infrastructure and model documentation capabilities now, ahead of formal rulemaking, as examiners and accrediting bodies are already evaluating governance posture against existing guidance.
