Regulators in the United States and European Union are intensifying pressure on financial institutions and healthcare providers to implement verifiable AI data lineage and tiered governance controls, accelerating a de facto cross-border compliance standard even as the two blocs pursue structurally different regulatory paths.
Background
The EU's Artificial Intelligence Act, formally published on July 12, 2024, as Regulation (EU) 2024/1689, has emerged as the dominant framework shaping global AI governance obligations. The regulation took effect on August 1, 2024, with substantive obligations phasing in from February 2025 and full high-risk AI system requirements, including mandatory data provenance documentation and audit trails, scheduled for August 2, 2026.
Under the Act, high-risk AI systems in sectors such as credit scoring, medical devices, and healthcare clinical decision support must meet strict requirements for risk management, data governance, transparency, and human oversight before deployment. The European Commission published a voluntary General-Purpose AI (GPAI) Code of Practice in 2025, setting standards for training-data disclosure, robustness testing, and transparency; legal analysts at King & Spalding say those standards are becoming de facto requirements for global operators.
In the US, the federal AI policy landscape shifted following the January 2025 revocation of Executive Order 14110, leaving governance largely fragmented across sector-specific agencies and state law. More than 500 AI bills were proposed in US state legislatures in Q1 2025 alone, according to King & Spalding's transatlantic AI governance analysis. Colorado's AI Act, effective June 30, 2026, addresses high-risk AI in healthcare, housing, employment, and credit scoring, while California's rules, effective October 2025, require employers to retain automated decision system data for four years.
Compounding obligations in financial services, the EU's Digital Operational Resilience Act (DORA) began applying in January 2025, imposing mandatory ICT risk controls and incident reporting on banks, insurers, and their technology vendors, according to Freshfields' 2026 data law trends analysis.
Details
The convergence is most acute around data provenance: the documented lineage of data from origin through transformation to model output. According to the Atlantic Council's analysis of the EU's AI and health data framework, applying compliance after the fact risks insecure data provenance and AI models that cannot be fully audited, breaching "regulation by design" principles.
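To make the concept concrete, the lineage described above can be sketched as a tamper-evident chain of records, each step hashing the one before it so that after-the-fact edits are detectable. This is a minimal illustration, not a reference to any specific platform; the `LineageEvent` schema and function names are assumptions.

```python
import hashlib
import json
from dataclasses import dataclass, asdict


@dataclass
class LineageEvent:
    """One step in a dataset's documented lineage (hypothetical schema)."""
    step: str          # e.g. "ingest", "transform", "train"
    source: str        # upstream dataset or system identifier
    description: str   # what was done at this step
    parent_hash: str   # hash of the previous event, chaining the record


def event_hash(event: LineageEvent) -> str:
    """Deterministic fingerprint of an event for tamper-evident audit trails."""
    payload = json.dumps(asdict(event), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def append_event(chain: list, step: str, source: str, description: str) -> list:
    """Append a new lineage event, linking it to the previous one by hash."""
    parent = event_hash(chain[-1]) if chain else "root"
    chain.append(LineageEvent(step, source, description, parent))
    return chain


def verify_chain(chain: list) -> bool:
    """Re-derive each parent hash; any edit to an earlier event breaks the chain."""
    expected = "root"
    for ev in chain:
        if ev.parent_hash != expected:
            return False
        expected = event_hash(ev)
    return True
```

Because each record commits to its predecessor's hash, auditors can verify that the documented path from origin to model output has not been rewritten, which is the property "regulation by design" aims for.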
For healthcare providers, the compliance challenge is twofold. AI-based Clinical Decision Support Systems are automatically classified as high-risk under the EU AI Act, and cross-border health data exchange must simultaneously satisfy MyHealth@EU interoperability requirements, according to research published in the Journal of Medical Internet Research. This creates what researchers described as a "dual-compliance challenge": vertical safety controls mandated by the AI Act and horizontal semantic-transport requirements through European eHealth gateway infrastructure.
The EU's European Health Data Space (EHDS) Regulation is also reshaping data transfer practices. The ongoing use of pooled global datasets at scale is no longer compliant with EU regulation, requiring companies to invest in federated and jurisdiction-specific data infrastructure complying with GDPR and the EHDS, according to the Atlantic Council. Cross-border data transfers of sensitive health data will require significant auditability and segmentation between EU and non-EU data, driving up compliance costs.
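The segmentation and auditability requirements described above can be sketched as a simple residency guard: transfers are only permitted within the same data segment (EU or non-EU), and every decision is logged. This is a hypothetical illustration; the jurisdiction list and function names are assumptions, not any regulator's prescribed mechanism.

```python
# Illustrative sketch of EU/non-EU data segmentation with an access audit trail.
from datetime import datetime, timezone

EU_JURISDICTIONS = {"DE", "FR", "NL", "IE", "ES", "IT"}  # illustrative subset

audit_log = []  # in practice: an append-only, access-controlled store


def check_transfer(record_jurisdiction: str, destination: str) -> bool:
    """Allow a transfer only within the same segment (EU-to-EU or non-EU-to-non-EU)."""
    same_segment = (record_jurisdiction in EU_JURISDICTIONS) == (
        destination in EU_JURISDICTIONS
    )
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "from": record_jurisdiction,
        "to": destination,
        "allowed": same_segment,
    })
    return same_segment
```

The audit log, not the allow/deny decision, is what drives the compliance cost the Atlantic Council points to: every cross-border movement of sensitive health data must be traceable after the fact.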
For financial institutions, governance obligations span multiple regulators simultaneously. According to Freshfields, a single data breach at a financial institution can now trigger investigation across multiple regulators, including data protection, prudential, conduct, and cybersecurity authorities, who increasingly coordinate. The BCBS 239 principles for risk data aggregation, which mandate data lineage controls across Global Systemically Important Banks (G-SIBs), further layer onto EU AI Act documentation requirements, according to Atlan's regulatory data lineage analysis.
Enterprises are responding by moving toward integrated governance platforms. Financial institutions are shifting away from single-vendor solutions toward ecosystems of specialized AI providers offering tools tailored to specific regulatory needs, according to cross-border compliance research published by Lucid. Fewer than one in five banks rate their AI approach as fully compliance-ready for advanced initiatives, according to KPMG's 2025 Intelligent Banking report, as cited by SmartDev's AI transformation analysis.
Healthcare organizations and banks are being advised to require comprehensive documentation from AI vendors, including data lineage, feature selection, validation methods, and bias mitigation strategies, according to research published in npj Digital Medicine. Internal validation of vendor-reported model performance is increasingly viewed as non-negotiable; institutions are warned that vendor scorecards are no substitute for independent evaluation.
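The independent-evaluation step recommended above can be sketched as follows: rather than accepting a vendor scorecard, the institution recomputes the claimed metric on its own labeled hold-out data and flags material shortfalls. The metric choice (sensitivity), tolerance, and function names are assumptions for illustration.

```python
# Hypothetical sketch: re-validating a vendor-reported sensitivity figure
# on an internal hold-out set instead of trusting the vendor scorecard.

def sensitivity(y_true, y_pred) -> float:
    """True-positive rate computed on the institution's own labeled data."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0


def validate_vendor_claim(y_true, y_pred, claimed: float,
                          tolerance: float = 0.05) -> bool:
    """Pass only if internal performance is within tolerance of the vendor's claim."""
    return sensitivity(y_true, y_pred) >= claimed - tolerance
```

A gap between the vendor's reported figure and the internally measured one would then trigger the documentation review, bias-mitigation scrutiny, or rejection that the npj Digital Medicine guidance describes.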
Outlook
The European Commission has proposed approximately a one-year delay for many high-risk AI system obligations, pushing most enforcement into 2027 pending approval by the European Parliament and EU member states, according to Blocshop's 2026 AI law analysis. However, legal advisors note the extension does not reduce compliance expectations; it provides additional runway for implementation. Non-compliance with the EU AI Act carries penalties of up to €35 million or 7% of global annual turnover, whichever is higher. Enterprises operating across both jurisdictions are expected to accelerate investments in data cataloguing, automated lineage tracking, and federated data infrastructure over the next 18 months.
