Clinical AI Vendor Contracts: A Due Diligence Checklist for Healthcare Organizations
Three regulatory regimes converge when you onboard a clinical AI vendor: HIPAA, Texas TRAIGA, and the EU AI Act. This guide walks through the due-diligence questions that matter most — training data integrity, BAA alignment, model transparency, and contract red flags.
Why Clinical AI Vendor Diligence Is Different
When a standard SaaS vendor fails, you deal with downtime and data-loss liability. When a clinical AI vendor fails, patients get misdiagnosed. An AI model trained on biased or mislabeled data can generate inequitable treatment recommendations, erode clinician trust, and — in the worst cases — cause direct patient harm. That exposure is categorically different from anything your standard software procurement checklist is designed to catch.
Three regulatory regimes converge the moment you onboard a clinical AI vendor. First, HIPAA applies immediately: any vendor that creates, receives, maintains, or transmits protected health information on your behalf is a business associate, and deploying that tool without a signed Business Associate Agreement is itself a HIPAA violation — no breach required. Second, if you operate in Texas, the Texas Responsible AI Governance Act (TRAIGA) took effect January 1, 2026, and it includes healthcare-specific disclosure requirements: patients must be notified before or at the time AI is used in their diagnosis or treatment. Third, if your roadmap includes EU markets, the EU AI Act classifies AI-based clinical decision support as high-risk AI, with mandatory compliance for new high-risk clinical AI deployments starting August 2026.
These aren't parallel tracks — they interact, and a vendor contract that satisfies one may leave you exposed under another. This article works through the diligence questions you need answered before you sign: BAA terms, training-data integrity, FDA regulatory status, and state-law disclosure obligations.
Interrogating the Training Data
"We de-identified everything" is not a complete answer — and any vendor who treats it as one is a red flag. Under 45 CFR §164.514(b), HIPAA recognizes exactly two de-identification methods: Safe Harbor (removing 18 enumerated identifiers) and Expert Determination (a qualified statistician certifies that re-identification risk is very small). Both require formal documentation. An informal practice, a proprietary process, or a blanket assurance in sales materials satisfies neither.
The standard has also grown harder to meet over time. HHS's own guidance acknowledges that what counts as de-identified is an evolving determination — modern re-identification techniques mean that data scrubbed of obvious identifiers can still, in some cases, be traced back to individuals. When you're evaluating a vendor whose model was trained on millions of patient records, the question isn't just whether they checked boxes. It's whether the certification reflects current re-identification research.
There is a second, distinct exposure that practitioners often miss: BAA scope. Contracts that permit vendors to use patient data for "improving services" or "analytics purposes" create HIPAA liability — those phrases are not enumerated permitted uses under HIPAA, and they can push the vendor's conduct outside BAA scope. If a vendor trained its AI on PHI in a way not covered by your BAA, the vendor is in violation — but the covered entity can face liability for enabling that use. Review the commercial agreement against the BAA as a pair, not in isolation.
Before you sign, get written answers to these questions from every clinical AI vendor:
- Which de-identification method — Safe Harbor or Expert Determination — was applied to the training data, and do you have the documentation?
- Who performed or certified the de-identification, and when?
- Does your BAA explicitly enumerate model training as a permitted use of PHI?
- Does any language in the commercial agreement — including "improving services" or "analytics" provisions — conflict with or exceed BAA scope?
- How is the full training data pipeline documented, and can you provide that documentation to our compliance team?
Model Transparency, Explainability, and Audit Rights
Two overlapping regulatory frameworks define the transparency floor your vendor must already meet — and knowing where each applies tells you which gaps remain yours to close by contract. For FDA-regulated AI software as a medical device (SaMD), the FDA's December 2024 final guidance on Predetermined Change Control Plans (PCCPs) requires manufacturers to document every planned modification, the methodology used to develop and validate it, and a safety impact assessment — and to disclose PCCP status in device labeling. If your vendor markets a clinical decision support tool that meets the SaMD threshold, that documentation exists and you are entitled to see it.
For certified health IT, the ONC HTI-1 Final Rule (effective January 1, 2025) imposes a parallel transparency regime on Predictive Decision Support Interventions (PDSIs). Vendors must disclose 31 source attributes for each PDSI, including the algorithm's intended use, known risks and limitations, and the population it was validated on. They must also publish Intervention Risk Management (IRM) summaries and notify healthcare organizations whenever a model change may affect the PDSI's recommendations. Both sets of obligations are regulatory minimums — not negotiated favors — and a vendor that cannot produce this documentation is already out of compliance.
The contract does real work for everything that falls outside FDA clearance or ONC certification — which is most of the clinical AI market. Regardless of regulatory status, demand the following provisions explicitly:
- Validation study access: Vendor must provide the full validation study, including statistical performance by subpopulation (age, race, sex, care setting) — not just headline accuracy metrics.
- Model change notification: Vendor must give written notice before any material retraining, weight update, or threshold change, with a defined lead time (30 days is a reasonable floor).
- Audit rights: Buyer retains the right to audit post-deployment model performance on its own patient population, with access to output logs sufficient to reconstruct individual predictions (a sketch of this kind of stratified audit follows this list).
- IRM summary delivery: Even for non-certified tools, vendor must deliver an IRM-equivalent document mapping known failure modes to the specific patient populations in your care setting.
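To make the audit-rights provision concrete, here is a minimal sketch of the kind of stratified performance check it is meant to enable, assuming the vendor's output logs can be joined to adjudicated outcomes. The field names (age_band, model_flag, outcome) are hypothetical placeholders and the metrics are deliberately simple.
```python
# Minimal sketch of a post-deployment performance audit stratified by
# subpopulation. Assumes output logs joined to adjudicated outcomes;
# field names ("age_band", "model_flag", "outcome") are hypothetical.
from collections import defaultdict

def stratified_performance(records, group_field):
    """Sensitivity and false-positive rate per subpopulation group."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
    for r in records:
        c = counts[r[group_field]]
        if r["model_flag"] and r["outcome"]:
            c["tp"] += 1
        elif r["model_flag"] and not r["outcome"]:
            c["fp"] += 1
        elif not r["model_flag"] and r["outcome"]:
            c["fn"] += 1
        else:
            c["tn"] += 1
    report = {}
    for group, c in counts.items():
        sens = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else None
        fpr = c["fp"] / (c["fp"] + c["tn"]) if (c["fp"] + c["tn"]) else None
        report[group] = {"n": sum(c.values()),
                         "sensitivity": sens,
                         "false_positive_rate": fpr}
    return report

if __name__ == "__main__":
    logs = [
        {"age_band": "18-44", "model_flag": True,  "outcome": True},
        {"age_band": "18-44", "model_flag": False, "outcome": False},
        {"age_band": "65+",   "model_flag": False, "outcome": True},
        {"age_band": "65+",   "model_flag": True,  "outcome": True},
    ]
    for group, metrics in stratified_performance(logs, "age_band").items():
        print(group, metrics)
```
Headline accuracy can look acceptable while one subpopulation's sensitivity collapses; the per-group report is what surfaces that, which is why the contract must guarantee access to logs granular enough to build it.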
Liability, Indemnification, and Insurance for AI Failures
Standard SaaS contracts are written to absorb the cost of downtime and service interruptions — not patient injury verdicts, OCR civil money penalties, or class-action malpractice claims. The liability cap in a typical AI vendor agreement is pegged to fees paid under the contract, often one to twelve months of subscription cost. For a $50,000-per-year clinical decision support tool, that cap is essentially meaningless against the downstream exposure it creates. Malpractice claims implicating AI tools in clinical settings have grown materially in recent years, and courts have shown no hesitation applying existing negligence, products liability, and malpractice doctrines to AI-assisted clinical errors — no AI-specific statute is required to establish liability against a covered entity.
State regulators are moving just as fast as plaintiffs' attorneys. A September 2024 Texas Attorney General enforcement action resulted in a settlement requiring a healthcare AI company to implement stricter data governance practices and disclose metric definitions — a concrete signal that failing to govern clinical AI exposure is a regulatory risk, not just a litigation risk. Your vendor contract needs to reflect that dual exposure.
Demand indemnification language that explicitly covers four categories of loss:
- Algorithmic errors causing patient harm — indemnification for claims arising from the vendor's model outputs, not limited to implementation errors by your team
- Regulatory penalties — FDA enforcement actions and OCR civil money penalties attributable to the vendor's product or conduct
- Third-party malpractice claims — defense and indemnification where a patient or payer names your organization in claims that trace to the AI tool's output
- Breach notification and remediation costs — incident response, notification, and credit monitoring obligations triggered by a vendor-side security failure
Insurance requirements belong in the contract, not a side letter. Require the vendor to carry cyber liability coverage, technology errors and omissions coverage specifically extending to AI clinical decision support, and professional liability coverage. Each policy should name your organization as an additional insured, and the vendor should be required to provide certificates of insurance at signing and annually. A vendor unwilling to carry adequate coverage for its own product's clinical risk is telling you something material about how it views that risk — and about how it expects you to absorb it.
Red Flags in Clinical AI Vendor Contracts
Most of the risk in a clinical AI vendor relationship hides in standard contract language that feels unremarkable until something goes wrong. The following clauses appear regularly in vendor paper — each one is a negotiating target, not a take-it-or-leave-it term.
- Unilateral model update rights with no notice requirement. A vendor that can silently retrain or redeploy its model without telling you is operating outside any accountable change-control framework — FDA's December 2024 PCCP guidance requires pre-specification of planned modifications and their validation methodology, and your contract should mirror that obligation.
- "Product improvement" or "service enhancement" language that implicitly covers your PHI. Unless model training is explicitly listed as a permitted use in your BAA, using patient data for that purpose is a HIPAA violation regardless of how the clause is labeled.
- Post-termination data retention for training purposes. If the vendor can retain your patient data after the contract ends to improve its model, you can never fully exit a bad vendor relationship — the data obligation survives your termination right.
- "Feedback loop" IP ownership over model improvements derived from your data. Clauses granting the vendor ownership of performance gains attributable to your patient population hand the vendor a commercial asset built entirely from your covered entity's proprietary data.
- Broad subcontracting rights without a mandatory downstream BAA chain. OCR's 2023 $350,000 settlement with MedEvolve traced directly to missing subcontractor BAA coverage — a vendor that can pass your PHI to sub-processors without requiring a BAA from each of them replicates that gap in your contract.
- "As-is" warranty disclaimers on clinical outputs. A disclaimer that eliminates any warranty of fitness for a particular clinical purpose — while the same vendor markets clinical accuracy claims in its sales materials — attempts to zero out product liability for the AI's core clinical function. Push back with a minimum performance warranty tied to the vendor's published accuracy benchmarks.
Actionable Next Steps
Contracts that look routine on the surface can quietly transfer PHI use rights your organization never intended to grant, leave your BAA misaligned with the commercial terms sitting next to it, and expose you to enforcement under laws your vendor never mentioned. The review process below is not a nice-to-have: litigation over AI-driven care and coverage decisions, such as Kisting-Leung v. Cigna and Estate of Lokken v. UnitedHealth, has already generated class-action exposure, and those disputes began with contracts that looked routine at signing. Work through these five steps before any clinical AI vendor agreement reaches your signature line.
- Demand the IRM summary and all 31 source attributes. Under ONC's HTI-1 rule, certified health IT vendors are already required to provide end users with Intervention Risk Management summaries and 31 plain-language source attributes covering performance, fairness, validity, effectiveness, and safety. Request these documents up front — you are not asking for a concession, you are invoking an existing regulatory obligation.
- Read the BAA and the MSA side by side. Place both documents on the same table and map every permitted use of PHI in the commercial agreement against what the BAA actually authorizes (a toy illustration of this cross-check follows this list). Any mismatch — including post-termination data retention rights or unrestricted use of de-identified data for model development — is a primary HIPAA exposure vector that must be resolved before execution.
- Audit your vendor stack for TRAIGA compliance (Texas organizations). Effective January 1, 2026, Texas healthcare providers must inform patients when AI systems are used in delivering health care services or treatment. Penalties run from $10,000 to $200,000 per violation under AG-exclusive enforcement. Map each clinical AI deployment to the disclosure obligation now, not after a complaint is filed.
- Embed five non-negotiable redlines. Treat these as your floor, not your opening offer: (1) robust indemnity covering the vendor's federal and state law violations; (2) privacy and security warranties restricting proprietary use of PHI; (3) explicit data ownership definitions for training data, model inputs, and outputs; (4) performance warranties addressing unlawful or unreliable outputs; and (5) audit and monitoring rights for ongoing compliance verification.
- Get legal review before deployment, not after. Counsel engaged at the contract stage is far cheaper than counsel engaged mid-dispute. The class-action and enforcement exposure now visible in healthcare AI litigation is driven by agreements that were never reviewed for the specific risks clinical AI introduces.
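As a toy illustration of step 2, the snippet below treats each agreement's permitted uses of PHI as a set of tags and flags anything the commercial agreement claims that the BAA does not authorize. The tags are hypothetical shorthand; the real work is interpreting the clause language, not the set arithmetic.
```python
# Toy illustration of the BAA-versus-MSA cross-check in step 2. The tags
# are hypothetical shorthand for clause language; real review requires
# reading the actual provisions, not comparing labels.
BAA_PERMITTED_USES = {"treatment_support", "payment_operations", "required_by_law"}
MSA_CLAIMED_USES = {"treatment_support", "service_improvement", "analytics",
                    "model_training", "post_termination_retention"}

unauthorized = MSA_CLAIMED_USES - BAA_PERMITTED_USES
for use in sorted(unauthorized):
    print(f"MSA claims '{use}', which the BAA does not authorize; resolve before signing")
```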
If any of these steps surface terms your organization cannot accept, that is useful information — pause and get qualified healthcare technology counsel before proceeding.
Promise Legal works with healthcare organizations on clinical AI vendor contract review, BAA negotiation, and HIPAA compliance counsel.