FDA Regulation of Software as a Medical Device: A Founder's Guide to SaMD Pathways
FDA regulates software as a medical device based on what it does, not what it looks like. This guide covers the SaMD definition, IMDRF risk classification, 510(k)/De Novo/PMA pathways, FDA's PCCP framework, and clinical evidence requirements for founders.
When Software Becomes a Medical Device
Most digital health founders think FDA regulation is a hardware problem — cleared devices live in hospital equipment rooms, not app stores. The FDA disagrees. Under its Software as a Medical Device (SaMD) framework, any software intended to be used for one or more medical purposes — and performing those purposes without being part of a hardware medical device — qualifies as a regulated device. Your algorithm does not need a physical enclosure to be a medical device.
Congress drew some clear exclusion lines in the 21st Century Cures Act. Administrative tools like scheduling software, EHR systems, general wellness apps, and Medical Device Data Systems (MDDS) fall outside FDA's device definition entirely. So does qualifying clinical decision support (CDS) software — but that carve-out is narrower than most developers assume.
Under section 520(o) of the FD&C Act (codified at 21 U.S.C. §360j(o)), CDS software escapes device regulation only when it clears all four prongs: it must not acquire or process medical images or signals; it must display or analyze information from the patient's medical record; it must support — not replace — the clinician's judgment; and the healthcare professional must be able to independently review the basis for each recommendation. Miss any one prong and the exclusion collapses. Clinical AI products that automate a decision rather than surface evidence for a human to weigh almost never satisfy the third and fourth criteria.
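The four prongs are a strict conjunction, and it can help to reason about them that way. The sketch below is a hypothetical pre-screening checklist — the class and function names are my own shorthand, not FDA or statutory terminology, and the statutory text, not this simplification, controls.

```python
from dataclasses import dataclass

@dataclass
class CdsProfile:
    """Illustrative answers to the four 520(o) prongs for one product."""
    processes_images_or_signals: bool      # prong 1: acquires/processes images or signal data
    analyzes_medical_record_info: bool     # prong 2: displays/analyzes patient record info
    supports_not_replaces_clinician: bool  # prong 3: supports, does not replace, judgment
    basis_independently_reviewable: bool   # prong 4: clinician can review the recommendation basis

def qualifies_for_cds_exclusion(p: CdsProfile) -> bool:
    # Every prong must hold; a single failure means the product
    # is a regulated device and a premarket pathway applies.
    return (not p.processes_images_or_signals
            and p.analyzes_medical_record_info
            and p.supports_not_replaces_clinician
            and p.basis_independently_reviewable)

# An automated triage algorithm typically fails prongs 3 and 4:
triage_ai = CdsProfile(False, True, False, False)
print(qualifies_for_cds_exclusion(triage_ai))  # False -> regulated device
```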
The stakes of that determination run higher when you factor in patient risk. The IMDRF risk framework — which the FDA uses to calibrate regulatory intensity — scores SaMD along two axes: how the software's output is used (treating or diagnosing versus informing a decision) and how serious the patient's condition is. A triage algorithm for a critical-care patient sits at one end of that matrix; a tool that helps a clinician manage a non-serious chronic condition sits at the other. Where your product lands on that grid determines which clearance pathway applies — and how much evidence you'll need to get there. The sections that follow map the classification framework, the 510(k) and De Novo pathways, the Predetermined Change Control Plan option, and the clinical evidence requirements that come with each.
The FDA's Risk Classification Framework for Digital Health
The IMDRF matrix produces nine risk combinations by crossing three levels of output significance (treats/diagnoses, drives clinical management, informs clinical management) against three states of patient condition (critical, serious, non-serious). The FDA's Digital Health Center of Excellence maps those nine cells onto three regulatory tiers: low-risk products face no premarket submission or minimal oversight; intermediate-risk products clear through a 510(k) premarket notification; and high-risk products require a Premarket Approval (PMA), including clinical trial evidence. A founder who can honestly label both axes for their product can read off which tier — and which evidentiary bar — applies.
The practical translation looks like this. Software that merely informs clinical management of a non-serious condition (bottom-left cell) is almost always low-risk and often exempt from premarket review. Move to the top-right cell — software that treats or diagnoses a critical-care patient — and PMA is the operative pathway, with the clinical evidence burden that entails. Most digital therapeutics land somewhere in the middle rows: they drive or inform management of serious conditions, which generally points to 510(k) with a predicate device comparison.
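One way to internalize the matrix is to write it down as a lookup. The function below is an illustrative simplification of the tiering logic just described — the corner cells and middle rows only — and is not an official FDA table; actual classification turns on the full intended-use statement and product-specific determinations.

```python
# Illustrative IMDRF-style lookup: (output significance, condition state) -> tier.
SIGNIFICANCE = ("informs", "drives", "treats_or_diagnoses")
CONDITION = ("non-serious", "serious", "critical")

def risk_tier(significance: str, condition: str) -> str:
    s = SIGNIFICANCE.index(significance)  # raises ValueError on unknown input
    c = CONDITION.index(condition)
    if s == 0 and c == 0:
        # Bottom-left cell: informs management of a non-serious condition.
        return "low: minimal oversight, often exempt from premarket review"
    if s == 2 and c == 2:
        # Top-right cell: treats or diagnoses a critical-care patient.
        return "high: PMA with clinical trial evidence"
    # Middle rows, where most digital therapeutics land.
    return "intermediate: 510(k) premarket notification"

print(risk_tier("informs", "non-serious"))           # low tier
print(risk_tier("treats_or_diagnoses", "critical"))  # high tier
```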
This framework was formalized agency-wide by the FDA's January 2021 AI/ML SaMD Action Plan, which committed the agency to a Total Product Lifecycle (TPLC) approach built on five parallel workstreams, including a Predetermined Change Control Plan (PCCP) framework, Good Machine Learning Practices, and real-world performance monitoring. That plan made clear that the risk-tiering logic is not a one-time clearance gate but an ongoing regulatory relationship — your obligation to monitor and report performance does not end at authorization.
One important caveat: the FDA withdrew its guidance adopting IMDRF SaMD Clinical Evaluation principles in January 2026, signaling a move toward FDA-specific frameworks rather than direct incorporation of international standards. Founders reviewing pre-2026 regulatory roadmaps should verify which guidance documents remain active. The three-tier classification model itself remains operative — but international IMDRF materials are no longer an authoritative stand-in for current FDA expectations. The next section maps each tier to its specific clearance pathway and what the evidence package for each looks like in practice.
510(k), De Novo, and PMA — Choosing Your Pathway
The first question is not which pathway to pursue — it is whether you need a pathway at all. If your software meets all four criteria under section 520(o) (21 U.S.C. §360j(o)), it is excluded from the device definition entirely, and no premarket submission is required. Work through that analysis first. If your product falls outside the exemption, you are choosing among three submission types, and the choice turns primarily on whether a predicate device exists.
For most digital health founders, the answer is 510(k). Approximately 97% of FDA-cleared AI/ML medical devices reached market through this pathway, which requires demonstrating substantial equivalence to a predicate — a cleared device with similar intended use and technological characteristics. The median review time runs around 142 days. The stroke-triage space shows how the predicate ecosystem compounds: Viz.ai's ContaCT, a deep-learning algorithm for large vessel occlusion detection (approximately 87.8% sensitivity and 89.6% specificity in its authorization data), reached market via De Novo in 2018, and the product code it created has since anchored 510(k) clearances for subsequent triage algorithms. The practical implication is that predicate-hunting is not an academic exercise — identifying the right cleared device early can compress your timeline by months.
When no suitable predicate exists and your device is novel but low-to-moderate risk, De Novo is the path. The IDx-DR autonomous diabetic retinopathy screening system illustrates this: it received De Novo authorization in April 2018 (DEN180001) after a 900-patient pivotal clinical study, becoming the first FDA-authorized autonomous AI diagnostic and establishing a new product code that subsequent retinal AI products now use for 510(k) submissions. That product code creation is the De Novo's long-term value — it builds the predicate infrastructure future products rely on. The cost is time: median review runs approximately 338 days, roughly double the 510(k) timeline, and De Novo accounts for only 2–3% of AI/ML authorizations.
PMA is reserved for Class III devices — those that are life-supporting or carry the highest clinical impact — and has been used by fewer than 0.5% of AI/ML medical devices. HeartFlow's FFR-CT coronary ischemia analysis shows the evidence burden at the high end: it required prospective clinical trials to reach market. For most SaMD founders, PMA is not a realistic near-term option; if your risk classification analysis suggests Class III, the more useful question is whether a device redesign or narrower intended use could shift you into Class II territory.
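The decision order this section describes — exemption analysis first, then risk class, then predicate availability — can be condensed into a short decision procedure. This is a hypothetical sketch of the reasoning for orientation only, not a substitute for a regulatory determination; the review-time figures are the medians cited above.

```python
def choose_pathway(meets_520o_exclusion: bool,
                   predicate_exists: bool,
                   class_iii: bool) -> str:
    """Illustrative sketch of the pathway decision order described above."""
    if meets_520o_exclusion:
        # Excluded from the device definition entirely.
        return "no premarket submission (CDS exclusion)"
    if class_iii:
        # Life-supporting / highest clinical impact.
        return "PMA: prospective clinical trial evidence"
    if predicate_exists:
        return "510(k): substantial equivalence to predicate (~142-day median review)"
    # Novel but low-to-moderate risk, no predicate.
    return "De Novo: standalone safety and effectiveness (~338-day median review)"

print(choose_pathway(False, True, False))  # most digital health products -> 510(k)
```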
The PCCP Framework — Managing AI Model Updates After Clearance
Traditional premarket review assumes a static device. A surgical blade cleared in 2010 is the same blade in 2025. AI-enabled SaMD breaks that assumption: models retrain on new data, performance drifts, and clinical inputs evolve. Under the legacy framework, every material change to a cleared algorithm could trigger a new 510(k) or De Novo submission — a cycle incompatible with software development at any realistic cadence. The Predetermined Change Control Plan (PCCP) is the FDA's answer to that structural problem.
The concept traces to the FDA's January 2021 AI/ML Action Plan, which introduced a Total Product Lifecycle (TPLC) approach treating AI-enabled SaMD as iterative systems requiring ongoing regulatory engagement rather than a one-time premarket review. In December 2024, the agency published final guidance on "Marketing Submission Recommendations for a PCCP for AI-Enabled Device Software Functions," which expanded the PCCP framework's scope from machine-learning-only products to all AI-enabled device software functions — the current governing authority for any team building under this framework.
A PCCP submitted with (or after) a marketing application must include three required sections. The Description of Modifications specifies which post-clearance changes are planned and how frequently they are expected to occur. The Modification Protocol details the development, validation, and implementation procedures — including data management practices and quantified performance criteria. The Impact Assessment demonstrates how the planned modifications affect device safety and effectiveness. If FDA authorizes the PCCP, the manufacturer can implement any modification within that pre-approved envelope without submitting a new marketing application for each update.
The December 2024 final guidance added three compliance requirements that carry real engineering weight: bias mitigation procedures embedded in data management practices, mandatory labeling disclosure of PCCP authorization status, and post-market surveillance plans for monitoring safety after each modification is deployed. For founders, the practical implication is that PCCP architecture belongs in the initial submission design — not as a post-clearance retrofit. A PCCP written retroactively is harder to scope, harder to validate, and more likely to require iterative FDA feedback that delays the change envelope you actually need.
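Because a PCCP missing any of its three required sections is not authorizable as submitted, a completeness check is a useful mental model. The sketch below is hypothetical — the field names are shorthand for the three sections described above, not FDA terminology.

```python
from dataclasses import dataclass, fields

@dataclass
class Pccp:
    """Shorthand for the three required PCCP sections (names are illustrative)."""
    description_of_modifications: str  # which changes are planned, and how often
    modification_protocol: str         # validation steps, data practices, performance criteria
    impact_assessment: str             # effect of planned changes on safety and effectiveness

def missing_sections(plan: Pccp) -> list[str]:
    # Any empty section means the PCCP cannot be authorized as submitted.
    return [f.name for f in fields(plan) if not getattr(plan, f.name).strip()]

draft = Pccp("retrain quarterly on new site data", "", "no change to intended use")
print(missing_sections(draft))  # ['modification_protocol']
```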
Clinical Evidence Requirements for AI/ML SaMD Submissions
Analytical validation and clinical validation are not the same thing, and the FDA draws a hard line between them. Demonstrating that your algorithm outperforms a reference standard on a held-out test set is a necessary starting point — not a finish line. As a peer-reviewed analysis of 309 FDA-cleared AI/ML devices put it directly: "The accuracy of the algorithm is not synonymous with its clinical efficacy. Clinical evaluation is an indispensable step when submitting a 510(k) application for SaMD." The distinction matters in practice: most cleared devices never crossed that line. Only 7.77% of the 309 cleared products included comparative analysis of healthcare professional performance before and after using the AI system — the rest relied on unilateral algorithm-versus-reference comparisons.
That 7.77% figure is not a green light; it is a historical floor that FDA's current enforcement posture is actively raising. The same analysis found that only 53.72% of cleared devices disclosed any performance data in public summaries, and just 6.47% provided training dataset information. FDA's Good Machine Learning Practices (GMLP) requirements are designed to close that gap. Under GMLP, submissions must now address bias assessment and demographic representativeness across age, gender, race, and disease severity. Training data documentation is no longer a best practice; it is a formal submission requirement.
The evidentiary ceiling rises further for De Novo submissions. A 510(k) asks whether your device is substantially equivalent to a predicate — a lower bar that can be met with comparative performance data. De Novo, by design, has no predicate, so the applicant must demonstrate standalone safety and effectiveness. That is the standard IDx-DR met with its 900-patient pivotal study: prospective, multi-site, clinically meaningful endpoint. Founders building genuinely novel clinical functions — anything without a cleared predicate — should design their clinical evidence package with that benchmark in mind, not the historical average of what the cleared-but-undisclosed majority submitted.
Next Steps
The FDA SaMD pathway is navigable, but the decisions that determine your regulatory burden and timeline are made early — often before founders realize they are regulatory decisions at all. These five steps sequence the work in order of leverage.
- Run the 520(o) CDS exemption test before anything else. All four statutory prongs must be satisfied: the software must not acquire or process medical images or signals from in vitro diagnostic devices or signal acquisition systems; it must display or analyze information from the patient's medical record; it must support rather than replace clinician judgment; and the healthcare professional must be able to independently review the basis for each recommendation. Failing even one prong means you are building a medical device, and every downstream decision changes accordingly.
- File a Q-Submission (Q-Sub) before committing to clinical studies. FDA provides written feedback — typically within 70 days — on your proposed pathway, study design, and predicate device strategy. That feedback is not formally binding, but the agency generally stands behind it absent new information, making the Q-Sub the highest-leverage regulatory investment you can make before a single patient is enrolled.
- Document training data, validation methodology, and demographic representativeness from day one. FDA's Good Machine Learning Practice (GMLP) requirements treat these as formal submission requirements, not background documentation you reconstruct later. A 2024 review of 309 cleared AI/ML devices found that fewer than 7% provided training dataset information — a gap FDA's current enforcement posture is actively closing.
- Design your Predetermined Change Control Plan (PCCP) into the product roadmap, not the submission paperwork. FDA's December 2024 final PCCP guidance allows authorized post-clearance modifications to be deployed without new marketing applications — but only when the PCCP was structured into the original submission with all three required components: a Description of Modifications, a Modification Protocol, and an Impact Assessment. A PCCP retrofitted after clearance is not a PCCP.
- Engage health regulatory counsel before clinical study design is finalized. Endpoint selection, study population definition, and comparator arm structure are the decisions with the greatest downstream impact on clearance timelines — and the least correctable once enrollment begins. The De Novo clearance for IDx-DR required a 900-patient prospective multi-site study; the design of that study was a legal and regulatory architecture decision as much as a clinical one.
An experienced health regulatory attorney can identify which of these steps your current development stage has already foreclosed and where you still have room to optimize your pathway.
Promise Legal works with digital health founders on FDA SaMD regulatory strategy, pre-submission planning, and health technology counsel.