High-Risk AI Systems Under the EU AI Act: How to Classify Yours

EU AI Act Article 6 high-risk classification activates the full Chapter III obligation stack. Two pathways (Annex I product-safety / Annex III use cases). Four-condition derogation with profiling kill-switch. A five-step decision tree.

Why Classification Is the First Question

Before a provider drafts a single conformity assessment, before a deployer briefs its board, before counsel benchmarks against ISO/IEC 42001 — the threshold question under the EU AI Act is whether the system is high-risk. That single classification decision activates the full obligation stack anchored in Chapter III: risk management systems, data governance, technical documentation, logging, human oversight, accuracy and cybersecurity controls, post-market monitoring, CE marking, and EU database registration. Get the classification wrong and every downstream control is either missing or mis-scaled.

The exposure for misclassification runs on two tracks. The first is operational: a system that should have been classified high-risk but was not will reach the market without a conformity assessment, without CE marking, and without registration in the EU database — each a standalone breach. The second is informational. Article 99 authorizes administrative fines of up to EUR 7,500,000 or 1% of total worldwide annual turnover, whichever is higher, for supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities. A wrong classification answer, communicated to a regulator in good faith, is still the wrong answer at penalty time.
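
The arithmetic matters at scale. A minimal sketch of the Article 99(5) ceiling computation, assuming only the two statutory figures quoted above; the example turnover is hypothetical:

    def article_99_5_fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
        # Article 99(5) caps fines for supplying incorrect, incomplete, or
        # misleading information at EUR 7,500,000 or 1% of total worldwide
        # annual turnover for the preceding financial year, whichever is higher.
        return max(7_500_000.0, 0.01 * worldwide_annual_turnover_eur)

    # A group with EUR 2 billion in turnover (hypothetical) faces a EUR 20
    # million ceiling; the turnover prong overtakes the fixed figure once
    # turnover exceeds EUR 750 million.
    assert article_99_5_fine_ceiling(2_000_000_000) == 20_000_000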

Classification is also fact-specific and sequential. It is not a single yes/no test but a decision tree: prohibited-use screening, then Annex I safety-component analysis, then Annex III use-case mapping, then the Article 6(3) derogation assessment, then transparency overlays. Each branch demands documented, cross-functional reasoning that product, engineering, and legal can defend on the record. For US multinationals, the answer also feeds the architectural choice — Walled Garden, Single Stack, or EU Withdraw — that we walk through in our EU AI Act August 2 pre-deadline checklist. The next section takes up the two pathways into high-risk status.

The Two Pathways to High-Risk Status

The AI Act creates two independent routes into high-risk classification, and a system can be caught by either one. The first is the product-safety pathway under Article 6(1), which captures AI embedded in regulated products. The second is the use-case pathway under Article 6(2), which captures AI deployed in specific enumerated domains regardless of what physical product, if any, is involved. Counsel running a classification analysis must test both pathways against every system in scope.

Pathway One: Annex I Product-Safety Components (Article 6(1))

Article 6(1) treats an AI system as high-risk only where both conditions are met: the system is a safety component of, or is itself, a product covered by the Union harmonisation legislation listed in Annex I; and that product is required to undergo third-party conformity assessment. Annex I is a long list. As WilmerHale notes, it incorporates more than 30 directives and regulations, including legislation governing toys, vehicles, civil aviation, lifts, radio equipment, and medical devices. If the product itself is not in scope of Annex I, this pathway closes and the analysis moves to Annex III.
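
Reduced to logic, the pathway is a two-prong conjunction. A minimal sketch, with illustrative rather than statutory names:

    def high_risk_under_art_6_1(annex_i_safety_component_or_product: bool,
                                requires_third_party_assessment: bool) -> bool:
        # Article 6(1) is cumulative: failing either prong closes the
        # product-safety pathway and sends the analysis to Annex III.
        return (annex_i_safety_component_or_product
                and requires_third_party_assessment)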

Pathway Two: Annex III Use Cases (Article 6(2))

The second pathway operates by enumerated use case. Annex III lists eight high-risk domains:

  1. Biometrics;
  2. Critical infrastructure (digital infrastructure, road traffic, utility supply);
  3. Education and vocational training (admissions, learning evaluation, behavior monitoring);
  4. Employment, workers management, and access to self-employment (recruitment, selection, performance monitoring);
  5. Access to and enjoyment of essential private and public services and benefits (eligibility, creditworthiness, insurance pricing, emergency dispatch);
  6. Law enforcement;
  7. Migration, asylum, and border control management; and
  8. Administration of justice and democratic processes.

For most US multinationals, the Annex III analysis bites first, and it usually bites in category 4. Crowell & Moring identifies automated candidate selection, performance evaluation, workplace monitoring, employee turnover prediction, and decision-making related to promotion or termination as illustrative high-risk HR applications. A talent platform that ranks candidates or a productivity tool that scores workers is on the Annex III list before any product-safety question is asked. Firms working through EU AI Act compliance should expect Annex III, not Annex I, to drive the bulk of remediation work.

The Two-Phase Timeline

The two pathways also carry different effective dates. Under Article 113, the Regulation applies from 2 August 2026, but Article 6(1) and its corresponding obligations apply from 2 August 2027. In practical terms, Annex III high-risk obligations land in August 2026; Annex I high-risk obligations land a year later. Section 3 turns to the Article 6(3) derogation, which can pull a system back out of Annex III even when the use case is listed.

The Article 6(3) Derogation: When You Escape High-Risk

Article 6(3) creates a narrow off-ramp from Annex III high-risk classification, but it is structured as a four-condition gate with a profiling kill-switch sitting on top. An Annex III system may avoid high-risk treatment only where it “does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making” — and only where one of four enumerated conditions is satisfied.

The four conditions, set out verbatim in Article 6(3), are:

  1. Narrow procedural task. “The AI system is intended to perform a narrow procedural task” — for example, a system that formats or routes inputs without substantive evaluation.
  2. Improvement of completed human work. “The AI system is intended to improve the result of a previously completed human activity” — polishing or refining output a human has already produced.
  3. Pattern detection without replacement. “The AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review.”
  4. Preparatory task. “The AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III” — staging information for a downstream human decision rather than making the call.

Sitting above all four conditions is a dispositive carve-out: “Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons.” Because Article 6(3) imports the GDPR definition of profiling — WilmerHale notes this is “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects” — most HR screening, credit decisioning, and insurance underwriting systems are foreclosed from the derogation entirely.
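
Expressed as logic, the derogation is a profiling override sitting on top of a conjunction: no significant risk of harm, plus at least one of the four conditions. A minimal sketch with illustrative, non-statutory labels:

    def derogation_available(performs_profiling: bool,
                             poses_significant_risk: bool,
                             conditions_met: set) -> bool:
        # Profiling kill-switch: an Annex III system that profiles natural
        # persons is always high-risk, whatever else is true.
        if performs_profiling:
            return False
        # The system must not pose a significant risk of harm to health,
        # safety, or fundamental rights.
        if poses_significant_risk:
            return False
        # Any one of the four Article 6(3) conditions suffices, e.g.
        # "narrow_procedural_task" or "preparatory_task".
        return len(conditions_met) >= 1

    # An HR screener that evaluates personal aspects of candidates performs
    # profiling, so the off-ramp is closed regardless of the conditions.
    assert not derogation_available(True, False, {"preparatory_task"})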

Invoking the derogation is not a silent decision. Article 6(4) requires that “a provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service” and shall register the system under Article 49(2). Documentation and registration are pre-market obligations, not after-the-fact paperwork.

Practitioner consensus is that the derogation is narrower than providers want it to be. DPO Consulting frames Article 6(3) as covering only Annex III systems that “do very limited tasks with minimal effect” — not a general escape valve for systems whose providers believe their risk is low. Compounding the uncertainty, the European Commission missed its 2 February 2026 statutory deadline to publish Article 6 implementation guidelines, leaving providers without authoritative worked examples of the derogation succeeding. Until that guidance arrives, the conservative reading — document thoroughly, register, and assume profiling forecloses the off-ramp — is the defensible posture. Section 4 turns that posture into a decision tree.

A Classification Decision Tree

Practitioner guidance converges on a sequential decision tree as the cleanest way to work through Article 6 without missing a step. The structure below mirrors how the firm runs classification memos for clients deploying AI systems in the EU market; a code sketch of the same logic follows the list.

  1. Annex I pathway. Is the AI system a safety component of an Annex I product, or itself a product covered by Annex I, and does that product require third-party conformity assessment under the listed sectoral legislation? Both prongs are cumulative. If both are satisfied, the system is high-risk under Article 6(1) and classification is complete.
  2. Annex III categories. If the Annex I pathway does not apply, does the system fall within any of the eight Annex III use-case categories? If no, the system is not high-risk under Article 6. The Article 5 prohibited-practices screen and the Article 50 transparency obligations still need to be checked separately.
  3. Profiling screen. If Annex III applies, does the system involve profiling of natural persons? If yes, the system is high-risk and the Article 6(3) derogation is unavailable as a matter of statute. Classification is complete.
  4. Article 6(3) derogation. If Annex III applies and there is no profiling, does the system meet at least one of the four narrow Article 6(3) conditions and pose no significant risk of harm to the health, safety, or fundamental rights of natural persons? Both elements are required; the structure is conjunctive. If yes, the derogation is available. The provider must document the assessment under Article 6(4) and register the system in the EU database before placing it on the market.
  5. Default to high-risk. If the derogation does not apply, the system is high-risk and the full Chapter III obligation stack attaches.
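
As a compact restatement, a sketch of the five steps, reusing the derogation_available helper from the previous section; field names and return labels are illustrative, not statutory:

    from dataclasses import dataclass, field

    @dataclass
    class FunctionProfile:
        # One intended purpose of a deployed system; all names illustrative.
        annex_i_safety_component: bool = False
        annex_i_third_party_assessment: bool = False
        annex_iii_category: str = ""          # e.g. "employment"; empty if none
        performs_profiling: bool = False
        poses_significant_risk: bool = False
        art_6_3_conditions_met: set = field(default_factory=set)

    def classify(f: FunctionProfile) -> str:
        # Step 1: Annex I pathway, both prongs cumulative.
        if f.annex_i_safety_component and f.annex_i_third_party_assessment:
            return "high-risk (Art. 6(1))"
        # Step 2: Annex III categories.
        if not f.annex_iii_category:
            return "not high-risk (screen Art. 5 and Art. 50 separately)"
        # Step 3: profiling screen forecloses the derogation.
        if f.performs_profiling:
            return "high-risk (Annex III; derogation unavailable)"
        # Step 4: Article 6(3) derogation, documented and registered pre-market.
        if derogation_available(f.performs_profiling,
                                f.poses_significant_risk,
                                f.art_6_3_conditions_met):
            return "derogated (document per Art. 6(4); register per Art. 49(2))"
        # Step 5: default to high-risk.
        return "high-risk (Annex III)"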

One caveat that catches deployers off-guard: classification proceeds function by function, not system by system. Under the Article 3(12) intended-purpose framing, a single deployed product can be high-risk for one use case and outside Article 6 for another. A platform that scores job applicants (Annex III, point 4) and also generates marketing copy (not Annex III) gets two separate classifications, two separate compliance postures, and frequently two separate technical files. Section 5 walks through the obligations that attach once a system lands on the high-risk side of this tree.
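
In the sketch above, that caveat means one platform yields two separate classify calls (both profiles hypothetical):

    # Same platform, two intended purposes, two classifications.
    applicant_scoring = FunctionProfile(annex_iii_category="employment",
                                        performs_profiling=True)
    marketing_copy = FunctionProfile()

    assert classify(applicant_scoring) == "high-risk (Annex III; derogation unavailable)"
    assert classify(marketing_copy).startswith("not high-risk")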

What to Do With the Answer

Classification dictates the obligation stack, not the other way around. Once a provider has run the decision tree, three branches follow.

High-risk under Annex III. The provider must implement the Chapter III obligations — risk management, data governance, technical documentation per Annex IV, logging, transparency, human oversight, and accuracy/robustness controls under Articles 9-15 — and complete the conformity assessment, CE marking, and EU database registration required by Articles 43-49. From there, the implementation question becomes architectural: whether to ring-fence EU users, run a single global stack at EU-grade compliance, or withdraw from the market. Promise Legal's EU AI Act pre-deadline checklist for US multinationals walks through the Walled Garden, Single Stack, and EU Withdraw options.

Derogated under Article 6(3). The provider must document the derogation assessment before placing the system on the market and register it under Article 49(2), both as Article 6(4) requires. The documentation is the compliance posture; without it, the derogation does not hold.

Not high-risk. The system still has to clear Article 5 prohibited practices and the Article 50 transparency duties, which apply independently of Annex III status. Providers of general-purpose AI models also carry the Article 53 GPAI obligations regardless of how any downstream system is classified.

Classification is the input; the architecture decision is the output.

High-risk classification is fact-specific and decision-tree-driven. Talk with our team about scoping a classification memo before your next system reaches the EU market.

Start the conversation