EU AI Act August 2: A Pre-Deadline Checklist for U.S. Multinationals

EU AI Act high-risk obligations apply 2 August 2026, with penalties up to 7% of worldwide turnover. Article 2(1)(c) reaches U.S. multinationals when AI output is used in the Union. A 10-item pre-deadline checklist for in-scope GCs and CAIOs.


The Clock: August 2, 2026

The Annex III high-risk obligations of the EU AI Act apply from 2 August 2026 under Article 113. That is the date by which in-scope providers and deployers of Annex III high-risk AI systems must have their conformity assessments, technical documentation, risk management systems, and post-market monitoring in place. Earlier tranches have already taken effect: prohibited practices and AI literacy on 2 February 2025, and general-purpose AI model obligations on 2 August 2025. The Annex I (safety-component) high-risk regime follows on 2 August 2027.

The penalty structure under Article 99 is what moves this from compliance theater to a CFO-level matter. Prohibited-practice infringements are capped at €35 million or 7% of worldwide annual turnover, whichever is higher. Other infringements of operator obligations sit at €15 million or 3%. Supplying incorrect, incomplete, or misleading information to authorities draws up to €7.5 million or 1%. For any U.S. multinational with material global revenue, the percentages, not the euro figures, set the operative ceiling.
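For readers who want the mechanics rather than the citation, the "whichever is higher" structure reduces to a one-line comparison. The sketch below is illustrative only: the tier labels are our shorthand, not statutory terms, and actual fines are set case by case within these ceilings.

```python
def fine_cap_eur(tier: str, worldwide_turnover_eur: int) -> int:
    """Upper bound of an Article 99 administrative fine: the fixed euro
    ceiling or the percentage of worldwide annual turnover, whichever
    is higher. Tier names are this article's shorthand, not the Act's."""
    tiers = {
        "prohibited_practice": (35_000_000, 7),    # Art. 99(3): €35m or 7%
        "operator_obligation": (15_000_000, 3),    # Art. 99(4): €15m or 3%
        "misleading_information": (7_500_000, 1),  # Art. 99(5): €7.5m or 1%
    }
    fixed_ceiling, pct = tiers[tier]
    return max(fixed_ceiling, worldwide_turnover_eur * pct // 100)

# At €2bn worldwide turnover, the percentage dominates the fixed figure:
print(fine_cap_eur("prohibited_practice", 2_000_000_000))  # 140000000
```

The crossover is the point: above €500 million in turnover, the 7% prong exceeds the €35 million figure, which is why the percentages are the number that matters for a large multinational.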

Article 2(1)(c) extends the Regulation extraterritorially: providers and deployers established outside the Union fall within scope when the output produced by their AI system is used in the Union. A U.S.-headquartered company with EU customers, EU subsidiaries, or EU end-users does not escape the Regulation by keeping its servers and staff in the United States.

The Digital Omnibus on AI, published 19 November 2025, proposes deferring the Annex III high-risk obligations to 2 December 2027 and Annex I to 2 August 2028. The proposal is not law. As tracked on the European Parliament Legislative Train, trilogue negotiations remained ongoing in spring 2026 and have not produced an adopted text. Until adoption, August 2 is the operative date. The first question for in-scope companies is who, exactly, is in scope.

Who Is in Scope: Provider, Deployer, Importer, Distributor

The Regulation assigns obligations by function, not by corporate label. Article 3 of the EU AI Act defines a provider as an entity that develops, or has developed, an AI system and places it on the market or puts it into service under its own name or trademark. A deployer uses an AI system under its authority, outside personal non-professional activity. Importer and distributor roles attach to entities that place a third-country system on the EU market or make it available downstream. Each role carries a distinct obligation set, and a single entity can occupy more than one.

Functional definitions defeat tidy organizational charts. A U.S. parent that builds an AI tool for in-house use by an EU subsidiary is, on the face of Article 3, both provider and deployer. Internal-only deployment does not strip provider obligations; corporate structure does not, either.

Extraterritoriality compounds the exposure. Article 2(1)(c) pulls in providers and deployers established in a third country whenever the output produced by the AI system is used in the Union. A model trained and hosted in the United States that scores EU job applicants or generates EU-facing recommendations places the company in scope as provider or deployer, regardless of server location.

One obligation is already live. Under Article 4, the duty to ensure AI literacy of staff and persons dealing with AI on the operator's behalf has applied since 2 February 2025; supervision and enforcement begin 2 August 2026. U.S. multinationals frequently miss this because the headline penalty regime is not yet active.

Article 25 sets a reclassification trap. A distributor, importer, deployer, or third party becomes the provider of a high-risk system on three triggers: putting its name or trademark on the system, making a substantial modification, or modifying intended purpose such that the system becomes high-risk. White-label arrangements and acquisitions of EU AI vendors are the most common fact patterns. With scope mapped, classification is the next move.

High-Risk Classification: The Decision That Drives Everything

An AI system is high-risk under one of two triggers. First, under Article 6(1), a system is high-risk if it functions as a safety component of a product — or is itself a product — covered by the Union harmonisation legislation listed in Annex I and required to undergo third-party conformity assessment. Second, under Article 6(2), a system is high-risk if it falls within one of the eight enumerated use cases in Annex III: biometrics; critical infrastructure; education and vocational training; employment and workforce management, including access to self-employment; access to essential private and public services; law enforcement; migration, asylum, and border control management; and the administration of justice and democratic processes. Most U.S. multinationals encounter the regime through Annex III, particularly the employment, essential-services, and biometrics categories.

Classification is not a label exercise. It triggers the full Chapter III, Section 2 obligation stack under Regulation (EU) 2024/1689: a risk management system (Article 9), data and data governance controls over training, validation, and testing sets (Article 10), technical documentation sufficient to demonstrate conformity (Article 11), automatic event logging (Article 12), transparency and instructions for use directed at deployers (Article 13), human oversight measures designed into the system (Article 14), and accuracy, robustness, and cybersecurity benchmarks (Article 15). Each pillar produces evidence. None is satisfied by policy language alone.

The Article 6(3) derogation is narrow and frequently misread. An Annex III system escapes high-risk status only if it performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns or deviations without replacing or influencing the prior human assessment, or performs a preparatory task — and presents no significant risk of harm. Profiling of natural persons defeats the derogation outright; profiling is always high-risk. Any provider invoking the derogation must document the assessment under Article 6(4) and register the system before placing it on the market.
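Because the derogation is so frequently misread, it can help to see its logic laid out as a decision procedure. The sketch below is a simplified triage aid, not legal advice: the field names are our own shorthand for the statutory conditions, Annex I classification is not modeled, and a real assessment still requires the documented Article 6(4) analysis.

```python
from dataclasses import dataclass

@dataclass
class AnnexIIIScreen:
    """Simplified Article 6(2)/6(3) triage. Field names are this
    article's shorthand, not statutory language."""
    in_annex_iii_use_case: bool                 # Art. 6(2) gateway
    profiles_natural_persons: bool              # profiling defeats the derogation
    narrow_procedural_task: bool                # Art. 6(3)(a)
    improves_completed_human_activity: bool     # Art. 6(3)(b)
    detects_patterns_without_influencing: bool  # Art. 6(3)(c)
    preparatory_task_only: bool                 # Art. 6(3)(d)
    significant_risk_of_harm: bool

    def is_high_risk(self) -> bool:
        if not self.in_annex_iii_use_case:
            return False  # outside Annex III (Annex I route not modeled)
        if self.profiles_natural_persons:
            return True   # profiling is always high-risk
        derogation_limb = (
            self.narrow_procedural_task
            or self.improves_completed_human_activity
            or self.detects_patterns_without_influencing
            or self.preparatory_task_only
        )
        if derogation_limb and not self.significant_risk_of_harm:
            return False  # derogation applies; document it under Art. 6(4)
        return True
```

Run a resume-screening tool through this and the profiling override fires immediately, which is the practical lesson: most Annex III employment use cases never reach the derogation limbs at all.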

Misclassification carries its own penalty channel. Under Article 99(5), supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities — including in a self-classification file — draws fines up to €7.5 million or 1% of worldwide annual turnover, whichever is higher. A sloppy Article 6(3) memo is therefore not a paperwork shortcut; it is independent enforcement exposure. Section 4 addresses what the underlying documentation must actually contain.

Documentation Obligations: What Must Be in Place by August 2

Think of the documentation stack in three layers. Annex IV is the binder. The seven pillars in Chapter III, Section 2 — Articles 9 through 15 — are the chapters inside. Articles 43 through 49 are the public-facing seals: conformity assessment, CE marking, and database registration. By 2 August 2026, in-scope providers need all three layers populated and defensible.

Annex IV mandates nine categories of technical documentation: a general description of the system with its intended purpose and provider; detailed system elements including development process, training methodologies, and datasets; monitoring, functioning, and control mechanisms; performance metrics; an Article 9 risk management description; documented lifecycle changes; harmonised standards or common specifications applied; the EU declaration of conformity; and a description of the post-market monitoring plan. Each category must be specific enough that a notified body or market surveillance authority can audit against it.

Article 9 is the obligation most U.S. providers underestimate. Risk management is a continuous iterative process across the entire lifecycle, with regular systematic review and updating — not a one-time pre-launch artifact. Article 10 sets the data governance bar: providers must document design choices, data collection and preparation, assumptions, prior assessments of dataset availability and suitability, examination of biases, identified data gaps, and the measures used to detect, prevent, and mitigate those biases.

Once the documentation exists, the seal chain begins. Article 43 requires a conformity assessment — internal control under Annex VI for most Annex III systems, or a third-party assessment via a notified body under Annex VII where required. Article 48 mandates CE marking that is visible, legible, and indelible. Article 49 requires providers and authorised representatives to register Annex III high-risk systems in the EU AI database before placing them on the market.

General-purpose AI models sit on a parallel track. Article 53 obliges GPAI providers to maintain technical documentation, make information available to downstream providers, implement a copyright policy honoring Article 4(3) DSM Directive opt-outs, and publish a sufficiently detailed summary of training content using the AI Office template. With harmonised standards still pending, the GPAI Code of Practice — Transparency, Copyright, and Safety & Security chapters drafted by independent experts — is the only practical near-term vehicle for demonstrating Article 53 compliance. The question becomes how to architect this stack for a U.S. multinational.

Implementation Choices: Three Architectures for U.S. Multinationals

The compliance question for U.S. multinationals is not whether to comply but how to architect the program. Three patterns dominate the field, and each is a portfolio decision about cost, contagion, and product reach rather than a binary yes-or-no.

Architecture A — EU Walled Garden. The company runs a separate AI stack for EU users, with EU-localized data, EU-specific model versions, and EU-specific governance artifacts. This pattern carries the highest fixed cost and the most operational drag, but it limits regulatory contagion to a contained perimeter. In Promise Legal's view, it is viable at enterprise scale or where deployments touch sensitive Annex III categories such as employment screening or credit decisions.

Architecture B — Single Stack, EU Conformity. The company runs one global AI program documented to the highest applicable bar. This is materially less costly than re-papering per region when a NIST AI RMF or ISO 42001 spine is already in place. The Cloud Security Alliance maps Articles 9 and 27 of the EU AI Act directly onto NIST AI RMF Govern Subcategories 1.4 and 1.5 and ISO 42001 Subclauses 8.2 and 8.3, with a mature ISO 42001 risk process covering a substantial majority of the Article 9 technical burden. For Texas-headquartered companies, this is the same NIST AI RMF discipline that earns the TRAIGA safe harbor, now extended to a second jurisdiction at marginal incremental cost.

Architecture C — EU Withdraw. The company removes AI features from EU users and routes around the Union entirely. In Promise Legal's experience, this is cost-rational only where EU revenue sits at a small fraction of the book and the product can be cleanly geofenced without breaking the core experience. Cooley notes that multinationals must in any event reconcile EU obligations with Canada's AIDA and U.S. state-level frameworks, so withdrawal narrows the surface but does not eliminate the global program.

Whichever architecture the GC selects, Article 22 forces one concrete decision before 2 August 2026: a non-EU provider placing a high-risk AI system on the Union market must, by written mandate, designate an EU-established authorized representative who verifies the technical documentation and EU declaration of conformity, keeps them at the disposal of authorities for ten years, and registers the system in the Article 49 database. Section 6 turns to whether the Commission's Omnibus deferral changes that calculus.

The Omnibus Question: Should You Bet on the Deferral?

The Digital Omnibus on AI of 19 November 2025 is a Commission proposal, not law. Until Parliament and the Council adopt it, the original AI Act timelines, including the 2 August 2026 trigger for high-risk obligations, remain legally binding (Morrison Foerster). The institutional posture is unsettled: as tracked on the European Parliament's Legislative Train, trilogue negotiations between Parliament, Council, and Commission remain ongoing without an adopted compromise text.

Two carve-outs deserve specific attention. The Article 4 AI literacy obligation is not disappearing; Parliament's joint position retains a mandatory duty on providers and deployers, with the standard recalibrated from a “sufficient level” to “supporting the improvement” of literacy. GPAI obligations under Article 53 are not part of the proposed deferral and remain on their original timeline, as Skadden and Morrison Foerster have separately observed. Practitioner consensus across Morrison Foerster, Skadden, and Lewis Silkin treats the Omnibus as a recalibration of timing and procedural burden, not a relaxation of substantive duties (Skadden).

Promise Legal's view rests on asymmetry, not on a prediction about EU politics. Betting on adoption saves modest early-readiness spend; betting wrong means missing a binding deadline with enforcement exposure under Article 99. The downside dominates. Build to August 2, treat any deferral as a refund rather than a plan, and keep AI literacy and GPAI workstreams running on the original cadence. With that asymmetry settled, the checklist follows.

The Pre-August Checklist

The preceding sections covered scope, classification, documentation, architecture, and the deferral question. The synthesis below converts those analyses into a workable pre-deadline plan — ten items that, taken together, define what a U.S. multinational should have in motion before 2 August 2026.

  1. AI inventory mapped to provider, deployer, importer, and distributor roles. The EU AI Act's role definitions are functional rather than nominal, and Article 25 can re-cast a deployer as a provider on rebrand or substantial modification — meaning the inventory has to capture not just systems but the legal posture each system creates.
  2. Annex III screen for every system. A single high-risk classification triggers the entire Chapter III, Section 2 obligation set, so the screen is the gating decision that determines whether a system enters the heavy-compliance lane or stays out of it.
  3. EU customer and subsidiary footprint review. Article 2(1)(c) reaches whenever AI output is used in the Union, regardless of where the system runs or where the provider is established, which makes the footprint review a jurisdictional question rather than a commercial one.
  4. Article 4 AI literacy program in place. The literacy obligation has been live since 2 February 2025; supervision begins 2 August 2026, so a program that exists only on paper will not survive first contact with a competent authority.
  5. GPAI training-content summary prepared if you build, fine-tune, or distribute foundation models. Article 53(1)(d) requires use of the AI Office template summary, and the GPAI Code of Practice is currently the only practical compliance vehicle for in-scope providers.
  6. Authorized representative engaged if you ship high-risk AI systems to the EU. Article 22 mandates a written mandate to an EU-established representative before market entry, and the engagement is a precondition rather than a parallel workstream.
  7. Technical documentation drafts started against Annex IV. The nine-category package must be in place before placing on the market; conformity assessment under Articles 43 through 49 depends on it, and reverse-engineering the file post-launch is not a defensible posture.
  8. Conformity assessment path identified per system. The choice between Article 43 internal control and notified-body third-party assessment determines timeline, cost, and CE marking sequencing under Article 48 — and the path is system-specific, not enterprise-wide.
  9. Vendor representations refreshed: lawful-training-corpus warranty, AI bill of materials, audit rights. Article 25 reclassification means upstream vendor exposure cascades through the indemnity stack, so contract paper drafted before the Act took effect almost certainly understates the risk allocation now required.
  10. Architecture decision documented and board-approved. Whether the chosen path is Walled Garden, Single Stack, or EU Withdraw, a board-ratified architecture choice is itself an artifact of good-faith governance under frameworks like the NIST AI RMF and ISO 42001 — and the absence of a documented choice is itself a finding.
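Item 1 is the foundation for the other nine: the inventory has to capture not just systems but the legal posture each system creates. A minimal record sketch makes that concrete — field names here are our own illustration, not a prescribed schema, and the Article 25 logic is deliberately simplified to the rebrand/modification trigger.

```python
from dataclasses import dataclass
from enum import Flag, auto
from typing import Optional

class Role(Flag):
    """Functional roles under Article 3; one entity can hold several."""
    NONE = 0
    PROVIDER = auto()
    DEPLOYER = auto()
    IMPORTER = auto()
    DISTRIBUTOR = auto()

@dataclass
class AISystemRecord:
    name: str
    roles: Role                        # roles as assigned by function, not org chart
    output_used_in_union: bool         # Art. 2(1)(c) extraterritorial trigger
    annex_iii_category: Optional[str]  # e.g. "employment", or None if out of scope
    rebranded_or_modified: bool = False  # Art. 25 reclassification watch item

    def effective_roles(self) -> Role:
        # Simplified Art. 25 logic: rebranding or substantial modification
        # of a high-risk system re-casts the operator as provider.
        if self.rebranded_or_modified and self.annex_iii_category:
            return self.roles | Role.PROVIDER
        return self.roles
```

The point of the `effective_roles` method is the checklist's point: a deployer that white-labels a high-risk vendor tool is carrying provider obligations whether or not the inventory spreadsheet says so.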

Each item on the list compounds in cost as August 2 approaches. For U.S. multinationals mapping these obligations against an actual EU footprint, the work cannot wait until market surveillance authorities come calling, and the Digital Omnibus deferral is not a basis for slowing down.

Mapping these obligations against your actual EU footprint is more straightforward with counsel who have built EU AI Act readiness programs before. Talk with our team about your August 2 timeline.

Start the conversation