Drafting AI Disclosures for the 10-K: Materiality Without Hype

Two-front pressure: 92 SEC comments / 56 companies push toward disclosure; AI-washing enforcement (Presto) punishes overstatement. Six-element Item 1A architecture, four-step pre-clearance workflow, integrated documentary spine.

The Drafting Problem: Materiality Without Hype

After the SEC's 2024 settlement with Presto Automation, public-company AI disclosure drafting sits at the intersection of two opposing pressures. Under-disclosure invites staff comment letters and Section 10(b) plaintiff scrutiny. Over-disclosure—particularly the kind of forward-looking AI capability language that reads like marketing copy—now carries direct enforcement risk. Drafters have to thread both needles in the same filing.

The first pressure is empirical. A review of SEC disclosure comments issued since 2021 identified at least 92 separate comments addressing AI-related disclosures across 56 different companies. The staff is reading AI sections closely and pushing issuers to substantiate, contextualize, or remove vague references. Registrants that treat AI as a buzzword in Item 1A risk factors or MD&A should expect follow-up.

The second pressure cuts the other way. In the Presto matter, the SEC charged the company with materially false and misleading statements about critical aspects of its flagship artificial intelligence (AI) product, Presto Voice. The takeaway from Cooley's analysis of the order is direct: claims about AI prospects “should have a reasonable basis, and investors should be told that basis.” That standard now governs every AI sentence in a 10-K, not just product-page copy.

The drafting target follows from those two pressures. Disclosures must be specific enough to satisfy materiality expectations and grounded enough to withstand an AI-washing inquiry. That means tying every AI claim to a documented reasonable basis, calibrating language to actual deployment status, and building disclosure controls that catch hype before it reaches the filing. The remainder of this guide works through that sequence at the level of Item 1A, MD&A, and disclosure controls—and connects each to the broader AI-washing enforcement and litigation landscape public-company GCs are now navigating.

Item 1A AI Risk Factor Drafting

SEC staff comment letters in 2024 and 2025 have established a consistent posture for Item 1A AI disclosure: precise definitions, company-specific risk identification, and a reasonable basis for any forward-looking statement. The staff has asked registrants to consider defining ‘AI,’ ‘generative AI,’ ‘deep learning,’ ‘large language models,’ ‘neural networks,’ and any other industry-specific terminology; to identify material operational, legal, competitive, and similar risks; and, where companies make AI capability claims, to revise disclosure to explain the basis for the belief. The SEC Investor Advisory Committee's December 2025 recommendations reinforce this posture and explicitly note that Regulation S-K Items 101, 103, 106, and 303 are flexible enough to accommodate the rise in the use of AI — meaning issuers should integrate AI disclosure into existing items rather than wait for a prescriptive regime.

Promise Legal recommends a six-element architecture for Item 1A AI risk factors:

  1. A company-specific definition of AI. Adopt a definition that maps to how the company actually uses the technology — rule-based automation, predictive ML, generative LLMs, or agentic systems are not interchangeable. The IAC explicitly recommends that issuers define what they mean when they use the term ‘Artificial Intelligence.’
  2. Board oversight mechanisms. The IAC recommends issuers disclose board oversight mechanisms, if any, for overseeing the deployment of AI. Promise Legal's analysis of Marchand-derived AI governance duties sets out the committee charter, reporting cadence, and information-rights architecture that maps to this expectation.
  3. Internal-deployment risks separated from external AI claims. Where material, distinguish risks from internal AI use (operational, security, employment) from risks tied to consumer- or investor-facing AI claims (product performance, marketing, capability representations). Conflating the two is a common driver of staff comment.
  4. AI vendor concentration risk, where applicable. This is a practitioner-driven extension rather than an established staff comment theme, but reliance on a small number of foundation-model providers, inference platforms, or training-data vendors creates concentration exposure that increasingly belongs in Item 1A for AI-dependent issuers.
  5. AI-related regulatory exposure. Identify the specific regimes that apply: the Texas Responsible AI Governance Act, the EU AI Act, the Colorado AI Act, and NYC Local Law 144 are the current high-salience candidates for U.S.-listed issuers with relevant operations or workforces.
  6. AI-related litigation exposure. Address training-corpus IP claims, algorithmic discrimination claims, and AI-washing securities exposure as distinct litigation vectors rather than a single generic line.

The trajectory data explains the urgency. According to DLA Piper, 7 AI-related securities class actions were filed in 2023, 14 in 2024, and 12 in the first part of 2025. In Promise Legal's view, thin Item 1A AI risk factors — generic boilerplate that fails to define AI, identify company-specific risks, or substantiate capability claims — are the single biggest target for plaintiff securities firms working from this trajectory.

MD&A: How to Talk About AI's Business Impact

If Item 1A is where AI risk gets framed, MD&A is where AI value gets quantified — and where AI-washing exposure most often originates. Revenue attribution, customer counts, productivity gains, and efficiency claims all flow through MD&A, and each one creates a factual record that plaintiffs and the SEC can test against operational reality. The materiality trigger is lower than many drafters assume. As Cooley has framed the inquiry in the wake of the Presto enforcement action, if a company is “discussing AI in earnings calls or having extensive discussions with the board, is it potentially material?” Voluntary AI promotion, in other words, can manufacture its own disclosure obligation.

Two enforcement patterns illustrate the drafting risk. In Presto, the SEC charged that the company “failed to adequately disclose that the voice AI technology that powered all Presto Voice units was actually owned and operated by Supplier A” and that it claimed to have eliminated the need for human order taking when in fact human review remained in the loop. Innodata, by contrast, was hit with a securities class action after Wolfpack Research published a short report titled “Exposing INOD's Smoke and Mirrors AI,” alleging the company's Goldengate platform was rudimentary software built by a handful of employees; the stock dropped more than 30% in a single day on February 15, 2024.

The drafting principles follow directly. Scope precisely — “AI-enabled revenue” is not “AI-driven revenue,” and third-party model dependencies, human-in-the-loop processes, and supplier relationships should be named where they materially shape the offering. Quantify only what can be substantiated, and disclose limitations alongside gains. The PSLRA forward-looking statements safe harbor is not a backstop for over-claiming: if a plaintiff proves a corporate officer knew a projection was false at the time it was made, “the statutory shield evaporates.” Finally, MD&A should be reconciled against earnings-call scripts and investor decks before filing. DLA Piper has documented suits where companies “attributed their growth, customer retention, or competitive advantage to their AI or machine learning technologies when the actual business drivers were traditional marketing, aggressive sales tactics, or unrelated factors” — the kind of inconsistency that becomes the plaintiff's opening paragraph.

Disclosure Controls and AI-Claim Pre-Clearance

The Presto order did not stop at faulting the company's substantive AI claims. The SEC treated the absence of disclosure controls as an independent enforcement theory. Cooley's analysis quotes the order directly: Presto “failed to design, implement, or maintain disclosure controls and procedures to ensure that the information disclosed by Presto in Commission filings was accurate,” and “no one at Presto was formally responsible for ensuring that the information disclosed in Presto's Commission filings was accurate.” The D&O Diary's read of the same record is starker: Presto “had no established process for drafting, reviewing, or approving periodic or current reports required to be filed with the Commission” and “never implemented disclosure controls and policies and procedures for reviewing periodic or current reports required to be filed by the company.” The operational answer is a pre-clearance workflow that any AI-touching statement must pass through before it leaves the building.

Promise Legal recommends a four-step pre-clearance sequence for AI claims:

  1. Technical fact-check. Does the system actually do what the draft says it does? Engineering — not marketing — signs off on the underlying capability claim.
  2. Drafting review. Is the claim hedged appropriately, and does it carry the qualifications needed to avoid being materially misleading by omission?
  3. Consistency cross-check. The claim is reconciled against prior public statements, MD&A, earnings call scripts, and investor decks so the company is not saying different things in different forums.
  4. Board or committee approval. Material AI claims are escalated for documented sign-off rather than cleared at the staff level.

The set of in-scope documents is broader than periodic reports alone. The workflow should govern 10-Ks, 10-Qs, 8-Ks, registration statements, and proxies, but also earnings call scripts, press releases, investor decks, executive social posts, and conference materials. Ownership sits jointly with the disclosure committee and the AI Council, with a designated chief AI officer or equivalent accountable for the technical-accuracy step — an architecture Promise Legal has addressed in its analysis of chief AI officer personal liability after McDonald's. The SEC Investor Advisory Committee's December 2025 recommendations reinforce this allocation, urging issuers to “disclose board oversight mechanisms, if any, for overseeing the deployment of AI.”

The integrated-workstream point closes the loop. The same documentary spine — capability inventory, claim log, technical sign-offs, board minutes — answers the disclosure-controls finding in Presto, supports TRAIGA's NIST AI RMF safe harbor, and underwrites the Caremark good-faith defense. One workstream, three regimes.

Implications for the Disclosure Committee

The through-line across the Item 1A, MD&A, and disclosure-controls discussions above is that AI disclosure quality is a function of process, not prose. Disclosure committees that treat AI as a drafting problem will keep producing risk factors that read like marketing copy; committees that treat it as a controls problem will produce filings that hold up under SEC comment, plaintiff scrutiny, and board review. Three moves should be on the next committee agenda.

  1. Refresh Item 1A AI risk factors in the next 10-K cycle using the six-element architecture above — company-specific risks tied to actual AI deployments, paired with capability claims the company can substantiate.
  2. Institute pre-clearance for every AI-touching public statement using the four-step workflow above, routing earnings scripts, investor decks, press releases, and product marketing through the AI Council and disclosure controls function before release.
  3. Align AI risk-factor language with the rest of the company's AI governance documentary record — board minutes, AI Council charters, model inventories, and incident logs should describe the same AI program the 10-K describes.

That last point matters because the same documentary spine carries weight across multiple legal regimes. The risk register and NIST AI RMF mapping that support a TRAIGA safe harbor posture are the same artifacts that demonstrate good-faith oversight under Caremark and the Marchand line of board-oversight cases, and the same artifacts that support officer-level diligence after McDonald's. They are also what allows a disclosure committee to defend Item 1A and MD&A language against a Section 10(b) AI-washing claim — truthful about capabilities, candid about risks, consistent with what the company actually does.

The drafting work in the next 10-K cycle is where that alignment either holds or breaks.

10-K AI disclosure is now a controls problem, not a drafting problem. Talk with our team about scoping the pre-clearance workstream before your next 10-K cycle.
