AI-Washing Litigation in 2026: What Public-Company GCs Need to Know
On January 14, 2025, the SEC brought its first public-company AI-washing action, against Presto Automation. Four enforcement surfaces — SEC, plaintiffs' bar, FTC, and EU AI Act — now scrutinize every public AI claim. The GC's pre-clearance workstream is the answer.
The 2026 AI-Washing Landscape: Three Concurrent Tracks
On January 14, 2025, the SEC brought what may be its first AI-washing enforcement action against a public reporting company. The target was Presto Automation, and the agency alleged that the company misrepresented its flagship product, Presto Voice — describing third-party speech-recognition technology as proprietary, and claiming the system eliminated human order-taking when most orders still required human intervention. The Presto matter closed the gap between adviser-side AI-washing cases and the disclosure obligations of every other reporting issuer. After Presto, no public-company AI claim sits outside the enforcement perimeter.
The 2026 landscape now runs on three concurrent tracks, and public AI statements sit on all three at once.
Track one: SEC enforcement. The Commission's AI-washing posture moved from registered advisers (Delphia, Global Predictions) to fraud charges tied to AI claims (Joonko/Raz) and then to a public-company disclosure case in Presto. The trajectory runs in one direction, and Section 3 maps it in detail.
Track two: private securities class actions. Cornerstone Research counted 12 AI-related securities class actions in the first half of 2025, putting the year on pace to surpass the 2024 total of 15 — itself more than double the seven filings in 2023. AI was the most-filed trend category in H1 2025, and the Maximum Dollar Loss Index rose 154% over the prior half-year. Stanford's Securities Class Action Clearinghouse codes a filing as AI-related when the issuer develops AI models, manufactures AI infrastructure, or uses AI for business purposes, and the allegations turn on AI or AI-disclosure failures. That definition reaches well beyond pure-play AI companies.
Track three: adjacent agency exposure. On September 25, 2024, the FTC announced Operation AI Comply, a five-action sweep against DoNotPay, Ascend Ecom, FBA Machine, Ecommerce Empire Builders, and Rytr. Then-Chair Lina Khan framed the program in plain terms: “Using AI tools to trick, mislead, or defraud people is illegal... there is no AI exemption from the laws on the books.” State AG activity and the EU AI Act extend the same logic across jurisdictions, and Section 5 takes those up.
Every public AI claim — in 10-Ks, 10-Qs, earnings calls, press releases, and investor decks — now sits inside this three-track map. Section 10(b) and Rule 10b-5 supply the federal securities backbone; FTC Section 5 and state UDAP statutes supply the adjacent backstop. For the public-company GC, disclosure controls have to account for all three tracks at once, because plaintiffs and regulators already do.
What Counts as AI-Washing: Doctrine and Patterns
AI-washing is not a new cause of action. It is conventional Rule 10b-5 and Section 17(a) securities fraud applied to a new fact pattern. The elements are unchanged: a material misrepresentation or misleading omission, made with scienter (at least recklessness, defined as an extreme departure from ordinary care), in connection with the purchase or sale of a security, on which the plaintiff relied to its detriment. What is new is the subject matter — claims about machine learning models, training data, automation rates, and product roadmaps — and the technical opacity that lets puffery drift into actionable misstatement. Across the SEC enforcement docket and the private class action bar, four recurring patterns have emerged.
Pattern A — Capability Overstatement. The issuer claims to use AI or machine learning that does not exist or is not used as described. The SEC's March 2024 actions against Delphia and Global Predictions are the template: Delphia stated from 2019 through 2023 that it incorporated client data into AI/ML algorithms when, as it admitted in a 2021 SEC examination, it had neither used client data nor built such an algorithm; Global Predictions falsely marketed itself as the “first regulated AI financial advisor.” See SEC Press Release 2024-36.
Pattern B — Customer-Count Fraud. The fraud is not about the AI itself but about the commercial traction the AI is supposedly producing. In SEC v. Raz, the Commission charged Joonko's founder with defrauding investors of at least $21 million through false claims of more than 100 customers (including Fortune 500 names), more than 100,000 candidates in the pipeline, and over $1 million in revenue, supported by fabricated testimonials and forged bank statements and contracts. SDNY brought a parallel criminal case for securities and wire fraud carrying a 20-year statutory maximum on each count. See SEC Press Release 2024-70.
Pattern C — Product-Feature Substitution. The marketed AI feature is, in practice, a third-party model, a human-in-the-loop workflow, or manual labor dressed in algorithmic language. The SEC's January 2025 order against Presto Automation alleged that Presto Voice statements were misleading because the AI speech-recognition was, for a period, owned and operated by a third party, and Presto's own engine still required human intervention on the vast majority of orders — against a backdrop of no established process for drafting or approving periodic reports and no implemented disclosure controls. See Cooley's analysis. The same theory drives the Innodata securities class action filed in D.N.J. on February 21, 2024, after Wolfpack Research's “Smoke and Mirrors” report characterized the company as “a manual data-entry business driven by offshore labor, not innovation” and its Goldengate platform as “rudimentary software developed by just a handful of employees,” sending the stock down more than 30% the same day.
Pattern D — Roadmap Inflation. Forward-looking or aspirational capabilities are marketed in language that a reasonable investor would read as describing near-term realized performance. Forecasts and product visions are not categorically off-limits, but when present-tense framing, demo footage, or quantified efficiency claims outrun what the system actually does in production, the gap is the same gap that Patterns A through C rest on.
The recent doctrinal limit is Macquarie Infrastructure Corp. v. Moab Partners, L.P., in which the Supreme Court held unanimously on April 12, 2024 that a pure omission — failure to disclose information required by Item 303 of Regulation S-K — cannot support a private Rule 10b-5(b) claim absent an otherwise-misleading statement. Macquarie is cold comfort for AI-washing defendants. Half-truths and statements rendered misleading by what is left unsaid remain actionable, and the cases above are built on affirmative representations about capabilities, customers, and product architecture. The common denominator across all four patterns is the same: the gap between the marketing copy and the technical reality, measured by what a reasonable investor would have understood the issuer to be saying.
The SEC Signal: Three Years of Cases, One Direction
The Commission has been telegraphing this enforcement track for two years. The SEC Division of Examinations incorporated AI-washing into its 2024 examination priorities for registered investment advisers, and the 2025 priorities continued to scrutinize whether AI-related disclosures, supervisory frameworks, and internal controls align with what firms are actually doing. Examination priorities are not enforcement actions, but they are a reliable forecast of where Enforcement will look next.
The cases since then have walked a clear ladder. On March 18, 2024, the SEC announced its first AI-washing settlements against two registered investment advisers — Delphia ($225,000 civil penalty) and Global Predictions ($175,000) — for marketing AI capabilities they did not have. Then-Chair Gary Gensler's framing left no ambiguity about how the agency views the conduct.
“We find that Delphia and Global Predictions marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not. Investment advisers should not mislead the public by saying they are using an AI model when they are not. Such AI washing hurts investors.”
One month later, the agency escalated. The SEC's civil complaint against Joonko founder Ilit Raz alleged a fraud scheme of approximately $21 million and sought a permanent injunction, civil penalties, disgorgement with prejudgment interest, and an officer-and-director bar. The U.S. Attorney's Office for the Southern District of New York filed a parallel criminal indictment charging one count of securities fraud and one count of wire fraud, each carrying a maximum of 20 years. AI-washing had moved from civil penalty to potential prison time.
The third rung landed on January 14, 2025. The SEC charged Presto Automation, a restaurant-technology company that became publicly traded through a September 2022 SPAC merger and remained Nasdaq-listed until September 2024. Without admitting or denying the findings, Presto consented to a cease-and-desist order. The SEC declined to impose a civil penalty based on the company's cooperation and remedial efforts — but the order itself, with its disclosure-controls findings, is the artifact that matters for everyone else.
Promise Legal's read is that the post-Gensler Commission's specific enforcement priorities remain in flux, but the doctrinal foundation built across these three cases does not depend on a particular Chair. Rule 10b-5 and Section 17(a) have been applied to AI disclosures by a public reporting company, and that order is now part of the record. The operational implication for public-company GCs is straightforward: assume the AI section of the 10-K is being read by Enforcement, and assume the next action will not be the last.
The Private Securities Bar: Plaintiffs Catch Up
The plaintiffs' securities bar has moved from opportunistic AI filings to a structured practice category. Cornerstone Research's H1 2025 report identified 12 AI-related securities class actions filed in the first half of 2025, making AI the leading trend category ahead of cryptocurrency and SPACs. Over the same period, the Maximum Dollar Loss Index reached $1,851 billion, a 154% increase from H2 2024 and the eighth consecutive semiannual period above the $622 billion historical average. Plaintiffs are not just filing more AI cases; they are filing them inside a market where stock-drop exposure runs well above baseline.
The category has a working definition. The Stanford Securities Class Action Clearinghouse treats a filing as AI-related when the issuer develops AI models, manufactures AI infrastructure, or uses AI in its business, and the allegations connect to AI or AI-disclosure failures. In 2024, the sectoral split skewed toward Technology (8) and Communications (4), with Industrial (2) and Consumer Non-Cyclical (1) rounding out the cohort, signaling that AI-washing exposure now extends past the obvious model developers.
The canonical sequence was set by In re Innodata. On February 15, 2024, Wolfpack Research published “Smoke and Mirrors,” describing Innodata's Goldengate platform as “a rudimentary software developed by just a handful of employees” sitting atop “a manual data-entry business driven by offshore labor, not innovation.” The stock dropped more than 30% the same day. A securities class action was filed in the District of New Jersey on February 21, 2024 — six days later — covering investors from May 9, 2019 through February 14, 2024. Short-seller report, intraday drop, complaint within a week: that is now the template.
C3.ai shows the variant where an AI-marketed issuer meets disappointing results. The Northern District of California complaint covers a class period from February 26, 2025 through August 8, 2025 and alleges the company failed to disclose the impact of its CEO's health on deal closure and management's inability to mitigate that impact. After C3.ai announced a disappointing Q1 FY26 preliminary revenue figure of approximately $70 million on August 8, 2025, the stock fell approximately 25.6% the next full trading day.
Iris Energy illustrates the AI-pivot fact pattern. The complaint, filed October 7, 2024 in the Eastern District of New York, covers shareholders from June 20, 2023 through July 11, 2024 and alleges that Iris Energy made false or misleading statements about its ability to transition facilities from Bitcoin mining to high-performance computing and AI workloads. The complaint alleges the company spent less than $1 million per megawatt building its data centers, against an industry expert estimate of $10 to $20 million per megawatt for HPC-ready facilities, and that its Childress County, Texas site is a poor fit for HPC. The matter is pending and these allegations remain unproven.
Read across these cases, plaintiffs are scrutinizing a recurring set of disclosure surfaces: silent or thin Item 1A risk factors on AI-specific exposures, quantified AI-revenue and AI-customer counts asserted on earnings calls, and AI product-launch press releases that outrun the product actually shipped. Each is a place where optimistic forward narrative can be measured against a later disappointing result — and that is the measurement plaintiffs' counsel are now built to perform. AI-washing exposure, however, does not stop at the securities laws.
Adjacent Risk: FTC, State AGs, and the EU AI Act
The SEC track is the loudest, but it is not the only one. A single AI marketing statement — one paragraph in a 10-K, one line in an earnings call, one claim on a product page — can simultaneously trigger four distinct enforcement regimes: SEC Section 10(b), FTC Section 5, state unfair-and-deceptive-acts-and-practices (UDAP) statutes, and, for issuers with EU exposure, the EU AI Act. Different regulators, different theories, same statement.
The FTC opened the federal companion track on September 25, 2024 with Operation AI Comply, the agency's first coordinated AI-deception sweep. The sweep brought five enforcement actions, including matters against DoNotPay (marketed as the “world's first robot lawyer”), three AI-claim business-opportunity schemes, and Rytr for facilitating AI-generated deceptive consumer reviews. Then-Chair Lina Khan framed the operating principle bluntly: “Using AI tools to trick, mislead, or defraud people is illegal... there is no AI exemption from the laws on the books.”
State attorneys general have begun pursuing parallel AI-deception actions and settlements alongside the FTC sweep, focused on consumer-facing AI claims and on risk-management or governance failures. The state UDAP track tends to follow FTC theory but adds independent civil penalties and, in some jurisdictions, private rights of action.
Texas added a third domestic overlay. The Texas Responsible AI Governance Act (TRAIGA), signed June 22, 2025 and effective January 1, 2026, prohibits the intentional use of deceptive trade practices to manipulate human behavior in ways that circumvent informed decision-making, and specifically addresses “dark patterns” — interfaces designed to mislead. The framework is intent-based, which narrows the liability surface but raises the stakes when intent can be inferred from internal documents. We discuss the statute's safe harbor and NIST AI RMF alignment in our TRAIGA business-decision analysis.
For issuers with EU exposure, Article 50 of the EU AI Act imposes transparency obligations on providers and deployers of AI systems that interact with people, generate or manipulate content, or constitute deep fakes. Deployers must inform users they are interacting with AI unless that fact is obvious, and AI-generated or AI-manipulated content must be marked as artificially generated. Article 50 becomes fully applicable in August 2026, with fines up to €15 million or 3% of global annual turnover, whichever is higher.
The practical implication for public-company GCs is the overlay itself. A single 10-K risk factor, press release, or investor-deck slide can be Section 10(b) exposure to the SEC, Section 5 exposure to the FTC, UDAP exposure to a state AG, deceptive-practices exposure under TRAIGA, and Article 50 exposure under the EU AI Act — concurrently. Disclosure controls designed only for the securities track will miss the other four regimes.
The Pre-Clearance Workstream: How GCs Get Ahead of It
The disclosure surface area has expanded faster than most controls frameworks. Academic analysis of recent SEC filings finds that more than 43% of issuers in a corpus of over 30,000 filings now reference AI in their Risk Factors section, and the share is materially higher among large-cap reporting companies. AI risk has migrated from operational footnote into Item 1A, signaling that filers now treat AI as a material risk source rather than a marketing flourish. That migration changes the legal stakes of every AI sentence in a periodic report.
Promise Legal's view is that public-company GCs should stand up a discrete pre-clearance workstream that touches every AI-referencing public statement before it leaves the building. The scope is broader than 10-Ks and 10-Qs: it includes 8-Ks, registration statements, proxies, earnings call scripts, press releases, investor decks, executive social posts, and industry conference materials. A workable pipeline runs four steps in order:
- Technical fact-check. Confirm with the product and engineering owners that the AI system actually does what the draft claims it does — and at the maturity level implied.
- Drafting review. Calibrate hedges against what the company can substantiate; SEC staff comment letters have pushed issuers to define AI precisely, disclose company-specific AI risks, and substantiate claims rather than suggest systems are more autonomous, scalable, or commercially mature than they actually are.
- Consistency cross-check. Reconcile the draft against prior filings, transcripts, and marketing — inconsistency is what plaintiffs and Enforcement mine first.
- Board or committee sign-off on material claims. Track who approved what, when, and on what record.
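As an illustration only — not legal-process software, and every class, gate name, and approver role below is hypothetical — the four steps can be modeled as an ordered checklist that records who approved what, in what order. That approval record is exactly the artifact the Presto order faults issuers for lacking:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Hypothetical gates mirroring the four pre-clearance steps above.
class Gate(Enum):
    TECHNICAL_FACT_CHECK = auto()
    DRAFTING_REVIEW = auto()
    CONSISTENCY_CROSS_CHECK = auto()
    BOARD_SIGN_OFF = auto()

GATE_ORDER = list(Gate)

@dataclass
class AIClaim:
    """One AI-referencing public statement routed through pre-clearance."""
    text: str
    document: str  # e.g. "10-K Item 1", "Q3 earnings script"
    approvals: dict = field(default_factory=dict)  # Gate -> approver

    def approve(self, gate: Gate, approver: str) -> None:
        # Enforce in-order review: a gate cannot pass until all prior gates have.
        idx = GATE_ORDER.index(gate)
        missing = [g.name for g in GATE_ORDER[:idx] if g not in self.approvals]
        if missing:
            raise ValueError(f"cannot pass {gate.name}; missing {missing}")
        self.approvals[gate] = approver

    def cleared(self) -> bool:
        return all(g in self.approvals for g in GATE_ORDER)

claim = AIClaim("Our platform uses proprietary ML models", "10-K Item 1")
claim.approve(Gate.TECHNICAL_FACT_CHECK, "eng-lead")
claim.approve(Gate.DRAFTING_REVIEW, "securities-counsel")
claim.approve(Gate.CONSISTENCY_CROSS_CHECK, "disclosure-committee")
claim.approve(Gate.BOARD_SIGN_OFF, "audit-committee")
assert claim.cleared()
```

The design point the sketch makes is that sign-off is sequential and logged: a claim that skips the technical fact-check cannot reach drafting review, and the resulting record answers the Presto order's "who approved what, when" question.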
Item 1A is where most issuers still under-disclose: AI dependency, AI vendor concentration, AI-related litigation exposure, and AI regulatory compliance posture all warrant their own treatment. The SEC Investor Advisory Committee's December 2025 recommendations sharpen the expectation set: adopt a company definition of AI, disclose board oversight mechanisms, and report separately on internal versus consumer-facing AI deployment when material. Pre-clearance is the operational answer to those expectations.
The enforcement reason to build this now is Presto. The SEC's Presto order alleged the company had no established process for drafting, reviewing, or approving its periodic and current reports and never implemented disclosure controls or procedures — meaning the absence of pre-clearance is itself an enforceable theory, independent of whether any individual AI claim was false. A recurring pre-clearance workstream is materially cheaper than forensic remediation after a stock drop.
Implications for the Public-Company GC
AI-washing has graduated from regulatory rhetoric to enforcement reality. Across the SEC, the plaintiffs' bar, the FTC, and the EU AI Act (with state AI statutes layered on top), four enforcement surfaces now scrutinize the same public AI claims under different legal theories. The public-company GC sits at the intersection of all four, and the workstream that answers one largely answers the others.
Three immediate moves should anchor the GC's 2026 agenda:
- Implement AI-claim pre-clearance for every public statement. Earnings scripts, investor decks, press releases, product marketing, and executive social posts should route through the same disclosure-controls gate before they leave the building.
- Refresh Item 1A AI risk factors in the next 10-K cycle. Define what the company means by AI, disclose the board's oversight structure, and separate internal-use deployments from consumer- and investor-facing claims, consistent with SEC Investor Advisory Committee recommendations.
- Align disclosure with the documentary spine that earns TRAIGA's NIST AI RMF safe harbor and supports Caremark good-faith defenses. The model cards, validation logs, board minutes, and oversight artifacts that satisfy regulators are the same record that substantiates §10(b) positions in litigation.
The cross-cluster point is the one most GCs underweight. One documentary workstream defends across three legal theories and four enforcers: the TRAIGA NIST AI RMF safe harbor, the post-McDonald's Caremark exposure facing chief AI officers and the directors who supervise them, and §10(b) AI-disclosure liability. The GC who builds that record once defends across the entire enforcement map; the GC who builds it three times, late, defends none of them well.
Promise Legal helps public-company legal teams stand up that workstream before the next disclosure cycle. Pre-clearance is cheaper as a recurring workstream than as forensic remediation after a stock drop; talk with our team about scoping it before your next filing.