After McDonald's: Why Chief AI Officers Are Now Personally Liable for Oversight Failures
In re McDonald's (Del. Ch. 2023) extended the Caremark oversight duty to corporate officers within their domain. With roughly 60% of enterprises naming a CAIO, the named officer faces a personal-liability posture that DGCL § 102(b)(7) exculpation does not reach.
The Personal Inflection: Why 2026 Is Different
Roughly 60% of surveyed enterprises now staff a Chief AI Officer, according to the Wharton School's “Accountable Acceleration” Year Three Report published in October 2025. The survey base covered approximately 800 business leaders at U.S. organizations with 1,000-plus employees and $50 million-plus in revenue. Many of these CAIO appointments are responsibilities layered onto existing executive roles rather than greenfield hires. The named-officer fact pattern is no longer a thought experiment. It is the modal arrangement at enterprise scale.
Two Delaware decisions reframe what that named role carries with it. In In re McDonald's Corp. Stockholder Derivative Litigation, 289 A.3d 343 (Del. Ch. 2023), Vice Chancellor Laster held that “corporate officers owe the same fiduciary duties as corporate directors, which logically includes a duty of oversight.” The court denied the motion to dismiss as to the former Global Chief People Officer. McDonald's ported the Caremark oversight duty from the boardroom into the C-suite.
Four years earlier, the Delaware Supreme Court in Marchand v. Barnhill, 212 A.3d 805 (Del. 2019), held that Blue Bell's board breached Caremark by failing to make any good-faith effort to implement board-level monitoring of food safety. Marchand established that mission-critical regulatory areas trigger heightened oversight scrutiny.
The doctrinal pieces are assembled. AI is now mission-critical for most enterprises that have built it into revenue, hiring, customer interactions, or regulated decisions, which puts the named AI officer in the structural position of a food-safety director after Marchand. The first AI-specific officer Caremark suit looks like a question of when, not whether, and because the doctrine drives the personal exposure, the doctrine is where the analysis has to start.
The Doctrinal Foundation: Caremark, Marchand, McDonald's
The duty of oversight was assembled in four moves over nearly three decades. In 1996, Chancellor Allen's opinion in In re Caremark International Inc. Derivative Litigation held that the duty of care includes “a good faith effort to assure that a corporate information and reporting system, which the board concludes is adequate, exists.” Liability attaches only on sustained or systematic failure. The opinion built the architecture every later case would refine.
In 2006, Stone v. Ritter relocated that architecture inside the duty of loyalty. The Delaware Supreme Court held that “the requirement to act in good faith is a condition of the duty of loyalty,” and that Caremark claims sound in loyalty rather than care. Stone also crystallized the two-prong test: a fiduciary fails the duty by either (a) utterly failing to implement any reporting system or controls, or (b) having implemented one, consciously failing to monitor or oversee its operations. The loyalty rooting is not academic. It places Caremark liability outside the reach of DGCL § 102(b)(7) exculpation, a point that becomes load-bearing for personal exposure.
In 2019, Marchand v. Barnhill sharpened the doctrine for high-stakes risk. The court sustained a Caremark claim against Blue Bell's board after a listeria outbreak, holding that “mission critical” regulatory requirements put directors on notice for heightened oversight. A complete failure to implement a board-level food-safety monitoring system was enough to plead bad faith. Marchand told boards that mission-critical domains demand a dedicated reporting channel, not generalized risk talk.
In 2023, In re McDonald's Corporation Stockholder Derivative Litigation extended the duty downward. Vice Chancellor Laster held that “the same policies that motivated [the Court] to recognize the duty of oversight for directors apply equally, if not to a greater degree, to officers.” The court drew a domain line: “Some officers, like the CEO, have a company-wide remit,” while “other officers have particular areas of responsibility, and the officer's duty to make a good faith effort to establish an information system only applies within that area.” That domain limitation has a sharp exception. “A particularly egregious red flag might require an officer to say something even if it fell outside the officer's domain.”
The result is an officer-level two-prong test that mirrors the directors': a good-faith effort to put in place reasonable information and reporting systems within the officer's area, and good-faith action in response to red flags. Liability requires bad-faith conduct, not negligence. The doctrinal pieces are now portable across any officer with a defined remit. The remaining question is whether AI clears Marchand's mission-critical threshold.
Why AI Meets the Marchand Threshold
The case for AI as a mission-critical risk no longer rests on abstract argument. It rests on four concurrent exposure tracks, each producing dated, dollar-denominated evidence that boards and officers cannot credibly claim to have missed. Promise Legal reads the data as follows.
Securities class actions are accelerating. Cornerstone Research counted twelve AI-related securities class actions filed in the first half of 2025, on pace to exceed the fifteen filed in all of 2024, with the Maximum Dollar Loss Index reaching $1,851 billion in H1 2025 — a 154% jump from H2 2024. Stanford's Securities Class Action Clearinghouse has identified 53 AI-related filings through mid-2025 spanning model developers, infrastructure manufacturers, and operational users.
Copyright exposure has reached settlement scale. Bartz v. Anthropic reached a $1.5 billion settlement in late summer 2025 — the largest copyright settlement in U.S. history — covering roughly 500,000 works at approximately $3,000 per work, with final fairness hearing set for April 2026.
Employment collective actions now run at population scale. In May 2025, the Northern District of California granted preliminary ADEA collective certification in Mobley v. Workday, where Workday's own representations placed the rejected-application universe at 1.1 billion and the prospective collective at “hundreds of millions” of members.
SEC AI-washing enforcement now names individuals. In January 2025 the Commission charged Presto Automation — a formerly Nasdaq-listed restaurant-tech company — in the first AI-washing enforcement action against a public company, following Delphia and Global Predictions (March 2024, $225K and $175K civil penalties) and the Joonko/Ilit Raz matter (June 2024, $21 million in alleged investor fraud with a parallel SDNY criminal case). The Innodata securities class action, filed February 2024 in D.N.J. after a Wolfpack Research short-seller report characterized the company's AI platform as “rudimentary software developed by just a handful of employees,” supplies the private-litigation analogue: a 30%+ stock drop on alleged AI mischaracterization.
No Delaware court has yet applied Marchand's mission-critical threshold to AI specifically, and Promise Legal phrases the forecast accordingly. But on these facts, the dispositive question for a Chief AI Officer is no longer whether AI qualifies as mission-critical — the volume and magnitude push that answer toward yes — but whether the named officer can produce dated artifacts showing a good-faith attempt to monitor. The next section turns to what that personal exposure actually looks like in practice.
What Personal Exposure Actually Looks Like
Four exposure vectors now converge on the named AI officer, and each one travels through the individual rather than stopping at the entity. The vectors are derivative Caremark liability under McDonald's, securities exposure for AI-washing under Section 10(b), direct civil-rights exposure under the Mobley agent theory, and a contracting D&O backstop as carriers attach AI exclusions. Read together, they describe a personal-liability surface that a generic corporate shield does not cover.
Vector 1: derivative officer Caremark. As the preceding sections traced, McDonald's imports the two-prong test to officers, and AI deployments at scale satisfy the mission-critical threshold under Marchand. The named officer with AI in the title is the natural Caremark defendant when the system causes harm and the monitoring record is thin.
Vector 2: Section 10(b) AI-washing. The SEC charged Joonko CEO Ilit Raz with defrauding investors of at least $21 million through false statements about AI capabilities and customer counts, and sought a permanent injunction, civil penalties, disgorgement, and an officer-and-director bar; the Southern District of New York brought parallel criminal securities-fraud and wire-fraud charges. Public statements about model performance, training data, and customer adoption are now actionable at both the civil and criminal layer, and the officer who signs or sources those statements is the obvious target. (See SEC Press Release 2024-70.)
Vector 3: direct discrimination under Mobley. The Mobley court allowed claims that Workday acted as an agent of employers to proceed, treating an AI vendor as a direct civil-rights respondent. No court has yet applied that agent theory to a named in-house AI officer, but the structural logic carries: a Chief AI Officer who directed the deployment of a screening or scoring system can be drawn into the same agent-theory exposure in their own name, not merely the company's.
Vector 4: D&O and indemnification gaps. Carriers are now writing absolute AI exclusions that eliminate coverage for any claim based upon, arising out of, or attributable to the use, deployment, or development of AI, and underwriters scrutinize AI governance practices to decide whether the exclusion attaches. A documented governance program with clear legal, compliance, and security reporting lines is now an underwriting variable, not a nice-to-have.
The closing doctrinal point is the one that surprises non-Delaware counsel. DGCL Section 102(b)(7), as amended in 2022, lets Delaware corporations exculpate senior officers from monetary liability for duty-of-care breaches, but it does not reach loyalty breaches, bad-faith acts or omissions, improper personal benefit, or claims brought by or in the right of the corporation. Caremark sits in the duty of loyalty and the bad-faith bucket, so the standard Section 102(b)(7) exculpation does not stop a Caremark suit, and under either prong the officer is liable only on a showing of bad faith. A named CAIO who cannot produce dated monitoring artifacts, sitting atop a system that causes harm, occupies a posture comparable to a CFO during an accounting fraud. The next question is what the documentary defense actually looks like.
The Documentary Defense: What 'Good Faith' Looks Like in 2026
After McDonald's, the dispositive question in any officer oversight claim is narrow and unforgiving: can the named officer produce dated artifacts showing a good-faith attempt to monitor? The Court of Chancery confirmed that Caremark's exacting pleading standard — particularized facts suggesting bad faith — applies equally to officers, and the defensible record is contemporaneous information-system documentation built before anyone files a complaint. There is no retroactive fix. The artifacts either exist with timestamps that predate the incident, or they do not.
Promise Legal's view is that the same documentary spine does double duty. The eight-artifact documentary spine that earns the TRAIGA NIST AI RMF safe harbor — policy stack, AI inventory with risk classification, AI Council charter and minutes, signed Acceptable Use Policies, model and system cards, vendor onboarding artifacts, training and verification records, and an incident-response playbook with tabletop exercises — is the same record that anchors the Caremark good-faith defense. One build, two regimes.
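The timestamp point is the operational core: each artifact either exists with a date that predates the incident, or it does not count. That check can be expressed as a trivial completeness test. The sketch below is illustrative only, not legal tooling; the artifact names come from the list above, and the date-keyed dictionary is a hypothetical stand-in for whatever records system a real program uses.

```python
from datetime import date

# The eight artifact categories of the documentary spine described above.
SPINE = [
    "policy stack",
    "AI inventory with risk classification",
    "AI Council charter and minutes",
    "signed Acceptable Use Policies",
    "model and system cards",
    "vendor onboarding artifacts",
    "training and verification records",
    "incident-response playbook with tabletop exercises",
]

def spine_gaps(artifacts: dict[str, date], incident_date: date) -> list[str]:
    """Return every spine category that is missing or not dated before the incident."""
    return [
        name for name in SPINE
        if name not in artifacts or artifacts[name] >= incident_date
    ]

# Hypothetical record: two artifacts on file, one of them dated after the incident.
on_file = {
    "policy stack": date(2025, 3, 1),
    "AI inventory with risk classification": date(2026, 2, 1),
}
gaps = spine_gaps(on_file, incident_date=date(2026, 1, 15))
print(gaps)  # the postdated inventory counts as a gap alongside the six missing items
```

The `>=` comparison encodes the article's point that there is no retroactive fix: an artifact dated on or after the incident is treated the same as one that never existed.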
Mature programs layer a tiered escalation architecture on top of the spine. Multidisciplinary AI Councils pull in legal, risk, security, data, engineering, HR, and operations, and they run monitoring runbooks with KPIs, fairness checks, and drift thresholds. Escalation triggers are defined in advance: Level 1 for deployment without a documented owner, Level 2 for monitoring gaps beyond 30 days. The trigger architecture is what converts a static policy binder into the kind of information system Marchand demands.
For legal teams, ABA Formal Opinion 512 stacks a verification overlay on top of all of this. Lawyers must have a reasonable understanding of the capabilities and limitations of any generative AI tool, and independent verification is fact-specific and non-delegable to the model. The opinion sweeps in competence, confidentiality, communication, candor, supervision, and fees — every one of those duties needs its own audit trail in the AI Council minutes and verification protocols.
Documenting, however, only works if the officer has authority that matches the responsibility. That is a role-design problem, not a paperwork problem.
The Role Design Question: Authority and Responsibility Must Match
Skadden's read of McDonald's is the structural pivot point: while the CEO carries a company-wide remit, other officers “have particular areas of responsibility, and the officer's duty to make a good faith effort to establish an information system only applies within that area.” Duty is bounded by domain, not title. That cuts both ways. A CAIO named to the role without budget authority, without escalation rights to the CEO and Audit Committee, without signing power on the policy stack, and without mandatory inclusion in M&A AI diligence inherits the full Marchand exposure of the AI domain while holding none of the defensive tools required to discharge it.
The written charter is the structural defense. Promise Legal recommends that any CAIO engagement be papered before the title is announced, covering at minimum:
- Defined scope of the AI domain, in writing, signed by the CEO
- Formal escalation channel to the CEO and Audit Committee, with cadence
- Dedicated budget line and hiring authority for the AI Council and second line
- Signing authority on the AI policy stack and standards
- Mandatory seat in M&A AI diligence and post-close integration
- Indemnification agreement refresh confirming coverage for AI-domain decisions made within scope
D&O is the next variable. Woodruff Sawyer reports that carriers now scrutinize AI governance practices during underwriting to decide whether an AI exclusion is warranted, and they expect a clear articulation of how legal, compliance, and security functions are managed and reported to the board. Documented AI governance is now a coverage variable, not a footnote. Promise Legal's view is blunt: a CAIO without a written charter is just a defendant in waiting.
Implications for the Named Officer
The doctrinal pieces are in place. Caremark built the oversight floor, Stone v. Ritter located it in the duty of loyalty, Marchand raised the bar for mission-critical risk, and McDonald's extended the same framework to officers within their particular areas of responsibility. AI is mission-critical for any enterprise that has appointed a CAIO. The structural analogy to the food-safety director after Marchand is uncomfortable but accurate: a named officer with a defined risk domain, a board-level reporting cadence, and no documented monitoring system is the precise fact pattern Delaware courts have signaled they will scrutinize. The first AI-specific officer Caremark suit is a question of when, not whether.
Start a dated personal AI governance log today. Calendar entries showing AI Council attendance, dated reviews of AI inventory updates, signed approvals on policy revisions, and escalation memos when red flags appeared are the artifacts that distinguish a defensible record from a reconstructed one. The log belongs to the officer, not the company.
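What a dated, append-only entry might look like can be sketched in a few lines. This is an illustrative structure under stated assumptions, not a prescribed format; the entry kinds and field names are hypothetical, chosen to mirror the artifact types listed above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class LogEntry:
    """One dated, immutable governance-log entry. Frozen because the point of
    the log is a contemporaneous record, not one reconstructed after the fact."""
    kind: str     # e.g. "council_attendance", "inventory_review", "escalation_memo"
    summary: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

log: list[LogEntry] = []
log.append(LogEntry("council_attendance", "Attended Q1 AI Council; reviewed inventory deltas"))
log.append(LogEntry("escalation_memo", "Flagged screening-model drift to CEO and Audit Committee"))
```

The frozen dataclass and auto-stamped UTC timestamp capture the two properties the article emphasizes: entries are dated at creation and cannot be quietly edited later.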
Review D&O coverage and indemnification language with counsel. Confirm AI-domain coverage, check for absolute AI exclusions that some carriers have begun inserting, and refresh the indemnification agreement language to align with the post-McDonald's officer exposure profile. DGCL § 102(b)(7) exculpation does not reach the loyalty-based, bad-faith claims at the core of Caremark, for officers or directors, and officer exculpation does not cover claims brought by or in the right of the corporation at all.
Confirm the role's written charter, scope, and escalation channels. No charter means no defined area of responsibility, which under McDonald's means exposure without the defensive tools the doctrine ties to a bounded role. Scope clarity is itself a Caremark artifact.
The same documentary discipline that earns the TRAIGA NIST safe harbor and EU AI Act conformity also produces the personal Caremark defense. One artifact set, three regulatory fronts, plus officer-level protection. For the named officer mapping doctrinal exposure against an actual AI footprint, the work cannot wait until the first plaintiff's bar inquiry arrives.
Officer-level Caremark exposure is bounded by the artifacts you can produce. Talk with our team about your AI governance program and personal-liability posture.