The Marchand Test for AI Governance: What Boards Owe Their Shareholders
Marchand creates a heightened-scrutiny zone for mission-critical risk. Glass Lewis's 2026 policy and CalPERS treat AI oversight gaps as director-recall signals after material incidents. A six-artifact board record satisfies Marchand, TRAIGA, and Glass Lewis.
The Board Question: Is AI Mission-Critical for Your Company?
Delaware's Marchand v. Barnhill (2019) reframed the director duty of oversight that In re Caremark first articulated in 1996. The decision held that where a risk is essential and mission-critical to the corporation's operations, the board must have a system to monitor and report on it, and the absence of such a system supports an inference of bad faith sufficient to survive a motion to dismiss. In Marchand, that risk was food safety at Blue Bell Creameries; the court found the complaint pled facts supporting a fair inference that no board-level system of monitoring or reporting on food safety existed.
The 2026 question for boards is whether artificial intelligence clears the same threshold. The answer is company-specific, but the institutional pressure to ask the question in writing has now arrived from outside Delaware. Glass Lewis has signaled that AI governance and related disclosures will be top of mind for issuers and investors heading into the 2026 proxy season, and where insufficient AI oversight has caused material harm to shareholders, Glass Lewis will identify the responsible directors or committees and weigh AI management when evaluating director nominees. CalPERS, in parallel, has indicated it may withhold votes from director nominees where there is evidence of failed or insufficient oversight of AI-related risks.
ISS's 2026 benchmark policy did not adopt an AI-specific director recommendation, so the proxy advisor overlay is uneven. The cleaner framing is that Glass Lewis and major institutional investors have built an AI-specific accountability lane on top of the broader risk-oversight expectations that ISS and others continue to apply. For boards whose products, revenue, or core operations depend on AI systems, the practical question is no longer whether to formalize AI oversight. It is how the documentary record will read to a Delaware court testing a Marchand claim and, separately, to a proxy advisor scoring director nominees in the next cycle. Promise Legal's AI and technology governance practice sits at that intersection, and the sections that follow work through what each audience expects to see.
What Marchand Actually Requires of Boards
The doctrinal arc runs from In re Caremark (Del. Ch. 1996) through Stone v. Ritter (Del. 2006), into Marchand v. Barnhill (Del. 2019), and most recently Hughes v. Hu (Del. Ch. 2020). Stone rooted the oversight duty in the duty of loyalty, making it non-exculpable under DGCL Section 102(b)(7), and articulated the two-prong test that still governs: liability attaches where “(a) directors utterly failed to implement any reporting or information system or controls; or (b) having implemented such a system or controls, consciously failed to monitor or oversee its operations thus disabling themselves from being informed of risks or problems requiring their attention.” Both prongs require bad faith. Both are pleaded, and disproved, on the documentary record.
Marchand refined prong (a) for the mission-critical context. The Delaware Supreme Court held that “when a plaintiff can plead an inference that a board has undertaken no efforts to make sure it is informed of a compliance issue intrinsically critical to the company's business operation, then that supports an inference that the board has not made the good faith effort that Caremark requires.” The refinement matters because it shifts the analytical question from whether the board had some generalized compliance apparatus to whether the board had a monitoring system specifically calibrated to the risk that defines the business. A food-safety system at a non-ice-cream company is not a defense for an ice-cream company.
Hughes v. Hu demonstrated how a thin record allows a Caremark claim to survive a motion to dismiss. Vice Chancellor Laster sustained the claim at the pleading stage based on the audit committee's sparse record of oversight over financial-statement and related-party-transaction risks, finding a substantial likelihood that defendants breached their duty of loyalty by failing to act in good faith. The lesson is operational: the duty is satisfied or breached in the minutes. Boards must, as Skadden has summarized the post-Marchand guidance, "document their efforts in sufficient detail to demonstrate the attention they have paid to understanding and overseeing risk and compliance systems and responding to any issues that arise." The Harvard Corporate Governance Forum reaches the same conclusion: boards must memorialize the general topics of board-level oversight and risk discussions in their minutes to defend against future Caremark claims.
That sets up the threshold question for any board sitting in 2026: does AI qualify as the kind of risk that triggers the mission-critical refinement? The next section takes that up directly.
Does AI Meet the Marchand Threshold for Your Company?
No Delaware court has yet held that artificial intelligence is, per se, a mission-critical risk under Marchand. That holding, in the firm's view, is a matter of when rather than whether. The doctrinal trajectory — Caremark through Stone v. Ritter, Marchand, and Hughes v. Hu — has consistently expanded the universe of risks that demand board-level monitoring rather than generalized ERM, and the commentary aimed at boards has already begun naming AI as the next entry on that list. Harvard's Corporate Governance Forum, for instance, has counseled that companies with significant AI investments should consider their Caremark obligations where AI has become — or is likely to become in the near future — a mission-critical regulatory compliance risk requiring a good-faith, board-level system of monitoring and reporting.
Until Delaware speaks directly, the practical question is one of touchpoints. Promise Legal advises boards to test the company against a multi-factor analysis. The more of these factors that apply, the closer the company sits to the heightened-scrutiny zone:
- The company is an AI developer or its core product is an AI system.
- The business model relies materially on AI to deliver the service customers pay for.
- The company makes AI claims in 10-Ks, 10-Qs, or earnings calls that position AI as core to the business narrative.
- AI is deployed in regulated workflows — employment decisions, healthcare delivery, financial services, consumer credit — where statutory and agency oversight already attaches.
- The competitive narrative to investors and customers depends on AI capability or AI-driven differentiation.
Where several of these touchpoints stack, the National Association of Corporate Directors has cautioned that generalized risk oversight mechanisms and reliance on ad hoc management reporting may not withstand Caremark scrutiny. The corollary is that boards in that posture should establish, in conjunction with management, internal controls for any mission-critical AI risks and institute dedicated reporting and board oversight mechanisms tied to those risks.
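Purely as an illustration, the touchpoint analysis above can be sketched as a simple tally. The factor labels and the two-factor threshold below are assumptions made for the example; "several" is a judgment call for counsel, not a bright-line rule.

```python
# Illustrative only: a schematic tally of the Marchand touchpoints listed above.
# Treating two or more stacked factors as the heightened-scrutiny zone is an
# assumption for this sketch, not doctrine.

TOUCHPOINTS = [
    "AI developer, or core product is an AI system",
    "Business model relies materially on AI for the paid service",
    "AI claims in 10-Ks, 10-Qs, or earnings calls",
    "AI deployed in regulated workflows",
    "Competitive narrative depends on AI capability",
]

def heightened_scrutiny_zone(answers: dict, threshold: int = 2) -> bool:
    """Return True when enough touchpoints stack to warrant board-level monitoring."""
    hits = sum(1 for t in TOUCHPOINTS if answers.get(t, False))
    return hits >= threshold

# A company that develops AI and deploys it in a regulated workflow
# stacks two factors and lands in the zone under this sketch.
example = {TOUCHPOINTS[0]: True, TOUCHPOINTS[3]: True}
print(heightened_scrutiny_zone(example))
```

The point of the sketch is the structure, not the arithmetic: the memo should record which factors apply and why, so the threshold finding is documented rather than implied.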
Promise Legal's working view is that most public companies with any meaningful AI exposure have already crossed the threshold. The remaining question — the one a Delaware court would ask — is what the documentary record shows. Board minutes, charter language, and reporting cadence are the artifacts that determine whether the duty was satisfied or breached. That, in turn, places enormous weight on the institutions now pressuring boards to produce that record, which is where the next section turns.
The Proxy Advisor Overlay: Glass Lewis and Investor Expectations
Doctrinal exposure under Marchand is only half the pressure on the board. The other half comes from the proxy advisers and institutional investors who translate governance gaps into voting recommendations and engagement campaigns. For AI oversight, that pressure is concentrated at Glass Lewis, with CalPERS and the broader institutional investor base providing reinforcement.
Glass Lewis has adopted explicit policy language on AI. The firm expects “clear disclosure concerning the role of the board in overseeing issues related to AI, including how companies are ensuring directors are fully versed on this rapidly evolving and dynamic issue,” and tells boards to mitigate material AI risks through strong internal frameworks that include ethical considerations and effective oversight. The recall trigger is narrow but sharp: Glass Lewis has stated it will not make voting recommendations on the basis of AI oversight absent material incidents, but where insufficient oversight or management of AI has caused material harm to shareholders, it will identify the responsible directors or committees and evaluate the response on a case-by-case basis. ISS's 2026 benchmark update did not adopt an AI-specific director recommendation; the broader risk-oversight framework remains the channel for ISS-driven pressure.
The investor base reinforces this posture. CalPERS, consistent with the withhold-vote position flagged earlier, is among the institutional investors pressing for board-level AI accountability. Survey data sharpens the picture: 65% of U.S. investors believe all companies should disclose board AI oversight, 49% support codifying AI oversight in committee charters, and 46% favor having the full board or a specific committee handle AI oversight. Against those expectations, only 28% of S&P 100 companies were found to disclose both board-level oversight and an AI policy — the governance gap that proxy advisers and shareholder proponents are now pricing in. This is the practical reinforcement of the documentary discipline Caremark already requires.
The Board Documentary Record: What Counts as Substantial Compliance
The practical question for any board treating AI as mission-critical is narrower than the doctrine suggests: what, exactly, does the record need to show? Skadden's articulation of post-Marchand practice is the operative standard — directors must document their efforts in sufficient detail to demonstrate the attention they have paid to understanding and overseeing risk and compliance systems and responding to any issues that arise. The discipline is documentary, contemporaneous, and dated. Promise Legal counsels boards to maintain a six-artifact spine, each artifact mapped to a specific Caremark fault line.
- Risk classification. A board resolution, dated and entered into the minutes, formally assessing whether AI is mission-critical to the company's operations, products, or regulated activities. This is the Marchand threshold finding.
- Oversight assignment. A charter — audit committee, technology committee, or a standalone AI committee — with express language assigning AI risk oversight, scope, and reporting authority.
- Reporting cadence. Quarterly (or more frequent) management reporting to the assigned committee covering model performance, material incidents, regulatory developments, and red-flag escalations, with the cadence memorialized in the charter.
- Director education. Minutes reflecting substantive director briefings on AI-specific legal exposure — TRAIGA, the EU AI Act, securities disclosure, employment discrimination — not generic technology updates. The NACD's guidance is to seek a board member who has familiarity with AI or, alternatively, engage independent advisor(s) to supplement the board's skill set.
- Incident response. A board-reviewed and ratified incident response playbook addressing AI-specific scenarios — model failure, training-data contamination, hallucination-driven harm, unauthorized model use — with documented tabletop or live activation events.
- Officer accountability. A named Chief AI Officer or functional equivalent with a defined reporting line to the board or the assigned committee. Officer-level Caremark exposure after In re McDonald's means the CAIO carries a personal AI governance log that the board can inspect.
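As an illustration of how a GC might inventory the six-artifact spine against the minute book, the record can be treated as a gap check. The artifact names track the list above; the `present`/`dated` fields and the example statuses are assumptions invented for the sketch, not a prescribed format.

```python
# Illustrative sketch: the six-artifact spine as a minute-book gap check.
# Statuses below are hypothetical; a real inventory would be populated from
# the company's own minutes and charters.
from dataclasses import dataclass

@dataclass
class Artifact:
    name: str
    present: bool
    dated: bool  # the record must be contemporaneous and dated

SPINE = [
    Artifact("Risk classification resolution", present=True, dated=True),
    Artifact("Oversight assignment in charter", present=True, dated=True),
    Artifact("Quarterly reporting cadence", present=True, dated=False),
    Artifact("Director education minutes", present=False, dated=False),
    Artifact("Incident response playbook", present=True, dated=True),
    Artifact("Officer accountability (CAIO)", present=False, dated=False),
]

def gaps(spine):
    """Artifacts that are missing or undated — the weak spots in a Caremark defense."""
    return [a.name for a in spine if not (a.present and a.dated)]

for name in gaps(SPINE):
    print("GAP:", name)
```

The design choice worth noting is that an undated artifact counts as a gap: under the post-Marchand guidance quoted above, a record assembled after the fact does not show contemporaneous attention.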
The urgency is empirical. ISS-STOXX's review of 3,048 U.S. companies found that only 481 (16%) disclosed the presence of at least one director with specialized AI skills; only 275 companies (9%) acknowledged having established policies on AI; only 245 (8%) disclosed board-level oversight of AI. The governance gap is the documentation gap. Promise Legal's view is that the same six-artifact spine — extended by the technical controls described in the firm's eight-artifact documentary record for the TRAIGA NIST AI RMF safe harbor — simultaneously satisfies Marchand, the TRAIGA affirmative defense, and Glass Lewis 2026 disclosure expectations. One record, three regimes.
Implications for Directors and the GC Who Reports to Them
The doctrinal pieces are in place. Marchand supplies the director-level Caremark trigger, McDonald's extends parallel exposure to officers, AI has crossed the mission-critical threshold for most public companies, and the proxy advisers have begun recommending against directors who lack a documented oversight architecture after material incidents. What remains variable, and therefore what the board controls, is the quality of the documentary record.
Three immediate moves follow.
- Direct the GC to produce a Marchand-readiness memo that classifies the company's AI deployments by criticality, identifies which systems trigger board-level oversight obligations, and inventories the existing reporting cadence against the gaps.
- Confirm the board committee structure for AI oversight. Audit, technology, or a dedicated AI committee are all defensible; what is not defensible is ambiguity about which committee owns the risk and on what cadence it reports to the full board.
- Institute quarterly AI risk reporting from management to the board, with written materials retained in the minute book — the dedicated reporting and oversight mechanism the NACD identifies as the baseline expectation for mission-critical AI risks.
The efficiency for the GC is that one well-built artifact set discharges four overlapping regimes. The same Marchand-readiness memo, committee charter, and quarterly reporting package supports (a) director Caremark defense under Marchand, (b) officer Caremark defense under the post-McDonald's framework for Chief AI Officers, (c) the TRAIGA NIST AI RMF safe harbor, and (d) Section 10(b) AI-washing pre-clearance for public statements about AI capabilities.
One artifact set, four overlapping regimes — that is the governance posture this cluster of doctrine demands.
Marchand readiness is now a board-level workstream, not a year-end checklist. Talk with our team about scoping the documentary record before the next cycle.