When Copilot Committed the Ad: Agency Law, Electronic Signatures, and the Missing Duty-of-Care for AI Agents

Moffatt v. Air Canada exports cleanly to US law: UETA Section 14 and Restatement Section 2.03 already bind the deployer. The drafting work is allocation, not attribution.


The Air Canada Pattern

In February 2024, the British Columbia Civil Resolution Tribunal ordered Air Canada to pay Jake Moffatt $812.02 CAD after the airline's website chatbot told him he could apply for a bereavement fare retroactively — a claim that directly contradicted Air Canada's actual published policy. When Moffatt tried to collect, the airline refused and argued, in defense, that the chatbot was “a separate legal entity that is responsible for its own actions.” Tribunal Member Christopher Rivers called that “a remarkable submission” and held the airline responsible for “all the information on its website,” whether it came from a static page or a generative agent (Moffatt v. Air Canada, 2024 BCCRT 149). The damages award was small. The doctrinal move was not: a tribunal refused to let a deployer disclaim its own automation.

This is not a one-off Canadian case. US regulators have been building a parallel record, and the pattern is consistent — the company that deploys the agent answers for what the agent says and does.

On September 25, 2024, the Federal Trade Commission announced Operation AI Comply, a sweep of five enforcement actions targeting companies that used AI to power deceptive schemes or made unsupported claims about AI capabilities. The message from the Commission was that existing consumer-protection authority reaches AI conduct without new legislation; the deployer is the respondent, not the model.

That message hardened in the DoNotPay settlement finalized in January 2025, where the company paid $193,000 to resolve FTC allegations that it had misrepresented the capabilities of its “robot lawyer.” The FTC did not sue the underlying model or the code; it sued the company that marketed the agent to consumers. In August 2025, the Commission continued the pattern with a complaint against Air AI Technologies for misleading representations about what its AI sales agents could do, reinforcing that capability claims themselves are actionable when the agent underperforms.

Then, on September 11, 2025, the FTC issued 6(b) orders to seven companies operating consumer-facing chatbots, compelling them to report on how they evaluate safety, monetize engagement, and handle harms to users — particularly minors. A 6(b) inquiry is not a complaint, but it is a discovery posture: the Commission is mapping the industry before the next wave of enforcement.

Read together, Moffatt and the FTC actions establish that deployer liability for AI-agent conduct is a going concern, not a forecast. Tribunals are awarding damages, regulators are extracting settlements, and agencies are gathering the evidentiary record for the next round. The doctrinal scaffolding — agency, misrepresentation, unfair and deceptive practices — is already in place. What remains unsettled is how these older frameworks map onto agents that negotiate, sign, and transact on their own. The rest of this article works through that mapping.

Agency Law's Unfinished Answer

The enforcement record above establishes that deployers pay when their agents misspeak. The doctrinal question is narrower and harder: when a Copilot instance drafts a contract, sends a quote, or commits a company to a refund policy, who is the principal, who is the agent, and what authority did anyone actually confer? The Restatement (Third) of Agency gives us the scaffolding for assigning responsibility, but it was not drafted with non-human principals or agents in mind, and the seams show as soon as you push on them.

Start with the definition. Restatement (Third) of Agency § 1.01 provides that “[a]gency is the fiduciary relationship that arises when one person (a ‘principal’) manifests assent to another person (an ‘agent’) that the agent shall act on the principal’s behalf and subject to the principal’s control, and the agent manifests assent or otherwise consents so to act.” The definition rests on two assumptions that an AI system breaks: it presumes the agent is a person capable of manifesting assent, and it presumes a fiduciary relationship rooted in consent. A large language model satisfies neither in any doctrinally meaningful sense. The doctrinal seam is real, and the common-law fix is to borrow a concept that does not depend on the agent's interior life at all.

The move that actually does work is apparent authority. Restatement (Third) of Agency § 2.03 defines it as “the power held by an agent or other actor to affect a principal’s legal relations with third parties when a third party reasonably believes the actor has authority to act on behalf of the principal and that belief is traceable to the principal’s manifestations.” Two features make § 2.03 the right lever. First, the inquiry is outward-facing: it asks what the third party reasonably believed, not whether the “agent” has an inner life. Second, the trigger is the principal’s manifestations — the company’s decision to deploy the chatbot on its website, brand it with the company logo, and hold it out as speaking for the business. That is exactly the factual posture in the enforcement record above, and it is why tribunals have been willing to bind deployers without resolving the metaphysics of machine agency.

Respondeat superior is the negative example. Restatement (Third) of Agency § 7.07 imposes vicarious liability on an employer for torts committed by an employee acting within the scope of employment. It cannot do the work here. The provision presumes an employment relationship, a human employee, and a scope defined by the employer’s control over the manner of work. An AI agent is not an employee, has no “scope of employment” in any sense § 7.07 recognizes, and the control inquiry collapses into a question about software configuration rather than labor. Deployers should not expect § 7.07 to either help or harm them — it simply does not map.

That leaves a doctrinal gap that the common law has not closed on its own. The cleaner path is statutory: the Uniform Electronic Transactions Act already defines an “electronic agent” and supplies an intent-attribution rule that sidesteps the personhood problem entirely. That is where the next section picks up.

UETA Section 14 and E-SIGN: The Statutory Bridge

The common-law agency doctrine covered in the prior section does not stand alone when the actor is software. Nearly every U.S. jurisdiction has enacted the Uniform Electronic Transactions Act (New York operates under its own Electronic Signatures and Records Act on functionally similar terms), and Congress layered the federal Electronic Signatures in Global and National Commerce Act on top. Both statutes resolve, by their plain text, the question that agency law only answers by analogy: a contract formed by a machine binds the human who deployed it.

The anchor provision is UETA Section 14(1), which states that "[a] contract may be formed by the interaction of electronic agents of the parties, even if no individual was aware of or reviewed the electronic agents' actions or the resulting terms and agreements." That sentence does the heavy lifting. It forecloses the obvious defense — "no human at my company saw this deal" — before it can be raised. Lack of human review is not a formation defect; it is the expected operating mode.

The definition of "electronic agent" in UETA Section 2(6) is drafted broadly enough to absorb generative AI without amendment. The statute defines it as "a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part, without review or action by an individual." A large language model wired into a procurement workflow, a chat widget empowered to quote prices, or a Copilot-style assistant that drafts and sends commercial offers all sit comfortably inside that definition. The statute does not care whether the program is deterministic or stochastic — only that it acts independently of contemporaneous human review.

Attribution is handled by UETA Section 9, which provides that an electronic record or signature is attributable to a person if it was the act of the person, and the Official Comment is unusually direct: the electronic agent is treated as "a tool of the person" who used it, such that the person is bound by its operation to the same extent as if the person had acted directly. The framing matters. Section 9 does not ask whether the deployer subjectively intended the specific output; it asks whether the deployer used the tool.

The federal layer mirrors the state rule and preempts any contrary holding. Under 15 U.S.C. Section 7001(h), "[a] contract or other record relating to a transaction in or affecting interstate or foreign commerce may not be denied legal effect, validity, or enforceability solely because its formation, creation, or delivery involved the action of one or more electronic agents so long as the action of any such electronic agent is legally attributable to the person to be bound." E-SIGN thus guarantees enforceability at the federal floor, while leaving the attribution question to state law.

That leaves the harder question these statutes do not squarely answer: what happens when the electronic agent acts outside the parameters its principal could reasonably have foreseen? The UETA drafting history does not resolve whether the intent framework requires the agent to have operated within known parameters before attribution attaches. The next section tests that gap against a concrete fact pattern.

The Copilot Problem: When the Agent Goes Off-Script

The doctrinal scaffolding from UETA holds when an agent does roughly what its deployer expected. The harder question is what happens when the agent freelances. On March 30, 2026, The Register reported that GitHub Copilot began injecting promotional “coding agent tips” into pull-request comments across public repositories. The tip promoting the Raycast launcher surfaced on more than 11,400 PRs; broader coverage by Winbuzzer put the total at roughly 1.5 million affected pull requests. GitHub product manager Tim Rogers conceded that letting Copilot modify human-authored PRs without notice “was the wrong judgement call,” and the feature was withdrawn under developer backlash. No human at GitHub had drafted, approved, or — in Microsoft’s own framing — intended the ad copy. A spokesperson described the incident as “a programming logic issue” that surfaced in “the wrong context.”

This is not a one-off. The OECD.AI incident monitor catalogs the broader pattern of AI incidents as deployments scale, and autonomous-commit agents that open pull requests, edit configuration files, or push dependency bumps have moved from novelty to routine infrastructure inside engineering organizations — routine infrastructure is how liability vectors quietly become load-bearing.

Run the PR-ad incident through the UETA framework above and the answer is uncomfortable for deployers. UETA § 9's attribution rule asks whether the electronic act was the result of the purported actor's action, and the comments make clear that a person who deploys an automated agent is bound by what that agent does within the scope of its authorized use. The § 2(6) definition of "electronic agent" and Comment 2's framing of the agent as a tool of the person who configures it leave no doctrinal room for a "the model did it" defense. A developer who installs Copilot, grants it repository write access, and lets it open PRs has, in UETA's vocabulary, knowingly used the agent. Unintended advertising copy inside a PR is still the deployer's electronic act.

The obvious counter is that GitHub itself — as the Copilot provider — sits behind some intermediary shield. Doe v. GitHub, Inc., No. 22-cv-06823-JST (N.D. Cal.), is the recurring test of that theory, and in July 2024 Judge Jon Tigar dismissed the plaintiffs' DMCA § 1202 claims against GitHub and OpenAI. But that ruling is narrow: it concerns copyright management information stripped from training data, not whether a generative system's downstream output binds the user who deployed it. The passive-intermediary framing that 17 U.S.C. § 512 built for hosting and linking does not map cleanly onto systems that produce new expressive output on a user's behalf.

If the statutes already attribute the agent's conduct to the deployer, and the intermediary shields do not extend to generated output, the remaining lever is private ordering — the contract terms, indemnities, and allocation clauses that decide who actually pays when Copilot commits the ad.

Drafting the Contract: Four Clause Families

The Copilot incident, the OECD.AI entries, and Doe v. GitHub all point to the same doctrinal conclusion: when an AI agent acts, the deployer is on the hook. UETA Section 9 attributes electronic-agent conduct to the person who deployed it, and Restatement (Third) of Agency Section 1.01 treats any actor operating on another's behalf and subject to control as an agent whose conduct binds the principal. Attribution is therefore a statutory default, not a drafting question. The drafting question is allocation: now that the deployer owns the output, how does the deployer's contract with the vendor, and with counterparties, divide up the economic consequences of that ownership?

Four clause families do that work. Each addresses a specific doctrinal pressure point and each should appear in any AI-agent deployment agreement signed in 2026.

1. Principal Designation

Hook: UETA Section 2(6) defines an electronic agent as a program acting without human review, and Section 9 attributes its output to the person it acts for. Restatement Section 1.01 supplies the common-law backbone. Goal: lock in which party is the principal before a plaintiff's lawyer picks for them. A deployer that leaves this ambiguous invites a creative pleading that the vendor is a co-principal, or worse, that no one is.

The parties acknowledge that any AI Agent operated on behalf of
Deployer shall be deemed an electronic agent of Deployer within
the meaning of Uniform Electronic Transactions Act § 2(6) and
attributable to Deployer under § 9. Vendor is not a principal of
the AI Agent for any purpose under this Agreement.

2. Authority Scope

Hook: Restatement Section 2.03 on apparent authority. A customer-facing agent that talks like it can issue refunds will, in most jurisdictions, be treated as if it can, unless the deployer has taken affirmative steps to narrow the manifestation. Goal: publish the scope in the contract so that out-of-scope outputs default to unratified rather than leaving ratification to be fought out in litigation.

Deployer authorizes the AI Agent to (a) respond to pricing and
availability inquiries, and (b) generate quotes up to [$X].
Deployer does NOT authorize the AI Agent to bind Deployer to
refunds, fare adjustments, or representations regarding
regulatory compliance. Any output by the AI Agent outside scope
shall not be ratified absent express written confirmation by a
Deployer officer.

3. Audit Trail

Hook: UETA Section 12 requires retention of electronic records in a form capable of accurate reproduction, and ISO/IEC 42001:2023, Clause 9.1 requires performance evaluation of AI management systems. Goal: make the evidentiary record a contractual deliverable so that, when a regulator or plaintiff asks what the agent did and who authorized it, the answer exists and is admissible.

Deployer shall maintain a contemporaneous log of (i) inputs to
the AI Agent, (ii) AI Agent outputs, and (iii) any human review
or override, for a period of not less than [N] years, consistent
with UETA § 12 and ISO/IEC 42001:2023, Clause 9.1.

4. Indemnity Flow

Hook: Anthropic and other frontier vendors now offer defensive commitments covering IP infringement and certain output-related claims. Goal: flow those commitments through to the deployer rather than letting them die at the vendor-deployer boundary, with exclusions that mirror the vendor's acceptable-use policy so the deployer does not accidentally forfeit coverage through misuse.

Vendor shall defend, indemnify, and hold harmless Deployer
against third-party claims arising from (a) intellectual property
infringement by AI Agent output, and (b) factual
misrepresentations by the AI Agent directly attributable to the
foundation model's training, subject to Deployer's compliance
with Vendor's Acceptable Use Policy.

These four families allocate, cabin, audit, and insure. What they do not do is watch. A signed contract is a snapshot, but an AI agent's behavior drifts with every model update, fine-tune, and tool-use expansion. Clause drafting is necessary; it is not sufficient. The operational controls below address the monitoring gap.

What to Ship on Monday: Three Controls

Clauses without operational controls are just decoration. The four clause families above assume three things about the organization deploying them: that records exist, that someone signed off before the agent went live, and that someone checks whether the agent is still doing what it was told to do. Here is the minimum version of each.

Control 1: A Log-Retention Standard

UETA § 12 treats a record as retained only if it accurately reflects the information and remains accessible for later reference, which is the baseline any audit-trail clause has to satisfy. The NIST AI Risk Management Framework 1.0 builds on that through its MEASURE function, which calls for documented tracking of system inputs, outputs, and performance over the deployment lifecycle, and ISO/IEC 42001:2023, Clause 9.1 requires organizations to determine what needs monitoring, the methods used, and when results are evaluated. Translate those three into one instruction for the deployment team: retain the prompt, the agent output, and any human override as a single linked record for the full limitations period applicable to the underlying transaction, matching the retention window promised in the Audit Trail clause.
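
A minimal sketch of what that single linked record can look like in practice, assuming an append-only JSON-lines store; the schema, field names, and hashing step are illustrative choices, not requirements of UETA § 12, the NIST AI RMF, or ISO/IEC 42001:

import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AgentInteractionRecord:
    # One linked record per agent action: input, output, and any human
    # review or override, kept together per the Audit Trail clause.
    agent_id: str
    prompt: str
    output: str
    human_override: str | None     # None means no human reviewed the output
    retain_until: str              # end of the applicable limitations period
    timestamp: str = ""
    content_hash: str = ""         # tamper evidence for accurate reproduction

    def finalize(self):
        self.timestamp = datetime.now(timezone.utc).isoformat()
        payload = f"{self.prompt}|{self.output}|{self.human_override}"
        self.content_hash = hashlib.sha256(payload.encode("utf-8")).hexdigest()
        return self

def append_record(record, path="agent_log.jsonl"):
    # Append-only JSON-lines log; one line per linked record.
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")

append_record(AgentInteractionRecord(
    agent_id="quoting-agent-v2",
    prompt="Quote 500 units for delivery by June 30",
    output="Quoted $4,800, net 30",
    human_override=None,
    retain_until="2032-06-30",
).finalize())

The hash and the retain_until field aim at the two halves of the clause at once: a record capable of accurate reproduction and a retention window tied to the limitations period rather than to an arbitrary calendar default.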

Control 2: A Pre-Deployment Governance Checkpoint

The GOVERN function of the NIST AI RMF places accountability for AI risk decisions with named individuals and requires that policies, roles, and authority be defined before a system is put into use. That maps cleanly onto the Authority Scope clause: the scope written into the contract is only enforceable if it reflects the system as actually configured. The actionable version is narrow. Before each agent deployment or material reconfiguration, a designated officer signs a one-page attestation confirming that the Authority Scope clause mirrors the agent's current configuration — tool access, data access, and transaction limits included.
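
A minimal sketch of that checkpoint, assuming the Authority Scope clause has been transcribed into structured data; every field name and configuration value below is hypothetical:

# Contract-side scope, transcribed from the Authority Scope clause.
CONTRACT_SCOPE = {
    "tools": {"pricing_lookup", "quote_generator"},
    "data_sources": {"product_catalog"},
    "max_quote_usd": 10_000,
    "may_issue_refunds": False,
}

def attestation_issues(contract: dict, live: dict) -> list[str]:
    # Compare the clause against the agent's live configuration;
    # an empty list clears the checkpoint for officer sign-off.
    issues = []
    if not live["tools"] <= contract["tools"]:
        issues.append(f"unauthorized tools: {live['tools'] - contract['tools']}")
    if not live["data_sources"] <= contract["data_sources"]:
        issues.append("agent reads data sources the clause does not cover")
    if live["max_quote_usd"] > contract["max_quote_usd"]:
        issues.append("transaction limit exceeds the contractual cap")
    if live["may_issue_refunds"] and not contract["may_issue_refunds"]:
        issues.append("refund capability enabled but not authorized")
    return issues

# Example: the live configuration quietly grew a refund tool.
live_config = {
    "tools": {"pricing_lookup", "quote_generator", "refund_api"},
    "data_sources": {"product_catalog"},
    "max_quote_usd": 10_000,
    "may_issue_refunds": True,
}

for issue in attestation_issues(CONTRACT_SCOPE, live_config):
    print("attestation blocked:", issue)

The design point is that the attestation is a diff, not a memo: the officer signs off on a comparison between the contract's scope and the system as configured on that day.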

Control 3: A Scope-Drift Review Cadence

Restatement (Third) of Agency § 2.03 treats apparent authority as a function of the third party's reasonable belief, which means counterparties update that belief continuously based on what the agent actually does. A scope provision drafted in January does not freeze their perception in April. Run a quarterly review of a sample of agent outputs against the authorized scope; any pattern of out-of-scope behavior triggers re-authorization and a refreshed notice to counterparties rather than a post-hoc disavowal.
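
A minimal sketch of the quarterly review, assuming logged outputs are available as plain text; the keyword screen and the two-percent threshold are placeholders for whatever classification and tolerance the organization's human review process actually adopts:

import random

# Terms pulled from the "does NOT authorize" list in the Authority
# Scope clause; agent output touching these topics is the drift signal.
OUT_OF_SCOPE_TERMS = ("refund", "fare adjustment", "regulatory compliance")

def out_of_scope(output_text: str) -> bool:
    # Crude keyword screen; a production review would pair this with
    # human raters rather than rely on string matching alone.
    lowered = output_text.lower()
    return any(term in lowered for term in OUT_OF_SCOPE_TERMS)

def quarterly_review(outputs, sample_size=200, drift_threshold=0.02):
    # Sample the quarter's outputs and flag re-authorization when the
    # out-of-scope rate crosses the threshold.
    sample = random.sample(outputs, min(sample_size, len(outputs)))
    rate = sum(out_of_scope(o) for o in sample) / len(sample)
    print(f"sampled {len(sample)} outputs, out-of-scope rate {rate:.1%}")
    return rate >= drift_threshold

quarter_outputs = [
    "Quoted $4,800, net 30",
    "We can apply the refund after travel",
]
if quarterly_review(quarter_outputs):
    print("trigger re-authorization and refreshed counterparty notice")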

Deploying an AI agent in 2026 and want a second read on the contract stack and the controls around it? Book a review.