
Agentic Commerce Is Coming for Your Checkout — Here's What the Chargeback Rules Don't Cover Yet

AI agents are already completing real purchases. The card networks are building infrastructure to support them. What they haven't built yet are the dispute rules that will protect you when something goes wrong.

By Cal Weston · 15+ years in credit card dispute resolution · Published March 29, 2026

AI Agents Are Already Shopping

Agentic commerce isn’t a future scenario. It’s happening now. OpenAI’s Operator can browse the web and complete checkout flows on a user’s behalf. Google has integrated shopping agents into its assistant products. Amazon’s Rufus—which has reportedly reached over 250 million users—helps customers research and make purchasing decisions. Perplexity has added direct shopping functionality that lets users buy products without leaving the interface.

These aren’t demos. They are production features, already in the hands of consumers. When a user tells an AI agent “order the same dog food I got last month” or “book me the cheapest flight to Chicago this weekend,” the agent doesn’t ask for permission twice. It finds the product, enters the card details stored in the user’s profile, and completes the transaction.

The card networks have noticed. Visa launched its Token Authentication Program (TAP) in October 2025, creating a credential framework for AI-initiated payments. Mastercard launched Agent Pay in April 2025 and introduced its Verifiable Intent standard in March 2026, designed to cryptographically confirm that a consumer did in fact authorize an AI agent to act on their behalf. American Express launched its ACE Developer Kit in April 2026. These are real programs—infrastructure investments by the largest payment rails in the world. They signal clearly that the networks believe agentic commerce is coming at scale.

What This Article Covers

This is the anchor article in our series on agentic commerce and chargebacks. We cover the current rules gap, why it matters to merchants now, and three concrete strategies you can implement today that will serve you whether agent transaction volumes remain modest or explode. Follow-up articles will go deeper on each strategy.

The Rules Gap: Infrastructure Without Dispute Frameworks

Here is the central problem, stated plainly: no card network has updated its chargeback dispute rules to address AI agent transactions. Not Visa. Not Mastercard. Not Amex. No definitive framework has been announced.

The programs above—TAP, Agent Pay, Verifiable Intent, ACE—are authentication and identity infrastructure. They create ways for agents to identify themselves and for networks to verify that a consumer delegated authority to an agent. That is genuinely useful. But authentication infrastructure and chargeback liability rules are separate things. The former tells you who made the transaction. The latter tells you who bears the loss when a cardholder disputes it.

Today, when a cardholder calls their bank and says “I didn’t authorize this,” the dispute process follows a set of rules that assume a human being made a conscious decision to click “pay.” Every reason code, every liability framework, every compelling evidence standard in the Visa Core Rules, the Mastercard Chargeback Guide, and the Amex dispute manual was written with a human at the keyboard in mind. None of it anticipated a scenario where the “cardholder” set up an AI agent three weeks ago, forgot about it, and is now genuinely surprised that $200 left their account.

The Liability Gap

When an AI agent makes a purchase the consumer didn’t consciously expect, who bears the loss? Under current rules, the answer skews heavily toward the merchant. The consumer can dispute it. The bank will likely side with the consumer. And the merchant is left holding the chargeback, the fee, and the returned goods—despite having done nothing wrong.

There’s a specific dispute pattern worth watching: “authorized but not consciously requested.” Payment professionals report that this type of claim is beginning to surface—cases where the consumer technically gave an AI agent broad permission to make purchases, but disputes a specific transaction because they didn’t expect it or didn’t want that particular item. No publicly documented cases tied to AI agent purchases have been confirmed yet, but the pattern fits cleanly into existing reason codes like Visa 13.1 (Merchandise Not Received) or 13.3 (Not as Described), and the networks’ current rules give merchants limited tools to fight back.

Why Compelling Evidence 3.0 Won’t Save You Here

Visa’s Compelling Evidence 3.0 (CE 3.0) was a significant improvement for fighting friendly fraud. It lets merchants match the IP address and device ID from a disputed transaction against two or more prior undisputed transactions to establish a pattern of authorization. If the same device, same IP, same shipping address shows up repeatedly, that’s strong evidence the cardholder was the actual buyer.

AI agents break this logic entirely. When an agent completes a checkout, the IP address belongs to a cloud server—OpenAI’s infrastructure, Google’s infrastructure, or whichever AI platform the consumer is using. The device ID is the agent’s session, not the consumer’s laptop or phone. There is no match to the consumer’s prior purchase history, because the consumer’s device never touched the transaction. CE 3.0’s primary matching criteria simply don’t apply, leaving merchants without their best current defense tool for exactly the class of transactions that will grow most.
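The failure mode is easy to see in code. Below is a deliberately simplified Python sketch of CE 3.0-style matching; the real framework matches on more data elements (shipping address, account login) and enforces age windows on the prior transactions, and the `Txn` record and sample values here are hypothetical illustrations:

```python
from dataclasses import dataclass

@dataclass
class Txn:
    ip: str          # IP address seen at checkout
    device_id: str   # device fingerprint / session identifier
    disputed: bool = False

def ce3_style_match(disputed_txn: Txn, history: list[Txn], required: int = 2) -> bool:
    """Simplified CE 3.0-style check: does the disputed transaction share
    its IP AND device fingerprint with at least `required` prior
    undisputed transactions?"""
    matches = [
        t for t in history
        if not t.disputed
        and t.ip == disputed_txn.ip
        and t.device_id == disputed_txn.device_id
    ]
    return len(matches) >= required

# Human buyer: same laptop and home IP across prior orders -> pattern holds.
history = [Txn("203.0.113.7", "laptop-abc"), Txn("203.0.113.7", "laptop-abc")]
human_dispute = Txn("203.0.113.7", "laptop-abc", disputed=True)

# Agent purchase: cloud-server IP and an agent session ID -> zero matches,
# even though the same consumer authorized both sets of transactions.
agent_dispute = Txn("198.51.100.42", "agent-session-xyz", disputed=True)
```

The human-buyer dispute matches its own history; the agent-initiated dispute matches nothing, even though the same consumer stands behind both.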

Why This Matters Now, Not Later

You might be thinking: agent purchases are still a small fraction of my order volume. Why build defenses for a problem that hasn’t materialized yet?

Two reasons.

First, the pattern scales faster than the rules do. Chargeback rule changes require network deliberation, consultation periods, and implementation cycles. When agent transaction volumes reach the point where disputes become visible in aggregate data, rule changes will likely take another 12 to 24 months to follow—if they follow at all. Merchants who haven’t built basic defenses by then will be exposed during precisely the period when volumes are highest and the rules are least clear.

Second, the three strategies that will protect you against agentic commerce disputes are the same strategies you should already be implementing for ordinary transactions. None of them require you to predict exactly how AI commerce will evolve. They make you a better merchant regardless. The agentic angle simply makes the urgency concrete.

Three Strategies That Protect You Today and Tomorrow

1. Check-to-Accept Terms of Service

Most merchants bury their terms of service in a footer link with a line of text that says “by completing your purchase, you agree to our terms and conditions.” This passive acceptance is weaker than it looks—and it becomes significantly weaker in an agentic context.

A proper check-to-accept mechanism requires the buyer to take a discrete, affirmative action: check a box that says “I agree to the Terms of Service” before the purchase completes. This creates a timestamped, logged record that the terms were explicitly accepted, not just constructively implied. In a representment, the difference matters. “The user was shown a link to our terms” is much weaker than “the user checked a box confirming acceptance of our terms, timestamped at 2:14 PM on March 15.”

Now apply this to agent transactions. If an AI agent checks that box on behalf of a consumer, the dispute question shifts. It moves from “were your terms clear enough?” to “did the consumer authorize the agent to accept terms on their behalf?” That is a question about the consumer’s relationship with the AI agent—one the consumer explicitly agreed to when they set up the agent’s permissions. It takes the merchant out of the liability frame almost entirely.

Practically: if your checkout doesn’t already require an explicit checkbox for terms acceptance, add one. Log the acceptance event with a timestamp in your order management system. Include this log in every chargeback representment package. This is good practice for all transactions; it becomes critical as agent volumes grow.
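As a sketch of what that logging might look like (the field names and the `record_terms_acceptance` helper are hypothetical; adapt them to your order management system):

```python
from datetime import datetime, timezone

def record_terms_acceptance(order_id: str, account_id: str,
                            terms_version: str, log: list) -> dict:
    """Append a timestamped terms-acceptance event to an audit log.
    This record is what goes into a representment package, so capture
    the exact terms version shown at checkout, not just a boolean."""
    event = {
        "order_id": order_id,
        "account_id": account_id,
        "terms_version": terms_version,
        "accepted_at": datetime.now(timezone.utc).isoformat(),
    }
    log.append(event)
    return event

acceptance_log: list = []
event = record_terms_acceptance("ORD-1001", "acct-77", "tos-2026-03-01", acceptance_log)
```

In production this would write to durable storage keyed by order ID; the point is that the event exists, is timestamped, and names the exact terms version that was accepted.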

Implementation Note

Your terms of service checkbox should link to a versioned, date-stamped copy of your terms—not just a URL that may change. When you submit chargeback evidence, you want to show the exact terms the buyer accepted, not whatever your current terms say. Archive each version with a publish date.
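A minimal sketch of that versioning, assuming a simple in-memory archive (the version IDs and the `terms_for_representment` helper are illustrative):

```python
# Archive of every published terms version, keyed by a stable version ID.
TERMS_ARCHIVE = {
    "tos-2026-01-15": {"published": "2026-01-15", "text": "terms text, version 1"},
    "tos-2026-03-01": {"published": "2026-03-01", "text": "terms text, version 2"},
}

def terms_for_representment(acceptance_event: dict) -> dict:
    """Resolve an acceptance record to the exact archived terms the buyer
    saw, rather than whatever the live /terms URL currently serves."""
    return TERMS_ARCHIVE[acceptance_event["terms_version"]]

evidence = terms_for_representment({"terms_version": "tos-2026-01-15"})
```

Pairing the acceptance log with this archive lets a representment show the precise text agreed to at purchase time.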

2. Clear, Precise Product Descriptions

AI agents make purchase decisions based on what they can read. They parse your product descriptions, compare options, and select based on attributes—size, color, compatibility, shipping time, price. They cannot look at a product photo and infer context the way a human can. They do not assume things not written in the description.

This means vague product descriptions are a direct path to “not as described” chargebacks in an agentic world. If your listing for a laptop bag says “fits most laptops” and the agent orders it for a consumer with a 17-inch machine, and it doesn’t fit, you have a dispute on your hands. If your description for a software license doesn’t specify that it’s for one seat only, and the agent buys it assuming multi-user access, the consumer has a reasonable complaint—and under reason code 13.3 (Not as Described), they probably win.

Clear product descriptions are already a best practice for human buyers. The agent dynamic simply removes the tolerance for ambiguity. A human buyer will sometimes overlook a vague detail, contact support, or accept a reasonable explanation. An algorithm doesn’t: it matches what it finds in the text to what the consumer asked for, and if those don’t align, there’s no grace period.

Audit your product catalog for ambiguous language. Specifically:

  • Compatibility claims (“fits most” should become specific dimensions or model numbers)
  • Quantity and license scope (“per user,” “per seat,” “single use” should be explicit)
  • Subscription terms (“monthly plan” should include the exact billing amount and cycle)
  • Digital product delivery (“download available after purchase” should specify how and when)
  • Physical dimensions and materials for anything where size or composition matters

This audit pays dividends with human buyers today and builds a foundation that will matter considerably more as agent purchase volumes increase.
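A starting point for that audit can be as simple as a phrase scan. The pattern list below is a hypothetical seed; extend it with language from your own dispute history:

```python
import re

# Ambiguity patterns an AI agent cannot resolve on its own (illustrative).
VAGUE_PATTERNS = [
    r"\bfits most\b",
    r"\bone size\b",
    r"\bmay vary\b",
    r"\bapproximately\b",
    r"\bas shown\b",
]

def audit_description(text: str) -> list[str]:
    """Return the ambiguous phrases found in a product description."""
    return [p for p in VAGUE_PATTERNS if re.search(p, text, re.IGNORECASE)]

vague = audit_description("Sleek laptop bag. Fits most laptops. Color may vary.")
precise = audit_description("Fits 14- to 16-inch laptops; interior 38 x 26 x 4 cm.")
```

The first description trips two flags; the second, rewritten with specifics, trips none.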

3. Bot Management, Not Bot Blocking

The instinct for most merchants, when they hear “bots are buying on your site,” is to block them. That was the right instinct for a long time. Bots were scrapers, credential stuffers, inventory hoarders. Nothing good came from automated traffic at checkout.

Agentic commerce changes this calculus. Some bots are now legitimate buyers acting on behalf of real, paying customers. A blanket bot block that catches OpenAI Operator or Google’s shopping agent is blocking real transactions. You are actively preventing sales.

The shift merchants need to make is from “block all bots” to “identify and manage automated buyers.” This requires a policy, not just a firewall rule. Concretely:

  • User-agent transparency. Legitimate AI agents increasingly identify themselves in their user-agent strings. OpenAI, Perplexity, and others have published documentation about how their agents identify themselves. Build your bot management rules to distinguish between known-good agent signatures and unidentified automated traffic.
  • Authentication requirements for checkout. Requiring account creation or sign-in before purchase doesn’t stop agents (they can create accounts), but it creates a persistent identity record tied to the transaction. If the account was created by the consumer before the agent used it, you have a chain of authorization that supports your representment.
  • Rate limiting at the product and checkout level. You can rate-limit checkout attempts per account or IP without blocking agents entirely. This preserves legitimate agent purchases while limiting abuse vectors.
  • An explicit AI agent purchase policy. Consider whether you want to publish terms about AI agent purchases—either welcoming them with conditions (must be authenticated, must use a registered account) or restricting them for certain product categories. Having a policy makes your position defensible in a dispute.
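The policy above can be sketched as a simple routing function. The agent signature strings here are placeholders, not real user-agent tokens—pull actual values from each platform’s published documentation, which changes over time:

```python
# Placeholder signatures for known, self-identifying shopping agents.
KNOWN_AGENT_SIGNATURES = ("ExampleAI-Operator", "ExampleBot-Shopper")

def classify_checkout_traffic(user_agent: str, is_authenticated: bool) -> str:
    """Route automated checkout traffic by policy instead of blanket-blocking:
    identified agents on authenticated accounts proceed, identified agents
    without an account must sign in first, and unidentified automation is
    challenged (CAPTCHA, step-up verification, or manual review)."""
    is_known_agent = any(sig in user_agent for sig in KNOWN_AGENT_SIGNATURES)
    if is_known_agent and is_authenticated:
        return "allow"          # legitimate agent with an authorization chain
    if is_known_agent:
        return "require_login"  # create the persistent identity record first
    return "challenge"          # unidentified automation: rate-limit and verify
```

The important design choice is the middle branch: a known agent without an account isn’t blocked, it’s funneled into the authentication flow that produces the authorization chain a representment needs.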

The Unintended Consequence of Blanket Blocking

If your bot detection flags a legitimate AI agent and declines the transaction, but the consumer then disputes a different transaction claiming they didn’t authorize the agent’s purchases generally, you may find yourself in a dispute without the transaction records you need. Managing agents rather than blocking them creates better documentation of your policies and decisions.

What to Watch

The rules will eventually catch up. Here are the developments worth monitoring:

Network dispute rule updates. Visa, Mastercard, and Amex update their operating rules on a regular cycle. Watch for proposed changes that introduce new reason codes or liability frameworks specific to agent-initiated transactions. When TAP, Agent Pay, or ACE matures beyond infrastructure into actual dispute frameworks, the liability question will finally have a written answer. Until then, it doesn’t.

Compelling Evidence 3.0 revisions. The CE 3.0 framework was designed for a world where the same human uses the same device. Now that the same human may use an AI agent on a cloud server, the matching criteria need revision. Early reports suggest Visa is aware of this gap. Whether and when they address it is worth watching.

PSD3/PSR shared-liability proposals in Europe. The EU’s updated payment regulations are beginning to address scenarios where consumers delegate payment authority to third parties. If shared-liability frameworks for authorized agents emerge in European regulation, they will likely influence network rule changes globally over time.

Consumer awareness. One underappreciated variable is consumer education. Early reports from customer service teams at merchants with agent-compatible checkouts suggest that many consumers who dispute agent-initiated transactions genuinely didn’t understand what they authorized when they set up the agent. As AI agents become more mainstream and more visible, consumer understanding of what they’re authorizing should improve—reducing “authorized but not consciously requested” disputes over time. But that’s a 2027 or 2028 problem; the near term is messier.

The Bottom Line

Agentic commerce is real, growing, and already creating a category of dispute that current chargeback rules don’t address cleanly. The card networks are building the authentication rails for a world where AI agents make purchases—but they have not written the dispute rules that will govern what happens when something goes wrong.

Merchants who wait for the rules to catch up before preparing will spend some period of time exposed: absorbing dispute losses on transactions they had no way to fight under existing frameworks. Merchants who implement the three strategies in this article—check-to-accept terms, precise product descriptions, and a managed (not blocked) approach to automated buyers—will be in a substantially stronger position when those disputes arrive.

None of these strategies require you to predict exactly how agentic commerce will evolve. They make you a stronger, better-documented merchant regardless. The agentic exposure is simply the reason to do them now rather than eventually.

The time to prepare is while the rules are still being written. After the rules exist, you will be competing on execution. Right now, you can compete on preparation.

Get Full Access to Every Defense Playbook

Subscribe to get copy-paste response templates, evidence checklists, and the exact language networks look for — plus all reason code guides and premium deep dives.

Subscribe for Full Access

Frequently Asked Questions

Can a consumer dispute a purchase their own AI agent made?

Under current chargeback rules, yes—a cardholder can still file a dispute. The frameworks for AI-authorized purchases do not yet exist in Visa, Mastercard, or Amex dispute rules. Merchants are largely unprotected when the dispute involves an AI agent purchasing on a consumer’s behalf without explicit per-transaction confirmation. This is the core problem this article addresses.

Does Compelling Evidence 3.0 protect merchants in agent transaction disputes?

Not reliably. Compelling Evidence 3.0 relies on matching IP addresses and device IDs from prior undisputed transactions. AI agents operate from cloud servers, not the consumer’s personal device, so the IP and device fingerprint won’t match—undermining the primary matching criteria CE 3.0 depends on. This is one of the clearest examples of how existing chargeback tools weren’t built for agentic transactions.

What are TAP, Agent Pay, Verifiable Intent, and ACE?

These are infrastructure programs launched by the card networks to support AI agent payments. Visa launched its Token Authentication Program (TAP) in October 2025. Mastercard launched Agent Pay in April 2025 and introduced its Verifiable Intent standard in March 2026, designed to cryptographically confirm consumer authorization of agent actions. American Express launched its ACE Developer Kit in April 2026. These programs create identity and authentication rails for agent transactions—but as of now, none of them come with updated chargeback dispute rules.

Should merchants simply block AI agents at checkout?

A blanket block is increasingly counterproductive. Legitimate AI agents are acting on behalf of real consumers with real payment intent. Blocking them blocks revenue. The better approach is to distinguish between legitimate, identified agent traffic and unidentified automated traffic—and to require authentication for agent-initiated purchases, which creates an authorization chain you can use in representment if a dispute arises.

Get Our CNP Fraud Defense Guide — Free

Create a free account and get instant access to our Card Not Present Fraud Defense Guide — the most common e-commerce chargeback, with copy-paste response frameworks and evidence checklists across all four card networks.

Sign Up Free — Instant Access

No credit card required • Free forever