This is article 4 in our Agentic Commerce series. Previous articles covered the chargeback rules gap, ToS as dispute evidence, and product descriptions that hold up under dispute review. This article covers the part merchants find most counterintuitive: protecting your checkout from bot-driven fraud without blocking the AI agents who represent your next major customer acquisition channel.
The Bot Landscape Has Changed
For most of the past decade, bot management had a clear objective: identify automated traffic and block it. The assumption was simple—bots are not buyers. They scrape prices, test cards, and generate fraudulent transactions. Keep them out and your fraud rates drop.
That assumption no longer holds.
According to HUMAN Security's 2026 bot traffic analysis, 51% of internet traffic is now automated. Surebright's e-commerce research puts automated traffic on e-commerce sites at 57%. These numbers have been climbing for years. What's changed is the composition. The bot traffic hitting merchant checkouts today includes a growing share of legitimate AI shopping agents acting on behalf of real customers with real payment credentials and genuine purchase intent.
The merchant who blocks all automated traffic in 2026 is blocking a material fraction of purchase-ready intent. The merchant who blocks no automated traffic is leaving their checkout open to card testing attacks that can push their VAMP ratio past Visa's enforcement threshold in a single weekend. Neither extreme is viable.
Three Types of Automated Traffic at Your Checkout
Effective bot management starts with distinguishing the three types of automated actors you're dealing with.
Type 1: Malicious Bots
These are the bots traditional bot management was designed to stop. Their objectives include:
- Card testing / enumeration: Running thousands of small transactions to validate stolen card numbers. Even micro-transactions that don't complete generate TC40 fraud reports that count against your VAMP ratio.
- Credential stuffing: Using leaked username/password combinations to access existing customer accounts and make fraudulent purchases.
- Inventory hoarding / scalping: Buying high-demand items to resell at markup, or holding inventory to prevent legitimate customers from purchasing.
- Price scraping for competitive harm: Systematically extracting pricing to undercut or manipulate market positioning.
All of these generate fraud signals, chargebacks, or VAMP ratio exposure. They should be blocked, rate-limited, or challenged aggressively.
Type 2: Legitimate AI Shopping Agents
These are agents acting on behalf of real customers who have explicitly authorized the agent to browse and purchase. They include:
- Perplexity's shopping features executing merchant-approved product purchases
- OpenAI's shopping and operator tools completing checkout on behalf of users
- Google Shopping agents surfacing and purchasing from product listings
- Personal AI assistants (including Claude, Gemini, and others) that users have authorized to manage purchases
- Replenishment agents handling automatic reorders for consumable goods
These agents use real payment credentials, represent genuine purchase intent, and are increasingly the mechanism through which some demographic segments will do a majority of their shopping. Blocking them is turning away customers.
Type 3: Grey-Area Automation
The hardest category to manage. This includes:
- Price comparison scrapers that may drive legitimate referral traffic but consume server resources and expose pricing
- Browser automation tools used by individuals for legitimate personal automation (auto-checkout for limited-release products)
- Affiliate and cashback tools that inject code or redirect checkout sessions
- Research and monitoring bots from agencies, investors, or media organizations
These require policy decisions rather than technical ones—and they require those decisions to be made explicitly rather than handled by default block rules.
Card Testing and VAMP: Why Bot Management Is Now a Financial Compliance Issue
The connection between bot management and chargebacks has always existed. What's changed in 2026 is that the threshold for consequences has tightened significantly with Visa's VAMP program.
We covered VAMP in detail in our VAMP explainer, but the key points for bot management are:
- Visa estimates that card enumeration fraud causes $1.1 billion in annual losses across the network.
- Merchants processing 300,000 or more transactions monthly who exceed a 20% enumeration ratio (enumeration-related fraud reports as a percentage of total transactions) face VAMP enforcement.
- Card testing bots are the primary driver of enumeration ratios.

This is no longer just a fraud problem—it's a program compliance problem.
The insidious aspect of card testing for VAMP purposes is that it doesn't require successful transactions. A bot attempting 10,000 card validation attempts against your checkout generates TC40 fraud signals for every attempt, even the declined ones. Your total transaction count doesn't increase meaningfully. Your fraud report count climbs. Your ratio spikes.
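The ratio mechanics are worth making concrete. The numbers below are illustrative, not from Visa's program documentation: a merchant at the 300,000-transaction tier with a modest baseline fraud rate absorbs one weekend enumeration attack.

```python
# Illustrative arithmetic: how a single enumeration attack moves a fraud
# ratio. All figures are example values, not VAMP program data.
monthly_txns = 300_000         # normal monthly transaction count (example)
baseline_fraud_reports = 300   # ~0.1% baseline fraud report rate (example)
bot_attempts = 10_000          # card-testing attempts, mostly declined

# Declined attempts still generate TC40 fraud reports, so they inflate the
# numerator while adding comparatively little to the denominator.
ratio_before = baseline_fraud_reports / monthly_txns
ratio_after = (baseline_fraud_reports + bot_attempts) / (monthly_txns + bot_attempts)

print(f"before: {ratio_before:.2%}")   # 0.10%
print(f"after:  {ratio_after:.2%}")    # ~3.32%
```

One attack moves the example ratio from 0.10% to roughly 3.32% — a thirty-fold jump driven almost entirely by transactions that never completed.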
Additionally, Visa has identified a 450% increase in dark web mentions of "AI Agent" tools designed for fraudulent automated purchasing. These are not legitimate shopping agents—they are purpose-built fraud tools using AI-like interfaces to evade traditional detection. The emergence of legitimate AI agents has created cover for malicious ones, because both look like non-human automated traffic from the outside.
Why Traditional Defenses Don't Work Here
Before getting to the framework, it's worth being explicit about why the standard playbook breaks down.
CAPTCHA and Browser Challenges
CAPTCHA-based challenges assume a human is present to solve them. A legitimate AI shopping agent cannot solve a CAPTCHA on behalf of its user—at least not without the user being present, which defeats the purpose of agent-based shopping. Deploying CAPTCHA on checkout pages blocks legitimate AI agents just as effectively as malicious ones.
3D Secure Authentication
3D Secure (Visa Secure, Mastercard Identity Check) requires the cardholder to be present to complete the authentication step—typically via a push notification to their banking app or a one-time code. For fully automated agent-initiated transactions, 3DS creates an abandonment event, not a security event. It is the right tool for human CNP fraud and the wrong tool for bot-initiated transactions of any kind, so do not treat it as your primary defense against bot-driven fraud.
IP and User-Agent Blocking
Blocking specific IP ranges or user-agent strings is a short-term measure that sophisticated bots evade within hours. More importantly, legitimate AI agents operating at scale may share infrastructure with other automated traffic, making IP-based blocking an unreliable signal for distinguishing malicious from legitimate intent.
Device Fingerprinting Alone
Traditional device fingerprinting identifies returning devices and compares behavioral signals to known human patterns. Agents running headless browsers generate fingerprints that look like bots because they are bots—even when they're legitimate ones. Fingerprinting that flags all headless browser traffic as malicious will flag legitimate AI agents. You need additional signals to distinguish intent.
A Tiered Bot Management Framework
The framework below replaces the binary "allow/block" model with a tiered approach that matches the response to the risk level of the traffic type.
Tier 1: Rate Limiting at the Checkout Layer
Rate limiting is your first and most important control. Malicious card testing bots need volume—they are running hundreds or thousands of attempts to validate a card list. A rate limit of, say, 10 checkout attempts per IP address per hour stops enumeration attacks without affecting legitimate agent transactions, which typically involve one or a small number of purchases.
Rate limiting should be applied at:
- The payment form submission endpoint (not just the product page)
- The account creation and login endpoints (to prevent credential stuffing)
- The coupon/promo code validation endpoint (a common enumeration target)
- The address validation API if exposed (another enumeration attack surface)
Exponential backoff for repeat failures (e.g., lock out after 5 failures, double the lockout period on each subsequent violation) makes enumeration attempts computationally expensive without affecting normal checkout flows.
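The sliding-window-plus-backoff logic above can be sketched in a few dozen lines. The window sizes, thresholds, and in-memory store are illustrative assumptions; a production deployment would use a shared store such as Redis and tune the limits to its own traffic.

```python
import time
from collections import defaultdict

# Sketch of a per-IP checkout rate limiter with exponential backoff.
# All thresholds below are example values, not recommendations.
WINDOW_SECONDS = 3600     # 1-hour sliding window
MAX_ATTEMPTS = 10         # checkout attempts allowed per window
FAILURE_THRESHOLD = 5     # consecutive failures before lockout
BASE_LOCKOUT = 300        # initial lockout of 5 minutes

class CheckoutRateLimiter:
    def __init__(self):
        self.attempts = defaultdict(list)       # ip -> attempt timestamps
        self.failures = defaultdict(int)        # ip -> consecutive failures
        self.locked_until = defaultdict(float)  # ip -> lockout expiry
        self.violations = defaultdict(int)      # ip -> lockouts incurred

    def allow(self, ip, now=None):
        """Return True if this IP may attempt a checkout right now."""
        now = time.time() if now is None else now
        if now < self.locked_until[ip]:
            return False
        # Drop attempts that have aged out of the sliding window.
        self.attempts[ip] = [t for t in self.attempts[ip] if now - t < WINDOW_SECONDS]
        if len(self.attempts[ip]) >= MAX_ATTEMPTS:
            return False
        self.attempts[ip].append(now)
        return True

    def record_failure(self, ip, now=None):
        """Track a failed attempt; lock out and double the penalty on repeats."""
        now = time.time() if now is None else now
        self.failures[ip] += 1
        if self.failures[ip] >= FAILURE_THRESHOLD:
            lockout = BASE_LOCKOUT * (2 ** self.violations[ip])
            self.locked_until[ip] = now + lockout
            self.violations[ip] += 1
            self.failures[ip] = 0
```

A legitimate agent placing one or two orders never approaches these limits, while an enumeration bot hits the window cap within seconds and then faces lockouts that double on every violation.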
Tier 2: Behavioral Analysis for Intent Classification
Behavioral analysis distinguishes malicious bots from legitimate agents by looking at purchase behavior, not just traffic patterns.
Signals associated with malicious card testing:
- Multiple different card numbers attempted from the same IP or device in a short window
- Consistent small transaction amounts (testing micro-transactions specifically)
- No product browsing before checkout (directly hitting the checkout endpoint)
- High decline rates followed by immediate retry attempts
- Billing addresses that don't correspond to plausible shipping destinations
Signals associated with legitimate agent transactions:
- A single card credential used consistently (the real customer's stored payment)
- Transaction amounts consistent with real product pricing
- Product page visits and product API calls before checkout initiation
- User-agent strings or referrer headers identifying known legitimate agent platforms
- Consistent shipping address matching a customer's prior order history
Modern bot management platforms (Cloudflare Bot Management, DataDome, Imperva) can classify traffic on these behavioral signals at the edge layer before requests reach your checkout system. This is where investment in bot management infrastructure pays off in VAMP ratio protection.
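As a toy illustration of how these signals might combine, the sketch below scores a session with hand-picked weights. The signal names and weights are assumptions for the example, not any vendor's model; real platforms use far richer feature sets and learned scoring.

```python
# Illustrative heuristic: weighted risk/trust scoring for a checkout session.
# Signal names and weights are assumptions, not a production rule set.

CARD_TESTING_SIGNALS = {
    "many_distinct_cards": 3,          # multiple card numbers from one IP/device
    "micro_amounts": 2,                # consistent small test amounts
    "no_browsing_before_checkout": 2,  # hit the checkout endpoint directly
    "rapid_retry_after_decline": 3,
    "implausible_billing_shipping": 1,
}

LEGITIMATE_AGENT_SIGNALS = {
    "single_stored_credential": 2,     # one consistent card, not a card list
    "realistic_order_amount": 1,
    "product_pages_visited": 2,
    "known_agent_user_agent": 3,       # identifies a known agent platform
    "shipping_matches_history": 2,
}

def classify_session(signals: set) -> str:
    """Map observed session signals to block / allow / challenge."""
    risk = sum(w for s, w in CARD_TESTING_SIGNALS.items() if s in signals)
    trust = sum(w for s, w in LEGITIMATE_AGENT_SIGNALS.items() if s in signals)
    if risk >= 5:
        return "block"        # pattern looks like enumeration
    if trust >= 5 and risk <= 1:
        return "allow"        # pattern looks like an authorized agent
    return "challenge"        # ambiguous: route to extra verification
```

The point of the three-way output is the middle branch: ambiguous traffic gets challenged or held rather than silently blocked, which is where legitimate agents would otherwise be lost.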
Tier 3: Agent Identification and Credentialing
The longer-term solution emerging from the industry is explicit agent identification protocols. Visa is actively developing its Transaction Aggregation Protocol (TAP), which we covered in the anchor article. TAP's goal is to create a verified identity layer for agent-initiated transactions so that card networks, issuers, and merchants can distinguish authorized AI agent purchases from automated fraud.
Until TAP and similar frameworks are widely deployed, some merchants are getting ahead of this by:
- Accepting agent identity headers: Published AI agent platforms (Perplexity, OpenAI operators, others) document the user-agent strings and headers their agents use. Maintaining an allowlist of known legitimate agent identifiers lets you route their traffic through appropriate handling rather than the default bot challenge flow.
- Offering API-based checkout for agent platforms: Some larger merchants are creating lightweight checkout APIs specifically designed for programmatic access, with appropriate rate limits and authentication requirements, rather than trying to make web checkouts work for both humans and agents simultaneously.
- Requiring platform-level agreements: For high-volume agent integrations, establishing direct agreements with the agent platform operator (rather than relying on per-transaction signals) provides the authorization trail you need for dispute defense.
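An allowlist router for the first approach can be sketched as follows. The user-agent substrings here are placeholders: consult each platform's published documentation for the identifiers its agents actually send, and remember that user-agent strings alone are spoofable, so this routes traffic to a handling tier rather than granting trust.

```python
# Sketch of routing by agent identity headers. The marker strings below are
# hypothetical placeholders, not the platforms' real documented identifiers.

KNOWN_AGENT_MARKERS = {
    "perplexity-agent": "perplexity",
    "openai-operator": "openai",
    "google-shopping-agent": "google",
}

def route_request(user_agent: str) -> str:
    """Pick a handling flow based on the request's user-agent string."""
    ua = user_agent.lower()
    for marker, platform in KNOWN_AGENT_MARKERS.items():
        if marker in ua:
            # Known platform: apply agent-specific rate limits and logging
            # instead of the default human bot-challenge flow.
            return f"agent_flow:{platform}"
    return "default_flow"  # humans and unrecognized automation
```

Routing, not trusting, is the design choice: a matched identifier only selects which rate limits, logging, and scoring apply downstream.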
Tier 4: Transaction-Level Fraud Scoring
For transactions that pass behavioral analysis but still carry some risk signals, transaction-level fraud scoring provides a final filter before authorization. This is distinct from the checkout-layer controls above—it operates on the completed transaction record rather than the request pattern.
For agent-initiated transactions specifically, consider scoring on:
- Whether the billing address and shipping address match the customer's profile history
- Whether the order value is consistent with the customer's prior purchase history
- Whether the product category is consistent with the customer's prior purchases
- Whether the agent platform is one you have visibility into (known platform vs. unknown automation)
High-risk scores should trigger a hold for manual review rather than an automatic decline, particularly for first-time agent transactions from otherwise good customers. Automatic declines on legitimate agent transactions become chargebacks when the customer disputes the declined charge—a double loss.
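The four checks above, plus the hold-don't-decline policy, can be sketched as a single scoring function. The field names and thresholds are assumptions for the example, not a production schema.

```python
# Sketch of transaction-level scoring for agent-initiated orders.
# Record fields and thresholds are illustrative assumptions.

def score_agent_transaction(txn: dict, customer: dict) -> str:
    """Return 'authorize' or 'manual_review' for an agent-initiated order."""
    risk = 0
    if txn["shipping_address"] not in customer["known_addresses"]:
        risk += 2  # address mismatch against profile history
    if txn["amount"] > 3 * customer.get("avg_order_value", txn["amount"]):
        risk += 2  # order far above the customer's usual spend
    if txn["category"] not in customer.get("prior_categories", set()):
        risk += 1  # unfamiliar product category
    if not txn.get("known_platform", False):
        risk += 2  # unknown automation source, no platform visibility
    # Hold for review rather than auto-decline: a false decline on a
    # legitimate agent purchase can itself turn into a chargeback.
    return "manual_review" if risk >= 4 else "authorize"
```

Usage: a first-time agent order from a known platform, shipping to a known address at a typical amount, scores zero and authorizes; an order from unknown automation to an unfamiliar address crosses the review threshold.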
Get the VAMP Enumeration Defense Checklist
Premium members get our complete VAMP compliance checklist, rate limiting configuration guide, and the bot management vendor comparison matrix for merchants in the $1M–$50M GMV range.
What NOT to Do
Several common responses to bot pressure create as many problems as they solve. Avoid these.
Don't Deploy a Single CAPTCHA Layer and Consider the Problem Solved
CAPTCHA alone is not a bot management strategy. Sophisticated card testing operations use CAPTCHA-solving farms (human workers solving challenges at scale) or AI-based CAPTCHA bypass tools. It adds friction for legitimate users, fails against determined attackers, and specifically blocks legitimate AI agents. It should be one tool in a layered approach, not the primary defense.
Don't Block All Headless Browser Traffic
Headless browser traffic includes Googlebot, social media preview scrapers, accessibility tools, and legitimate AI agents—in addition to malicious bots. A blanket headless browser block will impact SEO crawling, social sharing previews, and AI agent access simultaneously. Use behavioral analysis to distinguish traffic types rather than a binary headless/not-headless classification.
Don't Set Velocity Rules That Catch Normal Subscription Behavior
Subscription replenishment agents that process multiple orders for a customer (or multiple customers) from the same platform IP can trigger velocity rules designed to catch card testing. If your velocity limits are tuned only for single-user patterns, you may block legitimate subscription management agents. Build in platform-level exemptions for known, credentialed agent services.
Don't Treat Every Chargeback from an Automated Transaction as Unwinnable
The absence of 3DS authentication on an agent transaction does not automatically mean you lose the chargeback. As we covered in our agentic commerce anchor article, the dispute rules in this area are still developing. A strong evidence package including ToS acceptance records, precise product descriptions, delivery confirmation, and agent transaction logs may still win a representment even without cardholder authentication. Fight the disputes you have evidence for.
Building an AI Agent Policy
The merchants who will be best positioned over the next three years are those who develop an explicit AI agent policy now, before Visa TAP and equivalent frameworks from Mastercard and Amex are fully deployed. An AI agent policy is a formal set of decisions about how your business will handle agent-initiated transactions. It covers:
Which Agent Platforms Are Authorized
Decide which AI agent platforms you will explicitly accept transactions from. This doesn't have to be an exhaustive list—it should include the major platforms (Perplexity, OpenAI operators, Google Shopping agents) and have a process for adding others. Authorized platforms get the benefit of the doubt on behavioral signals. Unrecognized automation gets higher scrutiny.
What Transaction Limits Apply to Agent Purchases
Consider whether agent-initiated transactions warrant different per-transaction or per-day limits than human checkout. This is not discrimination against agents—it's risk management for a transaction type where your traditional authentication signals (3DS, CVV challenge) are absent. Many merchants start with a per-transaction cap on agent purchases and raise it as they build confidence in their detection capabilities.
How Agent Transaction Records Will Be Stored
Agent transaction records are your dispute evidence. If an AI agent makes a purchase on behalf of a customer and that customer later disputes it, your defense depends on being able to show what the agent accessed, when, which product description it read, and what it agreed to on the customer's behalf. Log agent sessions with the same rigor you apply to human checkout sessions—or more.
What Happens When an Agent Transaction Is Disputed
Define your escalation path before disputes arrive. Which team handles agent-initiated chargebacks? What evidence package do they assemble? What is the threshold for fighting versus accepting? Having this defined in advance means faster response times, which matters especially for Amex C31 disputes with their 20-day window.
| Bot Type | Recommended Response | VAMP Risk | Dispute Risk |
|---|---|---|---|
| Card testing / enumeration | Block — rate limit, behavioral block, IP reputation | Critical — primary VAMP driver | High — fraud chargebacks |
| Credential stuffing | Block — login velocity limits, MFA | Medium | High — account takeover chargebacks |
| Known legitimate AI agents | Allow with transaction limits and logging | Low if managed | Medium — authorization disputes possible |
| Unknown AI automation | Challenge — require agent identification | Unknown | Medium to high |
| Price scrapers | Rate limit — policy decision on access | Low | None direct |
| Dark web "AI Agent" fraud tools | Block — behavioral analysis, velocity limits | High | High — fraud chargebacks |
The Broader Picture: Layered Defense for Agentic Commerce
Bot management is one piece of a complete defense posture for the agentic commerce era. The full stack requires each layer to be in place and working together:
- Understanding the agentic commerce dispute gap — The rules that don't exist yet and how to operate in their absence
- ToS as dispute evidence — Explicit authorization records for agent-initiated purchases
- Precise product descriptions and structured data — Preventing wrong purchases and winning "not as described" disputes
- Bot management that distinguishes malicious from legitimate automated traffic — This article
Each layer reduces your exposure at a different point in the transaction lifecycle. Together, they create the defense in depth that the agentic commerce era requires—and that will look very familiar to the card networks when they build their formal dispute rules, because it will already mirror the evidence framework they're likely to adopt.
Frequently Asked Questions
What is card testing, and why does it matter for my VAMP ratio?
Card testing (also called card enumeration) is when fraudsters use automated bots to run small transactions against a merchant's checkout to validate stolen card numbers. Even when these tests don't result in chargebacks, Visa counts the resulting TC40 fraud reports toward your VAMP ratio. Visa estimates enumeration fraud causes $1.1 billion in annual losses. Merchants processing 300,000 or more transactions monthly who exceed a 20% enumeration ratio face Visa's VAMP enforcement program, which carries significant monthly fines and the risk of monitoring program placement.
Will aggressive bot blocking hurt my conversion rates?
Aggressive bot blocking can hurt conversion if it misclassifies legitimate AI shopping agents as malicious bots. According to HUMAN Security, 51% of internet traffic is bots, and AI shopping agents from tools like Perplexity, Google Shopping, and OpenAI's shopping features represent a growing share of that traffic—and a growing share of purchase-ready intent. The goal is not to block all automated traffic but to distinguish malicious bots from legitimate agents and treat each appropriately. Rate limiting and behavioral analysis achieve this more effectively than blanket bot blocking.
Is 3D Secure an effective defense against bot-driven fraud?
No. 3D Secure requires a cardholder to be present to complete the authentication step—it is fundamentally incompatible with fully automated agent-initiated transactions. For bot-driven fraud, you need bot-detection controls at the checkout layer, not cardholder authentication steps. For legitimate AI agent transactions, 3DS creates abandonment rather than security. The right defense against card testing bots is rate limiting, behavioral analysis, and IP reputation scoring—not cardholder authentication.
What is Visa's Transaction Aggregation Protocol (TAP)?
Visa's Transaction Aggregation Protocol (TAP) is a framework in development that would create a verified identity layer for agent-initiated transactions, allowing card networks, issuers, and merchants to distinguish authorized AI agent purchases from automated fraud. It is not yet fully deployed. Merchants who build structured agent logging, explicit ToS acceptance for agent transactions, and clean product description practices now will be well-positioned to comply with TAP and similar frameworks when they roll out—because the evidence requirements are likely to mirror what good practice looks like today.