The market for agentic commerce could become a meaningful part of online retail within the next five years. According to a Bain & Company report, AI agents could account for between 15 and 25 percent of U.S. e-commerce sales by 2030 – a market worth roughly $300 billion to $500 billion. Similar projections are emerging in Europe, pointing to a structural shift toward transactions that are initiated, influenced, or completed by autonomous AI systems operating either independently or within retailer platforms.
AI-powered software is already being deployed to research products, select vendors, place orders, and complete transactions on consumers’ behalf. If Bain’s projections hold true, agentic AI could move from the periphery to a core feature of modern e-commerce. That shift raises a central legal question: when autonomous systems execute a meaningful share of transactions, who bears responsibility when those systems misprice products, misrepresent goods, discriminate, breach contracts, or otherwise cause harm? And is existing law equipped to allocate that responsibility effectively?
From Assistance to Autonomy
The foundation for broader adoption is already visible. Bain reports that roughly 30 to 45 percent of U.S. consumers use generative AI for product research and comparison, while Salesforce estimates that AI influenced roughly $3 billion in U.S. Black Friday sales. For now, fully autonomous purchasing remains nascent. Shopify Chief Financial Officer Jeff Hoffmeister recently said the company is not yet seeing a meaningful number of transactions completed directly by AI agents on its platform, underscoring that most current uses of AI in commerce still involve research, recommendations, and assisted decision-making rather than fully automated transactions.
Adoption is also likely to vary by category. Routine purchases – where price, availability, and speed dominate – may transition first. More discretionary and identity-driven categories, including fashion, luxury, and travel, may follow more slowly as consumer trust and system sophistication develop.
At the same time, agentic commerce is already shaping deal activity across the retail ecosystem. Earlier this year, PayPal acquired Cymbio, a multi-channel commerce orchestration platform that helps brands make product catalogs discoverable and purchasable across marketplaces and emerging AI-powered interfaces. PayPal has positioned the deal as a way to enable its payment services to operate across AI environments such as Microsoft Copilot and Perplexity.
More recently, Meta agreed to acquire Moltbook, an AI-agent social network designed to facilitate interactions between autonomous agents and users. Meanwhile, J.P. Morgan Payments has partnered with French commerce software company Mirakl to support infrastructure for agent-driven transactions, combining Mirakl’s marketplace technology with J.P. Morgan’s payments capabilities.
For retailers and brands, these developments carry significant strategic implications. Companies that fail to define an agentic commerce strategy early risk ceding control over data, checkout, fulfillment, and the consumer relationship. At the same time, the growing role of autonomous systems introduces new legal exposure that existing liability frameworks have yet to address at scale.
When Software Acts Like a Market Participant
Commercial law has traditionally assumed that economic activity is conducted by humans – or by legal entities acting through human agents. Agentic AI tests that premise. These systems do more than support decision-making. They may initiate transactions, select trading partners, and execute commercial decisions with limited real-time human involvement. Yet despite functioning in ways that resemble market participants, AI agents are not legal persons. They cannot form legal intent, owe duties, or bear liability in their own name. When harm occurs, responsibility must attach elsewhere.
Existing legal frameworks governing contract, negligence, product liability, and agency are capable of allocating responsibility for AI-driven harms, even if they are strained by the technology’s speed and scale.
Contract law provides an initial framework. Electronic contracting regimes already recognize that agreements may be formed through automated systems interacting with one another. Under laws such as the Uniform Electronic Transactions Act and the federal E-SIGN Act, contracts can be formed through the interaction of “electronic agents,” even when no human reviews each step in real time. Courts are therefore likely to analyze many AI-executed transactions through existing doctrines of delegated authority and automated contracting.
Beyond contracts, agentic AI raises familiar negligence risks. Liability may turn on questions of foreseeability and reasonable care, including whether risks were identified, systems were adequately tested, and meaningful oversight mechanisms were put in place. In practice, courts may focus less on the novelty of the technology and more on governance failures such as inadequate monitoring or risk management.
Discrimination provides another lens through which courts may evaluate AI systems. As AI tools increasingly perform market-facing functions traditionally handled by humans, such as screening applicants or allocating access to goods and services, courts have shown a willingness to apply existing anti-discrimination frameworks.
Recent litigation illustrates how courts may approach these issues. In Mobley v. Workday, a federal court allowed claims to proceed against a provider of AI hiring software alleged to have acted as a decision-making intermediary for employers. Although the dispute arose in the employment context, it underscores a broader principle: companies that rely on AI systems to make or materially influence decisions affecting individuals may remain responsible for the outcomes.
Regulation is responding as well. Under the EU AI Act, certain systems deployed in high-risk contexts are subject to requirements around risk management, traceability, and human oversight. While most retail applications are unlikely to fall automatically into these categories, the law reflects growing regulatory attention to AI systems that shape economic decision-making.
THE TAKEAWAY: If market projections hold true, AI agents could soon execute a meaningful share of U.S. e-commerce transactions. As agentic systems move from experimentation into routine commercial use, autonomy will not serve as a limiting principle for responsibility. AI may act like a market participant, but it will not be treated as a legal one. When companies allow machines to decide and transact at scale, existing law will still apply – and responsibility will continue to follow control, governance, and the allocation of risk.
