Before you delegate pricing to an AI agent, know what you actually believe

May 4, 2026

That sounds obvious. It isn't.

Gartner predicts that over 40% of agentic AI projects will be cancelled by 2027. This isn't because the technology failed to deliver, but because organisations weren't ready for what delegation actually requires. That gap is worth understanding before anything else.

The most compelling case for agentic AI in retail pricing isn't that it's smarter than your team. It's that it's faster

A pricing team, however experienced, can meaningfully review and act on prices once a day at best. An agentic system works across thousands of SKUs continuously, responding to demand signals as they happen. That difference in tempo is the opportunity.

The real power comes from the combination of two layers. First, econometric models use historical sales data and price elasticities to calculate the optimal price point. Then, agentic AI executes those recommendations at scale. Together, they don't just react to the market; they move with it. Both need something most retail organisations haven't built yet: a clear decision architecture.
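To make the econometric layer concrete, here is a minimal sketch of margin-maximising pricing under a constant-elasticity demand curve. The demand form, function name, and numbers are illustrative assumptions, not the model described in the article:

```python
def optimal_price(unit_cost: float, elasticity: float) -> float:
    """Margin-maximising price under constant-elasticity demand q = a * p**elasticity.

    Setting d/dp of (p - unit_cost) * a * p**elasticity to zero gives
    p* = unit_cost * elasticity / (1 + elasticity), valid only when
    demand is elastic (elasticity < -1).
    """
    if elasticity >= -1:
        raise ValueError("need elastic demand (elasticity < -1)")
    return unit_cost * elasticity / (1 + elasticity)

# With elasticity -2, the optimal markup is 2x unit cost:
print(optimal_price(10.0, -2.0))  # → 20.0
```

An agentic layer would re-run this kind of calculation continuously as elasticity estimates update, which is exactly the tempo advantage the article describes.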

Retail pricing decisions are typically guided by institutional memory and the intuitive judgement of experienced merchants

In a meeting-based environment, that works. Trade-offs get discussed, context gets shared, and the right people make the call. That model doesn't scale to continuous execution.

In conversations with pricing teams, a pattern surfaces quickly. When the question is how they balance margin protection against volume targets, the answer is usually a description of process: who is in the room, which dashboards get opened, and how the conversation tends to go.

Press for the underlying rule — at what exact point does a stock risk overrule a margin floor? — and the clarity fades.

A system can't inherit feelings or intuition. If you don't define the rules, it will still act, just not in the way you expect. Moving towards agentic AI forces organisations to do something they've often avoided: making their implicit trade-offs explicit. And that starts with a question that turns out to be harder than it looks: **What does "optimal" actually mean here?**

Different teams will answer differently. Finance wants margins. Operations wants empty warehouses. Without a clearly articulated objective, a single definition of what the system is optimising for, there's no basis for an agent to make consistent decisions. This isn't a technical specification. It's a strategic one.

For the past decade, retail AI has mostly worked as an advisory layer

Systems generate recommendations and humans vet them, filling in everything the model doesn't know about brand, legal constraints, and commercial context.

Agentic AI shifts where that vetting happens. Rather than reviewing a recommendation after it's made, the merchant defines the boundaries before the system ever runs. The filter moves to the front of the process, shifting from reactive approval to proactive governance.

For a system to act autonomously, it needs more than broad goals

It needs a clear framework covering both what it's optimising for and where it cannot go.

The objective function defines what winning looks like. Is the system optimising for margin, sell-through, or a combination of both? The answer will differ across categories and seasons. Getting it wrong doesn't produce cautious mistakes; it produces confident ones, at scale, consistently pushing your pricing in the wrong direction.
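One way to make such an objective explicit is a weighted blend of margin and sell-through. The function and weights below are a hypothetical sketch, not a recommended setting; the point is that the weighting itself is the strategic decision:

```python
def objective(margin: float, sell_through: float, margin_weight: float = 0.7) -> float:
    """Blend margin and sell-through into a single score to maximise.

    margin_weight is a strategic choice, not a technical one: it will
    differ by category and season, and choosing it IS defining "optimal".
    """
    if not 0.0 <= margin_weight <= 1.0:
        raise ValueError("margin_weight must be between 0 and 1")
    return margin_weight * margin + (1 - margin_weight) * sell_through
```

A margin-led category might run at `margin_weight=0.8`; an end-of-season clearance might invert it. Encoding that choice per category is what gives the agent a consistent basis for decisions.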

Constraints define the operating limits, and they come in two forms. Local constraints apply at the product level. A never-out-of-stock item might carry a rule that discounting stops at 10%, regardless of what demand signals suggest. Global constraints apply across the assortment. A retailer might decide that no more than 30% of the collection should be on promotion at once, not because of any single product's economics, but to protect how the brand reads as a whole.
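The two constraint types could be encoded roughly as follows. The thresholds mirror the article's 10% and 30% examples; the function names and shapes are illustrative assumptions:

```python
def apply_local_constraint(base_price: float, proposed_price: float,
                           max_discount: float = 0.10) -> float:
    """Local constraint: cap the discount at the product level.

    A never-out-of-stock item with max_discount=0.10 can never be priced
    below 90% of its base price, whatever the demand signal says.
    """
    floor = base_price * (1 - max_discount)
    return max(proposed_price, floor)

def violates_global_constraint(skus_on_promo: int, total_skus: int,
                               max_promo_share: float = 0.30) -> bool:
    """Global constraint: limit the share of the assortment on promotion.

    This protects how the brand reads as a whole, independent of any
    single product's economics.
    """
    return skus_on_promo / total_skus > max_promo_share
```

The local rule clips individual price moves; the global rule can veto an otherwise-valid promotion because of the assortment-wide picture. An agent needs both checks before acting.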

Without these limits, the system doesn't just optimise; it over-optimises. It will chase short-term signals at the cost of margin, brand perception, or both.

The trust paradox: reasoning as governance

Accountability is the central challenge with agentic systems. In a trade meeting, a merchant can walk through their reasoning. When an agent adjusts a thousand prices, the organisation needs that same reasoning to be accessible and auditable.

Explainability is the bridge. For an organisation to genuinely trust a system with delegated authority, the system needs to show which constraint was binding and which trade-off it made. Without that, the system will be overruled — not because it's wrong, but because no one can explain why it's right.
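A minimal sketch of what such an auditable decision record might look like, with hypothetical names and a single floor constraint for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PriceDecision:
    """One auditable record per price change: what was proposed, what was
    actually set, and which constraint (if any) was binding."""
    sku: str
    proposed: float
    final: float
    binding_constraint: Optional[str]

def apply_floor(sku: str, proposed: float, floor: float) -> PriceDecision:
    """Clamp a proposed price to its floor and record why, so the
    organisation can later see which rule overrode the model."""
    if proposed < floor:
        return PriceDecision(sku, proposed, floor, f"price_floor>={floor}")
    return PriceDecision(sku, proposed, proposed, None)
```

When a merchant asks why a markdown stopped at a given level, the answer is in the record rather than in anyone's memory, which is what makes delegated authority defensible.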

This is also where governance needs a clear line. When does the human retain final say, and when does the system act on its own? That answer will differ across retailers and categories, but it has to be decided deliberately, not discovered after something goes wrong.

The re-valuation of the merchant

None of this reduces the merchant's role. It changes what that role looks like.

In an agentic model, merchants are no longer bottlenecks for high-frequency adjustments. They become architects of the decision framework. They set the objective, define the constraints, and stay in an iterative loop as the system learns. The model calculates optimal prices based on elasticity data and the defined rules, while merchants review outputs and refine the logic.

The question shifts from "what is the right price today?" to "how should our brand respond under these conditions?" That's a more structural role and, honestly, a more interesting one.

The move towards agentic AI is less a technology upgrade and more an exercise in knowing yourself as an organisation. It asks retailers to make explicit what they actually believe: about margins, about brand, and about where human judgement ends, and structured delegation begins.

Most organisations will get access to the same tools. What separates them is how clearly they've defined what good looks like, and whether they knew what they believed before they handed over the keys.

Your team prices every day

Let them practice somewhere safe — without learning on live customers — and see the invisible trade-offs they've been missing

Reserve a seat for the Pricing Game

Petr Pushkar

Senior Manager, Retail
