
Evaluating Price Response as Opposed to Price Elasticity


Last year, Amy Gallo published “A Refresher on Price Elasticity” in the Harvard Business Review. It was a great primer on how most analytics firms address price questions from marketers, and I am a little frustrated with myself for taking so long to comment on it.

She makes a number of great points, with the help of Dr. Jill Avery, Senior Lecturer of Business Administration at the Harvard Business School, on the common mistakes that managers make using price elasticity. The classic issue with price elasticity is extrapolating price changes. This is why Middlegame builds our models across the entire category of SKUs and all the price points represented, from both a price-per-unit and a price-per-volume perspective.

Then their discussion moves closer to the point I wanted to comment on. Amy and Jill point out that understanding the price elasticity of demand for your product doesn’t tell you how to manage it:

[Callout graphic from the original post]

Bingo … differentiating yourself from competitors and representing value means that your product represents a choice to shoppers, and therefore any evaluation of price response has to be relative to the prices of those competitors. Some competitors’ prices will interact more heavily with your potential change in price than others will. That is why we use Competitive Interaction Analysis (CIA)®, our market structure component, to guide our response modeling.

However, the problem that I have is that price elasticity is not static across price points. Given a change in price, the relative presence of value versus the competition changes. We would probably expect a falling price to eventually have to fall further still to stimulate additional shopper response. We should also expect that the less distinctive the price advantage becomes compared to the competitors’ prices, the less impact price has. The marketing analytics community repeatedly argues for this “wear out” effect with advertising, yet price elasticity models rarely build in such thresholds; even the “log-log” model assumes a constant elasticity. Lee Cooper presents this issue in detail in the most treasured book in the Middlegame library, Market Share Analysis.
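To make that concrete, here is a minimal sketch (in Python, with made-up parameter values, and not our actual model) of a constant-elasticity “log-log” demand curve, Q = a · P^e. No matter where on the curve you measure it, the elasticity comes back the same, so the model cannot represent thresholds or wear-out on its own:

```python
import math

# Illustrative constant-elasticity ("log-log") demand curve Q = a * P^e.
# The parameter values below are assumptions for demonstration only.
a, e = 1000.0, -1.4  # scale constant and price elasticity

def demand(price):
    """Unit volume at a given price under the log-log model."""
    return a * price ** e

def point_elasticity(price, dp=1e-6):
    """Numerically estimate elasticity (dQ/dP) * (P/Q) at a price point."""
    q = demand(price)
    dq = (demand(price + dp) - demand(price - dp)) / (2 * dp)
    return dq * price / q

# The measured elasticity is identical at a low and a high price point:
print(round(point_elasticity(2.0), 3))   # -1.4
print(round(point_elasticity(10.0), 3))  # -1.4
```

The curvature of the demand curve changes with price, but the elasticity itself never does; that is exactly the rigidity Cooper discusses.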

It makes sense that a price increase of 10% might have the same negative impact as a 10% price decrease has in terms of a positive impact. But it also makes a lot of sense that it might not persist as the new competitive landscape takes hold. Under the typical price elasticity approach, they will both be the same: an elasticity of -1.4 turns a 10% price increase into a 14% volume decrease and a 10% price decrease into a 14% volume increase. Unfortunately, I think a lot of bad pricing decisions have been made using tables of elasticities as opposed to simulating price response in the context of the immediate competition.
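A quick illustration of the difference (again with made-up numbers, and a toy model rather than our CIA approach): the elasticity-table view is symmetric by construction, while even a simple two-option logit choice model, where our share depends on the gap between our price and a competitor’s, responds differently to equal moves up and down:

```python
import math

# The elasticity-table view: symmetric by construction.
elasticity = -1.4
for pct in (0.10, -0.10):
    print(f"{pct:+.0%} price change -> {elasticity * pct:+.0%} volume change")

# A toy two-option logit choice model. beta, the competitor's price,
# and our base price are all assumptions for demonstration only.
beta, comp_price, base_price = 2.0, 4.00, 4.20

def share(our_price):
    """Our choice share when shoppers weigh our price against a competitor's."""
    gap = our_price - comp_price
    return 1.0 / (1.0 + math.exp(beta * gap))

for p in (base_price * 1.10, base_price * 0.90):
    change = share(p) / share(base_price) - 1.0
    print(f"our price {p:.2f}: share change {change:+.1%}")
# The +10% and -10% moves are no longer mirror images, because each one
# lands at a different distance from the competitor's price.
```

The point is not this particular model; it is that any simulation anchored to competitors’ actual prices breaks the artificial symmetry that an elasticity table imposes.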

We talk a lot about customers at Middlegame, even though we usually call them shoppers. Jill Avery is one of the best academics on CRM that I follow. Last year she published the book Strong Brands, Strong Relationships with two of her colleagues. I was fond of one of the later chapters, which highlighted systems and metrics for measuring brand relationships. Many of the “softer” measures of brand attitudes are actually better predictors of future sales than the more oblique rapid-fire measures that we look at in marketing analytics. It is a big push at Middlegame for us to start including those in our work. Regardless, you can find a copy here.

Middlegame is the only ROMI consultancy of its kind that offers a holistic view of the implications of resource allocation and investment in the marketplace. Our approach to scenario-planning differs from other marketing analytics providers by addressing the anticipated outcome for every SKU (your portfolio and your competitors’) in every channel. Similar to the pieces in chess, each stakeholder can now evaluate the trade-offs of potential choices and collectively apply them to create win-win results.