Discussant - Kamakura

Introduction

I should preface my discussion by noting that I am not a statistician, much less a Bayesian one. I shall therefore focus on the marketing issues and leave the methodological details to Ed George. This makes my task easier, because I can suggest changes or extensions without worrying about whether they are feasible or how to implement them.

To start, I must take my hat off to Alan for tackling a problem of such complexity and magnitude. A full store-level model in his formulation would require 336 parameters: 12 brand intercepts, 12 coefficients for each of display, deal, and lagged sales, 144 price cross-elasticities, and 144 terms in the error covariance matrix (12 + 3 × 12 + 144 + 144 = 336). All this for each of 83 stores!

As Alan pointed out, even with the large amounts of data available from supermarket scanners, store-level estimates of these 336 parameters are not accurate enough to be useful for decision making; one typically obtains counter-intuitive results such as negative price cross-elasticities among competing brands. The other extreme, pooling the data across all stores, is not useful either, because it ignores important differences across markets. I believe, however, that one could reach a compromise between these two extremes with a finite-mixture model that yields demand systems for latent classes of stores. Whether store-level estimates are more useful than group-level ones depends on the managerial purposes of the model, which I discuss later.
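To make this suggestion concrete, a latent-class version of the demand system might be written as follows (a minimal sketch in my own notation; the number of classes K, the class proportions \pi_k, and the class-specific parameters \beta_k and \Sigma_k are illustrative assumptions, not part of Alan's formulation):

$$
L(\theta) \;=\; \prod_{s=1}^{83} \; \sum_{k=1}^{K} \pi_k \prod_{t=1}^{T_s} \phi\!\left( \mathbf{y}_{st} \,\middle|\, \mathbf{X}_{st}\,\beta_k,\; \Sigma_k \right)
$$

Here \mathbf{y}_{st} is the 12-vector of brand sales for store s in week t, \mathbf{X}_{st} collects the intercept, display, deal, lagged-sales, and price terms, and \phi is the multivariate normal density. Each latent class k carries its own 336-parameter demand system, so one estimates K demand systems rather than 83, and each store's posterior probability of membership in class k follows from Bayes' rule.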
