See here, here and here for marketing and statistical examples. All fine sources that helped. But it didn't click until I read Bayesian Statistics by F. J. Anscombe in The American Statistician [Vol. 15, No. 1. (Feb., 1961), pp. 21-24]. Get it here.
The fact is, decisions in marketing (or drug development, political campaigns, manufacturing, investment ...) are rarely long-run repeat games. The "long run" of orthodox statistics is nice to know. In the absence of any other decision framework, it is better than nothing. But long-run odds of 1/6 don't mean a thing to the Russian Roulette player who just blew out his brains. Or to the brand manager who marketed a dud.
From the 3rd page of the article:
"The [orthodox] statistician will tell [the decision maker] that he is basing a decision on a random sample, and that is rather like playing roulette. In fact, the statistician's recommendations amount to a policy of play, rather like the betting policy of an inveterate gambler--or better, the betting policy of an insurance company. Chances mean relative frequencies of occurrence in a long series of trials.... If the executive had to make a long sequence of decisions about different products, the break-even value for p and the value of n being always the same, then by following the statistician's policy he could be sure that, whatever values of p might occur, his average [opportunity cost] from wrong decisions would not be very large ... the worst it could be being as low as possible.... useful if the executive is at loggerheads with his board of directors or under fire from other executives in the company."

That's a mouthful. But that's it in a nutshell. In the real world, these decisions are not repeat games. That is especially true when it comes to the manager's career. So how can she rely on long-run frequencies?
Russian Roulette. That metaphor came to me in my MBA "orthodox" statistics class. If I had brought it up, my professor probably [prior: beta (μ | 4, 2)] would have given us a little digression on Bayesian principles.
But I didn't. And it has bugged me ever since. We naturally think and decide better with information like "there's an 83% chance that the new landing page will convert better" than with "we reject the hypothesis that the new page is the same as the old page with p < .05." So I am ready to dive into better decisions.
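A statement like "an 83% chance the new page converts better" falls out of a simple Beta-Bernoulli model: put a prior on each page's conversion rate, update with the observed conversions, and count how often the new page's posterior draws beat the old page's. A minimal sketch, where the 50/1000 and 65/1000 conversion counts are made-up numbers for illustration, not data from anything discussed here:

```python
# Illustrative Bayesian A/B comparison with made-up conversion counts.
# With a uniform Beta(1, 1) prior, the posterior for a page's conversion
# rate after c conversions out of n visitors is Beta(1 + c, 1 + n - c).
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: old page converted 50/1000, new page 65/1000.
old = rng.beta(1 + 50, 1 + 950, size=100_000)  # posterior draws, old page
new = rng.beta(1 + 65, 1 + 935, size=100_000)  # posterior draws, new page

# Fraction of draws where the new page's rate exceeds the old page's.
prob_new_better = (new > old).mean()
print(f"P(new page converts better) ~ {prob_new_better:.2f}")
```

The answer is a direct probability a manager can act on, rather than a long-run error rate over a series of experiments she will never run.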
As I absorb and experiment, I'll post the results.