The evolution of strategy

Both the Harvard Business Review and the New York Times have recent posts on the subject. In HBR, Justin Fox tells of a presentation by Vivek Ranadive, who said, “I believe that math is trumping science. What I mean by that is you don’t really have to know why, you just have to know that if a and b happen, c will happen.”

Ranadive further speculates that US monetary policy might do better being guided by an algorithm than by bankers: “The fact is, you can look at information in real time, and you can make minute adjustments, and you can build a closed-loop system, where you continuously change and adjust, and you make no mistakes, because you’re picking up signals all the time, and you can adjust.”

The New York Times’ Steve Lohr also writes about the recent enthusiasm for a quantitative approach to management, evangelized by Erik Brynjolfsson, Director of the MIT Center for Digital Business, who says Big Data will “replace ideas, paradigms, organizations and ways of thinking about the world.”

However, Lohr and Fox (who wrote the excellent book, The Myth of the Rational Market) caution about the oversimplifications inherent in modeling. Take, for example, some of the potentially flawed assumptions in Ranadive’s version of an algorithmically driven monetary policy:

  • Something as complex as monetary policy can be contained in a closed-loop system
  • The past can reliably predict the future
  • If the past stops predicting the future and things head into uncharted territory, you’ll be able to “tweak” things into place as new information becomes available

Fox uses the analogy of a landing-page A/B (or multivariate) test as an example of the new quantitative approach to the world. In theory, page design could be left to a totally automated and testable process, where real-time feedback from users eventually decides the optimal layout. It sounds good in theory, but here’s the problem with this approach to marketing: you can’t test what you don’t think of. The efficacy of testing depends on the variables you choose to test, and choosing them requires some thinking. Without a solid hypothesis based on a strategic view of the situation, you can quickly go down a rabbit hole of optimizing for the wrong things.

For example, most heavily tested landing pages I’ve seen reach the same eventual destination: a page optimized for a single definition of a conversion, typically the placement of an order or the submission of a form. There will be reams of data showing why this is the optimal variation. But what about all the prospects who hit that page for whom the one offered conversion wasn’t the right choice? How do they get captured in the data? Did anyone even think to include them in the things to test for?
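The blind spot is easy to see in miniature. Here is a toy sketch (all names and numbers are invented for illustration, not drawn from any real test) of how an A/B comparison only “sees” the one metric someone decided to track, while other visitor intents never enter the analysis at all:

```python
# Hypothetical illustration: an A/B test can only rank variants on the
# metric you chose to measure. All figures below are made up.

def conversion_rate(conversions, visitors):
    """Fraction of visitors who completed the tracked conversion."""
    return conversions / visitors

# Tracked metric: form submissions. Untracked: visitors who came
# looking for support information instead.
variant_a = {"visitors": 1000, "form_submits": 50, "support_lookups": 120}
variant_b = {"visitors": 1000, "form_submits": 80, "support_lookups": 20}

rate_a = conversion_rate(variant_a["form_submits"], variant_a["visitors"])
rate_b = conversion_rate(variant_b["form_submits"], variant_b["visitors"])

# The test declares a winner on the one metric it measures...
winner = "B" if rate_b > rate_a else "A"

# ...while the untracked prospects never appear in the comparison,
# because nobody thought to include them in the things to test for.
untracked_a = variant_a["support_lookups"]
untracked_b = variant_b["support_lookups"]
```

Variant B “wins” on form submissions, even though (in this invented scenario) it serves far fewer of the visitors who wanted something else entirely. The data is internally consistent; the flaw is in what was chosen to be measured.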

Fox offers a hybrid view of strategic management that more closely aligns with where I see this all going – call it Bayesian strategic management. Traditional qualitative strategic thinking is still required to set the hypothetical view of possible outcomes, but we then apply quantitative rigor to measure, test and adjust based on the data we collect. This treads the line between the polarized responses to last week’s column – it puts the “strategic” horse before the “big data” cart. More importantly, it holds our strategic view accountable to the data: a strategy becomes a hypothesis to be tested.
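The “strategy as hypothesis” idea can be sketched literally. The column doesn’t prescribe a model, but one minimal way to realize it is a beta-binomial update (my choice here, with illustrative numbers): the strategic view supplies a prior belief about, say, a conversion rate, and incoming data revises that belief rather than replacing thinking altogether:

```python
# Minimal sketch of "strategy as a testable hypothesis" using a
# conjugate beta-binomial update. The model and numbers are assumptions
# for illustration, not something prescribed by Fox's column.

def update_belief(prior_alpha, prior_beta, conversions, visitors):
    """Bayesian update of a conversion-rate belief.

    The Beta(prior_alpha, prior_beta) prior encodes the strategic
    hypothesis; observed conversions update it to a posterior.
    """
    post_alpha = prior_alpha + conversions
    post_beta = prior_beta + (visitors - conversions)
    mean = post_alpha / (post_alpha + post_beta)
    return post_alpha, post_beta, mean

# Strategic hypothesis: roughly a 10% conversion rate,
# encoded (illustratively) as a Beta(2, 18) prior.
alpha, beta_, mean = update_belief(2, 18, conversions=30, visitors=500)
# The data (30/500 = 6%) pulls the belief away from the 10% hypothesis:
# the strategy is held accountable to what was actually observed.
```

The point is not the particular model but the direction of accountability: the prior comes from qualitative strategic thinking, and the data gets to overrule it as evidence accumulates.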

One final thought. Whether we’re talking about Ranadive’s utopian (or dystopian?) vision of a data-driven world or any of the other Big Data evangelists, there seems to be one assumption that I believe is fundamentally flawed, or at least overly optimistic: that human behaviors can be adequately contained in a predictable, rational, controlled closed-loop system. When it comes to understanding human behavior, the capabilities of our own brain far outstrip any algorithmically driven model ever created – yet we still get it wrong all the time.

If Big Data could really reliably predict human behaviors, do you think we’d be in the financial situation we’re in?