Monday, March 15, 2010

Why forecasts fail: simple ones are better

As regular readers may recall (one of my favourite expressions!), I regularly ponder my subscription to the MIT Sloan Management Review. After a few dull issues that left me wondering whether it is worth the money, it has once again produced an issue with a couple of articles that justify the price for the whole year.

One of these is “Why Forecasts Fail. What to Do Instead”.

The insights I find illuminating here are:
  • Sophisticated, complex models are good at fitting past data (“forecasting with hindsight”) but they are not very accurate at predicting what will happen in the future; they tend to over-fit the past and then extrapolate it (see the sketch after this list)
  • Simple models are not so good at explaining past data but are better at forecasting the future
  • Human judgement is worse than statistical models at predicting the future
  • Experts don’t predict any better than the average person
  • Averaging the predictions of several independent individuals is more accurate than relying on any one of them
The article cites research to back up these assertions.
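
The first two points are easy to demonstrate for yourself. Here is a minimal sketch, assuming a made-up straight-line-plus-noise process and an arbitrary degree-9 polynomial as the “complex” model; it fits both models to past data and then scores them on future data:

```python
# Compare a complex and a simple model on past (fit) and future (forecast) data.
# The data-generating process and the degree-9 choice are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

# "Past" observations: a mild trend plus noise.
x_past = np.arange(20, dtype=float)
y_past = 0.5 * x_past + rng.normal(0.0, 2.0, size=x_past.size)

# "Future" observations from the same process.
x_future = np.arange(20, 30, dtype=float)
y_future = 0.5 * x_future + rng.normal(0.0, 2.0, size=x_future.size)

def rmse(actual, predicted):
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

for degree, label in [(9, "complex (degree-9 polynomial)"),
                      (1, "simple (straight line)")]:
    coeffs = np.polyfit(x_past, y_past, degree)
    fit_error = rmse(y_past, np.polyval(coeffs, x_past))
    forecast_error = rmse(y_future, np.polyval(coeffs, x_future))
    print(f"{label}: fit RMSE {fit_error:.2f}, forecast RMSE {forecast_error:.2f}")
```

In runs like this the polynomial beats the straight line on the past and loses badly on the future, because it has fitted the noise and then extrapolated it.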

This fits in well with what I tell teams when I’m coaching and training in Agile methods: the simple estimation and velocity measures I advocate are better than the complex models that are too often preferred.
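
To show just how little machinery the simple approach needs, here is a sketch of a velocity forecast; the figures are invented for illustration:

```python
# Velocity forecast: average recent throughput, divide the remaining work by it.
import math

completed_per_sprint = [21, 18, 25, 19, 22]  # story points done in recent sprints (made up)
backlog_remaining = 160                      # story points still to do (made up)

velocity = sum(completed_per_sprint) / len(completed_per_sprint)
sprints_left = math.ceil(backlog_remaining / velocity)

print(f"velocity: {velocity:.1f} points per sprint")
print(f"roughly {sprints_left} sprints remaining")
```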

And it is no use asking a system expert (architect, senior developer, whatever) how long something will take; their estimate is no better than anyone else’s. But getting several different estimates (e.g. using planning poker or similar) and combining them is.
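
Here is a tiny sketch of why combining estimates works, with hypothetical numbers; planning poker adds discussion and consensus on top, but the statistical point is just this:

```python
# Several independent estimates, averaged, land closer to the true value
# than a single "expert" guess. All figures here are hypothetical.
from statistics import mean, median

true_effort = 13                    # the actual effort in points (unknown in practice)
estimates = [8, 20, 13, 21, 5, 13]  # independent estimates gathered before discussion

print(f"single 'expert' estimate: {estimates[0]} (off by {abs(estimates[0] - true_effort)})")
print(f"mean of all estimates: {mean(estimates):.1f} (off by {abs(mean(estimates) - true_effort):.1f})")
print(f"median of all estimates: {median(estimates)} (off by {abs(median(estimates) - true_effort)})")
```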

One technique the article suggests is something I’ve tried in “future-spectives”. You say to the team, or an individual: “Imagine we are at the end of the project; we finished on time and in budget, and everyone is very happy. What did we do right?” Then the opposite: “Another failed project. What did we do wrong?” Imagining yourself in that situation can produce useful insights.
