> Corporations and governments spend staggering amounts of money on forecasting, and one might think they would be keenly interested in determining the worth of their purchases and ensuring they are the very best available. But most aren't. They spend little or nothing analyzing the accuracy of forecasts and not much more on research to develop and compare forecasting methods. Some even persist in using forecasts that are manifestly unreliable. … This widespread lack of curiosity … is a phenomenon worthy of investigation.
An eye-opening article by Robin Hanson, called "Who Cares About Forecast Accuracy?", describes how and why corporations spend heaps of money on making predictions... but almost none on testing whether those predictions actually worked out.
The article points out that actually recording the accuracy of pie-in-the-sky forecasts makes it much harder for leadership to hide a poor track record, and that a lot of forecasting is more about signalling affiliation to the company ("yeah, we're doing great, I predict we'll be done by tomorrow!") than about actual accuracy. Quite possibly it's just another effect of bosses preferring overconfidence to accuracy.
From my own experience, I find that many "leaders" don't want you to say that the project might be in trouble... and the article discusses how this can happen because the middle managers don't want somebody to go on record as having questioned their vision... because it'll reflect badly on them down the line should something go wrong. It's much easier to sweep losses under the carpet if nobody officially takes notice of how often it happens.
I think there's also another reason, not mentioned in the article: long-term vs short-term payoffs. The strategic echelons of an organisation need your predictions so that they can make their own overall predictions of how long a plan will take to implement, but they don't want to bear the long-term cost of implementing a full system, with all the process change and downtime that it will take. I've personally found it extremely difficult to "sell" both management and fellow programmers on changing to a methodology that allows for recording and feedback of prediction accuracy. These methodologies aren't particularly heavy or difficult to implement (compared, say, with Six Sigma), but they provide a quick and easy way to get better at giving an accurate estimate of your times.
The problem seems to be that it takes more time and effort to implement, and people don't want to do the "work". They want to make their guesses *and* make them more accurate... but not have it impact the "bottom line" in any way. This is a bit like saying "we'd like to buy faster computers, but won't give you any budget for their increased cost". You somehow have to just improve anyway.
What normally ends up happening is that you never properly implement any system for tracking accuracy, so you end up simply guessing and hoping for the best. Actually spending the effort to make those guesses meaningful is too much like hard work, and nobody wants to add to their workload. This is understandable... but not exactly helpful (or realistic). Instead of improving over time, we continually wallow in a world of wild guesstimates and "fudge factors" that never seem to quite cover the extra time things always seem to take.
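To make that concrete, here's a minimal sketch of the kind of lightweight recording-and-feedback loop I mean (the task names, numbers, and the median-ratio rule are all my own illustrative choices, not any particular methodology): log each estimate against the actual time it took, and let your own track record supply the correction factor instead of a plucked-from-the-air fudge factor.

```python
from statistics import median


class EstimateLog:
    """Records estimates vs. actuals and feeds the history back into new estimates."""

    def __init__(self):
        # Each entry is (task_name, estimated_hours, actual_hours).
        self.entries = []

    def record(self, task, estimated_hours, actual_hours):
        self.entries.append((task, estimated_hours, actual_hours))

    def correction_factor(self):
        # Median of actual/estimated ratios: 1.0 means your estimates are spot on,
        # 1.5 means tasks tend to take about 50% longer than you guessed.
        if not self.entries:
            return 1.0
        return median(actual / est for _, est, actual in self.entries)

    def adjusted_estimate(self, raw_estimate_hours):
        # The feedback step: scale a new gut-feel estimate by your own track record.
        return raw_estimate_hours * self.correction_factor()


log = EstimateLog()
log.record("login page", estimated_hours=8, actual_hours=13)
log.record("report export", estimated_hours=5, actual_hours=6)
log.record("data migration", estimated_hours=20, actual_hours=32)

print(f"correction factor: {log.correction_factor():.2f}")                          # 1.60
print(f"adjusted estimate for a '10 hour' task: {log.adjusted_estimate(10):.1f}h")  # 16.0h
```

Even something this crude beats a static fudge factor, because the correction updates itself as your history grows.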
To get back to the article, Hanson suggests using a betting system to literally "put your money where your mouth is"... the intention being that if you personally stand to win or lose by your prediction, then it's in your best interest to start being more accurate.
I think I'd balk at betting my own actual money, but I'm sure some kind of system could be set up for "points", where you "win" various rewards (e.g. an extra day off, or even just the chance to work on "the fun project") if you do especially well.
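For what it's worth, the scoring could be dead simple. Here's a hypothetical sketch (the names, point values, and linear falloff are mine, not Hanson's): each finished task earns points based on how close the actual time landed to the estimate, so both sandbagging and wild optimism cost you.

```python
def accuracy_points(estimated_hours, actual_hours, max_points=10):
    # Relative error: 0.25 means the actual time was 25% off the estimate.
    relative_error = abs(actual_hours - estimated_hours) / estimated_hours
    # Linear falloff: a perfect estimate earns max_points; 100% or more off earns nothing.
    return max(0, round(max_points * (1 - relative_error)))


scoreboard = {}
completed = [
    ("alice", 8, 9),    # estimated 8h, took 9h
    ("alice", 5, 5),    # spot on
    ("bob", 4, 10),     # wildly optimistic
]

for person, estimated, actual in completed:
    scoreboard[person] = scoreboard.get(person, 0) + accuracy_points(estimated, actual)

print(scoreboard)  # {'alice': 19, 'bob': 0}
```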
Has anybody had any experience with estimates and measuring their accuracy? Either from the trenches or the strategic side of things?