
Saturday, 18 February 2012

Industry-average productivity?

An interesting point here, and quite true: effort isn't enough... but is it really possible to compare groups of IT guys against an "industry average"? What data do we even have on the "industry average"?

Have we gathered reliable data on any of these? How would we go about collecting it, and is it even really possible to do that reliably?

I guess it's possible if you're doing work that's already been done (eg building a better mousetrap)... maybe if you're churning out cookie-cutter brochure-ware websites? Or Yet Another Spreadsheet App?

Even so - I'd be curious how you'd determine the productivity of other groups in your industry - what is it exactly that we're averaging? Time to complete features? Code quality? Lines Of Code? :|

Even if we decide on a metric, asking people in your own company about their past experiences will only give you a few datapoints - possibly distorted by 20/20 hindsight (or by a desire to come off looking better than reality would show). I don't know of any reliable info out there apart from this kind of self-gathered stuff.

How do we go about gathering data across companies, and how do we trust that data? It'd still be self-reported and therefore open to various forms of bias ("Yeah, my *old* team used to be awesome! Nothing like this group of idiots!"), differing levels of slave-driving ("Yes, *my* team got it done in a week... just ignore the shackles on their legs!"), differing levels of quality ("We got it done over a weekend! Security? Testing? What's that?") and downright manipulation ("Yeah, we're Team Awesome and can code up a custom-built website in only 24 hours!!!!").

Not that it wouldn't be worth giving it a go... but you still run into the problem of metrics. What metrics reliably indicate productivity (especially when we need to hold code quality constant)?

"Features" produced per week perhaps? but what is a "feature"? how big is it? how do we compare across companies and even across industries? It's a tough call - made even harder when you add the "quality" constraint. Is a cavalier team churning out unplanned, untested code full of security holes *more* productive than a careful one that designs something elegant and simple, robust and well-tested?

Measuring quality is always a tough nut to crack.

I've seen people try for "customer satisfaction" - but in our industry it also pays to ask "which customer?" - the one paying the bill, or the one forced to actually use the product? Another example of "Who is the real user here?"

So... are there any resources out there? And what are people's experiences of their quality?

Monday, 6 February 2012

Estimates and hiking

A fairly standard question on Quora: Why are software development task estimations regularly off by a factor of 2-3? has sparked some discussion.

The best answer given was an extremely entertaining hiking analogy (it's the top-voted answer - go see), which explains the software development process in terms of the fractal nature of a coastline, with unexpected delays along the way.

I think it's a great analogy as far as it goes, and it would be extremely helpful in getting the idea of unexpected scope-creep across to a complete IT newbie.

The answer then spawned a plethora of comments and even a counter-post that is also really interesting: Why Software Development Estimations Are Regularly Off, which explains that the hiking analogy is way off, and that software estimation is more like inventing.

I agree... though I still see great value in the hiking analogy for explaining "what goes wrong" on the kind of project that encounters the problems described.

The counter-post has itself sparked a discussion on Y Combinator which goes into a lot more detail about estimation issues in general.

It's all provided me with an entertaining read on an otherwise well-picked-over subject.

Tuesday, 31 January 2012

Link: Rough Estimation Oath

OMG how much do I love this oath!!!

I might put it in my standard contract going forward :)

Sunday, 24 July 2011

Who Cares About Forecast Accuracy?

Corporations and governments spend staggering amounts of money on forecasting, and one might think they would be keenly interested in determining the worth of their purchases and ensuring they are the very best available. But most aren’t. They spend little or nothing analyzing the accuracy of forecasts and not much more on research to develop and compare forecasting methods. Some even persist in using forecasts that are manifestly unreliable. … This widespread lack of curiosity … is a phenomenon worthy of investigation.

An eye-opening article by Robin Hanson, called Who Cares About Forecast Accuracy?, describes how and why corporations spend heaps of money on making predictions... but almost none on testing whether those predictions actually worked out.

The article points out that actually recording the accuracy of pie-in-the-sky forecasts makes it much harder for leadership to hide a poor track-record, and that a lot of forecasting is more about signaling affiliation to the company ("yeah, we're doing great, I predict we'll be done by tomorrow!") than about actual accuracy. Quite possibly just another effect of bosses preferring overconfidence to accuracy.

From my own experience, I find that many "leaders" don't want you to say that the project might be in trouble... and the article discusses why: middle-managers don't want somebody on record as having questioned their vision, because it'll reflect badly on them down the line should something go wrong. It's much easier to sweep losses under the carpet if nobody officially takes notice of how often it happens.

I think there's also another reason, not mentioned in the article: the trade-off between long-term and short-term payoffs. The strategic echelons of an organisation need your predictions so that they can make their own overall predictions of how long a plan will take to implement. But they don't want to bear the long-term cost of implementing a full feedback system, with all the process-change and downtime that it will take. I've personally found it extremely difficult to "sell" both management and fellow programmers on changing to a methodology that allows for recording and feedback of prediction accuracy. These methodologies aren't particularly heavy or difficult to implement (compared, say, with Six Sigma), but they provide a quick and easy way to get better at producing accurate estimates.
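To illustrate just how lightweight this kind of recording-and-feedback loop can be, here's a minimal sketch - the data, the layout and the choice of measures (a simple actual/estimate "fudge factor" and the mean magnitude of relative error) are my own invention, not any particular methodology:

```python
# Log each estimate against the actual time taken, then derive a
# personal correction factor and error rate from the history.

def fudge_factor(history: list[tuple[float, float]]) -> float:
    """Average actual/estimate ratio - multiply new estimates by this."""
    ratios = [actual / estimate for estimate, actual in history]
    return sum(ratios) / len(ratios)

def mean_relative_error(history: list[tuple[float, float]]) -> float:
    """Mean magnitude of relative error (MRE), a common estimation measure."""
    return sum(abs(actual - estimate) / actual
               for estimate, actual in history) / len(history)

# (estimated_days, actual_days) pairs from past tasks -- invented numbers
history = [(2, 3), (5, 9), (1, 2), (3, 4)]

print(f"fudge factor: {fudge_factor(history):.2f}")               # ~1.66
print(f"mean relative error: {mean_relative_error(history):.2f}")  # ~0.38
```

The whole "methodology" fits in a spreadsheet column - which makes the resistance to it all the more telling.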

The problem seems to be that it takes more time and effort to implement, and people don't want to do the "work". They want to make their guesses *and* make them more accurate... but without it impacting the "bottom line" in any way. This is a bit like saying "we'd like to buy faster computers, but won't give you any budget for their increased cost". You somehow have to just improve anyway.

What normally ends up happening is that you don't properly implement any system for measuring accuracy, so you end up simply guessing and hoping for the best. Actually spending the effort to make those guesses meaningful is too much like hard work, and nobody wants to add to their workload. This is understandable... but not exactly helpful (or realistic). Instead of improving over time, we continually wallow in a world of wild guesstimates and "fudge-factors" that never seem to quite cover the extra time it always seems to take.

To get back to the article: Hanson suggests using a betting system to literally "put your money where your mouth is"... the intention being that if you personally stand to win or lose by your prediction, it's in your best interest to start being more accurate.

I think I'd balk at my own actual money, but I'm sure some kind of system could be set up for "points" where you "win" various rewards (eg an extra day off, or even just the ability to work on "the fun project") if you do especially well.
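For what it's worth, a points system along those lines could lean on a proper scoring rule, so that honest, well-calibrated predictions earn the most over the long run. Here's a hedged sketch using the Brier score - the point scale and the example confidences are made up:

```python
def brier_points(confidence: float, came_true: bool, scale: float = 100) -> float:
    """Points for one prediction: high confidence only pays off if you're right.

    confidence: stated probability (0..1) that the task finishes on time.
    came_true:  whether it actually did.
    """
    outcome = 1.0 if came_true else 0.0
    brier = (confidence - outcome) ** 2  # 0 = perfect, 1 = worst possible
    return scale * (1.0 - brier)

# An overconfident "done by tomorrow!" prediction that slips:
print(brier_points(0.95, came_true=False))  # 9.75 points
# A cautious, accurate one:
print(brier_points(0.70, came_true=True))   # 91.0 points
```

The nice property of a proper scoring rule like this is that you can't game it: your expected points are maximised by reporting the probability you actually believe.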

Has anybody had any experience with estimates and measuring their accuracy? Either from the trenches or the strategic side of things?

Friday, 14 January 2011

Bosses prefer overconfidence to accuracy

Well, this should come as no surprise, but it's confirmed that bosses prefer overconfidence to accuracy when asked for estimations.

I've spoken before about how I personally prefer Truth over Harmony, with estimation as a prime example. Mainly because when it turns out that an estimate was inaccurate... you know who's going to get the blame for "coming in late".

And yet I've found in the past that if I try for accuracy, the truth is taken as "pessimism" (by comparison with the overconfident, optimistic estimates given by others).

What I found particularly scary (and accurate) was the final quote of the article:

"Thus bosses of software managers take accurate estimates to be a signal of a lack of competence."

This is not how the world should work :(