An interesting point here, and quite true: effort isn't enough. However, is it really possible to compare groups of IT guys against an "industry average"? What data do we actually have on the "industry average"?
Have we gathered reliable data on any of these? How would we go about collecting it, and is it even really possible to do that reliably?
I guess it's possible if you're doing work that's already been done (e.g. building a better mousetrap)... maybe if you're churning out cookie-cutter brochure-ware websites? Or Yet Another Spreadsheet App?
Even so - I'd be curious how you'd determine the productivity of other groups in your industry - what is it exactly that we're averaging? Time to complete features? Code quality? Lines Of Code? :|
Even if we decide on a metric, asking people in your own company about their past experiences will only give you a few datapoints - possibly distorted by 20/20 hindsight (or by wanting to come off looking better than reality would suggest). I don't know of any reliable info out there apart from this kind of self-gathered stuff.
How do we go about gathering data across companies, and how do we trust that data? It'd still be self-reported and therefore open to various forms of bias ("Yeah, my *old* team used to be awesome! Nothing like this group of idiots!"), differing levels of slave-driving ("Yes, *my* team got it done in a week... just ignore the shackles on their legs!"), differing levels of quality ("We got it done over a weekend! Security? Testing? What's that?") and downright manipulation ("Yeah, we're Team Awesome and can code up a custom-built website in only 24 hours!!!!")
Not that it wouldn't be worth giving it a go... but of course you still fall back on the problem of metrics. What metrics reliably indicate productivity (especially when we need to hold code quality constant)?
"Features" produced per week, perhaps? But what is a "feature"? How big is it? How do we compare across companies, and even across industries? It's a tough call - made even harder when you add the "quality" constraint. Is a cavalier team churning out unplanned, untested code full of security holes *more* productive than a careful one that designs something elegant and simple, robust and well-tested?
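To make that last point concrete, here's a toy sketch (all team names and numbers are invented, and the rework cost per defect is a pure assumption) showing how a raw "features per week" figure can flip once you fold the cost of rework back in:

```python
# Toy model: raw throughput vs. rework-adjusted throughput.
# All figures are invented; REWORK_WEEKS_PER_DEFECT is an assumed constant.

REWORK_WEEKS_PER_DEFECT = 0.2

# team -> (features shipped, calendar weeks, production defects found later)
teams = {
    "cavalier": (12, 4, 30),  # fast, untested, full of holes
    "careful": (8, 4, 2),     # slower, designed and well-tested
}

for name, (features, weeks, defects) in teams.items():
    raw = features / weeks
    # fold the (assumed) rework time for each production defect back in
    effective = features / (weeks + defects * REWORK_WEEKS_PER_DEFECT)
    print(f"{name}: raw {raw:.1f}/wk, rework-adjusted {effective:.2f}/wk")
```

With these made-up numbers the cavalier team "wins" on the raw metric (3.0 vs 2.0 features/week) but loses once rework is counted (1.20 vs 1.82) - which is exactly why the quality constraint makes the comparison so slippery.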
Measuring quality is always a tough nut to crack.
I've seen people try for "customer satisfaction" - but in our industry, it also pays to ask "which customer?" - the one paying the bill, or the one forced to actually use the product? Another example of "who is the real user here?"
So... are there any resources out there? And what are people's experiences of their quality?