What are good metrics for IT development?


How should our development activities be measured? What should matter? Mary Poppendieck, who coined the term “lean software development”, has a view:

“The right measurements for software development are delivered business value, cycle time to deliver that value, and customer satisfaction.”

I like this because it’s a small number of metrics and it promotes a focus on value, speed and customers. Two of the 12 principles behind the Agile Manifesto are:

  • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  • Working software is the primary measure of progress.

So it seems these metrics will promote those principles. There is, however, no standard for measuring them in practice: who the customer is, when to start the clock on cycle time, how to quantify business value, or how to avoid nasty behavioural side-effects. One particularly difficult question is when business value counts as delivered. Is it when a feature is actually being used, or when the feature is ready to launch in production but is held back because of a business decision (known as earned business value in IT industry jargon)?
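
As a thought experiment, here is a minimal sketch (Python, with a hypothetical WorkItem record) of how the choice of end point – ready in production versus actually being used – changes the cycle-time number:

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import Optional

@dataclass
class WorkItem:
    committed: datetime                     # clock start: team committed to the work
    deployed: datetime                      # ready to launch in production
    first_used: Optional[datetime] = None   # when a customer actually used it

def cycle_time_days(item: WorkItem, stop_at_first_use: bool = False) -> float:
    # The choice of end point encodes the business-value question above:
    # "ready to launch" vs "actually being used".
    end = item.first_used if stop_at_first_use and item.first_used else item.deployed
    return (end - item.committed).total_seconds() / 86400

items = [
    WorkItem(datetime(2014, 3, 1), datetime(2014, 3, 12), datetime(2014, 4, 2)),
    WorkItem(datetime(2014, 3, 5), datetime(2014, 3, 20)),
]
print(median(cycle_time_days(i) for i in items))                           # 13.0 days
print(median(cycle_time_days(i, stop_at_first_use=True) for i in items))  # 23.5 days
```

The same two work items give a median cycle time of 13 days or 23.5 days depending purely on the definition chosen, which is why the definition has to be agreed before the metric is reported.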

What else? Most experts agree it’s better to have a small number of metrics. Even so, there are things other lean-agile practitioners like to add on top of these. Perhaps they are of primary interest to the delivery manager, portfolio manager or delivery team, but not beyond that:

  • Velocity – This is equivalent to throughput, i.e. how much functionality is delivered per month. It’s hard to measure without a stable way of estimating the size/complexity of requirements, and it’s impossible to compare across delivery streams.
  • Defects/Errors – Both open and fixed.
  • Unit test coverage – For all common programming languages there are tools which calculate what percentage of the lines of code are executed by the unit tests. This perhaps needs more focus: coverage in many of the teams I have worked with is 0% (depending on the language/feature, a minimum of 80% coverage is normal), and very few teams actually know what their coverage is.
  • Technical debt – How much has been added and how much has been removed per month. Most of the teams I have worked with don’t track technical debt, which makes this an interesting one to work on.
  • Work in progress – This can be visualised on the team kanban board. Work in progress and queuing times are leading indicators for cycle time (cycle time itself is always a lagging indicator), which allows earlier action; see the sketch after this list.
  • Lean-agile practice adoption maturity
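
The link between work in progress and cycle time can be made precise with Little’s Law: average cycle time = average WIP / average throughput. A minimal sketch with illustrative numbers:

```python
def average_cycle_time_weeks(avg_wip: float, throughput_per_week: float) -> float:
    # Little's Law: average cycle time = average WIP / average throughput.
    # Rising WIP at constant throughput predicts longer cycle times before
    # any single item finishes, which is what makes WIP a leading indicator.
    return avg_wip / throughput_per_week

print(average_cycle_time_weeks(12, 4))  # 12 items in flight, 4 done per week -> 3.0 weeks
print(average_cycle_time_weeks(6, 4))   # halve the WIP -> 1.5 weeks
```

This is why limiting WIP on the kanban board is itself a cycle-time intervention: cutting the number of items in flight, at the same throughput, shortens cycle time before any lagging measurement would show it.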

Perhaps this gives some inspiration for adjusting key performance indicators next time round?

