Archives for Chris

Learning mind-set? Me? Of course!

“Do you have a continuous learning mind-set?”

“Yes, I’m always looking to learn more, me.”

Well, I thought so too, then something happened that made me not so sure.

Newbies are from Another Planet

The company I am currently working with will have lots of people retire in the next few years. They are massively beefing up their graduate programme to compensate – almost every team seems to have a graduate attached. What I’ve noticed is that when these graduates come across something they don’t know, their response is dead straight: “I don’t know that. Tell me about that.” Sounds kind of banal. Yet the outcome is that they out-learn everybody else. Now that I’ve spotted this pattern, the difference between this group and the people I normally see is so immense it takes my breath away. Really. It’s massive. I can’t stop thinking about it.

The people I normally see in large companies usually view new learning with suspicion. They don’t say this out loud of course, but they feel it anyway. Learning leads to change, and change, in their experience, is usually painful and involves failure and loss. They have invested a lot to get where they are and don’t want to lose it. They don’t see it as a major part of their role to learn new things – that’s for the new guys. Not knowing about your job makes you look incompetent. The job is to deliver stuff. In any case, the system does not really support learning – nobody gets their bonus just because they learnt a lot.

On the other hand, the graduates know they are here to learn. It’s OK not to know. It’s OK to spend time on finding out. There is nothing to lose by learning.

Do I have a Learning Disability?

Consider this: experienced people have what might be termed a learning disability. How they relate to their wealth of experience hinders them from learning.

Zen Buddhism provides a suggestion about how to overcome this disability. A “Beginner’s Mind” means dropping expectations and preconceived ideas about something, and seeing things with an open mind and fresh eyes, just like a beginner. Something to practice – one doesn’t get good at adopting a beginner’s mind overnight.

So far, so good. Then it hit me. The way I look at all these people and think “Poor saps, they don’t even know they have a learning disability, let alone what to do about it.” Perhaps somebody else is looking at me and thinking the same? I can’t see what is holding me back from learning – if I did, I’d be addressing it! So, what’s my disability?

Thinking further

Couple of things you could join me in thinking about further…

  • Peter Senge, of Fifth Discipline fame, has some interesting suggestions for common learning disabilities in an organisation. They will make you smile, as you will recognise them all in other people. The challenge you may share with me is to find them in yourself.
  • Consider what systems/policies/procedures are in place in your context to prevent people, even if they wanted to, from learning effectively. It might not all be you after all, it might be the system you exist within that creates the learning disability! What would need to change to fix this?

Agile Scorecard? No! A better idea…

Oh no, yet another management team has asked for an agile adoption score card so they can “know how agile the teams are.” Wrong question! Here is why…

What gets measured gets manipulated

What happens next is that all teams are measured. To increase our score, my team need to do something called retrospectives. Fine, let’s put another useless meeting in for the whole team. We need somebody called a scrum master? Fine, Dave can do that in addition to his already crazy workload. He can also spend the time writing post-its and putting them on a board (Yaahaay! A board = more points on our scorecard). Destroys value. Frustrates everybody. Yet another thing management put in the way of actually doing the work. Sigh.

Why do management care about how agile the teams are? Surely management really cares about things like return on investment, customer satisfaction and speed of response to the market. Measure these things if you like – although they are lagging indicators so aren’t very practical for decision making, but that’s a topic for another day.

Two better questions management might ask

Surely, if a team has a good understanding of agile practices, they can be trusted to adopt the practices that add value. Instead of a score card, here’s the first question management should ask each team:

Q1: Does the team have access to agile experience?

Experience means “having done it before successfully” – be they developers, product owners, scrum masters, agile coaches, etc. Agile practices are context specific, so it’s best if the team has access to somebody (or somebodies) who has done it multiple times before. Somebody who has already had the “aha moment” that what worked in one team doesn’t just work in the second team – it needs adapting.

Not everything is under the team’s control. So the second question management needs to form a view on:

 Q2: What barriers to experimenting with agile adoption is the team experiencing that are outside the team’s control?

Don’t just have somebody send a survey out. Don’t delegate it to a “transformation” team. Management must Go See to get close to the work and have a personal experience of what is really happening.

Set up the Systemic Improvement Service

When management have an initial list of systemic barriers to adoption, prioritise them and get to work removing them – and keep going forever. This will need quality time from management – the issues thrown up are likely to be tricky to resolve. LeSS calls this an Improvement Service, which anchors it firmly as a service management provide to the teams. Here are some classics you might find near the top of your improvement backlog:

  • No widespread alignment about why we need to be agile/how urgent it is for the organisation/what it means to us
  • Senior management believe they can define in advance where value is and when it should be delivered by
  • Loads of handoffs to people external to the team
  • Teams unwilling to experiment due to delivery pressure, fear of failure, …
  • Technical debt rarely paid back
  • Product owner not empowered to prioritise
  • Temporary project teams
  • Teams too big
  • Slow and painful interaction with enterprise architects
  • Infrastructure and deployment processes sub-optimal

Think bottlenecks

If you think your bottleneck is that most of the teams don’t have access to agile experience, then get going with recruiting more scrum masters, coaches, etc. More likely is that the bottleneck is the capacity of the organisation to remove systemic impediments (i.e. chew through the list above). Work on expanding this capacity! The teams will become more agile when their environment lets them. The job of management is to shape this environment. Increasing your capacity to do this shaping might just be your best investment.

The Good, The Bad and The Ugly of Large Scale Scrum (LeSS)

Having just attended Craig Larman’s 3 day LeSS (Large Scale Scrum) training course, I’m full of thoughts:

The Good

LeSS comes from a good place – a scaled version of scrum! There is a real effort not to add anything when scaling to multiple teams that is not absolutely necessary, and the spirit of scrum is preserved. Craig has poured a lot of relevant experience into LeSS. I sharpened up a few misaligned assumptions that I had about scrum – eh, I didn’t know single team scrum as well as I thought. Then there are all sorts of practical implications of multi-team scaling which Craig has lots of really good thoughts on: continuous integration, communities of practice, the implications of self-organisation on hiring and firing practices, etc. There was a lot for me to learn there and I think I need to read the LeSS books again (despite having read them, it seems like I could get more juice out of reading them again – see the end of this post for links. Read these books!).

As with scrum, LeSS has an appealing purity – it describes the perfect end state (as opposed to the messier approach of the Scaled Agile Framework). LeSS is marginally better than scrum in describing how to facilitate some of the steps on the journey.

The Bad

LeSS proposes the immediate elimination of all overhead roles (project managers, testing groups, component development groups) as a first step, and the forming of feature teams aligned to customer value. Whilst a laudable idea and a direction to look in, in the organisations I have worked with, the complexity of the business domain and the state of the legacy code mean a wholesale adoption of this approach would be bounced out immediately. “You just don’t understand our business,” they would say. “Take your theories elsewhere.” Remember Conway’s Law? Systems resemble the organisations which built them. Slow, large, complex organisations lead to slow, large, complex software. “Our business is slow, large and complex – won’t changing our product development organisation just create a mismatch?”

When pressed for examples of where this approach had worked, Craig provided a lot of general references but was a bit short on details. I’d like to hear from large organisations that are not primarily software-product focussed and have actually done this. It doesn’t seem to me that LeSS is well matched to the kinds of organisations I have worked with. It seems to be for a small sub-set of software R&D focussed organisations who are so desperate they will try extreme measures.

And The Ugly

There will be massive resistance from middle management to introducing LeSS – since most of their roles are eliminated. Craig’s solution is to ensure support from senior management. I (and, I project, everybody in the room except Craig) have never come across a situation where senior management would be willing to provide this necessary support. They just don’t see it as enough of a problem to go through the pain. Craig walks away from organisations that don’t provide the senior support for the change – otherwise he will just be “rearranging the deckchairs on the Titanic.” OK, fair enough. I project there are, maybe, only 1 in 1000 organisations who have the senior management support for LeSS. What about the other 99.9% of the market? How does LeSS help them take the next step on their agile journey (by the way, Continuous Improvement is one of the ten principles of LeSS)? It doesn’t. LeSS doesn’t provide any help with this, since Craig has no experience of making the problem visible to senior management to the extent that they are ready to take action.

In short, cherry pick ideas from LeSS. As a generalised scaling approach – a big thumbs down! The industry is still waiting for a good scaling approach!


Starting Scrum: Inception Phase & Sprint 0

Scrum, as the predominant agile approach, is maddeningly simple – have a single team deliver a potentially shippable increment every couple of weeks. It says nothing about how to get to this promised land! So, how do we get there from a standing start? The transition to regular value delivery can be split into three phases:

  • Prior to forming the team (Inception)
  • Team formed but not yet sprinting (Sprint 0)
  • Team sprinting

Prior to forming the team (Inception)

What kind of team do I want? Does it make commercial sense? How will this fit in with all the other stuff going on round here?  Are business stakeholders aligned that we should do this? These are all good questions to ask up front so many companies have a process for this – often very PRINCE2 Project oriented.

Where can we get agile inspiration for this? The portfolio level in the Scaled Agile Framework (SAFe) provides some guidance (Lightweight Business Cases, Epics, etc.). It separates the flow of work from the creation of capacity to do the work (motto: “move the work to the people, not the people to the work”). SAFe almost kills off the project notion entirely in the interest of having stable (and therefore high-performing) teams. This is a good direction to look in. If this is a step too far for you, Disciplined Agile Delivery (DaD) has a specific name for this early phase: Inception. In the DaD Inception Phase:

  • Form Initial Team
  • Develop Common Vision
  • Align with Enterprise Direction
  • Explore Initial Scope
  • Identify Initial Technical Strategy
  • Develop Initial Release Plan
  • Secure Funding
  • Form Work Environment
  • Identify Risks

Sounds like the right kind of things, doesn’t it? The challenge is that these are all unbound activities – how do we prevent endless polishing that ultimately delays finding out whether we have something of value or not (by building a bit and getting it out there)?

I’ve had reasonable experience with setting a target time for these Inception activities, but anything that involves a gateway (typically Secure Funding) cannot be fully time-boxed because, if the funding committee says “No! Not good enough,” the proposal is bounced back to be further refined. (By the way, it’s a good principle of process design that you should not have a stage gate without a way of limiting work-in-progress upstream.)
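That process-design principle can be sketched in code. This is a hypothetical toy model, not a real tool – the class name and behaviour are invented purely for illustration: proposals queue in front of the funding gate, and new submissions are refused once the queue is full, so bounced-back work cannot pile up without limit.

```python
# Toy sketch of a stage gate with an upstream WIP limit: proposals wait
# in a bounded queue in front of the funding committee, and rejections
# re-enter the queue rather than disappearing.
from collections import deque

class FundingGate:
    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.queue = deque()

    def submit(self, proposal):
        """Accept a proposal into the upstream queue only if there is capacity."""
        if len(self.queue) >= self.wip_limit:
            return False  # start no new Inception work; finish something first
        self.queue.append(proposal)
        return True

    def review(self, approve):
        """Committee reviews the oldest proposal; rejections re-queue for refinement."""
        if not self.queue:
            return None
        proposal = self.queue.popleft()
        if approve:
            return proposal          # funded: leaves the gate, freeing capacity
        self.queue.append(proposal)  # bounced: to be refined, still counts as WIP
        return None
```

The point of the sketch is the `submit` guard: without it, a committee that keeps saying “not good enough” silently grows an unbounded pile of half-refined proposals.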

Team formed but not yet sprinting (Sprint 0)

The team are now here (incidentally, recruitment of team members can typically take 3+ months so Inception can be quite long). Should they start sprinting immediately? Most practitioners use a Sprint 0 whereby the team prepares itself to be delivering value. What should be in sprint 0? Here are some suggestions:


  • Architectural goals/approach identified and made visible.
  • High level architectural milestones understood
  • Dependencies and risks have been identified and made visible.
  • High-level conceptual design has been completed.


  • Network requirements arranged
  • Minimum environments ready (Development/test)
  • Development machines ready (Local development environments)
  • Logistic requirements in place (phone, desk, etc.)
  • Tools for testing, coding, integrating, and building have been selected and installed


  • The team has received required training
  • Roles and responsibility have been defined
  • Team board is set up
  • Stakeholder map created
  • Definition of done agreed.

When is Sprint 0 done? Is it fixed scope (all these things must be done) or fixed time (do as much as you can in 2 weeks)? My encouragement is that it should be primarily on a fixed-time basis (as much as possible in 2 weeks), except for one item. This one:

AS A: Scrum team
I WANT TO: Create a Definition-of-Done and a tiny bit of working software (“hello world”) that fully meets this Definition-of-Done
IN ORDER TO: Ensure the development pipeline works end-to-end
- Demonstrated/Reviewed by Product Owner


The Definition-of-Done should include releasing software into a production-like environment (or even better, to production itself). The reason I like this a lot is that it drives out all the problems around environments, documentation, version control, testing, security, release management, etc.

If you are not able to demonstrate in Sprint 0 that you can release to a production-like environment, you’re simply not ready to sprint.

Team sprinting

Once the team is sprinting, then the sprint retrospective within Scrum is the mechanism whereby the team takes time to check whether they are (still?) in the promised land of regular value delivery and make changes to how they work as necessary.


Ready for agile? A test for your organisation

I heard a tale of an agile coach who had the following rule: if an organization is using Internet Explorer version 6, then it is uncoachable (the latest is version 11). This was based on his experience – he had never got anywhere with a company that was so far behind the curve. Adoption of Internet Explorer is an indicator of something about the organization that is directly related to its hunger to absorb new ideas about work (i.e. the agile revolution).

Adoption of new ideas is characterised by the technology adoption lifecycle shown below:

This curve suggests that a small number of people/organisations leap on new technology. The majority take some more time, and a handful are really, really slow to jump on board. (See this fun 3-minute video of people dancing at a festival to get this.)

Individuals who champion agile within large organisations are typically early adopters. The primary need of these individuals is to see that it works – ideally much better, not just a bit better than what went before. Well, agile really does work – much better than anything else we know of so far. So these people totally “get” agile.

Often these agile champions are bemused and confused by the pushback from their organisation when they try introducing agile ideas. This is because their organisation as a whole is not an early adopter of an agile approach. Their organisation is in the early majority, late majority or laggard category.

In Crossing the Chasm, Geoffrey Moore suggests that viewing individuals and organisations using this model leads to two insights:

  • Pick off a group at a time. Moore suggests it’s most effective to work the curve from left to right, targeting sales and marketing efforts at one group at a time. Once this group is on board, move to the group to the right. With agile adoption, we could say that the early adopters are on board and it is now the early majority that should be in focus.
  • Early adopters and the early majority have different needs. This is the “chasm” which agile needs to cross. Early adopters care only whether agile works or not. They want to get ahead of the competition and don’t mind disruption to the organisation or an approach which is not perfect. The early majority have other concerns – they are looking primarily for a productivity gain to their existing way of doing things and don’t want major disruption. They need to hear that others in their industry are adopting it. They want to be sure their “agile supplier” is a market leader with a good reputation, and they feel comfortable if there is choice/competition between different suppliers/approaches. Above all they prize the stability and effectiveness of their organisation as it is today and want to minimise any rocking of the boat. They prefer a series of small changes to one big bang – unlike the early adopters, who are looking primarily for large step changes in performance (which normally implies a big bang).

So, to return to versions of Internet Explorer. Perhaps organisations who are, say, in the early majority for one thing tend to be in the early majority for everything? If your organisation is a laggard with browser versions, will it also be a laggard with respect to agile adoption? What else might be correlated with this? I don’t have enough data to validate the test below (and it is culturally specific), but score your organisation anyway. Give yourself one point for each of the following:

  1. Green tea is available at the office.
  2. 360 degree feedback is the primary form of appraisal.
  3. You can get a new laptop within a day of requesting it.
  4. iPhones not Blackberrys.
  5. The corporate intranet is a Wiki, not Sharepoint.
  6. Free bowls of fruit available in the office.
  7. Guest wifi freely and easily available.
  8. Widespread use of open video technology (Skype, Google Hangouts, Facetime).
  9. A new access badge is issued within half a day of requesting it.
  10. Not using Internet Explorer 6!

Score as follows

  • Score 9-10: Your company is an early adopter and likely to already be doing agile!
  • Score 4-8: Your company is in the early majority. Fair chance of successful wide-scale agile adoption in the near-future.
  • Score 0-3:  Oh dear, your organisation is in the late majority or a laggard. Move to another company or wait  (possibly a long time). Agile is unlikely to take hold in your organisation any time soon.
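For fun, the scoring rules above can be written down as a tiny function. This is a throwaway sketch; the thresholds and category names are lifted straight from the list above, and the indicator labels are just shorthand.

```python
# The ten indicators from the list above, abbreviated.
INDICATORS = [
    "Green tea available",
    "360 degree feedback",
    "New laptop within a day",
    "iPhones not Blackberrys",
    "Wiki intranet, not Sharepoint",
    "Free fruit in the office",
    "Guest wifi freely available",
    "Open video technology in use",
    "Access badge within half a day",
    "Not using Internet Explorer 6",
]

def adoption_category(points):
    """Map a 0-10 indicator score to Moore's adoption categories."""
    if not 0 <= points <= len(INDICATORS):
        raise ValueError("score must be between 0 and 10")
    if points >= 9:
        return "early adopter"
    if points >= 4:
        return "early majority"
    return "late majority / laggard"
```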

Cracking SAFe

Evaluating whether the Scaled Agile Framework (SAFe) could be something to experiment with in your organization? If so, here is a personal view.

If you want the one line summary: fundamentally flawed but will add value in most organizations.

Hats off to Dean Leffingwell et al. for “making early and meaningful contact with the enemy.” The product isn’t perfect, but they have got it out there and are getting feedback from the market. Now the SAFe authors need to demonstrate their agile credentials by rapidly iterating this product to make it even better. I hope that the market success the framework currently enjoys will stimulate a period of rapid innovation in approaches to “scaling agile,” both within and outside of the SAFe framework.

The good

  • Defines a program level heartbeat. The program layer in SAFe provides a way of scaling up scrum – a sort of scrum for a team of teams. It provides roles, events, artifacts etc. for this level of activity.
  • Encompasses lots of other valuable frameworks. Much of the agile good stuff appears somewhere: most of extreme programming, an adapted version of scrum, some kanban, some devops, and a nod to lean product development. It’s good to get an overview picture of this.
  • Acknowledges the realities facing many companies. SAFe proposes, for example, a releasable product at least every 3 months, which is perhaps a more realistic target for many enterprises than scrum’s every 1 month or less.
  • It’s probably a lot better than what most companies are doing. Agile thought leaders might deride SAFe because it represents a step back from where their thinking is. Yet it could well be a mega-step in the right direction for the people it is aimed at: large companies struggling with large scale software development.

The bad

  • Assumes big is beautiful. The SAFe approach assumes you have a program of 5-12 teams. There is no solution proposed for 1-4 teams. There is no questioning of whether you need this many teams, or of what you can do to reduce the number of teams over time. There is broad agreement that scaling the number of teams up from one is the very last thing you should try, when you are really out of other options. In SAFe there is no encouragement or support for smaller programs.
  • Stomps on scrum. SAFe talks a lot about scrum but breaks the core scrum rules that: a) the product owner is one person (in SAFe the product manager shares some of this role), b) a potentially useable product is produced within a month (SAFe says 8-12 weeks) and c) there are no dependencies outside a team of 9 people or fewer. Many companies struggle to implement these, but with true scrum they at least know where they should be heading. A real fear is that companies will do SAFe because they don’t have the courage to do scrum. Consequently they won’t really get much juice out of the agile revolution.
  • Not much help for getting from here to there. SAFe is so massive and sprawling that it is hard to know what is important, where to focus first and what we can leave until later. There is no process for the adoption of SAFe. It seems to be implied that it’s a big bang – i.e. SAFe framework adoption is not an iterative, adaptive process in itself. Where are the inspect and adapt activities on the adoption of the framework itself? There is no help for the organisation to “uncover better ways of developing software.”
  • Prescriptive tone. SAFe says: do this, do that. Very little of how it is set up refers back to any underlying principles. It’s also a one-size-fits-all model. SAFe is rooted in the experience of its authors and the tone is authoritarian. Like all of us, they have limited experience. How many companies have they really implemented all of this with? Contrast that with the work of Larman & Vodde, who present their experience in fairly humble tones, as patterns that worked for them that others could try.

The indifferent

Some minor gripes…

  • The top (portfolio) layer is pretty thin. The portfolio layer is a valiant attempt to complete the enterprise picture. In most companies, how projects and programs get started is a murky political affair. Initiatives with significant backing will always circumvent defined processes. I see projects and programs as being like laws and sausages – it’s better not to see them being made. I can’t see how imposing a simple standard process model at this level adds any value.
  • Weighted-shortest-job-first prioritization is not implemented correctly. This is an economic approach, and as such the cost of delay needs estimating in $ or £ or whatever. Relative estimation of cost of delay is just wrong: it gives the team(s) no information about what the company might be willing to pay to expedite the work. Using relative values is the easy way out.
  • Not clear what is and isn’t in SAFe. Is it a toolkit? A source of inspiration? When can I say I am doing SAFe? Not clear.
  • Process over Individuals & Interactions. SAFe does have values like transparency and alignment, but most of its thrust is around the big process – which doesn’t seem terribly agile. This is also, ahem, a criticism you could make of scrum – yet scrum says so little that this can easily be justified as “just enough process.” The process picture appeals to management. Is this just pandering? Giving them what they want, rather than what they need?
  • Slow (8-12 week) program inspect & adapt. This seems an awfully long learning loop. Contrast it with Larman & Vodde’s Frameworks 1 & 2, which have a joint sprint retrospective at the end of each sprint.
  • Responsibilities between scrum teams, System team and DevOps. As written, it seems like the DevOps team is the Ops part of DevOps and the System team is the Dev part. Doesn’t feel like true DevOps. Also, the System team seems to have a lot of responsibility for testing the system etc. This will promote a lack of ownership of system issues in the scrum teams.
  • The different agile approaches in SAFe don’t join up. The SAFe training material includes, for example, a summary of lean product development (e.g. batch sizes, queues, …), yet this work is rarely cross-referenced in any of the other chapters.
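To make the WSJF gripe above concrete, here is a minimal sketch of weighted-shortest-job-first with cost of delay expressed in currency per week, which is what this post argues for. All figures and feature names are invented for illustration; nothing here comes from SAFe’s own materials.

```python
# Weighted Shortest Job First: do the job with the highest
# cost-of-delay / duration ratio first.
def wsjf_order(jobs):
    """jobs: list of (name, cost_of_delay_per_week, duration_weeks)."""
    return sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)

jobs = [
    ("Feature A", 10_000, 2),   # ratio 5,000 per week
    ("Feature B", 30_000, 10),  # biggest value but slow: ratio 3,000
    ("Feature C", 8_000, 1),    # small and urgent: ratio 8,000
]

for name, cod, weeks in wsjf_order(jobs):
    print(name, cod / weeks)
```

Expressing cost of delay in real currency, rather than relative points, also tells the team what a week of expediting is actually worth – exactly the information that relative estimation throws away.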


Five reasons why you might not really be doing scrum after all

Scrum is the best known and most widely adopted agile approach. When a manager in a large organisation tells me that their team(s) are doing scrum, my suspicion is that they don’t really know what scrum is because, well, it’s really hard for most large organisations to do scrum.

The formal definition of scrum is the scrum guide. The key sticking points in the guide are:



  1. End-2-end cycle time of less than 1 month. “The heart of Scrum is a Sprint, a time-box of one month or less during which a “Done”, useable, and potentially releasable product Increment is created.” This means that if marketing decides the product is good enough at the end of the sprint, it can go out to customers with negligible further technical work. Useable doesn’t mean a prototype or a product that needs further testing in some way (regression, security, …), since that would mean the output of the sprint wasn’t useable in itself. Scrum is in stark contrast to SAFe, which suggests that one can have hardening (HIP) sprints to sort these kinds of things out before every release. Scrum effectively says: be ready to do a release at the end of every sprint. So, can your scrum team(s) really go from prioritising a feature to a potential product release in a month or less?
  2. No dependencies outside of team. “Cross-functional teams have all competencies needed to accomplish the work without depending on others not part of the team.” To deliver the product, there are no dependencies outside the team. Everybody needed is in the team: infrastructure, architecture, documentation, … Note that the scrum guide also says “Having more than nine members requires too much coordination,” so you can’t make the team very big to solve this problem. Large organisations with lots of departments responsible for different parts of the product development process struggle with this.
  3. Fully empowered product owner. “The Product Owner is one person, not a committee.” Scrum is very clear – the one person who is the product owner has full authority over product prioritisation decisions. Most companies have competing departments who all want their say in how the product should be, and struggle to devolve responsibility to one person who is so low in the hierarchy that he/she has time to fulfil the product owner role.
  4. Team members all have the title “developer”. “[Development team is] self-organizing. Scrum recognizes no titles for Development Team members other than Developer, regardless of the work being performed by the person; there are no exceptions to this rule;” A key barrier to effective self-organisation is typically entrenched job roles. “I’m a tester,” or “I’m a business analyst.” Human Resources (HR) departments in large companies love this, as somebody can be a Junior Tester and somebody else can be a Senior Tester, which maps to HR’s job and pay scales. Some people like this (typically those with “senior” in their title) because it highlights their skills, makes them feel special and makes it easier to justify why they should be paid more than the other guys. It can also affect reporting lines (e.g. all the “testers” need to report to the testing manager). It’s hard to find people who will actively support the notion that we are all “developers.”
  5. Scrum Master as facilitator. “The Scrum Master is a servant-leader for the Scrum Team.” Most companies struggle with both the notion of servant-leadership and the facilitating/coaching nature of the Scrum Master role. Sometimes Scrum Masters are seen as project managers and accountable for team success, which is really not what is intended. Other times they are simply ignored. Scrum defines them as being responsible for ensuring scrum is understood and enacted – they are process coaches, making sure the scrum process framework is being adopted. They are not part of the product development process itself but a facilitator of how that process runs and adapts itself in the light of new learnings or changing demand.

So, are you really, really doing scrum?




Agile PMO: Four Questions for IT Management

Just finished reading “Best Business: The Agile PMO – Leading the Effective, Value Driven, Project Management Office, a practical guide” by Michael Nir. A bit lightweight really. The most amusing bit is the way he lists how PMOs often destroy value:

  • Focused on tools
  • Focused on processes
  • Focused on standardisation
  • Focused on management’s needs (for reports, policing, the appearance of being in control, …)
  • etc.

… and so on. His key point is that a PMO needs to be focused on maximising the value generated by the organisation it is serving. Yeap, I buy that. Everybody in the IT department needs this focus. The issue is that… well… the book is a bit short on answers about how to do this.

Let’s start from basics. Do we need a PMO? IT management should also be primarily concerned with maximising value delivered. Four questions that projects by themselves struggle to answer may help with this:

  1. Do we consistently work only on the most valuable projects (and not too many of them)?
  2. Do we consistently ensure bottleneck resources are allocated to projects in the best interest of the company?
  3. Do we consistently identify bottleneck resources and what can be done to increase their capacity?
  4. Are we effective in transferring learning from one project to another?

The questions are not whether IT management do all these things but more whether they know that these things are happening properly (“work on the system, not in the system”).

I’m guessing most IT management teams would answer “Don’t know” to these four questions.

I suggest that responsibility for these questions cannot be delegated to a permanent side organisation (a PMO) since these issues are all too difficult to solve without management's direct involvement – which is why projects struggle by themselves.

I can see a case for having a temporary change organisation charged with helping the organisation adopt practices which help with the above. Some examples might be: Kanban (helps identify and manage bottlenecks), T-shaped individuals (increases capacity at bottleneck), Cadenced resource scheduling (helps with allocation of scarce resource), Cost-of-delay (identifies the most urgent requirements – good for all types of prioritization discussions), Retrospectives (good for capturing learning) etc.

None of these practices are magic bullets and so, in my vision, it goes on in endless waves: management review the 4 questions, decide which one(s) most impede value delivery, identify some practices to embed in the organisation which might help, set up a temporary change programme to drive this embedding and, after a while, the programme is over and it's time to review the 4 questions again.



Not all variation is evil: Six Sigma and Product Development

Not all variation is evil. Did I really say that? Many companies that have implemented Six Sigma will be shocked. The Six Sigma police will be knocking on my door pretty soon. This post examines an idea I find fascinating that sometimes in a product development context (as opposed to a manufacturing or even a service context) increasing variation can actually create value.

Benefits variability

Let’s look at variability in the benefits enabled by a development pipeline. The fundamental discovery nature of product development (customer doesn’t know what they want, developer doesn’t know how to build it, things change) means that we might know, at best, some kind of probability for the likely benefits.

Consider the following project types:

  • Project A (low reward, low risk) has a 90% chance of making a profit of $0.1m
  • Project B (higher reward, higher risk) has a 20% chance of making a profit of $10m

(These two projects have the same cost.)

If I was interested in minimising variability of benefits, then I would only invest in projects of type A (low reward, low risk). If I invest in 10 of these projects, 9 of them will likely give me a profit of $0.1m and one will give me a profit of $0. My total benefits would likely be 9 * $0.1m = $0.9m

On the other hand, if I invest in 10 projects of type B (higher reward, higher risk) Then it is likely that 2 of them will generate $10m and the other 8 will generate $0m. Total benefits would then likely be 2 x $10m = $20m but with much greater variability.

The variability of our benefits has increased but so too has the value to the company! So in this situation increasing variability maximises value.

Investing in product development is like poker in this respect – champions don’t win every hand but they do maximise their overall winnings!
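The portfolio arithmetic above can be sketched in a few lines of Python. The probabilities and profits come from the two project types in the example; the function name itself is just for illustration:

```python
def expected_value(probability, profit, n_projects):
    """Expected total profit from n identical, independent projects."""
    return n_projects * probability * profit

# Portfolio of 10 type-A projects: 90% chance of $0.1m each
portfolio_a = expected_value(0.9, 0.1e6, 10)

# Portfolio of 10 type-B projects: 20% chance of $10m each
portfolio_b = expected_value(0.2, 10e6, 10)

print(f"Portfolio A: ${portfolio_a / 1e6:.1f}m")  # $0.9m
print(f"Portfolio B: ${portfolio_b / 1e6:.1f}m")  # $20.0m
```

The higher-variability portfolio B has more than twenty times the expected value, which is the whole point of the example.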

Changing the economic payoff function is easier than reducing the underlying variability

The example above highlights that we are not actually interested in variability per se – this is just a proxy variable for what we are really interested in – the economic costs of variability. This economic cost depends on how variability is transformed by an economic payoff function. In the example above the economic payoff of Project A ($0.1m) was tiny compared to Project B ($10m) which prompted us to choose the greater variability of Project B and still maximise value.

It’s often easier to change the underlying payoff function than it is to reduce the underlying variability.  To take a different example; reducing the amount of errors generated quickly gets tricky and expensive. An alternative strategy is to improve the speed at which errors are fixed which would reduce the economic impact of an error occurring. Improving the speed at which errors are fixed is normally (relatively) cheap and easy.

As a general rule, fast feedback is the key to changing payoff functions. For example, within a development pipeline, the best lever is usually to kill bad ideas early (and not try to eliminate them from entering the pipeline altogether) – so make sure the process is quickly able to find out whether ideas are bad!

Development pipelines have a lot of inherent variation

Now I come to think of it, variation is inherent in a development pipeline. No variation = No value creation. Consider:

  • We are dealing with ideas. They are unique in their size, their scope and when they occur. We can't do much about this except hold them upstream until there is capacity to deal with them.
  • We are effectively creating recipes for solutions, not the actual solutions themselves. The recipes we produce are by their nature also unique.
  • The value created by a solution is the most unpredictable and unmanageable aspect of a development pipeline. Understanding the market risk associated with a solution is an inherently tricky problem (if you excel at predicting market needs, stop reading this and go play the stock-market).

No wonder that IT best practice and the whole lean-agile movement is a lot about practical ways of applying the philosophy of: “try a bit and see!”

Oversimplifications do not help

I hope I've given you a hint that it doesn't make sense to resolve all uncertainty associated with IT development – only that which it is in our economic interest to resolve. Oversimplifications like “eliminate variation” don't help: there is actually an optimum level of uncertainty which we can't reach if we focus only on minimising variability, and some forms of variation are inherent in the generation of value in a product development context.

Read more on this topic in Don Reinertsen's The Principles of Product Development Flow (tough read but worth it!)

Trouble prioritising work across teams?

I typically advocate using cost-of-delay as the basis for prioritisation since it ensures a development pipeline is optimised for maximising return-on-investment (ROI). The basic CD3 model assumes one development pipeline without any dependencies. This post looks at how three simple rules can be used to adapt the model for multiple delivery pipelines.

CD3 prioritisation model: a quick recap

To briefly summarise the basic CD3 model… prioritisation is about deciding what to do first and what to do later. If a team is to prioritise between doing task A (cost-of-delay $10k/week) and task B (cost-of-delay $20k/week), clearly it makes sense to choose task B (all other things being equal).


On the other hand, considering only the size of the tasks, task A (4 weeks work) should be scheduled before task B (16 weeks work) since we are getting value out earlier.


So a high priority task should either i) have a high cost-of-delay, ii) be of short duration or, even better, iii) both of these things. To do this practically, we assign a numerical priority score to a task using the formula:

priority score = cost-of-delay ÷ duration
We call this CD3 (cost-of-delay-divided-by-duration). Prioritising this way can be shown mathematically to maximise return-on-investment.

In this example case, the priority score for task A is $10k/4 weeks = 2.5 and for task B is $20k/16 weeks = 1.3 so we do task A first as it has a higher CD3 score!

Note: some teams find it simpler to use either cost or effort as a proxy for duration. When this is done correctly, it doesn’t change the validity of the calculation. (The subject of using proxies for duration is outside the scope of this blog post).
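As a sketch, the recap can be expressed in a few lines of Python. The task names and figures are the ones from the worked example above; the dictionary layout is just one way to hold them:

```python
def cd3(cost_of_delay_per_week, duration_weeks):
    """CD3 priority score: cost-of-delay divided by duration."""
    return cost_of_delay_per_week / duration_weeks

# Task A: $10k/week cost-of-delay, 4 weeks work
# Task B: $20k/week cost-of-delay, 16 weeks work
tasks = [
    {"name": "A", "cod": 10_000, "weeks": 4},
    {"name": "B", "cod": 20_000, "weeks": 16},
]

# Scheduling highest CD3 score first maximises return-on-investment
ranked = sorted(tasks, key=lambda t: cd3(t["cod"], t["weeks"]), reverse=True)
print([t["name"] for t in ranked])  # ['A', 'B'] – task A goes first
```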


CD3 for multiple development streams

Meet Kate. Kate is a project manager who wants to implement a business feature. Most importantly, Kate has already determined that the feature can't be broken down into smaller pieces – i.e. it's a minimum viable product. Kate's business feature requires changes in both GMM and FCT in order to release business value. Kate has calculated that this feature has a cost-of-delay of $30k/week and she knows that implementing the feature will be 6 weeks work for the GMM team and 3 weeks work for the FCT team.


GMM and FCT have separate dynamic priority lists (DPL – the backlog of prioritised tasks that the team pulls from when they have capacity). The two teams need to calculate the CD3 priority score in order to know where this task fits in their DPLs compared to the other tasks they could work on.

To calculate the CD3 priority scores, both the GMM team and the FCT team should use the full overall cost-of-delay of $30k/week. It doesn't need apportioning between them – this makes sense as both changes are needed to realise the business value, so delaying either one of them will delay the feature.

Rule 1: Lower level tasks have the same cost-of-delay as the parent business feature

On the other hand, each team uses their own duration. So the priority score for the GMM task in the GMM DPL is $30k/6 = 5 and the FCT task in the FCT DPL is $30k/3 = 10.

Rule 2: Duration is always the duration for the local delivery stream

This rule might come as a surprise – since it implies that there is no overall priority score for the feature that can be used everywhere. Indeed, prioritisation is inherently local since we are always competing for local resources. This needs some thinking about – it implies management directives along the lines of “this feature is top priority” do not maximise ROI (sorry Kate, don't ask senior management for this). To maximise the return for the organisation, management should say “the cost-of-delay for this is $30k/week” and let teams then work out their own priorities.
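Rules 1 and 2 together amount to a very small calculation: the same cost-of-delay everywhere, but local durations. A sketch, using Kate's figures from above:

```python
# Rule 1: every child task inherits the feature's full cost-of-delay
feature_cod = 30_000  # $/week

# Rule 2: each delivery stream uses its own local duration
local_durations = {"GMM": 6, "FCT": 3}  # weeks of work per team

# CD3 score per team = shared cost-of-delay / local duration
scores = {team: feature_cod / weeks for team, weeks in local_durations.items()}
print(scores)  # {'GMM': 5000.0, 'FCT': 10000.0}
```

Each team then slots its task into its own DPL using its own score – there is no single feature-wide score.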


Critical path dependencies

There is a further complication that Kate needs to take account of. What if the FCT team already have a lot of high priority work in their DPL and won’t be able to start work on their task for 10 weeks? If the GMM team start work on their task immediately, they will be done within 6 weeks. It will be a further 7 weeks before the FCT team finish and any business value is realised.


Clearly, not a good idea – the GMM team could have worked on something more urgent for 7 weeks and then started on their task. So it would seem that if a piece of work is not (yet) on the critical path for the delivery of business value then its cost-of-delay is zero. In the case of the GMM task, it comes onto the critical path after 7 weeks and at that point it should be assigned the full cost-of-delay given above ($30k/week).
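The critical-path arithmetic in this example is simple enough to sketch. The week numbers are the ones used above; the variable names are just for illustration:

```python
fct_queue_wait = 10  # weeks before the FCT team can start their task
fct_duration = 3     # weeks of FCT work
gmm_duration = 6     # weeks of GMM work

# The feature delivers value only when the FCT work finishes
feature_done = fct_queue_wait + fct_duration    # week 13

# Latest week the GMM task can start without delaying the feature,
# i.e. when it joins the critical path
gmm_latest_start = feature_done - gmm_duration  # week 7
print(gmm_latest_start)  # 7
```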


In reality, we probably wouldn’t wait the full 7 weeks to start the GMM work. The risk is too great that if there is the slightest problem with the GMM work, there will be a delay in the delivery of the overall feature. We’d probably start the work after maybe 5-6 weeks or so to be sure.

Rule 3: Cost-of-delay is zero until a task is near to being on the critical path for the business feature

In practice, Kate should identify when tasks are near to being on the critical path and coordinate this between delivery streams. It's a tricky job because dynamic priority lists (DPLs) are, well, dynamic and the project manager needs to be able to predict them (crystal ball, anybody?).

In the very worst case, Kate might communicate that the GMM team should wait 5-6 weeks before starting, only to find that a flood of higher priority tasks had suddenly arrived in the GMM DPL. It is no longer FCT on Kate's critical path but GMM. If Kate had been able to predict this from the start, she could have avoided the situation by ensuring the GMM task was completed before the sudden rush of higher priority items. Kate won't always get it right, but prioritisation is not about getting it right every time – it's about making the best decision with the information available at the time.


Hopefully, this post has demonstrated that the simple CD3 model can be expanded to multiple delivery streams using three rules:

  • Rule 1: Lower level tasks have the same cost-of-delay as the parent business feature
  • Rule 2: Duration is always the duration for the local delivery stream
  • Rule 3: Cost-of-delay is zero until a task is near to being on the critical path for the feature