Articles

Five reasons why you might not really be doing scrum after all

Scrum is the best known and most widely adopted agile approach. When a manager in a large organisation tells me that their team(s) are doing scrum, my suspicion is that they don’t really know what scrum is because, well, it’s really hard for most large organisations to do scrum.

The formal definition of scrum is the scrum guide. The key sticking points in the guide are:

  1. End-2-end cycle time of less than 1 month. “The heart of Scrum is a Sprint, a time-box of one month or less during which a “Done”, useable, and potentially releasable product Increment is created.” This means that if marketing decides the product is good enough at the end of the sprint, it can go out to customers with negligible further technical work. Useable doesn’t mean a prototype or a product that needs further testing of some kind (regression, security, …), since that would mean the output of the sprint wasn’t useable in itself. Scrum is in stark contrast to SAFe, which suggests that one can have hardening sprints (HIP sprints) to sort these kinds of things out before every release. Scrum effectively says: be ready to do a release at the end of every sprint. So, can your scrum team(s) really go from prioritising a feature to a potential product release in a month or less?
  2. No dependencies outside of the team. “Cross-functional teams have all competencies needed to accomplish the work without depending on others not part of the team.” To deliver the product, there are no dependencies outside the team. Everybody needed is in the team: infrastructure, architecture, documentation, … Note that the scrum guide also says: “Having more than nine members requires too much coordination.” So you can’t make the team very big to solve this problem. Large organisations with lots of departments responsible for different parts of the product development process struggle with this.
  3. Fully empowered product owner. “The Product Owner is one person, not a committee.” Scrum is very clear – the one person who is the product owner has full authority on product prioritisation decisions. Most companies have competing departments who all want their say in what the product should be, and struggle to devolve responsibility to one person who is so low in the hierarchy that he/she has time to fulfil the product owner role.
  4. Team members all have the title “developer”. “[The Development Team is] self-organizing. Scrum recognizes no titles for Development Team members other than Developer, regardless of the work being performed by the person; there are no exceptions to this rule.” A key barrier to effective self-organisation is typically entrenched job roles: “I’m a tester,” or “I’m a business analyst.” Human Resources (HR) departments in large companies love this, as somebody can be a Junior Tester and somebody else can be a Senior Tester, which maps neatly onto HR’s job and pay scales. Some people like this too (typically those with “senior” in their title) because it highlights their skills, makes them feel special and makes it easier to justify why they should be paid more than the other guys. It can also affect reporting lines (e.g. all the “testers” need to report to the testing manager). It’s hard to find people who will actively support the notion that we are all “developers.”
  5. Scrum Master as facilitator. “The Scrum Master is a servant-leader for the Scrum Team.” Most companies struggle with both the notion of servant-leadership and the facilitating/coaching nature of the Scrum Master role. Sometimes Scrum Masters are seen as project managers, accountable for team success, which is really not what is intended. Other times they are simply ignored. Scrum defines them as responsible for ensuring scrum is understood and enacted – they are process coaches, making sure the scrum process framework is being adopted. They are not part of the product development process itself but facilitators of how that process runs and adapts itself in the light of new learnings or changing demand.

So, are you really, really doing scrum?


Agile PMO: Four Questions for IT Management

Just finished reading “Best Business: The Agile PMO – Leading the Effective, Value Driven, Project Management Office, a practical guide” by Michael Nir. Bit lightweight really. The most amusing bit is the way he lists how PMOs often destroy value:

  • Focused on tools
  • Focused on processes
  • Focused on standardisation
  • Focused on management’s needs (for reports, policing, the appearance of being in control, …)
… and so on. His key point is that a PMO needs to be focused on maximising the value generated by the organisation it is serving. Yep, I buy that. Everybody in the IT department needs this focus. The issue is that… well… the book is a bit short on answers about how to do this.

Let’s start from basics. Do we need a PMO at all? IT management should also be primarily concerned with maximising the value delivered. Four questions that projects by themselves struggle to answer may help with this:

  1. Do we consistently work only on the most valuable projects (and not too many of them)?
  2. Do we consistently ensure bottleneck resources are allocated to projects in the best interest of the company?
  3. Do we consistently identify bottleneck resources and what can be done to increase their capacity?
  4. Are we effective in transferring learning from one project to another?

The questions are not whether IT management do all these things themselves, but whether they know that these things are happening properly (“work on the system, not in the system”).

I’m guessing most IT management teams would answer “Don’t know” to these four questions.

I suggest that responsibility for these questions cannot be delegated to a permanent side organisation (a PMO), since these issues are all too difficult to solve without management’s direct involvement – which is why projects struggle with them by themselves.

I can see a case for having a temporary change organisation charged with helping the organisation adopt practices which help with the above. Some examples might be: Kanban (helps identify and manage bottlenecks), T-shaped individuals (increase capacity at the bottleneck), cadenced resource scheduling (helps with allocation of scarce resources), cost-of-delay (identifies the most urgent requirements – good for all types of prioritisation discussions), retrospectives (good for capturing learning), etc.

None of these practices are magic bullets and so, in my vision, it goes on in endless waves: management review the 4 questions, decide which one(s) most impede value delivery, identify some practices to embed in the organisation which might help, set up a temporary change programme to drive this embedding and, after a while, the programme is over and it’s time to review the 4 questions again.


Not all variation is evil: Six Sigma and Product Development

Not all variation is evil. Did I really say that? Many companies that have implemented Six Sigma will be shocked. The Six Sigma police will be knocking on my door pretty soon. This post examines an idea I find fascinating: that sometimes, in a product development context (as opposed to a manufacturing or even a service context), increasing variation can actually create value.

Benefits variability

Let’s look at variability in the benefits enabled by a development pipeline. The fundamental discovery nature of product development (customer doesn’t know what they want, developer doesn’t know how to build it, things change) means that we might know, at best, some kind of probability for the likely benefits.

Consider the following project types:

  • Project A (low reward, low risk) has a 90% chance of making a profit of $0.1m
  • Project B (higher reward, higher risk) has a 20% chance of making a profit of $10m

(These two projects have the same cost.)

If I were interested in minimising the variability of benefits, then I would only invest in projects of type A (low reward, low risk). If I invest in 10 of these projects, 9 of them will likely give me a profit of $0.1m and one will give me a profit of $0. My total benefits would likely be 9 × $0.1m = $0.9m.

On the other hand, if I invest in 10 projects of type B (higher reward, higher risk), then it is likely that 2 of them will generate $10m and the other 8 will generate $0. Total benefits would then likely be 2 × $10m = $20m, but with much greater variability.

The variability of our benefits has increased but so too has the value to the company! So in this situation, increasing variability maximises value.

Investing in product development is like poker in this respect – champions don’t win every hand but they do maximise their overall winnings!
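To make the arithmetic concrete, here is a minimal sketch in Python that estimates the portfolio benefits by simulation (the probabilities and payoffs are the made-up ones from the example, and the helper is mine, not any standard portfolio tool):

```python
import random

def expected_benefits(p_success: float, payoff_m: float, n_projects: int,
                      n_trials: int = 100_000) -> float:
    """Monte Carlo estimate of total benefits ($m) from a portfolio
    of identical, independent projects."""
    total = 0.0
    for _ in range(n_trials):
        total += sum(payoff_m for _ in range(n_projects)
                     if random.random() < p_success)
    return total / n_trials

# Project A: 90% chance of $0.1m profit; Project B: 20% chance of $10m
print(f"10 x A: ~${expected_benefits(0.9, 0.1, 10):.1f}m")   # ~$0.9m
print(f"10 x B: ~${expected_benefits(0.2, 10.0, 10):.1f}m")  # ~$20m
```

The B portfolio bounces around far more from trial to trial, yet its expected value is roughly twenty times higher – variability up, value up.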

Changing the economic payoff function is easier than reducing the underlying variability

The example above highlights that we are not actually interested in variability per se – this is just a proxy variable for what we are really interested in – the economic costs of variability. This economic cost depends on how variability is transformed by an economic payoff function. In the example above the economic payoff of Project A ($0.1m) was tiny compared to Project B ($10m) which prompted us to choose the greater variability of Project B and still maximise value.

It’s often easier to change the underlying payoff function than it is to reduce the underlying variability. To take a different example: reducing the number of errors generated quickly gets tricky and expensive. An alternative strategy is to improve the speed at which errors are fixed, which reduces the economic impact of an error occurring. Improving the speed at which errors are fixed is normally (relatively) cheap and easy.

As a general rule, fast feedback is the key to changing payoff functions. For example, within a development pipeline, the best lever is usually to kill bad ideas early (and not try to eliminate them from entering the pipeline altogether) – so make sure the process is quickly able to find out whether ideas are bad!

Development pipelines have a lot of inherent variation

Now I come to think of it, variation is inherent in a development pipeline. No variation = No value creation. Consider:

  • We are dealing with ideas. They are unique in their size, their scope and when they occur. We can’t do much about this except hold them upstream until there is capacity to deal with them.
  • We are effectively creating recipes for solutions, not the solutions themselves. The recipes we produce are by their nature also unique.
  • The value created by a solution is the most unpredictable and unmanageable aspect of a development pipeline. Understanding the market risk associated with a solution is an inherently tricky problem (if you excel at predicting market needs, stop reading this and go play the stock market).

No wonder that IT best practice and the whole lean-agile movement is a lot about practical ways of applying the philosophy of: “try a bit and see!”

Oversimplifications do not help

I hope I’ve given you a hint that it doesn’t make sense to resolve all the uncertainty associated with IT development – only that which it is in our economic interest to resolve. Oversimplifications like “eliminate variation” don’t help: there is an optimum level of uncertainty which we can’t reach if we focus only on minimising variability, and some forms of variation are inherent in the generation of value in a product development context.

Read more on this topic in Don Reinertsen’s The Principles of Product Development Flow (a tough read but worth it!)

Trouble prioritising work across teams?

I typically advocate using cost-of-delay as the basis for prioritisation, since it ensures a development pipeline is optimised for maximising return-on-investment (ROI). The basic CD3 model assumes one development pipeline without any dependencies. This post looks at how three simple rules can be used to adapt the model for multiple delivery pipelines.

CD3 prioritisation model: a quick recap

To briefly summarise the basic CD3 model… prioritisation is about deciding what to do first and what to do later. If a team is to prioritise between doing task A (cost-of-delay $10k/week) and task B (cost-of-delay $20k/week), clearly it makes sense to choose task B (all other things being equal).


On the other hand, considering only the size of the tasks, task A (4 weeks work) should be scheduled before task B (16 weeks work) since we are getting value out earlier.


So a high-priority task should either i) have a high cost-of-delay, ii) be of short duration or, even better, iii) both of these things. To do this practically, we assign a numerical priority score to a task using the formula:

 

cost-of-delay/duration

We call this CD3 (cost-of-delay-divided-by-duration). Prioritising this way can be shown mathematically to maximise return-on-investment.

In this example, the priority score for task A is $10k/4 weeks = 2.5 and for task B is $20k/16 weeks = 1.25, so we do task A first as it has the higher CD3 score!

Note: some teams find it simpler to use either cost or effort as a proxy for duration. When this is done correctly, it doesn’t change the validity of the calculation. (The subject of using proxies for duration is outside the scope of this blog post).
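As a minimal sketch of the calculation (Python; the Task class and its field names are mine for illustration, not part of any CD3 tooling):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cost_of_delay: float  # $k per week
    duration: float       # weeks

    @property
    def cd3(self) -> float:
        """Priority score: cost-of-delay divided by duration."""
        return self.cost_of_delay / self.duration

backlog = [Task("A", cost_of_delay=10, duration=4),
           Task("B", cost_of_delay=20, duration=16)]
for task in sorted(backlog, key=lambda t: t.cd3, reverse=True):
    print(f"Task {task.name}: CD3 = {task.cd3:.2f}")
# Task A (2.50) outranks Task B (1.25), as in the example above
```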

 

CD3 for multiple development streams

Meet Kate. Kate is a project manager who wants to implement a business feature. Most importantly, Kate has already determined that the feature can’t be broken down into smaller pieces – i.e. it’s a minimum viable product. Kate’s business feature requires changes in both GMM and FCT in order to release business value. Kate has calculated that this feature has a cost-of-delay of $30k/week, and she knows that implementing the feature will be 6 weeks work for the GMM team and 3 weeks work for the FCT team.


GMM and FCT have separate dynamic priority lists (DPL – the backlog of prioritised tasks that a team pulls from when it has capacity). The two teams need to calculate the CD3 priority score in order to know where this task fits in their DPLs compared to the other tasks they could work on.

To calculate the CD3 priority scores, both the GMM team and the FCT team should use the full overall cost-of-delay of $30k/week. It doesn’t need apportioning between them – this makes sense as both changes are needed to realise the business value, so delaying either one of them will delay the feature.

Rule 1: Lower level tasks have the same cost-of-delay as the parent business feature

On the other hand, each team uses its own duration. So the priority score for the GMM task in the GMM DPL is $30k/6 = 5, and for the FCT task in the FCT DPL it is $30k/3 = 10.

Rule 2: Duration is always the duration for the local delivery stream

This rule might come as a surprise, since it implies that there is no overall priority score for the feature that can be used everywhere. Indeed, prioritisation is inherently local, since we are always competing for local resources. This needs some thinking about – it implies that management directives along the lines of “this feature is top priority” do not maximise ROI (sorry Kate, don’t ask senior management for this). To maximise the return for the organisation, management should say “the cost-of-delay for this is $3m/week” and let teams then work out their own priorities.
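Rules 1 and 2 together fit in a couple of lines (a sketch using Kate’s numbers; the helper name is hypothetical):

```python
def local_cd3(feature_cod: float, local_duration: float) -> float:
    """Rule 1: every dependent task inherits the feature's full
    cost-of-delay ($k/week). Rule 2: each delivery stream divides
    by its own local duration (weeks)."""
    return feature_cod / local_duration

FEATURE_COD = 30  # $k/week, shared by the GMM and FCT tasks
print(f"GMM DPL score: {local_cd3(FEATURE_COD, 6):.0f}")  # 5
print(f"FCT DPL score: {local_cd3(FEATURE_COD, 3):.0f}")  # 10
```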

 

Critical path dependencies

There is a further complication that Kate needs to take account of. What if the FCT team already have a lot of high-priority work in their DPL and won’t be able to start work on their task for 10 weeks? If the GMM team start work on their task immediately, they will be done within 6 weeks. It will then be a further 7 weeks before the FCT team finish and any business value is realised.


Clearly not a good idea – the GMM team could have worked on something more urgent for 7 weeks and then started on their task. So it would seem that if a piece of work is not (yet) on the critical path for the delivery of business value, then its cost-of-delay is zero. In the case of the GMM task, it comes onto the critical path after 7 weeks, and at this point it should be assigned the full cost-of-delay given above ($30k/week).


In reality, we probably wouldn’t wait the full 7 weeks to start the GMM work. The risk is too great that the slightest problem with the GMM work would delay the delivery of the overall feature. We’d probably start the work after maybe 5–6 weeks or so to be safe.

Rule 3: Cost-of-delay is zero until a task is near to being on the critical path for the business feature

In practice, Kate should identify when tasks are near to being on the critical path and coordinate this between delivery streams. It’s a tricky job because dynamic priority lists (DPLs) are, well, dynamic, and the project manager needs to be able to predict them (crystal ball, anybody?).

In the very worst case, Kate might communicate that the GMM team should wait 5–6 weeks before starting, only to find that a flood of higher-priority tasks then suddenly arrived in the GMM DPL. It is now no longer FCT that is on Kate’s critical path but GMM. If Kate had been able to predict this from the start, she could have avoided the situation by ensuring the GMM task was completed before the sudden rush of higher-priority items. Kate might not always get it right, but prioritisation is not about getting it right every time – it’s about making the best decision with the information available at the time.
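The timing logic behind Rule 3 can be sketched as follows (the helper and the one-week safety buffer are my stand-ins for the “near to being on the critical path” judgement above):

```python
def latest_sensible_start(dependency_finish_week: float,
                          own_duration_weeks: float,
                          buffer_weeks: float = 1.0) -> float:
    """Before this week the task's cost-of-delay is treated as zero
    (Rule 3); from this week on it carries the feature's full
    cost-of-delay."""
    return max(0.0, dependency_finish_week - own_duration_weeks - buffer_weeks)

fct_finish = 10 + 3  # FCT queues for 10 weeks, then does 3 weeks of work
start = latest_sensible_start(fct_finish, own_duration_weeks=6)
print(f"GMM should start around week {start:.0f}")  # week 6, i.e. after 5-6 weeks
```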

Conclusion

Hopefully, this post has demonstrated that the simple CD3 model can be expanded to multiple delivery streams using three rules:

  • Rule 1: Lower level tasks have the same cost-of-delay as the parent business feature
  • Rule 2: Duration is always the duration for the local delivery stream
  • Rule 3: Cost-of-delay is zero until a task is near to being on the critical path for the feature

Lean-agile and PRINCE2

I have been involved in recent discussions on structured project management methodologies, in particular PRINCE2 (which I’ll focus on here, though the discussion could apply to any traditional project management approach). Are traditional methodologies like PRINCE2 compatible with a lean or agile way of working?

There are plenty of blogs discussing the experience of making PRINCE2 and lean-agile ways of working fit together, and it’s certainly possible. My issue is not that PRINCE2 is wrong. It’s more that, for an IT development project, it encourages us to put our focus in the wrong place.

Discovery

Lean-agile ways of working emphasise the discovery nature of IT development. The tricky thing about projects involving IT development is that they are characterised by large uncertainties in both what is needed and how to build it. In this way, IT development is more like other creative disciplines such as marketing and R&D, where progress cannot be pictured as a linear function of the resources applied over time. We could call this the discovery mindset:

  • the customer doesn’t know what they want,
  • the developer doesn’t know how to build it,
  • things change.

IT development projects do have one redeeming feature – they can normally be delivered in small pieces, which enables discovery to be done collaboratively in short learning cycles: we find out what is needed and how to build it by trying a bit in each short cycle.

Risk and Learning

PRINCE2 implies that the risks that need most of our attention are cost, time and scope overruns. Not really. The biggest risk in IT development is building something which isn’t used (ref. Mary Poppendieck). What needs our attention most is the question of how we can learn faster about whether our solution will be used. Thick contractual requirements documents produced upfront, beloved of so many PRINCE2 practitioners, actually increase the risk that we build something that isn’t used, since they increase the amount of work done before we find out.

Making learning our primary focus has all sorts of implications. PRINCE2 implies that change happens rarely and needs to be strictly controlled. Yet if we are learning all the time, change will happen, well, all the time. Lean-agile practice embraces change and treats it as “business as usual”. Take retrospectives – these happen often, unlike the PRINCE2 lessons-learnt report, which happens only at the end. Or planning. This is a core activity in the PRINCE2 world, yet planning is merely helpful in the lean-agile one. Planning is of limited value because there is always so much we don’t know yet – we can typically plan the next couple of weeks quite well, but beyond that it gets vague. We need to iterate on our plan as we learn more.

Flow

PRINCE2 typically has no real concept of flow beyond the level of the Gantt chart. Usually, each step of the Gantt chart is executed only once. Lean-agile practitioners frame a project more in terms of setting up a production pipeline and then flowing small work items smoothly through it. This allows for adjustment and learning – it becomes a repeated game and we don’t have to get it right first time. Lean-agile thinking acknowledges that output actually goes up if a team is not working on too much at any one time. As such, lean-agile teams tend to pull work into the pipeline when there is capacity (and not when the Gantt chart says they should – as Scott Ambler says, “friends don’t let friends use Microsoft Project”).

Managing smooth flow (i.e. not having work piling up someplace – in testing, for example) also has implications for recruitment. Lean-agile teams tend to value people who are multi-skilled, since they can move to wherever a bottleneck is building up in the pipeline – the location of which is impossible to predict in advance. PRINCE2 practitioners tend to prefer engaging specialists, as these are the most effective at the particular tasks that have been planned for them.

Value

If we are good at prioritisation, we might learn at some point that we have implemented 80% of the value for 20% of the cost and want to stop the project there. Scope is typically variable in a lean-agile project. PRINCE2 tries to nail down scope, cost and quality up front, since it assumes these can be understood well enough before work starts – a questionable assumption for IT development.

Quality

PRINCE2 has a very contractual definition of quality. If we could usefully specify quality contractually in IT development (e.g. “1 bug per 1000 lines of code”), we would. Alas not! Lean-agile thinking addresses this in a more practical way – quality in a lean-agile context is about getting quick and complete feedback on our activities, which then allows us to adjust and improve. Quality is built into the process itself.

Collaboration

Stakeholder management in PRINCE2 is “contractual”, with clearly defined roles etc. Lean-agile thinking focuses more on collaboration, face-2-face communication, joint problem solving etc., which is some way from the formal mindset of PRINCE2. It also typically emphasises self-organising teams, since the team is closest to the work and hence its members are the ones learning the most about who is best placed to do what.

Recommendations

So here are my top five points for a PRINCE2 project manager who wants to maximise his/her chances that an IT development project will deliver value:

  1. Make your primary focus enabling fast learning everywhere in your project, particularly about whether the solution is actually used (get started on this immediately by getting the smallest chunk of value possible in front of users straight away). Learn both about the customer needs and about how to build the solution. Be fanatical about this.
  2. Frame the project as quickly setting up a “production pipeline” through which there is a smooth, fast flow of small requirements.
  3. Be OK with not pretending to know more about the future than you do. Educate your stakeholders as to why this makes sense.
  4. Charter your team to deliver as much value as possible within a given timeframe/cost and then let them work out how to do this.
  5. Make close, face-2-face collaboration your dominant communication mode.

It’s KPI time again!

The Key Performance Indicators (KPI)/measurable objectives setting process triggers the discussion as to whether the way KPIs are done within a company adds or destroys value. A key plank of a lean-agile approach is systems thinking, i.e. focusing on optimising the whole end-2-end process, not the individual parts – which is what KPIs often do. Deming, who you might say is the father of this way of thinking, was specifically against the usual way of doing KPIs. Number 12 in his 14 key principles is:

Remove barriers that rob people in management and in engineering of their right to pride of workmanship. This means, inter alia, abolishment of the annual or merit rating and of management by objective

Enough! It’s easy to criticise KPIs – it’s better to improve them. Here’s a summary of the usual suspects and how they can be improved upon.

  • Time – Typical measure: delivering on a predicted date. Usual outcome: incentivises hidden time buffers and slower delivery. Alternative: maximise speed in getting to the point where value starts to be realised.
  • Scope – Typical measure: delivering all of the originally predicted scope. Usual outcome: incentivises gold plating and discourages exploitation of learning. Alternative: minimise the size of work packages to maximise both learning and early release of value.
  • Cost – Typical measure: delivering at or below a predicted development cost. Usual outcome: incentivises hidden cost contingencies, pushing costs up. Alternative: maximise value delivered (trade development cost against the opportunity cost of delay).
  • Quality – Typical measure: delivering changes with zero downtime and no errors. Usual outcome: fear of change; overinvestment in testing and documentation. Alternative: shorten feedback cycles at many levels (coding, defects, …).

 

In short, the suggestion is that by over-focussing on the typical measures in the table above, we get a pipeline which is slow, expensive and wasteful. Explore the alternative measures instead!

Try…

Perhaps the table above can be used for inspiration when setting KPIs in your team?

Read more…

Improving estimate accuracy

Don’t we just get frustrated when our estimates or guesstimates of timescales or costs turn out to be different from what actually happens? So how do we improve the accuracy of our estimates?

What do we need estimates for?

This is a golden question – ask yourself this whenever you do an estimate. It’s a lean principle to remove waste (i.e. non-value-adding activity). If the estimate you are producing isn’t vital for making a decision, then take a good hard look at whether you need it at all. Some of the teams I have worked with have stopped doing some kinds of estimates because those estimates were not changing any decision.

Why are estimates uncertain?

Estimates are uncertain because we lack information. We’ve never built this particular feature before. We don’t know exactly what we need to do. As we work further with the feature, we gain more information and our uncertainty about how much work is required reduces. This leads to the idea of the cone of uncertainty, shown below, which suggests that uncertainty decreases (and hence estimate accuracy increases) throughout the life of a feature.

[Figure: the cone of uncertainty]

 

What helps improve estimating accuracy?

1. Break work down

If you only do one thing, break the work down into smaller pieces (this is a core lean principle!). It’s much easier to estimate smaller activities than larger ones. By simply breaking large work items down into smaller pieces, your estimating accuracy will improve.

2. Increase the rate at which the estimator gains experience

The ability to estimate comes from learning from past experience of estimating. So get more experience! The way to increase your learning rate is to shorten the cycle time, i.e. the time it takes to produce something. Halving the cycle time doubles the rate of learning! Of course, the experience gained needs to be effectively captured (retrospectives, low staff turnover, etc.).

3. Ensure estimators are the people who are doing the work

It seems simple, but in a lot of cases the people doing the estimating are far from the actual work (or, sometimes, estimates are adjusted by people far from the work!).

4. Reduce dependencies

Typically, if a task requires 90% of one person and 10% of another, the 10% person will be the bottleneck because their focus is not on the task. This means it’s difficult for them to be available precisely when needed. Eliminating this dependency will reduce variation in timelines. Feature teams help with this.

5. Get multiple inputs

Agile teams often use planning poker to harness the brainpower of the entire team to improve the estimate – it’s unlikely that one person has all the best information. This is a fun and easy thing to try.

6. Agree on which estimate we want

Many timeline estimates are simply the earliest possible time for which the probability of delivery is not zero. Did you want the most likely date (50% chance of being early, 50% chance of being late) or the date by which it is almost certain delivery will have happened? Make sure everybody agrees which estimate we are talking about.

[Figure: probability distribution of possible delivery dates]
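To see the difference concretely, here is a minimal sketch (Python; the triangular duration distribution and its 4/6/14-week parameters are invented for illustration):

```python
import random

def duration_samples(n: int = 10_000) -> list[float]:
    """Simulate a task with uncertain duration: optimistic 4 weeks,
    most likely 6, pessimistic 14 (a made-up triangular model)."""
    return sorted(random.triangular(4, 14, 6) for _ in range(n))

samples = duration_samples()
p50 = samples[len(samples) // 2]        # as likely to be early as late
p90 = samples[int(len(samples) * 0.9)]  # "almost certainly done by" date
print(f"50% confidence: {p50:.1f} weeks; 90% confidence: {p90:.1f} weeks")
```

The two dates can easily differ by several weeks, which is why it matters which one was asked for.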

7. Understand the politics

There are also all sorts of human dynamics that can skew your estimate. For example:

  • If there are “punishments” for getting it wrong, folk tend to add buffers to their estimates.
  • If folk believe that the uncertainty associated with the estimate will not be communicated clearly (for example, to senior management), then they tend to add buffers. A solution a lot of agile teams use for early guesstimates is “T-shirt” sizing, whereby the estimate is classified as Small, Medium, Large or Extra Large. Communicating guesstimates like this makes it clear that they are not very accurate.
  • If there is already a target declared (“we want this to be under $100k” or “we want to complete this by June”), then folk tend to be anchored by it and don’t like to say it’s not possible. We all like to please our stakeholders.
  • Many fascinating studies show that people have a natural subconscious tendency to be overconfident in their estimating skills. If the estimator(s) have little data on past performance to help correct this, then it’s likely they will underestimate tasks.

 

Setting expectations on estimate accuracy

It’s tempting to set a team a goal to improve estimate accuracy. The risk with this (which I have seen in some teams) is that the quickest and simplest ways of improving estimating accuracy are either to add undeclared buffers or to invest more time and resources before making the estimate. The latter effectively moves estimate creation later in the lifecycle, which defeats the purpose of the estimate (which is to support decisions early in the lifecycle).

A better way of improving estimating accuracy is to set the team goals on reducing cycle time and reducing the size of items flowing through the pipeline (as per the first two points above). By focusing on these general lean-agile practices, we also get an improvement in estimating accuracy!

How to deal with “I want it earlier”

Do you recognise this dialog?

 

Senior Executive (SE): “In this plan you’re showing me, you say it will be ready by December?”

Project Manager (PM): “Well that’s the best estimate we have now based on what we know.”

SE: “I want it by end of June”

PM: “Er, OK. Er but the team don’t believe it can be done.”

SE: “Take your black hat off. It’s your job to make it happen.”

PM: “Er…”

 

I was thinking about how to apply a lean-agile mindset to this imaginary situation. How about something like this:

 

Senior Executive (SE): “In this plan you’re showing me, you say it will be ready by December?”

Project Manager (PM): “Well that’s the best estimate we have now based on what we know.”

SE: “I want it by end of June”

PM: “Why do you want it by end of June?”

SE: “I consider it important that we get this done quickly and I’m sure you are capable of doing it. It’s top priority!”

PM: “Why is it important to be done quickly?”

SE: “Well, this project affects the bottom line. It significantly improves our revenues and decreases our costs.”

PM: “So there is nothing that happens at midnight on 30th June that makes delivering a minute later really matter? It’s more that you’d like this to happen earlier rather than later?”

SE: “Yes, that’s right.”

PM: “I understand that it’s top priority. It will help if we can characterise what top priority means, since there are always other projects claiming to be top priority. The business case says that this change will have a benefit of $52m per year, which is the same as $1m per week. Does that mean that for every week we are able to get this project done early, the company would realise an extra $1m?”

SE: “Yes, that’s true in this case.”

PM: “Does that mean we’d be willing to spend up to $1m to get the project a week earlier?”

SE: “I never thought of it like that but, yes.”

PM: “Great. Now we can make our case about being top priority. Every time we have to wait in a queue for some bottlenecked resource on our critical path, we take the time we have to wait (in weeks) and multiply by $1m – that is the opportunity cost to the organisation of us not going first!”

SE: “I like the sound of that!”

PM: “Of course, the other projects will calculate the opportunity cost they incur if we go first. We will compare opportunity costs and the biggest will win.”

SE: “Em. I suppose that makes sense. Although I don’t like the idea of waiting.”

PM: “That I can understand. We are trying to do what’s right for the company overall. This calculation (called CD3 – cost-of-delay divided by duration, also known as weighted shortest job first) ensures this.”

SE: “Yes. You’re right.”

PM: “Going back to the $1m per week. If we agree on this figure, and the team identify something they can do to bring the delivery forward by a week that costs less than $1m, do we have the authority to spend the money to make this happen?”

SE: “I’m not sure. They can always escalate to me if necessary.”

PM: “So you agree to make yourself available quickly if we have something we need to escalate.”

SE: “Well I am very busy and important.”

PM: “Yes. If you are unable to make the time and unwilling to delegate then the cost to the company will be $1m per week.”

SE: “Yes. You are right. I will be available.”

PM: “With regards to the delivery dates. Our estimated date was arrived at by harnessing the brainpower of the whole team – the people who are closest to the work. This is the best estimate we can get with the information we have today – there is no better way.

At $1m per week, I understand the urgency of realising value as soon as possible, so we will set up our delivery pipeline in order to break work down into small chunks of value and then realise that value, starting with the most valuable chunks. Using this approach, you will see that the team is delivering value before end of June and you will also have ongoing opportunities to change the priorities as we go and we learn more.

This also allows us to check assumptions about the value of the solution and how to build it.”

SE: “I don’t understand. Can’t you just double the team size and deliver earlier?”

PM: “Both IT industry experience and our own previous project experience suggest that this doesn’t help. Larger teams tend to deliver less.”

SE: “That doesn’t sound right?”

PM: “In any process, it only makes sense to add resource at the process bottleneck. Adding resource anywhere else will slow the team down. IT development is a complex, human-intensive process. If there are 10 people on the team and we add one more, then there is an overhead on the original 10 in bringing the 11th up to speed, maintaining communication and alignment, etc. This is particularly true when the team’s processes are new. Until there is a running process that is demonstrably creating value, it doesn’t make sense to scale the process and add lots more people.”

SE: “Can’t we just start a lot of work in parallel and have a lot of different teams?”

PM: “Yes, this would be possible if the nature of the work were such that there were no dependencies between the teams, but the dependencies we have will create bottlenecks. In any case, the scarcest resource in any organisation is usually management attention. It’s better to do a smaller number of activities and work on doing them fast than to do a large number at once and do them all slowly.”

SE: “You said that you want a running process that is demonstrably delivering value. When will that be?”

PM: “We aim to begin realising business value within 30 to 90 days.”

SE: “Well, that is sooner than I expected, and I guess I’ll see whether the approach you propose is working by then. I’m looking forward to that.”

Maybe I’m a dreamer but if only all these conversations went like this…
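The arithmetic the PM leans on is simple enough to capture in a few lines (a sketch using the dialog’s numbers; the helpers are mine):

```python
WEEKS_PER_YEAR = 52

def weekly_cost_of_delay(annual_benefit_m: float) -> float:
    """Convert an annual benefit ($m/year) into a weekly cost-of-delay."""
    return annual_benefit_m / WEEKS_PER_YEAR

def queue_cost(wait_weeks: float, cod_m_per_week: float) -> float:
    """Opportunity cost of waiting in a queue for a bottlenecked resource."""
    return wait_weeks * cod_m_per_week

cod = weekly_cost_of_delay(52.0)  # $52m/year -> $1m/week
print(f"Cost-of-delay: ${cod:.0f}m/week")
print(f"Cost of a 3-week queue: ${queue_cost(3, cod):.0f}m")
```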

Always busy?

Which is more important: to ensure our teams are fully busy, or to maximise the value delivered by the development pipeline? I’m hoping to have you consider that there may be times when it is in the company’s interest to pay people to “look out of the window” for periods of time.

Throughput decreases as capacity utilisation approaches 100%

Think about a highway – cars flow fine when there is little traffic. When the volume of traffic goes above about 80% of capacity, we start to get traffic jams. As the loading on the highway approaches 100%, we end up with almost no flow at all. The overall value delivered (i.e. people getting to where they want to go) is less. Models have been developed which give the underlying maths of why this is. Highway designers have taken these models to heart, and there are now places where cars are held by traffic lights on the sliproad at peak times in order to maintain a smooth flow on the main highway. The funny thing is that this improves everybody’s end-2-end journey time, including for the cars which were held on the sliproad. It’s counter-intuitive, as it probably doesn’t feel like that when you are sitting on the sliproad not going anywhere.

A development pipeline has similar characteristics to a highway. The inherent variation in both demand and supply means that throughput is maximised when the pipeline is run on average below full capacity.
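One of the simplest models with this behaviour is the M/M/1 queue (a single server with random arrivals and random service times – offered here as a loose illustration of the shape of the curve, not as the model highway designers actually use). Average time in the system is 1/(μ − λ), which blows up as utilisation λ/μ approaches 100%:

```python
def avg_time_in_system(utilisation: float, service_time: float = 1.0) -> float:
    """M/M/1 queue: W = 1 / (mu - lam), with service rate mu = 1/service_time
    and arrival rate lam = utilisation * mu."""
    mu = 1.0 / service_time
    lam = utilisation * mu
    return 1.0 / (mu - lam)

for u in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilisation {u:.0%}: cycle time {avg_time_in_system(u):.0f}x the work itself")
# 50% -> 2x, 80% -> 5x, 90% -> 10x, 95% -> 20x, 99% -> 100x
```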

An example

One of the teams I coach is effectively doing a whole release which is dominated by a technical upgrade. To simplify the situation: it requires the developers and testers but not the analysts. What should the analysts do whilst the developers/testers are busy with this release?

Options are:

  • Worst option: do more analysis. Since analysis is not the bottleneck, all this will do is pile up work for the developers/testers. The pile will probably never disappear, because as soon as the developers remove something from it, the analysts will add a new item. We haven’t increased the value generated by the pipeline, but we have increased the cycle time and hence decreased agility and the speed of feedback/learning. Think of all the management and communication that will be required to look after this pile of half-finished work. The overall value generated is almost certainly negative.
  • Better option: analysts look out of the window for a while. This has no negative effect on the value delivered. Of course, if this were a permanent state of affairs (which in this case it isn’t), we should look into reducing the number of analysts.
  • Best option: analysts learn over time how to help with development and test. They may not be the most efficient at these tasks, but T-shaped individuals can move to the bottleneck and increase its capacity. This would enable the technical release to be ready earlier and hence increase the value delivered by the pipeline.

It’s counter-intuitive to accept that there are situations where the most value people can generate in the short term is to do nothing. We all like to be busy. But it’s not about the productivity of the individual – it’s about the productivity of the pipeline as a whole. What do you think?

 

A pipeline near you?


Are you T-shaped?

A common problem in increasing the flow of work through a development pipeline is over-specialisation. The bottleneck might be in getting the analysis done, yet we have plenty of developers with nothing to do since they have no analysis skills. Then the bottleneck moves to front-end development, yet we discover we have plenty of back-end developers but none who know how to code the front-end. And so on. Enter the T-shaped individual – a common lean-agile practice applied to address this issue and increase throughput in the development pipeline.

What is a T-Shaped Individual?

A T-shaped individual is not the same as a generalist. He or she has deep expertise in one area but is able and willing to turn his/her hand to other things.

How can I promote this approach?

Teams I have worked with that have been committed to this practice have found the following helpful:

  1. Hire people who want to develop outside their specialism. An expert in, say, workflow is not going to get the opportunity to learn only about workflow. In developing T-shaped individuals, the learning opportunities might be in rules, service bus, business analysis, performance testing, etc. New hires have to want this.
  2. Organise around the flow of value. If the team are organised not around the flow of value but around developing components which are only part of the whole, then there will be little opportunity to develop skills outside one’s own specialism.
  3. Actively manage the skill map. One way of doing this is to identify all the different skill areas required to produce value. The entire team then score themselves against each skill at one of the four levels shown in Fig. 1. The skill map is then used to help the team decide who does what and to manage skill development. It also enables team members to learn and develop in the areas that interest them, and helps minimise dependencies on key individuals (see the sketch after Fig. 1).

 

  • I can teach others
  • I can do the job with help from others
  • I can do the job alone
  • I am eager to learn

Fig. 1. A simple rating of skill level
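As a minimal sketch of how such a skill map might be held and queried (the names, skills and the numeric encoding of the Fig. 1 levels are all invented):

```python
# Numeric encoding of the Fig. 1 levels (an assumption for this sketch):
# 3 = can teach others, 2 = can do the job alone,
# 1 = can do the job with help from others, 0 = eager to learn.
skill_map: dict[str, dict[str, int]] = {
    "Ana":   {"analysis": 3, "front-end": 1, "back-end": 0, "testing": 2},
    "Ben":   {"analysis": 0, "front-end": 3, "back-end": 2, "testing": 1},
    "Chris": {"analysis": 1, "front-end": 0, "back-end": 3, "testing": 2},
}

def unaided(skill: str) -> list[str]:
    """Who can do this skill without help? A single name flags a
    key-person dependency for the team to work on."""
    return [person for person, skills in skill_map.items()
            if skills.get(skill, 0) >= 2]

for skill in ("analysis", "front-end", "back-end", "testing"):
    people = unaided(skill)
    flag = "  <- key-person dependency" if len(people) == 1 else ""
    print(f"{skill}: {', '.join(people)}{flag}")
```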

Try…

Would T-shaped individuals help improve the throughput of your delivery pipeline? Are you willing to make any necessary changes to your hiring practice and organisation?

Read more about T-shaped individuals…