
Not all variation is evil: Six Sigma and Product Development

Not all variation is evil. Did I really say that? Many companies that have implemented Six Sigma will be shocked. The Six Sigma police will be knocking on my door pretty soon. This post examines an idea I find fascinating: that in a product development context (as opposed to a manufacturing or even a service context), increasing variation can sometimes actually create value.

Benefits variability

Let’s look at variability in the benefits enabled by a development pipeline. The fundamental discovery nature of product development (the customer doesn’t know what they want, the developer doesn’t know how to build it, things change) means that, at best, we might know some kind of probability distribution for the likely benefits.

Consider the following project types:

  • Project A (low reward, low risk) has a 90% chance of making a profit of $0.1m
  • Project B (higher reward, higher risk) has a 20% chance of making a profit of $10m

(These two projects have the same cost.)

If I were interested in minimising the variability of benefits, then I would only invest in projects of type A (low reward, low risk). If I invest in 10 of these projects, 9 of them will likely give me a profit of $0.1m and one will give me a profit of $0. My total benefits would likely be 9 × $0.1m = $0.9m.

On the other hand, if I invest in 10 projects of type B (higher reward, higher risk), then it is likely that 2 of them will generate $10m and the other 8 will generate $0m. Total benefits would then likely be 2 × $10m = $20m, but with much greater variability.

The variability of our benefits has increased, but so too has the value to the company! So in this situation, increasing variability maximises value.
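A quick way to sanity-check the arithmetic is to compare the expected benefits of the two portfolios. Here’s a minimal sketch in Python, using the hypothetical probabilities and profits from the example above:

```python
# Expected total profit ($m) of a portfolio of identical, independent projects.
def expected_portfolio_benefit(n_projects, p_success, profit_m):
    return n_projects * p_success * profit_m

# Project A: 90% chance of a $0.1m profit -> ~$0.9m across 10 projects
print(expected_portfolio_benefit(10, 0.9, 0.1))

# Project B: 20% chance of a $10m profit -> ~$20m across 10 projects
print(expected_portfolio_benefit(10, 0.2, 10.0))
```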

Investing in product development is like poker in this respect – champions don’t win every hand but they do maximise their overall winnings!

Changing the economic payoff function is easier than reducing the underlying variability

The example above highlights that we are not actually interested in variability per se – this is just a proxy variable for what we are really interested in – the economic costs of variability. This economic cost depends on how variability is transformed by an economic payoff function. In the example above the economic payoff of Project A ($0.1m) was tiny compared to Project B ($10m) which prompted us to choose the greater variability of Project B and still maximise value.

It’s often easier to change the underlying payoff function than it is to reduce the underlying variability. To take a different example: reducing the number of errors generated quickly gets tricky and expensive. An alternative strategy is to improve the speed at which errors are fixed, which reduces the economic impact of an error occurring. Improving the speed at which errors are fixed is normally (relatively) cheap and easy.
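To make that concrete, here’s a hedged sketch (all the figures are made up for illustration): the economic cost of errors is roughly frequency × time-to-fix × cost per hour of impact, so we can attack either factor.

```python
def weekly_error_cost(errors_per_week, hours_to_fix, cost_per_hour):
    # Economic cost = how often errors occur x how long they hurt x cost per hour.
    return errors_per_week * hours_to_fix * cost_per_hour

baseline     = weekly_error_cost(10, 8, 1_000)  # $80,000/week
fewer_errors = weekly_error_cost(5, 8, 1_000)   # $40,000/week: halve the error rate (hard, expensive)
faster_fixes = weekly_error_cost(10, 2, 1_000)  # $20,000/week: fix 4x faster (usually cheaper)
```

Both levers reduce the economic cost, but changing the payoff function (fixing faster) is usually the cheaper one to pull.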

As a general rule, fast feedback is the key to changing payoff functions. For example, within a development pipeline, the best lever is usually to kill bad ideas early (and not try to eliminate them from entering the pipeline altogether) – so make sure the process is quickly able to find out whether ideas are bad!

Development pipelines have a lot of inherent variation

Now that I come to think of it, variation is inherent in a development pipeline. No variation = no value creation. Consider:

  • We are dealing with ideas. They are unique in their size, their scope and when they occur. We can’t do much about this except to hold them upstream until there is capacity to deal with them.
  • We are effectively creating recipes for solutions, not the actual solutions themselves. The recipes we produce are, by their nature, also unique.
  • The value created by a solution is the most unpredictable and unmanageable aspect of a development pipeline. Understanding the market risk associated with a solution is an inherently tricky problem (if you excel at predicting market needs, stop reading this and go play the stock market).

No wonder IT best practice and the whole lean-agile movement are largely about practical ways of applying the philosophy of “try a bit and see!”

Oversimplifications do not help

I hope I’ve given you a hint that it doesn’t make sense to resolve all the uncertainty associated with IT development – only that which it is in our economic interest to resolve. Oversimplifications like “eliminate variation” don’t help: there is an optimum level of uncertainty which we can’t reach if we focus only on minimising variability, and some forms of variation are inherent in the generation of value in a product development context.

Read more on this topic in Don Reinertsen’s The Principles of Product Development Flow (a tough read but worth it!)

Lean-agile and PRINCE2

I have been involved in recent discussions on structured project management methodologies, in particular PRINCE2 (which I’ll focus on here, although the discussion could apply to any traditional project management approach). Are traditional methodologies like PRINCE2 compatible with a lean or agile way of working?

There are plenty of blogs discussing the experience of making PRINCE2 and lean-agile ways of working fit together, and it’s certainly possible. My issue is not that PRINCE2 is wrong. It’s more that, for an IT development project, it encourages us to put our focus in the wrong place.

Discovery

Lean-agile ways of working emphasise the discovery nature of IT development. The tricky thing about projects involving IT development is that they are characterised by large uncertainties in both what is needed and how to build it. In this way, IT development is more like other creative disciplines such as marketing or R&D, where progress cannot be pictured as a linear function of the resources applied over time. We could call this the discovery mindset:

  • the customer doesn’t know what they want,
  • the developer doesn’t know how to build it,
  • things change.

IT development projects do have one redeeming feature – they can normally be delivered in small pieces, which enables discovery to be done collaboratively in short learning cycles: we find out what is needed and how to build it by trying a bit at a time.

Risk and Learning

PRINCE2 implies that the risks that need most of our attention are cost, time and scope overruns. Not really. The biggest risk in IT development is building something which isn’t used (ref. Mary Poppendieck). What needs our attention most is the question of how we can learn faster about whether our solution will be used. Thick contractual requirements documents produced upfront, beloved of so many PRINCE2 practitioners, actually increase the risk that we build something that isn’t used, since they increase the amount of work done before we find out.

Making learning our primary focus has all sorts of implications. PRINCE2 implies that change happens rarely and needs to be strictly controlled. Yet if we are learning all the time, change will happen, well, all the time. Lean-agile practice embraces change and treats it as “business as usual”. Take retrospectives – these happen often, unlike the PRINCE2 lessons-learnt report, which happens only at the end. Or planning. This is a core activity in the PRINCE2 world, yet planning is merely helpful in the lean-agile one. Planning is of limited value because there is always so much we don’t know yet – we can typically plan the next couple of weeks quite well, but beyond that it gets vague. We need to iterate on our plan as we learn more.

Flow

PRINCE2 typically has no real concept of flow beyond the level of the Gantt chart. Usually, each step of the Gantt chart is executed only once. Lean-agile practitioners frame a project more in terms of setting up a production pipeline and then flowing small work items smoothly through it. This allows for adjustment and learning – it becomes a repeat game and we don’t have to get it right first time. Lean-agile thinking acknowledges that output actually goes up if a team is not working on too much at any one time. As such, lean-agile teams tend to pull work into the pipeline when there is capacity, not when the Gantt chart says they should (as Scott Ambler says, “friends don’t let friends use Microsoft Project”).

Managing smooth flow (i.e. not having work piling up someplace – in testing, for example) also has implications for recruitment. Lean-agile teams tend to value people who are multi-skilled, since they can move to wherever the bottleneck is building up in the pipeline – the location of which is impossible to predict in advance. PRINCE2 practitioners tend to prefer engaging specialists, as these are the most effective at the particular tasks that have been planned for them to do.

Value

If we are good at prioritisation, we might learn at some point that we have implemented 80% of the value for 20% of the cost and want to stop the project there. Scope is typically variable in a lean-agile project. PRINCE2 tries to nail down scope, cost and quality up front, since it assumes these can be understood well enough before work starts – a questionable assumption for IT development.

Quality

PRINCE2 has a very contractual definition of quality. If we could usefully specify quality contractually in IT development, e.g. “1 bug per 1,000 lines of code”, we would. Alas, we can’t! Lean-agile thinking addresses this in a more practical way – quality in a lean-agile context is about getting quick and complete feedback on our activities, which then allows us to adjust and improve. Quality is built into the process itself.

Collaboration

Stakeholder management in PRINCE2 is “contractual”, with clearly defined roles etc. Lean-agile thinking focuses more on collaboration, face-2-face communication, joint problem solving etc., which is some way from the formal mindset of PRINCE2. It also typically emphasises self-organising teams, since the team is closest to the work and hence its members are the ones learning the most about who is best placed to do what.

Recommendations

So here are my top five points for a PRINCE2 project manager who wants to maximise his/her chances that an IT development project will deliver value:

  1. Make your primary focus to enable fast learning everywhere in your project (particularly about whether the solution is actually used – get started on this immediately by putting the smallest chunk of value possible in front of users straight away). Learn both about the customer needs and about how to build the solution. Be fanatical about this.
  2. Frame the project as quickly setting up a “production pipeline” through which there is a smooth, fast flow of small requirements.
  3. Be OK with admitting how much you don’t know about the future yet. Educate your stakeholders as to why this makes sense.
  4. Charter your team to deliver as much value as possible within a given timeframe/cost and then let them work out how to do this.
  5. Make face-2-face, close collaboration your dominant communication mode.

It’s KPI time again!

The Key Performance Indicator (KPI)/measurable objectives setting process triggers the discussion as to whether the way KPIs are done within a company adds or destroys value. A key plank of a lean-agile approach is systems thinking, i.e. focusing on optimising the whole end-2-end process, not the individual parts – which is what KPIs often do. Deming, who you might say is the father of this way of thinking, was specifically against the typical way of doing KPIs. Number 12 in his 14 key principles is:

Remove barriers that rob people in management and in engineering of their right to pride of workmanship. This means, inter alia, abolishment of the annual or merit rating and of management by objective.

Enough! It’s easy to criticize KPIs – it’s better to improve. Here’s a summary of the usual suspects and how they can be improved upon.

Variable | Typical measure | Usual outcome | Alternative measure
Time | Delivering on a predicted date | Incentivises hidden time buffers and slower delivery | Maximise speed in getting to the point where value starts to be realised
Scope | Delivering all of the originally predicted scope | Incentivises gold plating and discourages exploitation of learning | Minimise the size of work packages to maximise both learning and early release of value
Cost | Delivering at or below a predicted development cost | Incentivises hidden cost contingencies, pushing costs up | Maximise value delivered (trade development cost against the opportunity cost of delay)
Quality | Delivering changes with zero downtime and no errors | Fear of change; overinvestment in testing and documentation | Shorten feedback cycles at many levels (coding, defects, …)


In short, the suggestion is that by over-focussing on the typical measures in the table above, we get a pipeline which is slow, expensive and wasteful. Explore the alternative measures instead!

Try…

Perhaps the table above can be used for inspiration when setting KPIs in your team?


Improving estimate accuracy

Don’t we just get frustrated when our estimates or guesstimates of timescales or costs turn out to be different from what actually happens? So how do we improve the accuracy of our estimates?

What do we need estimates for?

This is a golden question – ask yourself this whenever you do an estimate. It’s a lean principle to remove waste (i.e. non-value-adding activity). If the estimate you are producing isn’t vital for making a decision, then take a good hard look at whether you need it at all. Some of the teams I have worked with have stopped doing some kinds of estimates because they were not changing any decision.

Why are estimates uncertain?

Estimates are uncertain because we lack information. We’ve never built this particular feature before. We don’t know exactly what we need to do. As we work further with the feature, we gain more information and our uncertainty about how much work is required reduces. This leads to the idea of the cone of uncertainty, shown below, which suggests that uncertainty decreases (and hence estimate accuracy increases) throughout the life of a feature.

[Figure: the cone of uncertainty]


What helps improve estimating accuracy?

1. Break work down

If you only do one thing, break the work down into smaller pieces (this is a core lean principle!). It’s much easier to estimate smaller activities than larger ones. By simply breaking down large work items into smaller pieces, your estimating accuracy will improve.
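There is also a statistical reason this works: independent estimating errors partly cancel when many small estimates are summed. A minimal simulation, assuming a made-up ±50% error model, illustrates the effect:

```python
import random

random.seed(1)

def estimate(true_days):
    # Each estimate is off by up to +/-50% of the true size (hypothetical model).
    return true_days * random.uniform(0.5, 1.5)

trials = 10_000
one_big   = [abs(estimate(100) - 100) for _ in range(trials)]
ten_small = [abs(sum(estimate(10) for _ in range(10)) - 100) for _ in range(trials)]

print(f"avg error, one 100-day estimate: {sum(one_big) / trials:.1f} days")
print(f"avg error, ten 10-day estimates: {sum(ten_small) / trials:.1f} days")
# The ten-small total comes out roughly 3x more accurate: the errors partly cancel.
```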

2. Increase the rate at which the estimator gains experience

The ability to estimate comes from learning from past experience of estimating. So get more experience! The way to increase your learning rate is to shorten the cycle time, i.e. the time it takes to produce something. Halving the cycle time doubles the rate of learning! Of course, the experience gained needs to be effectively captured (retrospectives, low staff turnover, etc.).

3. Ensure estimators are the people who are doing the work

Seems simple, but in a lot of cases the people doing the estimating are far from the actual work (or, sometimes, estimates are adjusted by people far from the work!).

4. Reduce dependencies

Typically, if a task requires 90% of one person and 10% of another, the 10% person will be the bottleneck because his/her focus is not on the task. This means that it’s difficult for him/her to be available precisely when needed. Eliminating this dependency will reduce variation in timelines. Feature teams help with this.

5. Get multiple inputs

Agile teams often use planning poker to harness the brainpower of the entire team to improve the estimate – it’s unlikely that one person has all the best information. This is a fun and easy thing to try.

6. Agree on which estimate we want

Many timeline estimates are simply the earliest possible time for which the probability of delivery is not zero. Did you want the most likely date (50% chance of being early, 50% chance of being late), or the date by which it is almost certain delivery will have happened? Make sure everybody agrees which estimate we are talking about.

[Figure: probability distribution of possible delivery dates]
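To make the difference concrete, here is a small sketch which assumes, purely for illustration, that possible delivery times follow a lognormal distribution. The “optimistic”, “most likely” and “safe” estimates can be weeks apart:

```python
import math
import random

random.seed(1)

# 10,000 simulated delivery durations (hypothetical lognormal model, median ~20 days).
durations = sorted(random.lognormvariate(math.log(20), 0.5) for _ in range(10_000))

optimistic = durations[int(0.05 * len(durations))]  # only a 5% chance of beating this date
median     = durations[len(durations) // 2]         # 50% chance early, 50% chance late
safe       = durations[int(0.90 * len(durations))]  # "almost certainly done by" date

print(f"optimistic: {optimistic:.0f} days, median: {median:.0f} days, safe: {safe:.0f} days")
```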

7. Understand the politics

There are also all sorts of human dynamics that can skew your estimate. For example:

  • If there are “punishments” for getting it wrong, folk tend to add buffers to their estimates.
  • If folk believe that the uncertainty associated with the estimate will not be communicated clearly (for example, to senior management), then they tend to add buffers. A solution a lot of agile teams use when doing an early guesstimate is “T-shirt” sizing, whereby the estimate is classified as Small, Medium, Large or Extra Large. Communicating guesstimates like this makes it clear that they are not very accurate.
  • If there is already a declared target (“we want this to be under $100k”, or “we want to complete this by June”), then folk tend to be anchored by it and don’t like to say it’s not possible. We all like to please our stakeholders.
  • Many fascinating studies show that people have a natural subconscious tendency to be overconfident in their estimating skills. If the estimator(s) have little data on past performance to help correct this, then it’s likely they will underestimate tasks.


Setting expectations on estimate accuracy

It’s tempting to set a team a goal to improve estimate accuracy. The risk with this (which I have seen in some teams) is that the quickest and simplest way of improving estimating accuracy is either to add undeclared buffers or to invest more time and resources before making the estimate. The latter effectively moves estimate creation to later in the lifecycle, which defeats the purpose of the estimate (which is to support decisions early in the lifecycle).

A better way of improving estimating accuracy is to set the team goals on reducing cycle time and reducing the size of items flowing thro’ the pipeline (as per the first two points above). By focusing on these general lean-agile practices, we also get an improvement in estimating accuracy!

Always busy?

Which is more important: to ensure our teams are fully busy, or to maximise the value delivered by the development pipeline? I’m hoping to have you consider that there may be times when it is in the company’s interest to pay people to “look out of the window” for periods of time.

Throughput decreases as capacity utilisation approaches 100%

Think about a highway – cars flow fine when there is little traffic. When the volume of traffic goes above about 80% of capacity, we start to get traffic jams. As the loading on the highway approaches 100%, we end up with almost no flow at all. The overall value delivered (i.e. people getting to where they want to go) is less. Models have been developed which give the underlying maths of why this is. Highway designers have actually taken these models to heart, and there are now places where cars are held by traffic lights on the sliproad at peak times in order to maintain a smooth flow on the main highway. The funny thing is that this improves everybody’s end-2-end journey time, including that of the cars which were held on the sliproad for a period. It’s counter-intuitive, as it probably doesn’t feel like that when you are sitting on the sliproad not going anywhere.

A development pipeline has similar characteristics to a highway. The inherent variation in both demand and supply means that throughput is maximised when the pipeline is run on average below full capacity.
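Queueing models capture this effect. As a hedged sketch (the numbers are invented, and a real pipeline is messier than a single queue), the classic M/M/1 formula says the average time an item spends in the system is 1 / (service rate − arrival rate), which blows up as utilisation approaches 100%:

```python
# M/M/1 queue: average time in system = 1 / (service_rate - arrival_rate).
service_rate = 10.0  # work items the pipeline can finish per week (hypothetical)

for utilisation in (0.5, 0.8, 0.9, 0.95, 0.99):
    arrival_rate = utilisation * service_rate
    cycle_time = 1 / (service_rate - arrival_rate)  # weeks per item, queueing included
    print(f"{utilisation:.0%} loaded -> {cycle_time:.2f} weeks per item")
# 50% -> 0.20, 80% -> 0.50, 90% -> 1.00, 95% -> 2.00, 99% -> 10.00
```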

An example

One of the teams I coach is effectively doing a whole release which is dominated by a technical upgrade. To simplify: this means the release requires the developers and testers, but not the analysts. What should the analysts do whilst the developers/testers are busy with this release?

Options are:

  • Worst option: do more analysis. Since analysis is not the bottleneck, all this will do is pile up work for the developers/testers. This pile will probably never disappear, because as soon as the developers remove something from it, the analysts will add a new item. We haven’t increased the value generated by the pipeline, but we’ve increased the cycle time and hence decreased agility and the speed of feedback/learning. Think of all the management and communication that will be required to look after this pile of half-finished work. The overall value generated is almost certainly negative.
  • Better option: the analysts look out of the window for a while. This has no negative effect on the value delivered. Of course, if this were a permanent situation (which in this case it isn’t), we should look into reducing the number of analysts.
  • Best option: the analysts learn over time how to help with development and test. They may not be the most efficient at these tasks, but such T-shaped individuals can move to the bottleneck and increase its capacity. This would enable the technical release to be ready earlier and hence increase the value delivered by the pipeline.

It’s counter-intuitive to accept that there are situations where the most value people can generate in the short term is to do nothing. We all like to be busy. But it’s not about the productivity of the individual – it’s about the productivity of the pipeline as a whole. What do you think?


A pipeline near you?


Retrospectives: Looking back to move forward

Retrospectives are a practice used by a lot of software teams – particularly popular with those using Scrum, since the “sprint retrospective” is part of the Scrum methodology. They are an effective and simple framework for continuous improvement (kaizen) for any team – not just those delivering software.

How retrospectives work

The team meets to look at what they have been doing and what they can do differently going forward. Mindset is important – in order for team learning to be effective, there is no room for finger-pointing. The retrospective prime directive captures this:

“Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.”

Frequency of retrospectives

Many teams organise a “lessons learnt” session after the project is done (or after a major release). This is too late! It’s too late to change anything, some events have been forgotten and energy levels are low. Retrospectives need to be scheduled at least monthly.

How retrospectives work in our coaching team

We “eat our own dog food”, so we have always had monthly retrospectives in the coaching team. For reasons that are lost in the mists of time (creating a relaxed environment, maybe?), we always hold our retrospectives in a bar. The facilitator of the session is not our team leader (me) but the person who facilitates our team kanban board (a role which rotates so that all team members get to do it sooner or later).



As a warm-up, we create a timeline of the key events which have happened since the last retrospective – handy for bringing what’s been going on to the front of our minds:

[Photo: timeline of key events since the last retrospective]

We then answer the 4 key questions – there are other possible sets of questions out there which we have experimented with (e.g. the retrospective starfish) but these four seem to work best for us:

  • What did we do well, that if we don’t discuss we might forget?
  • What did we learn?
  • What should we do differently next time?
  • What still puzzles us?

We are very low tech – we have one sheet of A3, divided into four areas, which we all write on at the same time.


[Photo: our four-question A3 sheet]

When we have all finished writing, we go fairly slowly through everything everybody has written and agree any actions, which we transfer to our team kanban board. The whole thing normally takes us about 2 hours – we’ve tried to do it in less, but it just hasn’t worked out for us.

So, let’s cut to the chase: what have we got of value out of this? I’d say two things. Firstly, the concrete actions we take as a result of the retrospective have improved the way we work – in many, many small ways. Also, by including the whole team, we have perhaps identified things that are about to become large problems, but right now are just small irritations, earlier than we otherwise would have. Secondly, it has helped instil a mindset in which the whole team is responsible for improving the way we work, and given us a sense of common ownership of our team process. It is taking us closer to that ever elusive goal: the self-managed team.

Try…

So does your team have a structure for regularly taking time together to look backwards to work out how best to go forwards? Do you like the idea of continuously improving the way the team is working but you don’t know how to approach this? Why not try running retrospectives with your team?

More about retrospectives at retrospectives.com and Seapine.

How do we prioritise technical debt?

Technical debt is one of the great “inventions” in software engineering of the 1990s. The idea, if you are not familiar with it, is that every time you cut a corner in the interest of delivering value now (i.e. delivering on the immediate needs of the project), you record it as a debt. The reason it is a debt is that if you don’t pay it back, it will impede your ability to deliver value in the future. Examples of technical debt include:

  • Lack of adequate documentation
  • Code that is unused; untested; uncommented; hard to understand
  • Non-compliance with Non Functional Requirements or Principles, Standards & Guidelines

Incurring technical debt can be the right thing to do, provided it’s incurred in a prudent and deliberate manner.

A fascinating reason for taking debt repayments seriously is the broken window theory. The theory has its roots in experiments from the 1980s, which showed that disorder and crime are linked – if a window in a building is broken and left unrepaired, all the rest will soon be broken. People seemingly get the message that nobody cares. The insanely complex nature of software systems means that similar behaviour is observed – nothing accelerates the descent into unmaintainability more than allowing technical debt to build up.

Process-wise, what is required is:

  • A process for recording technical debt and making it visible to stakeholders
  • A process for deciding when to pay it back

The first bit is easy. The second is tricky. The solution some teams I have worked with have settled on is allocating 50% of the development effort to paying down technical debt. Why 50%? Why not 40%, or 60%? I’m guessing that 50% was the maximum the team felt they could ask for without having an unacceptable impact on the delivery of value now.

A better approach would be to use an economics-based prioritisation algorithm, like cost-of-delay divided by duration (CD3). It’s not easy, since we would need to estimate the benefits/savings from paying down each element of the debt, but it’s worth it. An effective technical debt process avoids creating solutions that, just a few years after they were first created, we can no longer effectively change because the cost and risk are too high.
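As a hedged sketch of what that could look like (the debt items and figures below are invented for illustration):

```python
# CD3 = cost-of-delay divided by duration: economic return per week of effort.
debt_items = [
    # (name, cost of delay in $/week, weeks needed to pay the debt down)
    ("Automate the release pipeline",     8_000,  4),
    ("Rewrite the legacy pricing module", 12_000, 12),
    ("Add missing integration tests",     3_000,  1),
]

# Pay down the highest CD3 score first.
for name, cod, weeks in sorted(debt_items, key=lambda item: item[1] / item[2], reverse=True):
    print(f"CD3 = {cod / weeks:6.0f}  {name}")
```

The small, cheap item with a modest cost-of-delay wins over the big rewrite: CD3 naturally favours quick, high-leverage repayments.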

For the curious, see http://c2.com/cgi/wiki?TechnicalDebt for more about technical debt.

The usefulness of cost-of-delay: a real-life story

Here’s an interesting real-life story of a requirement from one of the delivery teams I have worked with, which emphasises how a focus on cost-of-delay helps.

The story concerns requirement RQ-0672, which took 46 weeks to go from being captured as an idea to being live in production. We have a fair idea of which weeks there were activities happening (green) and which weeks nothing happened (red), as shown in the value stream map below (see here to learn more about value stream mapping techniques):


[Figure: value stream map for RQ-0672 – active weeks in green, waiting weeks in red]

Observation 1: It’s pretty clear there was a lot of “waiting waste”. Most weeks, nothing happened!

We introduced the concept of cost-of-delay to this team – i.e. measuring the opportunity cost to the company for every week this requirement was not implemented. This turned out to be $214,000 per week. Cost-of-delay figures often have high levels of uncertainty (particularly if the benefits of the requirement are around increasing revenue, as in this case), but here the number is fairly certain.

This particular delivery stream is capable of delivering a requirement like this within 13 weeks. Had this been delivered within 13 weeks instead of 46, i.e. 33 weeks earlier, this would have improved the company’s revenue by:

$214,000 x 33 = $7,062,000

Wow. Big number, particularly in the context that the actual cost of making the change was around $4,200.

Observation 2: There can be significant value in reducing cycle time.

So why didn’t we make it happen earlier? The pipeline was set up without a focus on cost-of-delay, which meant there was no sense of the urgency associated with each requirement – they were all treated the same. Once we know the urgency, we know what trade-offs to make.

In this case, you could say that it makes sense for the company overall to spend up to $214,000 for every week we could bring forward the go-live date for this requirement. You can also see from the value stream map that there was a proof-of-concept done for this requirement. This proof-of-concept took a week to execute, plus some mobilisation time – probably several weeks; it is hard to know exactly. Let’s say 2 weeks of mobilisation for the sake of argument. The cost of delaying the go-live by 3 weeks to do the proof-of-concept was then:

3 x $214,000 = $642,000

The purpose of this proof-of-concept was to be able to accurately estimate the cost of the change. Remember, the change actually came in at a cost of $4,200. Given the enormous benefits, it would still have made sense for the company to implement this change if the cost had come in at $4,200,000. Perhaps we should have skipped the proof-of-concept stage for this requirement!

Observation 3: Not all estimating activity is truly value-adding

I have selected an extreme example, of course. Even so, it’s real. It happened. It serves to emphasise that some requirements have a high cost-of-delay, which needs to be managed. A good innovation process design should expedite these types of requirements by:

  • Prioritising quickly based on cost-of-delay (ideally within a week)
  • A frequent “pull” of the top requirement from the dynamic, prioritised list of requirements
  • Fast cycle time from the time of “pull” to go-live through a pipeline which is not clogged up with too many other requirements

Lean-agile risk management. Is there such a thing?

An interesting area of discussion is whether a lean-agile approach contributes anything to effective risk management. Do lean-agile practices effectively mitigate risk? Some say yes and some say no.

No!

There are, I believe, currently no lean-agile practices which are designed to identify the most important risks and then focus on mitigating and tracking them. Other (traditional) project management approaches have tools and practices for this – for example, the creation and ongoing review of a risk log. Yet this is missing from the lean-agile toolkit. Lean-agile practices need supplementing with a traditional risk management practice.

Yes!

The traditional risk log approach is of limited effectiveness. If you ask a team what the top three risks it faces are (go on, try it), it’s rare that the team comes up with a consistent answer. Even when a risk log helps identify risks, analysing the true root cause of a risk in any particular case can be difficult.

By implementing lean-agile practices, teams get access to ways of mitigating the highest risks that an IT development project faces as evaluated by broad industry experience.

Take the Agile Manifesto. It was created by a group of experienced people. Apparently, it was tough to find things they agreed on (even though they were experienced, the number of projects they had worked on was still small, so the number of data points they had was also small). They did manage to agree on the agile principles behind the manifesto. Each principle addresses one or more key underlying risks facing an IT development project. I’ve had a go at identifying the implied risks below. Would you think of these when you start an IT development project? Would you agree that these are the key risks?

Principle → Implied risk:

  • “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.” → The delivered solution generates little or no value; unused features.
  • “Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.” → The solution could not be built as intended; the solution delivers no value; things external to the project change.
  • “Business people and developers must work together daily throughout the project.” → Business people and developers do not communicate quickly and effectively.
  • “Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.” → Environments and support not available; teams not empowered.
  • “The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.” → Geographical distribution of teams and over-reliance on electronic communication lead to ineffective communication.
  • “Working software is the primary measure of progress.” → Focus on internal goals instead of meaningful ones (delivery of working software).
  • “Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.” → Unsustainable development (pace, architecture, …).
  • “Continuous attention to technical excellence and good design enhances agility.” → Lack of focus on technical excellence.
  • “Simplicity – the art of maximizing the amount of work not done – is essential.” → Complex code; doing work that doesn’t need to be done.
  • “The best architectures, requirements, and designs emerge from self-organizing teams.” → Ineffective teams due to poor team structure/task management and the team not being empowered to change this.

What do you think?

8 tips on managing requirements

I have been looking recently at managing requirements for new projects. I thought it would be interesting to share our top lean-agile tips/encouragements for managers of requirements (i.e. the product owner role in agile terminology):

Tip #1: Do only enough requirements work up front to enable you to build the first (small) feature. Then learn and adjust!

How much do we need to work out up front, and how much can evolve as we build the solution? Defining all the requirements up front (Big Requirements Up Front) seems attractive. It means that there can be a contractual-style relationship between business folk and solution builders. Commitments can be made and, if an external vendor is involved, fixed-price deals can be struck. If dates, budgets etc. are not met, then punishment can be handed out.

Yet this is not the way the IT industry is going. The biggest risk in IT development is that the solution delivers little or no value (unused features). The whole agile movement has this in focus. The IT industry has learnt that Big Requirements Up Front actually increases this risk, because it delays the point at which we get feedback on our requirements. There are always large uncertainties in what the customer needs and how to build it. It’s never been done before! A lean-agile approach emphasises learning – build a bit and get a response, so we don’t have to get it right first time.

Wait, I hear you say. Doesn’t this mean that you might not take all business requirements into account from the start and have to refactor/reimplement certain features later? Yep, it probably does mean this. IT industry experience suggests that these refactoring costs are typically much smaller than the costs required to get it right first time. (See here for more details.)

Tip #2: See requirements as a placeholder for a conversation, not the conversation itself.

Consider requirements as a placeholder for a conversation between the people who have a business need and the developer writing the code. They are not the conversation itself.

To align everybody around the direction we want to head in, write a short, up-to-date document describing the project vision (better a short document which everybody has read than a longer one that hasn’t been read at all). Beyond this, user stories are usually the best format for capturing requirements, since they capture the intention of the requirement (“in order to”) and who wants it (“as a”) – things which are otherwise often lost. e.g.

AS A: Flickr member
I WANT TO: be able to assign different privacy levels to my photos
IN ORDER TO: control who I share which photos with

Tip #3: Break requirements down

User stories can be arranged into hierarchies, with top-level stories broken down into smaller stories. Keep breaking the work down until you get to a feature that can’t be broken down any further while still retaining some business value (a minimal marketable feature, in the jargon).
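Purely as an illustration (the stories and structure below are invented), such a hierarchy might be sketched like this, with the leaves being the minimal marketable features:

```python
# A hypothetical user-story hierarchy; the leaves are minimal marketable features.
story_tree = {
    "Share photos with control over privacy": {
        "Assign privacy levels to photos": {
            "Mark a photo as private": {},
            "Share a photo with named friends": {},
        },
        "Manage a default privacy setting": {},
    }
}

def leaves(tree):
    """Yield the smallest stories - the ones we actually schedule and build."""
    for name, children in tree.items():
        if children:
            yield from leaves(children)
        else:
            yield name

print(list(leaves(story_tree)))
# ['Mark a photo as private', 'Share a photo with named friends',
#  'Manage a default privacy setting']
```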

Tip #4: Define acceptance criteria up front

How will we know when a requirement has been delivered? Ensure user stories have acceptance criteria so we know when we are done.

AS A: Flickr member
I WANT TO: be able to assign different privacy levels to my photos
IN ORDER TO: control who I share which photos with
ACCEPTANCE CRITERIA:
– A user cannot submit a form without completing all the mandatory fields
– Information from the form is stored in the registrations database
– Protection against spam is working
– Payment can be made via credit card
– An acknowledgment email is sent to the user after submitting the form.
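Criteria like these read almost like tests, and many teams do automate them. Purely as an illustration (the `submit_form` function and its behaviour are invented stand-ins, not a real API), acceptance criteria can often map one-to-one onto automated checks:

```python
def submit_form(form):
    """Invented stand-in for the real submission endpoint."""
    mandatory = {"name", "email", "payment_card"}
    if not mandatory <= form.keys():
        return {"accepted": False, "error": "missing mandatory fields"}
    return {"accepted": True, "email_sent": True}

def test_rejects_incomplete_form():
    # "A user cannot submit a form without completing all the mandatory fields"
    assert submit_form({"name": "Ada"})["accepted"] is False

def test_sends_acknowledgement_email():
    # "An acknowledgment email is sent to the user after submitting the form"
    form = {"name": "Ada", "email": "ada@example.com", "payment_card": "4111 ..."}
    assert submit_form(form)["email_sent"] is True
```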


Tip #5: Collaborate like crazy, preferably face-2-face

Two of the principles behind the agile manifesto are: “the most efficient and effective method of conveying information to and within a development team is face-to-face conversation” and “business people and developers must work together daily throughout the project”. Geographic constraints may hinder this, but try to get as close to this ideal as you can. Be available throughout development, testing etc. for questions (e.g. attend the daily stand-up). If you don’t do this, the fast feedback model won’t work.

Tip #6: Use CD3 for prioritising and be prepared to stop early

Use the economic model cost-of-delay divided by duration (CD3) as the basis for prioritising requirements. This also means that we have the option to stop early once we have delivered the most valuable requirements.

Tip #7: Smooth the end-2-end requirements flow

Requirements need to flow into a development team. There is no point developing requirements too early (requirements go stale very quickly). Equally, there is no point having a development team standing idle. Ensure the people who are actually going to build the solution (i.e. coders, not managers) participate in the refining of the user stories. If these people are not available now, wait until they are. It’s counterproductive to create a large queue of requirements that are waiting to be implemented.


Tip #8: Explain to business stakeholders how they need to behave

It’s helpful to explain to business stakeholders what kind of behaviour from them would support a successful outcome:

  • Charter the team to deliver the best possible value in a fixed timescale. Be cautious about asking for a fixed scope by a fixed date. The team will probably have to do Big Requirements Up Front (BRUF) in order to have enough information to commit, which significantly raises the risk of failure. It may also mean that invisible factors, like quality, are compromised.
  • Provide a $ cost-of-delay/week and trust the team to make the right trade-offs. Saying “this is top priority” or “the board wants this” gives the team no guidance about how to trade off against other high-priority items.
  • Trust the team.
  • Make sure business people are available to quickly give feedback.
  • Insist on getting to see something of business value working in production quickly (30-90 days from project start).
  • Insist on a weekly demonstration of progress.
  • Insist on regularly attending the team stand-up.
  • Insist that the team commit to dates on near-term work items (typically for the next 2-4 weeks).
  • Insist on an up-to-date roadmap covering a longer period. Don’t expect a lot of detail on this.
  • Stop/abandon the project early if you feel you’ve got enough value.