
The Good, The Bad and The Ugly of Large Scale Scrum (LeSS)

Having just attended Craig Larman's 3-day LeSS (Large Scale Scrum) training course, I'm full of thoughts:

The Good

LeSS comes from a good place – a scaled version of scrum! There is a real effort not to add anything that is not absolutely necessary when scaling to multiple teams, and the spirit of scrum is preserved. Craig has poured a lot of relevant experience into LeSS. I sharpened up a few misaligned assumptions I had about scrum – it turns out I didn't know single-team scrum as well as I thought. Then there are all sorts of practical implications of multi-team scaling which Craig has lots of really good thoughts on: continuous integration, communities of practice, the implications of self-organisation for hiring and firing practices, and so on. There was a lot for me to learn there and I think I need to reread the LeSS books (despite having read them already, it seems I could get more juice out of a second pass – see the end of this post for links. Read these books!).

As with scrum, LeSS has an appealing purity – it describes the perfect end state (as opposed to the messier approach of the Scaled Agile Framework). LeSS is marginally better than scrum in describing how to facilitate some of the steps on the journey.

The Bad

LeSS proposes the immediate elimination of all overhead roles (project managers, testing groups, component development groups) as a first step, and the forming of feature teams aligned to customer value. Whilst a laudable idea and a direction to look in, in the organisations I have worked with the complexity of the business domain and the state of the legacy code mean a wholesale adoption of the approach would be bounced out immediately. "You just don't understand our business," they would say. "Take your theories elsewhere." Remember Conway's Law? Systems resemble the organisations which built them. Slow, large, complex organisations lead to slow, large, complex software. "Our business is slow, large and complex – wouldn't changing our product development organisation just create a mismatch?"

When pressed for examples of where his approach had worked, Craig provided a lot of general references but was a bit short on details. I'd like to hear from large organisations that are not primarily software-product focussed and have actually done this. It doesn't seem to me that LeSS is well matched to the kinds of organisations I have worked with. It seems to be for a small sub-set of software R&D focussed organisations who are so desperate they will try extreme measures.

And The Ugly

There will be massive resistance from middle management to introducing LeSS – since most of their roles are eliminated. Craig's solution is to ensure support from senior management. I (and, I project, everybody in the room except Craig) have never come across a situation where senior management would be willing to provide this necessary support. They just don't see it as enough of a problem to go through the pain. Craig walks away from organisations that don't provide the senior support for the change – otherwise he will just be "rearranging the deckchairs on the Titanic." OK, fair enough. I project there are, maybe, only 1 in 1000 organisations who have the senior management support for LeSS. What about the other 99.9% of the market? How does LeSS help them take the next step on their agile journey (by the way, Continuous Improvement is one of the ten principles of LeSS)? It doesn't. LeSS doesn't provide any help with this, since Craig has no experience of making the problem visible to senior management to the extent that they are ready to take action.

In short, cherry pick ideas from LeSS. As a generalised scaling approach – a big thumbs down! The industry is still waiting for a good scaling approach!


Starting Scrum: Inception Phase & Sprint 0

Scrum, as the predominant agile approach, is maddeningly simple – have a single team deliver a potentially shippable increment every couple of weeks. It says nothing about how to get to this promised land! So, how do we get there from a standing start? The transition to regular value delivery can be split into three phases:

  • Prior to forming the team (Inception)
  • Team formed but not yet sprinting (Sprint 0)
  • Team sprinting

Prior to forming the team (Inception)

What kind of team do I want? Does it make commercial sense? How will this fit in with all the other stuff going on round here? Are business stakeholders aligned that we should do this? These are all good questions to ask up front, so many companies have a process for this – often very PRINCE2-project oriented.

Where can we get agile inspiration for this? The portfolio level in the Scaled Agile Framework (SAFe) provides some guidance (lightweight business cases, epics, etc.). It separates the flow of work from the creation of capacity to do the work (motto: "move the work to the people, not the people to the work"). SAFe almost kills off the project notion entirely in the interest of having stable (and therefore high-performing) teams. This is a good direction to look in. If this is a step too far for you, Disciplined Agile Delivery (DaD) has a specific name for this early phase: Inception. The DaD Inception Phase includes:

  • Form Initial Team
  • Develop Common Vision
  • Align with Enterprise Direction
  • Explore Initial Scope
  • Identify Initial Technical Strategy
  • Develop Initial Release Plan
  • Secure Funding
  • Form Work Environment
  • Identify Risks

Sounds like the right kind of things, doesn't it? The challenge is that these are all unbounded activities – how do we prevent endless polishing that ultimately delays finding out whether we have something of value or not (by building a bit and getting it out there)?

I've had reasonable experience with setting a target time for these Inception activities, but anything that involves a gateway (typically Secure Funding) cannot be fully time-boxed because, if the funding committee says "No! Not good enough," the proposal is bounced back to be further refined. (By the way, it's a good principle of process design that you should not have a stage gate without a way of limiting work-in-progress upstream.)

Team formed but not yet sprinting (Sprint 0)

The team are now here (incidentally, recruitment of team members can typically take 3+ months, so Inception can be quite long). Should they start sprinting immediately? Most practitioners use a Sprint 0 in which the team prepares itself to deliver value. What should be in Sprint 0? Here are some suggestions:

Architecture

  • Architectural goals/approach identified and made visible.
  • High level architectural milestones understood
  • Dependencies and risks have been identified and made visible.
  • High-level conceptual design has been completed.

Environments

  • Network requirements arranged
  • Minimum environments ready (Development/test)
  • Development machines ready (Local development environments)
  • Logistic requirements in place (phone, desk, etc.)
  • Tools for testing, coding, integrating, and building have been selected and installed

Team

  • The team has received required training
  • Roles and responsibility have been defined
  • Team board is set up
  • Stakeholder map created
  • Definition of done agreed.

When is Sprint 0 done? Is it fixed scope (all these things must be done) or fixed time (do as much as you can in 2 weeks)? My encouragement is that it should be run primarily on a fixed-time basis (as much as possible in 2 weeks) except for one item. This one:

AS A: Scrum team
I WANT TO: Create a Definition-of-Done and a tiny bit of working software ("hello world") that fully meets this Definition-of-Done
IN ORDER TO: Ensure the development pipeline works end-to-end
ACCEPTANCE CRITERIA
- Demonstrated/Reviewed by Product Owner


The Definition-of-Done should include releasing software into a production-like environment (or even better, to production itself). The reason I like this a lot is that it drives out all the problems around environments, documentation, version control, testing, security, release management, etc.
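To make that concrete, here's a minimal sketch (Python, with a made-up staging URL) of the kind of automated smoke test a team might wire into its pipeline to prove the "hello world" increment really is deployed and reachable – an illustration, not a prescription:

```python
# Minimal smoke test for the Sprint 0 "hello world" release.
# The URL below is a placeholder - point it at wherever your
# production-like environment exposes the deployed increment.
import sys
import urllib.request

TARGET_URL = "https://staging.example.com/hello"  # hypothetical environment


def smoke_test(url: str = TARGET_URL) -> bool:
    """Return True if the deployed 'hello world' responds as expected."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            body = response.read().decode("utf-8")
            return response.status == 200 and "hello" in body.lower()
    except OSError as error:
        print(f"Smoke test failed: {error}", file=sys.stderr)
        return False


if __name__ == "__main__":
    # A non-zero exit code fails the pipeline run.
    sys.exit(0 if smoke_test() else 1)
```

If even a script this small can't pass against a production-like environment, that's exactly the kind of problem Sprint 0 is there to surface.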

If you are not able to demonstrate in Sprint 0 that you can release to a production-like environment, you're simply not ready to sprint.

Team sprinting

Once the team is sprinting, the sprint retrospective within scrum is the mechanism whereby the team takes time to check whether they are (still) in the promised land of regular value delivery and to make changes to how they work as necessary.


Ready for agile? A test for your organisation

I heard a tale of an agile coach who had a rule as follows: if an organisation is using Internet Explorer version 6 then they are uncoachable (the latest is version 11). This was based on his experience – he had never got anywhere with a company that was so far behind the curve. The version of Internet Explorer in use is an indicator of something about the organisation that is directly related to its hunger to absorb new ideas about work (i.e. the agile revolution).

Adoption of new ideas is characterised by the technology adoption lifecycle shown below:

This curve suggests that a small number of people/organisations leap on new technology. The majority take some more time and a handful are really, really slow to jump on board. (See this fun 3-minute video of people dancing at a festival to get this.)

Individuals who champion agile within large organisations are typically early adopters. The primary need of these individuals is to see that it works – ideally much better, not just a bit better than what went before. Well, agile really does work – much better than anything else we know of so far. So these people totally “get” agile.

Often these agile champions are bemused and confused by the pushback from their organisation when they try introducing agile ideas. This is because their organisation as a whole is not an early adopter of an agile approach. Their organisation is in the early majority, late majority or laggard category.

In Crossing the Chasm, Geoffrey Moore suggests that viewing individuals and organisations through this model leads to two insights:

  • Pick off one group at a time. Moore suggests it's most effective to work the curve from left to right, targeting sales and marketing efforts at one group at a time. Once this group is on board, move to the group to the right. With agile adoption, we could say that the early adopters are on board and it is now the early majority that should be in focus.
  • Early adopters and the early majority have different needs. This is the "chasm" which agile needs to cross. Early adopters care only whether agile works or not. They want to get ahead of the competition and don't mind disruption to the organisation or an approach which is not perfect. The early majority have other concerns – they are looking primarily for a productivity gain to their existing way of doing things and don't want major disruption. They need to hear that others in their industry are adopting it. They want to be sure their "agile supplier" is a market leader with a good reputation, and they feel comfortable if there is choice/competition between different suppliers/approaches. Above all they prize the stability and effectiveness of their organisation as it is today and want to minimise any rocking of the boat. They prefer a series of small changes to one large bang – unlike early adopters, who are looking primarily for large step changes in performance (which normally implies a big bang).

So, to return to versions of Internet Explorer: perhaps organisations that are, say, in the early majority for one thing tend to be in the early majority for everything? If your organisation is a laggard with browser versions, will it also be a laggard with respect to agile adoption? What else might be correlated with this? I don't have enough data to validate the test below (and it is culturally specific), but score your organisation anyway. Give yourself one point for each of the following:

  1. Green tea is available at the office.
  2. 360 degree feedback is the primary form of appraisal.
  3. You can get a new laptop within a day of requesting it.
  4. iPhones not Blackberrys.
  5. The corporate intranet is a Wiki, not Sharepoint.
  6. Free bowls of fruit available in the office.
  7. Guest wifi freely and easily available.
  8. Widespread use of open video technology (Skype, Google Hangouts, Facetime).
  9. A new access badge is issued within half a day of requesting it.
  10. Not using Internet Explorer 6!

Score as follows:

  • Score 9-10: Your company is an early adopter and likely to already be doing agile!
  • Score 4-8: Your company is in the early majority. Fair chance of successful wide-scale agile adoption in the near-future.
  • Score 0-3: Oh dear, your organisation is in the late majority or a laggard. Move to another company or wait (possibly a long time). Agile is unlikely to take hold in your organisation any time soon.

Cracking SAFe

Evaluating whether the Scaled Agile Framework (SAFe) could be something to experiment with in your organization? If so, here is a personal view.

If you want the one line summary: fundamentally flawed but will add value in most organizations.

Hats off to Dean Leffingwell et al. for "making early and meaningful contact with the enemy." The product isn't perfect but they have got it out there and are getting feedback from the market. Now the SAFe authors need to demonstrate their agile credentials by rapidly iterating this product to become even better. I hope that the market success the framework currently enjoys will stimulate a period of rapid innovation in approaches to "scaling agile," both within and outside of the SAFe framework.

The good

  • Defines a program level heartbeat. The program layer in SAFe provides a way of scaling up scrum – a sort of scrum for a team of teams. It provides roles, events, artifacts etc. for this level of activity.
  • Encompasses lots of other valuable frameworks. Much of the agile good stuff appears somewhere: most of extreme programming, an adapted version of scrum, some kanban, some devops and a nod to lean product development. It's good to get an overview picture of this.
  • Acknowledges the realities facing many companies. SAFe proposes, for example, a releasable product at least every 3 months, which is perhaps a more realistic target for many enterprises than scrum’s every 1 month or less.
  • It's probably a lot better than what most companies are doing. Agile thought leaders might deride SAFe because it represents a step back from where their thinking is. Yet it could well be a mega-step in the right direction for the people it is aimed at: large companies struggling with large-scale software development.

The bad

  • Assumes big is beautiful. The SAFe approach assumes you have a program of 5-12 teams. There is no solution proposed for 1-4 teams. There is no questioning of whether you need this many teams, or of what you could do to reduce the number of teams over time. There is broad agreement that scaling the number of teams up from one is the very last thing you should try, when you are really out of other options. In SAFe there is no encouragement or support for smaller programs.
  • Stomps on scrum. SAFe talks a lot about scrum but breaks the core scrum rules that: a) the product owner is one person (in SAFe the product manager shares some of this role), b) a potentially useable product is produced within a month (SAFe says 8-12 weeks) and c) there are no dependencies outside a team of 9 people or fewer. Many companies struggle to implement these, but at least with true scrum they know where they should be heading. A real fear is that companies will do SAFe because they don't have the courage to do scrum. Consequently they won't really get much juice out of the agile revolution.
  • Not much help for getting from here to there. SAFe is so massive and sprawling that it is hard to know what is important, where to focus first and what we can leave to later. There is no process for the adoption of SAFe. It seems implied that it's a big bang – i.e. SAFe framework adoption is not an iterative, adaptive process in itself. Where are the inspect-and-adapt activities on the adoption of the framework itself? There is no help for the organisation to "uncover better ways of developing software."
  • Prescriptive tone. SAFe says… do this, do that. Very little of how it is set up refers back to any underlying principles. It's also a one-size-fits-all model. SAFe is rooted in the experience of its authors and the tone is authoritarian. Like all of us, they have limited experience. How many companies have they really implemented all of this with? Contrast that with the work of Larman & Vodde, who present their experience in fairly humble tones as patterns that worked for them and that others could try.

The indifferent

Some minor gripes…

  • The top (portfolio) layer is pretty thin. The portfolio layer is a valiant attempt to complete the enterprise picture. In most companies, how projects and programs get started is a murky political affair. Initiatives with significant backing will always circumvent defined processes. I see projects and programs as being like laws and sausages – it's better not to see them being made. I can't see how imposing a simple standard process model at this level adds any value.
  • Weighted-shortest-job-first prioritization is not implemented correctly. This is an economic approach, and as such the cost of delay needs estimating in $ or £ or whatever. Relative estimation of cost of delay is just wrong: it gives the team(s) no information about what the company might be willing to pay to expedite the work. Using relative values is the easy way out (see the sketch after this list).
  • Not clear what is and isn't in SAFe. Is it a toolkit? A source of inspiration? When can I say I am doing SAFe? Not clear.
  • Process over Individuals & Interactions. SAFe does have values like transparency and alignment, but most of its thrust is around the big process – which doesn't seem terribly agile. This is also, ahem, a criticism you could make of scrum – yet scrum says so little that it can easily be justified as "just enough process". The process picture appeals to management. Is this just pandering – giving them what they want, rather than what they need?
  • Slow (8-12 week) program inspect & adapt. This seems an awfully long learning loop. Contrast it with Larman & Vodde's Frameworks 1 & 2, which have a joint retrospective at the end of each sprint.
  • Responsibilities between scrum teams, the System team and DevOps. As written, it seems like the DevOps team is the Ops part of DevOps and the System team is the Dev part. That doesn't feel like true DevOps. Also, the System team seems to have a lot of responsibility for testing the system etc. This will promote a lack of ownership of system issues in the scrum teams.
  • The different agile approaches in SAFe don't join up. The SAFe training material includes, for example, a summary of lean product development (batch sizes, queues, …), yet this work is rarely cross-referenced in any of the other chapters.
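On the weighted-shortest-job-first point above: here's a tiny sketch (invented names and numbers, nothing to do with SAFe's own tooling) of what prioritisation looks like when cost of delay is expressed in money per week rather than relative points – the ranking then tells you what expediting each item is actually worth:

```python
# Weighted shortest job first with cost of delay in money terms.
# Feature names and figures are invented for illustration.

features = [
    # (name, cost of delay in $ per week, estimated duration in weeks)
    ("Regulatory report", 50_000, 4),
    ("Checkout redesign", 20_000, 1),
    ("Internal dashboard", 5_000, 2),
]


def wsjf(cost_of_delay_per_week: float, duration_weeks: float) -> float:
    """WSJF score: cost of delay divided by estimated duration."""
    return cost_of_delay_per_week / duration_weeks


# Highest score first: short jobs with a real cost of delay jump the queue.
for name, cod, weeks in sorted(features, key=lambda f: wsjf(f[1], f[2]), reverse=True):
    print(f"{name}: ${cod:,}/week of delay, {weeks} week(s) of work, WSJF = {wsjf(cod, weeks):,.0f}")
```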


Five reasons why you might not really be doing scrum after all

Scrum is the best known and most widely adopted agile approach. When a manager in a large organisation says to me that their team(s) are doing scrum, my suspicion is that they don't really know what scrum is because, well, it's really hard for most large organisations to do scrum.

The formal definition of what scrum is is the Scrum Guide. The key sticking points in the guide are:


  1. End-2-end cycle time of less than 1 month. "The heart of Scrum is a Sprint, a time-box of one month or less during which a "Done", useable, and potentially releasable product Increment is created." This means that if marketing decides the product is good enough at the end of the sprint, it can go out to customers with negligible further technical work. Useable doesn't mean a prototype or a product that needs further testing of some kind (regression, security, …), since that would mean the output of the sprint wasn't useable in itself. Scrum is in stark contrast to SAFe, which suggests that one can have hardening sprints (HIP sprints) to sort these kinds of things out before every release. Scrum effectively says: be ready to do a release at the end of every sprint. So, can your scrum team(s) really go from prioritising a feature to a potential product release in a month or less?
  2. No dependencies outside the team. "Cross-functional teams have all competencies needed to accomplish the work without depending on others not part of the team." To deliver the product, there are no dependencies outside the team. Everybody needed is in the team: infrastructure, architecture, documentation, … Note that the Scrum Guide also says "Having more than nine members requires too much coordination," so you can't make the team very big to solve this problem. Large organisations with lots of departments responsible for different parts of the product development process struggle with this.
  3. Fully empowered product owner. "The Product Owner is one person, not a committee." Scrum is very clear – the one person who is the product owner has full authority over product prioritisation decisions. Most companies have competing departments who all want their say in how the product should be, and they struggle to devolve responsibility to one person who is low enough in the hierarchy to have time to fulfil the product owner role.
  4. Team members all have the title "developer". "[Development team is] self-organizing. Scrum recognizes no titles for Development Team members other than Developer, regardless of the work being performed by the person; there are no exceptions to this rule." A key barrier to effective self-organisation is typically entrenched job roles: "I'm a tester," or "I'm a business analyst." Human Resources (HR) departments in large companies love this, as somebody can be a Junior Tester and somebody else a Senior Tester, which maps to HR's job and pay scales. Some people like this (typically those with "senior" in their title) because it highlights their skills, makes them feel special and makes it easier to justify why they should be paid more than the other guys. It can also affect reporting lines (e.g. all the "testers" need to report into the testing manager). It's hard to find people who will actively support the notion that we are all "developers."
  5. Scrum Master as facilitator. "The Scrum Master is a servant-leader for the Scrum Team." Most companies struggle with both the notion of servant-leadership and the facilitating/coaching nature of the Scrum Master role. Sometimes Scrum Masters are seen as project managers, accountable for team success, which is really not what is intended. Other times they are simply ignored. Scrum defines them as being responsible for ensuring scrum is understood and enacted – they are process coaches, making sure the scrum process framework is being adopted. They are not part of the product development process itself but a facilitator of how that process runs and adapts itself in the light of new learnings or changing demand.

So, are you really, really doing scrum?


Agile PMO: Four Questions for IT Management

Just finished reading "Best Business: The Agile PMO – Leading the Effective, Value Driven, Project Management Office, a practical guide" by Michael Nir. Bit lightweight really. The most amusing bit is the way he lists how PMOs often destroy value:

  • Focused on tools
  • Focused on processes
  • Focused on standardisation
  • Focused on management's needs (for reports, policing, the appearance of being in control, …)
  • etc.

… and so on. His key point is that a PMO needs to be focused on maximising the value generated by the organisation it is serving. Yep, I buy that. Everybody in the IT department needs this focus. The issue is that… well… the book is a bit short on answers about how to do this.

Let's start from basics. Do we need a PMO? IT management should also be primarily concerned with maximising the value delivered. Four questions that projects by themselves struggle to answer may help with this:

  1. Do we consistently work only on the most valuable projects (and not too many of them)?
  2. Do we consistently ensure bottleneck resources are allocated to projects in the best interest of the company?
  3. Do we consistently identify bottleneck resources and what can be done to increase their capacity?
  4. Are we effective in transferring learning from one project to another?

The question is not whether IT management do all these things themselves, but whether they know that these things are happening properly ("work on the system, not in the system").

I’m guessing most IT management teams would answer “Don’t know” to these four questions.

I suggest that responsibility for these questions cannot be delegated to a permanent side organisation (a PMO), since these issues are all too difficult to solve without management's direct involvement – which is why projects struggle with them by themselves.

I can see a case for having a temporary change organisation charged with helping the organisation adopt practices which help with the above. Some examples might be: Kanban (helps identify and manage bottlenecks), T-shaped individuals (increases capacity at the bottleneck), Cadenced resource scheduling (helps with allocation of scarce resources), Cost-of-delay (identifies the most urgent requirements – good for all types of prioritization discussions), Retrospectives (good for capturing learning), etc.

None of these practices are magic bullets and so, in my vision, it goes on in endless waves: management review the 4 questions, decide which one(s) most impede value delivery, identify some practices to embed in the organisation which might help, set up a temporary change programme to drive this embedding and, after a while, the programme is over and it's time to review the 4 questions again.


Not all variation is evil: Six Sigma and Product Development

Not all variation is evil. Did I really say that? Many companies that have implemented Six Sigma will be shocked. The Six Sigma police will be knocking on my door pretty soon. This post examines an idea I find fascinating: that sometimes, in a product development context (as opposed to a manufacturing or even a service context), increasing variation can actually create value.

Benefits variability

Let’s look at variability in the benefits enabled by a development pipeline. The fundamental discovery nature of product development (customer doesn’t know what they want, developer doesn’t know how to build it, things change) means that we might know, at best, some kind of probability for the likely benefits.

Consider the following project types:

  • Project A (low reward, low risk) has a 90% chance of making a profit of $0.1m
  • Project B (higher reward, higher risk) has a 20% chance of making a profit of $10m

(These two projects have the same cost.)

If I were interested in minimising the variability of benefits, then I would only invest in projects of type A (low reward, low risk). If I invest in 10 of these projects, 9 of them will likely give me a profit of $0.1m and one will give me a profit of $0. My total benefits would likely be 9 * $0.1m = $0.9m.

On the other hand, if I invest in 10 projects of type B (higher reward, higher risk), then it is likely that 2 of them will generate $10m and the other 8 will generate $0. Total benefits would then likely be 2 x $10m = $20m, but with much greater variability.

The variability of our benefits has increased, but so too has the value to the company! So in this situation, accepting more variability maximises value.
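For the avoidance of doubt, here is the expected-value arithmetic behind that comparison, as a trivial sketch of the numbers already given above:

```python
# Expected value of the two portfolios described above.

def expected_portfolio_value(probability: float, profit: float, projects: int = 10) -> float:
    """Expected total profit from investing in `projects` identical bets."""
    return projects * probability * profit


low_risk = expected_portfolio_value(probability=0.9, profit=0.1e6)   # 10 x Project A
high_risk = expected_portfolio_value(probability=0.2, profit=10e6)   # 10 x Project B

print(f"10 x Project A: expected profit ${low_risk / 1e6:.1f}m")    # -> $0.9m
print(f"10 x Project B: expected profit ${high_risk / 1e6:.1f}m")   # -> $20.0m
```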

Investing in product development is like poker in this respect – champions don’t win every hand but they do maximise their overall winnings!

Changing the economic payoff function is easier than reducing the underlying variability

The example above highlights that we are not actually interested in variability per se – this is just a proxy variable for what we are really interested in – the economic costs of variability. This economic cost depends on how variability is transformed by an economic payoff function. In the example above the economic payoff of Project A ($0.1m) was tiny compared to Project B ($10m) which prompted us to choose the greater variability of Project B and still maximise value.

It's often easier to change the underlying payoff function than it is to reduce the underlying variability. To take a different example: reducing the number of errors generated quickly gets tricky and expensive. An alternative strategy is to improve the speed at which errors are fixed, which reduces the economic impact of an error occurring. Improving the speed at which errors are fixed is normally (relatively) cheap and easy.
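To put some (entirely invented) numbers on that trade-off: the expected monthly cost of errors is roughly how often they occur, times how much each hour of impact costs, times how long they take to fix – so speeding up fixes buys as much as reducing the error rate, and it's usually the cheaper lever. A rough sketch:

```python
# Toy comparison: reducing how often errors occur vs. how long they hurt.
# All numbers are invented for illustration.

def expected_error_cost(errors_per_month: float, cost_per_hour: float, hours_to_fix: float) -> float:
    """Rough expected monthly cost of errors."""
    return errors_per_month * cost_per_hour * hours_to_fix


baseline = expected_error_cost(errors_per_month=10, cost_per_hour=1_000, hours_to_fix=8)
fewer_errors = expected_error_cost(errors_per_month=5, cost_per_hour=1_000, hours_to_fix=8)
faster_fixes = expected_error_cost(errors_per_month=10, cost_per_hour=1_000, hours_to_fix=2)

print(f"Baseline:          ${baseline:,.0f} per month")
print(f"Halve error rate:  ${fewer_errors:,.0f} per month (often hard and expensive to achieve)")
print(f"Fix 4x faster:     ${faster_fixes:,.0f} per month (often relatively cheap to achieve)")
```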

As a general rule, fast feedback is the key to changing payoff functions. For example, within a development pipeline, the best lever is usually to kill bad ideas early (and not try to eliminate them from entering the pipeline altogether) – so make sure the process is quickly able to find out whether ideas are bad!

Development pipelines have a lot of inherent variation

Now that I come to think of it, variation is inherent in a development pipeline. No variation = no value creation. Consider:

  • We are dealing with ideas. They are unique in their size, their scope and when they occur. We can't do much about this except hold them upstream until there is capacity to deal with them.
  • We are effectively creating recipes for solutions, not the actual solutions themselves. The recipes we produce are, by their nature, also unique.
  • The value created by a solution is the most unpredictable and unmanageable aspect of a development pipeline. Understanding the market risk associated with a solution is an inherently tricky problem (if you excel at predicting market needs, stop reading this and go play the stock-market).

No wonder that IT best practice and the whole lean-agile movement is a lot about practical ways of applying the philosophy of: “try a bit and see!”

Oversimplifications do not help

I hope I've given you a hint that it doesn't make sense to resolve all the uncertainty associated with IT development – only that which it is in our economic interest to resolve. Oversimplifications like "eliminate variation" don't help – there is actually an optimum amount of uncertainty, which we can't reach if we focus only on minimising variability, and some forms of variation are inherent in the generation of value in a product development context.

Read more on this topic in Don Reinertsen's The Principles of Product Development Flow (tough read but worth it!)

Lean-agile and PRINCE2

I have been involved in recent discussions on structured project management methodologies, in particular PRINCE2 (which I'll focus on here, but the discussion could apply to any traditional project management approach). Are traditional methodologies like PRINCE2 compatible with a lean or agile way of working?

There are plenty of blogs discussing the experience of making PRINCE2 and lean-agile ways of working fit together, and it's certainly possible. My issue is not that PRINCE2 is wrong. It's more that, for an IT development project, it encourages us to put our focus in the wrong place.

Discovery

Lean-agile ways of working emphasise the discovery nature of IT development. The tricky thing about projects involving IT development is that they are characterised by large uncertainties in both what is needed and how to build it. In this way, IT development is more like other creative disciplines such as marketing and R&D, where progress cannot be pictured as a linear function of the resources applied over time. We could call this the discovery mindset:

  • the customer doesn’t know what they want,
  • the developer doesn’t know how to build it,
  • things change.

IT development projects do have one redeeming feature – they can normally be delivered in small pieces which enables discovery to be done collaboratively in short learning cycles whereby we find out what is needed and how to build it by trying a bit in a short cycle.

Risk and Learning

PRINCE2 implies that the risks that need most of our attention are cost, time and scope overruns. Not really. The biggest risk in IT development is building something which isn't used (ref. Mary Poppendieck). What needs our attention most is the question of how we can learn faster about whether our solution will be used. Thick contractual requirements documents produced upfront, beloved of so many PRINCE2 practitioners, actually increase the risk that we build something that isn't used, since they increase the amount of work done before we find out.

Making learning our primary focus has all sorts of implications. PRINCE2 implies that change happens rarely and needs to be strictly controlled. Yet if we are learning all the time, change will happen, well, all the time. Lean-agile practice embraces change and treats it as "business as usual". Take retrospectives – these happen often, unlike the PRINCE2 lessons-learnt report, which happens only at the end. Or planning. This is a core activity in the PRINCE2 world, yet planning is merely helpful in the lean-agile world. Planning is of limited value because there is always so much we don't know yet – we can typically plan the next couple of weeks quite well, but beyond that it gets vague. We need to iterate on our plan as we learn more.

Flow

PRINCE2 typically has no real concept of flow beyond the level of the Gantt chart. Usually, each step of the Gantt chart is executed only once. Lean-agile practitioners frame a project more in terms of setting up a production pipeline and then flowing small work items smoothly through it. This allows for adjustment and learning – it becomes a repeat game and we don't have to get it right first time. Lean-agile thinking acknowledges that output actually goes up if a team is not working on too much at any one time. As such, lean-agile teams tend to pull work into the pipeline when there is capacity, and not when the Gantt chart says they should (as Scott Ambler says, "friends don't let friends use Microsoft Project").
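One standard way to see why pulling in less work at once helps is Little's Law (average cycle time = average work-in-progress ÷ average throughput). It's a general queueing result rather than anything PRINCE2 or scrum defines, but a rough sketch with invented numbers makes the point:

```python
# Little's Law: average cycle time = average WIP / average throughput.
# Illustrative numbers only; the point is that pulling less work into
# the pipeline at once gets each item through it faster.

def average_cycle_time(wip_items: float, throughput_per_week: float) -> float:
    """Average time (in weeks) an item spends in the pipeline."""
    return wip_items / throughput_per_week


for wip in (12, 6, 3):
    weeks = average_cycle_time(wip_items=wip, throughput_per_week=3)
    print(f"WIP of {wip:2d} items at 3 items/week -> ~{weeks:.0f} week(s) per item")
```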

Managing smooth flow (i.e. not having work piling up somewhere – in testing, for example) also has implications for recruitment. Lean-agile teams tend to value people who are multi-skilled, since they can move to wherever a bottleneck is building up in the pipeline – the location of which is impossible to predict in advance. PRINCE2 practitioners tend to prefer engaging specialists, as these are the most effective at the particular tasks that have been planned for them to do.

Value

If we are good at prioritisation, we might learn at some point that we have implemented 80% of the value for 20% of the cost and want to stop the project there. Scope is typically variable in a lean-agile project. PRINCE2 tries to nail down scope, cost and quality up front, since it assumes these can be understood well enough before work starts – a questionable assumption for IT development.

Quality

PRINCE2 has a very contractual definition of quality. If we could usefully specify quality contractually in IT development (e.g. "1 bug per 1000 lines of code"), we would. Alas, we can't! Lean-agile thinking addresses this in a more practical way – quality in a lean-agile context is about getting quick and complete feedback on our activities, which then allows us to adjust and improve – quality is built into the process itself.

Collaboration

Stakeholder management in PRINCE2 is "contractual", with clearly defined roles etc. Lean-agile thinking focuses more on collaboration, face-2-face communication, joint problem solving and so on, which is some way from the formal mindset of PRINCE2. It also typically emphasises self-organising teams, since the team is closest to the work and hence its members are the ones learning the most about who is best placed to do what.

Recommendations

So here are my top five points for a PRINCE2 project manager who wants to maximise his/her chances that an IT development project will deliver value:

  1. Make your primary focus enabling fast learning everywhere in your project (particularly about whether the solution is actually used – get started on this immediately by getting the smallest chunk of value possible in front of users straight away). Learn both about the customer's needs and about how to build the solution. Be fanatical about this.
  2. Frame the project as quickly setting up a “production pipeline” through which there is a smooth, fast flow of small requirements.
  3. Be OK with not pretending you know too much about the future. Educate your stakeholders as to why this makes sense.
  4. Charter your team to deliver as much value as possible within a given timeframe/cost and then let them work out how to do this.
  5. Make close, face-2-face collaboration your dominant communication mode.

It’s KPI time again!

The Key Performance Indicators (KPI)/measurable objectives setting process triggers the discussion as to whether the way KPIs are done within a company adds or destroys value. A key plank of a lean-agile approach is systems thinking, i.e. a focus on optimising the whole end-2-end process, not the individual parts – which is what KPIs often do. Deming, who you might say is the father of this way of thinking, was specifically against the typical way of doing KPIs. Number 12 of his 14 key principles is:

Remove barriers that rob people in management and in engineering of their right to pride of workmanship. This means, inter alia, abolishment of the annual or merit rating and of management by objective

Enough! It’s easy to criticize KPIs – it’s better to improve. Here’s a summary of the usual suspects and how they can be improved upon.

| Variable | Typical measure | Usual outcome | Alternative measures |
| --- | --- | --- | --- |
| Time | Delivering on a predicted date | Incentivises hidden time buffers and slower delivery | Maximise speed in getting to the point where value starts to be realised |
| Scope | Delivering all of the originally predicted scope | Incentivises gold plating and discourages exploitation of learning | Minimise the size of work packages to maximise both learning and early release of value |
| Cost | Delivering at or below a predicted development cost | Incentivises hidden cost contingencies, pushing costs up | Maximise value delivered (trade development cost against the opportunity cost of delay) |
| Quality | Delivering changes with zero downtime and no errors | Fear of change; over-investment in testing and documentation | Shorten feedback cycles at many levels (coding, defects, …) |


In short, the suggestion is that by over-focussing on the typical measures in the table above, we get a pipeline which is slow, expensive and wasteful. Explore the alternative measures instead!

Try…

Perhaps the table above can be used for inspiration when setting KPIs in your team?

Read more…

Improving estimate accuracy

Don't we just get frustrated when our estimates or guesstimates of timescales or costs turn out to be different from what actually happens? So how do we improve the accuracy of our estimates?

What do we need estimates for?

This is a golden question – ask yourself this whenever you do an estimate. It's a lean principle to remove waste (i.e. non-value-adding activity). If the estimate you are producing isn't vital for making a decision, then take a good hard look at whether you need it at all. Some of the teams I have worked with have stopped doing certain kinds of estimates because those estimates were not changing any decision.

Why are estimates uncertain?

Estimates are uncertain because we lack information. We've never built this particular feature before. We don't exactly know what we need to do. As we work further with the feature, we gain more information and our uncertainty about how much work is required reduces. This leads to the idea of the cone of uncertainty shown below, which suggests that uncertainty decreases (and hence the accuracy of the estimate increases) throughout the life of a feature.

[Figure: the cone of uncertainty – the range of estimates narrowing over the life of a feature]


What helps improve estimating accuracy?

1. Break work down

If you only do one thing, break the work down into smaller pieces (this is a core lean principle!). It's much easier to estimate smaller activities than larger ones. By simply breaking large work items down into smaller pieces, your estimating accuracy will improve.
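One way to see part of this effect is with a toy simulation: if (and it is a big if) the estimation errors on the small pieces are independent, they partly cancel when you add them up, whereas a single big estimate gets no such help. A rough sketch:

```python
# Toy Monte Carlo: one 100-day estimate vs. twenty 5-day estimates,
# assuming each estimate carries independent +/-50% noise. With
# independent errors, the noise on the small pieces partly cancels.
import random

random.seed(1)
TRIALS = 10_000


def mean_relative_error(pieces: int, size_per_piece: float) -> float:
    """Average absolute error of the summed estimate vs. the true total."""
    total_true = pieces * size_per_piece
    errors = []
    for _ in range(TRIALS):
        estimate = sum(size_per_piece * random.uniform(0.5, 1.5) for _ in range(pieces))
        errors.append(abs(estimate - total_true) / total_true)
    return sum(errors) / TRIALS


print(f"One 100-day estimate:   mean error {mean_relative_error(1, 100):.0%}")
print(f"Twenty 5-day estimates: mean error {mean_relative_error(20, 5):.0%}")
```

The bigger gain in practice is probably the one the independence assumption hides: small items are simpler, so each individual estimate is better to start with.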

2. Increase the rate at which the estimator gains experience

The ability to estimate comes from learning from past experience of estimating. So get more experience! The way you increase your learning rate is to shorten the cycle time, i.e. the time it takes to produce something. Halving the cycle time doubles the rate of learning! Of course, the experience gained needs to be effectively captured (retrospectives, low staff turnover, etc.).

3. Ensure estimators are the people who are doing the work

Seems simple but in a lot of cases, the people doing the estimating are far from the actual work (or, sometimes, estimates are adjusted by people far from the work!)

4. Reduce dependencies

Typically, if a task requires 90% of one person and 10% of another, the 10% person will be the bottleneck because his/her focus is not on the task. This means it's difficult for him/her to be available precisely when needed. Eliminating this dependency will reduce variation in timelines. Feature teams help with this.

5. Get multiple inputs

Agile teams often use planning poker to harness the brainpower of the entire team to improve the estimate – it’s unlikely that one person has all the best information. This is a fun and easy thing to try.

6. Agree on which estimate we want

Many timeline estimates are simply the earliest possible time for which the probability of delivery is not zero. Did you want the most likely date (50% chance of being early, 50% chance of being late) or the date by which it is almost certain delivery will have happened? Make sure everybody agrees which estimate we are talking about.

[Figure: probability distribution of possible delivery dates]
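To illustrate the difference, here's a sketch that assumes (purely for illustration) a skewed, lognormal-ish spread of possible delivery times and reads off the estimates people usually mean:

```python
# Which estimate do you mean? Illustration on a skewed (lognormal)
# delivery-time distribution - the distribution choice is an assumption.
import random

random.seed(7)
samples = sorted(random.lognormvariate(2.3, 0.5) for _ in range(10_000))


def percentile(data: list[float], p: float) -> float:
    """Simple percentile lookup on pre-sorted data."""
    return data[int(p * (len(data) - 1))]


print(f"Earliest with non-zero probability: {samples[0]:.0f} days")
print(f"50% chance of being late (median):  {percentile(samples, 0.50):.0f} days")
print(f"Almost certain (90th percentile):   {percentile(samples, 0.90):.0f} days")
```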

7. Understand the politics

There are also all sorts of human dynamics that can skew your estimate. For example:

  • If there are “punishments” for getting it wrong, folk tend to add buffers to their estimates.
  • If folk believe that the uncertainty associated with the estimate will not be communicated clearly (for example to senior management) then they tend to add buffers. A solution to this that a lot of agile teams use when they are doing an early guesstimate is to use “T-shirt” sizing whereby the estimate is classified as Small, Medium, Large or Extra Large. Communicating guesstimates like this makes it clear that they are not very accurate.
  • If there is already a target declared (“we want this to be under $100k” or, “we want to complete this by June”) then folk tend to be anchored by that and don’t like to say it’s not possible. We all like to please our stakeholders.
  • Many fascinating studies show that people have a natural subconscious tendency to be overconfident in their estimating skills. If the estimator(s) have little data on past performance to help correct this, then it's likely they will underestimate tasks.


Setting expectations on estimate accuracy

It's tempting to set a team a goal to improve estimate accuracy. The risk with this (which I have seen in some teams) is that the quickest and simplest way of improving estimating accuracy is either to add undeclared buffers or to invest more time and resources before making the estimate. The latter effectively moves estimate creation to later in the lifecycle, which defeats the purpose of the estimate (which is to support decisions early in the lifecycle).

A better way of improving estimating accuracy is to set the team goals around reducing cycle time and reducing the size of items flowing through the pipeline (as per the first two points above). By focusing on these general lean-agile practices, we also get an improvement in estimating accuracy!