
3 Styles: Iterative, Incremental and Evolutionary Agile (part 1)

When I’m teaching training courses (as I was this week at Skills Matter) or advising clients on the requirements side of software development (which I’m doing a lot of just now) I talk about a model I call “3 Styles of Agile”. Incredibly I’ve never blogged about this – although the model is hidden inside a couple of articles over the years.

So now the day has come… I don’t claim the “3 Styles model” is the way it should be, I only claim it is the way I find the world.

While “doing agile” on the code side of software development always comes back to the same things (stand-up meetings, test/behaviour driven development, code review/pair programming, stories, boards, etc.) the requirements side is very, very variable. The advice that is given varies widely, as does the degree to which that advice is compatible with corporate structures and ways of working.

However I find three recurring styles in which the requirements side operates and interfaces to development. I call these styles Iterative, Incremental and Evolutionary, and I usually draw this diagram:



I say style because I’m looking for a neutral word. I think you can use Scrum, XP and Kanban (or any other method) in any of the three styles. That said, I believe Kanban is a better fit for Evolutionary while Scrum/XP are a better fit for Iterative and Incremental.

I try not to be judgemental. I know a lot of agile folk will see Evolutionary as superior – they may even consider Evolutionary to be the only True Agile – but actually I don’t think that is always the case. There are times when the other styles are “right.”

Let me describe the three styles:


Iterative

In this style the development team are doing lots of good stuff like: stand-up meetings, planning meetings, short iterations or Kanban flow, test driven development, code review, refactoring, continuous integration and so on. I say they are doing it; it might be better to say “I hope they are doing it” because quite often some bit or other is missing. That’s not important for this model. The key thing is the dev team are doing it!

In this model requirements arrive in a requirements document, en masse. In fact, the rest of the organization carries on as if nothing has changed; indeed this may be what the organization wants. In this model you hear people say things like “Agile is a delivery mechanism” and “Agile is for developers”.

The requirement document may even have been written by a consultant or analyst who is now gone. The document is “thrown over the fence” to another analyst or project manager who is expected to deliver everything (scope, features) within some fixed time frame for some budget. Delivery is most likely one “big bang” at the end of the project (when the team may be dissolved.)

In order to do this they use a bacon slicer. I’ve written about this before and called it Salami Agile. The requirements document exists and the job of the “Product Owner” is to slice off small pieces for the team to do every iteration.

The development team is insulated from the rest of the organization. There is probably still a change review board and any increase in scope is seen as a problem.

I call this iterative because the team is iterating but that’s about it. This is the natural style of large corporations, companies with annual budgets, senior managers who don’t understand IT and, in particular, banks.


Incremental

This style is mostly the same as Iterative, it looks similar to start with. The team are still (hopefully) doing good stuff and iterating. There is still a big requirements document, the organization still expects it all delivered and it is still being salami sliced.

However in this model the team are delivering the software to customers. At the very least they are demonstrating the software and listening to feedback. More likely they are deploying the software and (potential) users can start using it today.

As a result the customers/users give feedback about what they want in the software. Sometimes this is extra features and functionality (scope creep!) and sometimes it is about removing things that were requested (scope retreat!). The “project” is still done in the traditional sense that everything in the document is “done”, but now some things are crossed out rather than ticked. Plus some additional stuff might be done over and above the requirements document.

I call this incremental because the customers/users/stakeholders are seeing the thing grow in increments – and hopefully early value is being delivered.

I actually believe this is the most common style of software development – whether that work is called agile, waterfall or anything else. However in some environments this is seen as wrong: wrong because the upfront requirements are “wrong”, or because multiple deliveries need to be made, or because the team aren’t delivering everything they were originally asked to deliver.


Evolutionary

Here again the development team are iterating much as before. However this time there is no requirements document. Work has begun with just an idea. Ideally I would want to see a goal, an objective, an aim, which will guide work and help inform what should be done – and this goal should be stated in a single sentence, a paragraph at most. But sometimes even this is missing, for better or worse.

In this model the requirements guy and developers both start at the beginning. They brainstorm some ideas and select something to do. While Mr Requirements runs off to talk to customers and stakeholders about what the problem is and what is needed, the tech team (maybe just one person) get started on the best idea so far.

Sometime soon (2 weeks tops) they get back together. Mr Requirements talks about what he has found and the developers demonstrate what they have built. They talk some more and decide what to do next.

With that done the developers get on with building and Mr Requirements gets on his bike again; he shows what has been built and talks to people – some people again and some new people. As soon as possible the team start to push software out to users and customers to use. This delivers value and provides feedback.

And so it goes. It finishes, if it finishes, when the goal is met or the organization decides to use its resources somewhere else.

Evolutionary style is most at home in Palo Alto, Mountain View and anywhere else that start-ups are the norm. Evolutionary is actually a lot more common than is recognised but it is called maintenance or “bug fixing” and seen as something that shouldn’t exist.

Having set out the three styles I’ll leave discussion of how to use the model, and why you might use each style, to another entry. If you want to know more about each style and how I see Agile as a spectrum have a look at my 2011 “The Agile Spectrum” from ACCU Overload or the recently revised (expanded but unfinished) version by the same title: “Agile Spectrum” (the 2013 version I suppose, online only).



Still banging on about Quality

The last couple of blog entries have been a public investigation – a personal think – about software quality – see What is Software Quality? and Software quality? Thanks to all those who have provided feedback and comments; they’ve all contributed to my thought process.

I think I am moving towards some kind of conclusion. I’ll write up that conclusion for Xanpan and post it elsewhere too. Right now I’d like to add some more thoughts and hint at the conclusion.

I’m starting to think it is easier to talk about how to achieve high quality, and about the benefits of quality, than it is to actually define quality itself. But without actually defining what quality is, or what its benefits are, talk of how to achieve it is unlikely to be more than opinion.

I think that defining quality is one thing, and we also need to recognise that there may be multiple ways of achieving that end point. I might not agree with all those ways, I might even think that some are misguided but I will recognise that mine is not the only way.

So far my definition of “Software Quality” has focused on two aspects: defects and maintainability (and particularly extendability).

That is a small view; Tom Gilb’s and Jerry Weinberg’s suggestions are much wider. Tom would – I think – say my two aspects are but two “qualities” among many. For example: speed/performance, ease of use and conformance to specification might all be part of quality.

I’m now thinking of quality like an onion. Defects and maintainability are at the core of the onion, and defects are probably the very core.

[Figure: the basic quality onion – defects at the core, maintainability around it]

You, your organization, your team, your customers might choose to add additional qualities to this definition, which you are quite at liberty to do. For example, this might be your onion:

[Figure: an enhanced quality onion with additional, context-specific qualities]

I believe this model is entirely compatible with Gilb’s concept of quality as qualities and Weinberg’s “Quality is meeting some person’s requirements.” (Thanks to Mark Nilsen for this).
Points here:

  • Quality is multifaceted and finding a single definition to satisfy all situations is tough. (Even if we had such a definition it might be too abstract or academic to be meaningful in everyday work.)
  • Each organization/team/project/product could benefit from defining what they take to be quality. (And I would suggest that few actually do this, and that the resulting difference in individuals’ opinions on “what is quality” is the cause of much friction and misaligned aims.)

Quality is itself a statement of values – not necessarily financial value, it’s as much about personal values. That said, financial value should neither be ignored nor belittled – after all, most software is developed for commercial reasons; profit is not a dirty word.

Thus let me make another statement I believe is true:

Quality results in business benefit – money saved, money made, happier customers, business aims realised

Once we accept this, statements like “Quality is free”, “High quality saves money” and “Quality sells” become axiomatic. The aim of “high quality” is to produce one of these business benefits.

Consequently statements like “We can’t afford this much quality” and beliefs like “Lower quality is acceptable to customers, is faster and saves money” are the result of a mismatch between different individuals’ understandings of what quality is.

For example: one person believes customers are happy with buggy software as long as it is available soon, while another believes that customers want bug-free software later. When such mismatches occur, using the word “Quality” is itself a problem because it means different things to the different parties.

That might all be too circular for some, sorry about that, I wish it wasn’t but I suspect that is the nature of the beast.

Now back to my definition and the inner layers of my onion. I think defects and maintainability are at the heart of the onion and need to be present in every definition of software quality because…

Defects: the research I have seen (Jones and elsewhere) and personal experience (as a developer, manager and customer) tell me that defects (bugs) are not a good thing – whether absolute defects or common defects. My experience and intuition tell me Jones is right when he says: “projects with low defect potentials and high defect removal efficiency also have the shortest schedules, lowest costs, and best customer satisfaction levels.”


Maintainability, changeability, and its half-brother extendability, is life. Software which can’t change can’t live.

At a basic level if you have maintainability you can fix defects and you can add any other quality you like. If your software is maintainable but not performant you can change it to have the performance you want. If your software is hard to use but maintainable you can change it to be easy to use.

If your software lacks maintainability, changeability, you will find it hard – expensive but perhaps not impossible – to make any of these changes. When software lacks maintainability it is not soft.

Let’s agree to call software which is not (easily) changeable (maintainable) sclero-ware.


Definition: Sclero-ware – software which is not practically changeable.
(Where practically is context dependent.)

“But” you might ask, “are there not some examples of software which does not need to be maintained?” “Surely,” you say, “there is some software which does its job and is done. Defects might be the core of the onion but maintainability is just another quality you might, or might not, want.”

While I can follow the logic here, I am lacking an example. Do you know of any? Indeed I suspect there are no such examples.

Because: successful software is used, and because it is used it needs to change.

Software needs to change because the world changes: Windows 8 replaced Windows 7 which replaced Vista which… the Euro replaced the Deutschmark which replaced the Reichsmark, the 747 replaced the 707 which replaced the Queen Mary and Queen Elizabeth, telephones became mobile, telephones replaced iPods which replaced the Discman which replaced the Walkman – and along the way iPods replaced hi-fi systems – banks replaced building societies, debit cards displaced cash…

Must I continue? – the world changes.

Software which is not used does not need to change.

Software which is not used is the past. It is dead. Look here, SourceForge has plenty of dead software projects you can download, try one (here’s a page of examples.)

Successful software lives, it changes, it moves on.

So yes, software which did its job and is no longer used doesn’t need to change. You don’t need to change if you are dead.

Also, “maintenance” starts a lot sooner than many people recognise. Maintenance starts the day after coding, not the day after the “project” ends. If you need to go back and fix a bug – at any time! – you are maintaining. If your software lacks maintainability you are going to have a hard time getting any bugs out (and I suspect if it is not maintainable you probably have a lot of defects).

To summarise: I consider software quality to be a function of high maintainability and a low number of defects. I focus on these two attributes because I believe they are invariant. I am happy to add more qualities in a given context.

I accept there are different views on the best way of achieving these qualities. For example:

  • The approach I was taught in the 1980s and 1990s involved detailing what was required, perhaps writing a logical specification, engaging in a design process intended to minimise defects and maximise maintainability, then coding and finally testing the software. Some people call this “Waterfall”, I prefer “Traditional”.
  • The approach I advocate today involves setting a goal, identifying small pieces which can help build towards the goal, setting tests, engineering to those tests and repeating the process again and again and again. Some people call this “Agile”, I believe there are actually three styles better called “Iterative”, “Incremental” and “Evolutionary”.

And I believe that if you lack quality – you have a high number of defects and sclerotic code – sclero-ware – you will fail every time.


Software quality?

Since publishing my stream of consciousness about software quality on Friday (What is a software quality?) I’ve continued to think about the subject. In part putting the thoughts down started to structure my own thinking, in part a few comments made on Twitter caused me to think more, and simply having written something caused me to reflect.

There are four main points I’d add to Friday’s entry.

First, what I wanted, what I needed for Xanpan, was (is) a good enough working definition of software quality; I don’t need a perfect definition. But even this turns out to be surprisingly difficult. And as Steve Smith pointed out this is not confined to software quality: defining “quality” at all tends to lead you in the direction of Zen and the Art of Motorcycle Maintenance.

Next, what I really need are two things: A definition of software quality and a definition of a software defect. I think I’m starting to approach workable definitions of both. But, definitions (by definition almost) should be free of assumptions about how you achieve – or don’t achieve – that aim. In other words, the definitions should not imply or assume an Agile, Waterfall or any other approach.

Third, one of the things I was concerned about was gold plating or over-engineering. Really this is part of the “how do you achieve your aim.” A definition of software quality – and low defects – need not rule out (or embrace) any particular approach. I need to separate this out. Similarly the knowledge issue, while important, needs to be put to one side here.

Finally, I’m coming to the conclusion that we need two different terms for software defects. The clue was in that Tom Gilb quote really.

  • An Absolute Defect is, as Gilb says, free from personal opinion or taste. It is objective. For example, a function called “Add Two Numbers” always returns 5 – whether you give it 2 and 2, 2 and 3, or 100 and 200.
  • A Common Defect (and I’m prepared to find a better name and I don’t imply any link to Deming) is anything which someone at some time decided to report – or classify – as a defect.

Absolute defects are a proper subset of common defects, i.e. all absolute defects are also common defects but only some common defects are absolute defects.
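To make the “Add Two Numbers” example concrete, here is a minimal sketch – the function and the checks are my own invention, purely illustrative – of why an absolute defect is objective: the code fails a mechanical check regardless of anyone’s opinion or taste.

```python
def add_two_numbers(a, b):
    return 5  # buggy: the inputs are ignored entirely

# An absolute defect is objectively checkable; no opinion or taste involved.
for a, b in [(2, 2), (2, 3), (100, 200)]:
    result = add_two_numbers(a, b)
    verdict = "OK" if result == a + b else "ABSOLUTE DEFECT"
    print(f"{a} + {b} = {result}  {verdict}")
```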

I think it’s important to make this distinction because so much time and energy is spent by people arguing over what is and what is not a “defect”. If we define separate terms then we can at least reason about the situation, and about the fact that individuals, teams and organizations often have incentives for classifying something as a defect whether it is one or not.

Jon Jagger pointed out in his comment on the previous blog that “Jerry Weinberg’s definition of quality is “value to someone” or “value to some persons”.” A common defect most definitely fits that description. Someone, somewhere, considers it valuable to call this thing a defect.

But I’m not sure I go along with that statement as a definition of quality as a whole. It is too open and not specific enough for me – or perhaps for the uses I want to put it to!

Some observations about defects to end:

  • All defects cost time and probably money: even logging a report of a potential defect costs time (and thus money), and action taken as a result (investigation and potential fixing) costs time. Defects themselves – even when not reported – may well cost money, e.g. overpayment or lost customers. (This also implies an element of the unexpected to defects.)
  • Absolute defects cost time and money; is the same true for common defects? Well, maybe my opinion of something does result in a different financial outcome than someone else’s, but does that make it a defect? Even then it can be surprisingly hard for some organizations to tell what makes, and what loses, money. I’d like this to be the dividing line but I don’t think it is.
  • Common defects need to be administered, managed and possibly fixed just the same as absolute defects. In my opinion, if someone reports a defect it should be considered against other defects – common or not – and action (fixing) taken or not. The debate between whether a particular defect is an absolute defect or a common defect isn’t very useful, it is the software equivalent of “my Dad is bigger than your Dad.”
  • If there exists some pre-code definition (specification, requirement, formula, etc.) of what the code should do it might be easier to classify something as an absolute defect but this isn’t enough reason to create that definition.
  • High quality software has few defects of either sort and is perceived as having few defects.
  • Software which has a lot of reported defects doesn’t qualify as high quality.


What is a software quality?

If any of you have heard me speak in a training session or conference you’ll know I am fond of quoting Philip Crosby: “Quality is free!”. Crosby was talking from a background in missile production but the message was picked up by the car industry and the silicon chip industry (“The Anderson Bombshell” in 1980 explained how Japanese RAM manufacturers were both cheaper and better quality than American ones). I am quite fond of applying the argument to software.

I like to cite Capers Jones: “projects with low defect potentials and high defect removal efficiency also have the shortest schedules, lowest costs, and best customer satisfaction levels.” (Capers Jones, Applied Software Measurement, 2008)

He’s not alone in this, Tom Gilb says: “The reduction in defects … saves ‘rework’, which otherwise is about half of all effort in software projects.” (Tom Gilb, Competitive Engineering, 2005).

Jon Jagger pulled me up on some sloppy discussion of software quality, defects and rework in Xanpan. So I’m trying to come up with a (concise) definition of what I mean by software quality, and it turns out that this is more difficult than you might think.

To my disappointment Jones doesn’t give a definition.

Gilb offers “Defect: a failure to observe a formal, written, required rule. It is not a personal opinion or personal taste. It is a failure to observe a group norm, or required best practice.”

That sounds good until you take the statement to pieces:

  • What if rules are informal? Well I suppose we can allow informal, tacit, rules, because they are “group norms”.
  • “written” assumes there is something written, which isn’t an assumption I can accept. Jones points out that documentation is the second most expensive activity after fixing defects, so I’d hate to eliminate defects at the expense of increased writing costs.
  • “personal opinion or taste” seems fair enough but putting this into practice can be incredibly difficult. I know plenty of times when I would call a defect a personal taste but the person raising the issue wouldn’t.
  • “group norm” is particularly difficult when you are developing products which will change group norms.
  • And “best practice”… who says it is best practice? Who says it can’t be bettered?

I like Gilb’s definition but I don’t think it is enough. Crucially, even in saying “not a personal opinion” it does nothing to avoid the “one man’s bug is another man’s feature” problem.

What can we say about software quality and defects?

  • Software quality is inversely proportional to the number of defects in the system: high quality implies few defects and vice versa
  • Defects have undesirable consequences
  • Defects incur costs, in all likelihood financial costs, but there are others – time in particular. Even if defects are not fixed they will incur costs, e.g. overpayments from a financial system or people ringing the helpline to report a spelling mistake
  • Removing defects requires rework and rework costs time and money

This is probably the start of a longer list; what I am describing are the attributes – or qualities – I attribute to “high quality software”.

The list is also self-fulfilling: everything I have said so far implies that low quality, lots of defects, will increase costs, so the quotes from Crosby, Jones and Gilb all become self-fulfilling. Perhaps this isn’t a problem; perhaps the quality attribute we want from our software is that costs are kept down.

But there is another quality I would like from high quality software which is insidious. High quality software should be changeable – actually all software is changeable (it’s soft!) but some code is easier to change than other code.

High quality software is easy to change.

Let’s leave to one side a definition of easy; I agree it should be quantified, but not right now.

What do I mean when I say “change”? I think the spirit is captured by an old John Vlissides quote I’ve long been fond of:

“A hallmark – if not the hallmark – of good object oriented design is that you can modify and extend a system by adding code rather than hacking it…. In short, change is additive, not invasive. Additive change is potentially easier, more localized, less error-prone, and ultimately more maintainable than invasive change.” John Vlissides, The C++ Report, February 1998

I’m prepared to generalise this to all software, not just OO software. I might even go as far as focusing on the “rather than hacking it” – although one then needs to define “hacking”. Good software needs to allow for change rather than having change forced into it.

Actually this quote also provides the attributes we need to define “easy to change”:

  • Change is localized
  • Change is less error-prone – perhaps better stated as “change does not inject defects” (somewhere in Jones’s writing he suggests 7% of defect fixes inject new defects, so high quality software would have a bad-fix injection rate of less than 7%)
  • Change is more maintainable, i.e. changing software does not detract from the changeability of the software
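To illustrate what additive rather than invasive change can look like – this is my own sketch, not Vlissides’s, and the report-exporter example is invented – new behaviour arrives as a new, self-registering class rather than edits to working code:

```python
# A registry of exporters; adding a new format never touches existing ones.
EXPORTERS = {}

def exporter(name):
    def register(cls):
        EXPORTERS[name] = cls
        return cls
    return register

@exporter("csv")
class CsvExporter:
    def export(self, rows):
        return "\n".join(",".join(map(str, r)) for r in rows)

# Additive change: JSON support is new code, not an edit to CsvExporter.
@exporter("json")
class JsonExporter:
    def export(self, rows):
        import json
        return json.dumps(rows)

print(EXPORTERS["csv"]().export([[1, 2], [3, 4]]))
```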

If software has these attributes (qualities if you prefer) then software quality is high and, as a result, change is cheap(er). The relationship between quality and costs appears again.

But the way I describe the quality-cost link is the reverse of the way many people perceive it: the stereotypical Project Manager views quality as an attribute that can be reduced in order to accelerate development and reduce costs. I have to say I have difficulty in actually understanding this point of view, but perhaps it’s because of the way I am defining quality.

Apart from that there is another danger to this approach: over-engineering.

Given all we have said so far you could make an argument for spending a lot of time designing your software to exhibit all these attributes. You could seek to build, to design, software which would not require the design itself to be revisited. That, after all, is rework isn’t it?

Now I’ve long believed there is rework and there is rework:

  • Rework to fix bugs, defects, is bad and wasteful because you shouldn’t have put the bug in there in the first place.
  • Rework to change software for new requirements, even if that means reworking (refactoring) the design, is good, or at least acceptable, because you couldn’t know about this up front; any effort to cater for this requirement in advance might be misplaced and could actually end up complicating the design. In other words it is self-defeating.

As I see it there is a question of knowledge here: you need to engineer within your knowledge. If you know, or could easily find out, some piece of information which would cause you to work differently then you should. But if there is information you don’t know, and which would be time consuming/costly, or even impossible, to find out, then it is acceptable to defer knowing and accept that rework will be required later.

So the question starts to become one of knowledge acquisition. One way of acquiring more knowledge is through feedback, when feedback is rapid, timely and cheap to get we can rapidly expand the knowledge we are working with.

High quality software should be as free as possible from deficiencies given the current knowledge of what is required, but open to change when new knowledge becomes available which necessitates a change.

It’s tempting to write this as:

Quality(T) = Changeability(T) / Known defects(T)

Where T represents some point in time; as time progresses changeability may well decrease while known defects may go up or down.
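As a toy illustration only – the scores and counts below are invented, and how you measure changeability is an open question – the equation might be tracked over time like this:

```python
# Quality(T) = Changeability(T) / Known defects(T), sampled at points in time.
# Changeability might be a 0-10 team score; defects counted from the tracker.
snapshots = [
    {"month": 1,  "changeability": 9.0, "known_defects": 3},
    {"month": 6,  "changeability": 7.5, "known_defects": 5},
    {"month": 12, "changeability": 6.0, "known_defects": 4},
]

for s in snapshots:
    quality = s["changeability"] / max(s["known_defects"], 1)  # avoid divide-by-zero
    print(f"month {s['month']:2}: quality = {quality:.2f}")
```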

Notice I’ve said: “Known defects” not “Known changes”. For a piece of living, successful, software there will be a list of changes people would like made to the software. The existence of this list actually demonstrates another attribute of quality software: people use it and value changes. (Low quality software on the other hand may be so buggy that people avoid using it and thus don’t request changes.)

Excluding non-defect changes like this does leave open the problem of whether a defect report (a bug) is actually a defect report or a request for change. In some organisations such debates are heated but usually they are pointless. Sometimes they are really a Cap Ex v. Op Ex discussion, sometimes they are a “Who will do it?” or a “When will it be done?” discussion, sometimes they are a “Who will pay?” discussion. All these, and more, problems get in the way of this measurement.

While I would like to throw the door open and say “It’s all work to be done, one backlog”, to do so would be to blow the equation and argument out of the water because, as I just said, high quality software may well have a longer list of change requests than low quality software.

So now we have to consider the argument about internal v. external quality… but this blog entry is already too long.

Does any of this make sense?

Does any of this help?

Am I any closer to defining software quality?

Perhaps but I don’t think I’ve answered my own question yet!

Writing this entry has helped me. I think I’ve found a possible definition of quality, although I still need a definition of defect. I think we need to consider the attributes of both quality and defects. I think there is a temporal issue here related to knowledge (but I don’t know how to model or define that). I’m even more confused than ever about the relationship between cost and quality because it appears circular.

Anyone got any better ideas? – or just comments?


Scaling Agile Heuristics – SAFe or not!

If I’m being honest, I feel as if I don’t know much about scaling agile. But when I think about it I think the issue is really: What do you mean when you say “scaling agile?”


It seems to me you might mean one of three things:

  • How do you make agile work with a big team? Not just 7 people but 27 people, or 270 people
  • How do you make agile work with multiple teams within an organisation? i.e. if you have one team working agile how do you make another 2, or 20 or another 200?
  • How do you make a large organisation work agile? It’s not enough to have a team, even a large team, working agile if the governance and budgeting systems they work within are decidedly old-school

When I rephrase the question like that I think I do have some experience and something to say about each of these. Maybe I’ll elaborate.

But first an aside: why have I been thinking about this question?

Well, David Anderson pushed out a blog about the Scaled Agile Framework (SAFe) which prompted Schalk Cronje to ask what I thought – and at first I didn’t have anything to say! But then Schalk pointed out that Ken Schwaber has blogged about SAFe too. It’s not often that David and Ken find themselves on the same side of an argument… well, actually… they probably do, but too many people are willing to see Kanban and Scrum as sworn rivals. I digress; back to SAFe and scaling.

I still don’t have anything original to say about SAFe, I simply don’t know much about it. However the points David and Ken make would worry me too.

I’m not about to rush out and read SAFe. I find I’m more likely to be told by my clients: “I can see how agile works in big companies, but we are a small company, we can’t devote people like that.” And while I do have, and have had, some larger clients I think that statement says a lot.

I have over the years built up some rules-of-thumb, heuristics if you prefer a fancy term, for “scaling agile”. I’ve never set them down so completely before, so here you go:

  1. Inside every large work effort there is a small one struggling to get out: find it, work with it
  2. Make agile work in the small before you attempt to make it work at scale; if you can’t get a team of 5 to work then you won’t find it any easier to get a team of 10 or 100 working. Get a small team working, learn from it, and grow it…
  3. Use piecemeal growth wherever possible: start with something small and grow it
  4. Don’t expect multiple teams to work the same way – one size does not fit all! A new project team might be perfectly suited to Scrum, a maintenance team to Kanban and a BAU+Project team to Xanpan. Forcing one process, one method, one approach on different teams is a sure way to fail.
  5. Manage teams as atomic units, aim for team stability, minimise moves between teams
  6. Split big teams into multiple small, independent, teams with their own dedicated work streams, product focuses and even code bases        
  7. Trust in the teams, delegate as far down as you can; give teams as much independence as possible; staff teams with all the skills they need – vertical teams, include testers, requirements people and anyone else
  8. Minimise dependencies between teams; decouple deliveries, decouple teams and again, vertical teams, staff the team so they don’t need to depend on other teams
  9. Use technical practices to automate as much of the dependencies between teams as possible, e.g. a good TDD suite and frequent CI will by themselves allow two related teams to work much more closely together (see the sketch after this list)
  10. Overstaff in some roles to reduce dependencies – overstaffing will pay back in terms of reduced dependency management work        
  11. Learn to see Supply and Demand (a future blog entry is in the works on this): supply is limited and is hard to increase, you need to work on the demand side too
  12. Recognise Conway’s Law: work with it or set out to use it; again piecemeal growth and start as small as possible will help
  13. Use Portfolio Management to assess teams, measure them by value added against cost. Put in place a governance structure that expects failure and use portfolio management to fail fast, fail cheap, fail often
  14. Ultimately embrace Beyond Budgeting and change your financial practices
  15. Big projects, big teams are high risk and likely to fail whatever you do; some things are too big to try. Some big projects just shouldn’t be done
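As a footnote to heuristic 9, here is the kind of thing I mean – a hedged sketch only, the pricing service and its contract are invented: the consuming team captures its expectations of the providing team’s interface as an automated test which runs in the provider’s CI build, replacing a standing coordination meeting.

```python
# Team A provides a pricing function; Team B consumes it. Rather than meetings
# and sign-offs, Team B contributes a contract test that Team A's CI runs on
# every commit, so breaking changes surface immediately.
def quote_price(sku: str, quantity: int) -> dict:
    # Team A's implementation (a stand-in for the real service).
    return {"sku": sku, "quantity": quantity, "total_pence": quantity * 199}

def test_consumer_contract():
    # Team B's expectations, captured as executable checks.
    quote = quote_price("WIDGET-1", 3)
    assert quote["sku"] == "WIDGET-1"
    assert quote["total_pence"] > 0
    assert set(quote) >= {"sku", "quantity", "total_pence"}

test_consumer_contract()
print("contract holds")
```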

    
There is no method, framework, tool out there that will be your silver bullet, but if you think for yourself, and if your people are allowed to think and act then you might just be able to create something for yourself.

Going back to the three questions I opened with:

  • How do you make agile work with a big team? When the team gets too big split it along product lines or product subsystems; if you can’t then don’t do anything that big
  • How do you make agile work with multiple teams within an organisation? Use multiple independent teams, minimise dependencies, decouple and use technical practices
  • How do you make a large organisation work agile? Portfolio management and beyond budgeting

And remember: Don’t do big projects, do small ones and grow success. If that is not an option for you then brace yourself.


Commitment considered harmful

Some Agile evangelists are very keen on the idea of “Commitment.” i.e. the development team “commit” to doing an amount of work within a time-box (normally an iteration.) The team do this work come hell or high-water. They do what it takes. Once they’ve said they’ll do it they do it.

I believe the idea of Commitment was baked into Scrum – see the Scrum Primer by Larman, Benefield and Deemer for example. And I’ve heard Jeff Sutherland proclaim the power of Commitment in person. But it seems Scrum is less keen on it these days. The October 2011 “Scrum Guide” by Schwaber and Sutherland does not contain the words Commit or Commitment, so I guess “commitment” is no longer part of Scrum. Who knew?

I’ve long harboured my doubts about Commitment (e.g. from last year see “Unspoken Cultural differences in Agile & Scrum” and “My warped, crazy, wrong version of Agile”, and from 2010 “Two ways to fill an iteration”) but now I’m going to go further and say the Commitment protocol for filling an iteration is actively damaging for software development teams in anything other than the very short run. This reassessment has been triggered by a) watching the #NoEstimates discussion on Twitter and b) visiting clients where teams attempt to follow the Commitment protocol.

1) Commitment can lead to gaming by developers who have an incentive to under commit (my argument in “Two ways to fill an iteration”).

2) Commitment can lead to gaming by the need/business side who have an incentive to make the team over commit

3) Commitment combined with velocity measurement and task estimation leads to confused and opaque ways of scheduling work into a sprint. Since (per #1 and #2) developers overestimate stories and tasks and the business representatives apply pressure to reduce estimates, points and velocity are prevented from floating free; instead they become a battleground. (Points are a fiat currency: if you don’t allow them to float then someone, somewhere, has to provide currency support – overtime from developers or, more likely, inaccurate measurement.)

4) At one client commitment led to busy developers for the first half of the sprint (with testers under-worked) and then, as the sprint came to a close, very busy testers with developers taking things a little easier. Except developers were also delivering bugs to testers; the nice pile of bugs kept developers busy but meant that a sprint wasn’t really closed because each sprint contained bugs. On paper, on the day the sprint closed, the sprint was done, but it soon required rework. There was also a suspicion that as the end of the sprint approached testers lowered their acceptance threshold, and both developers and testers asked fewer probing, potentially disruptive, questions later in the sprint.

5) Developers under pressure – even self-imposed – to deliver may choose to cut corners, i.e. let bugs through. Bugs are expensive and disruptive.

6) Developers asked to Commit ask for more and more detail before the sprint starts. A “cover your ass” attitude takes hold and stories start to resemble the functional requirements of old, with all the old problems that brought.

7) Developers become defensive pointing to User Stories and Acceptance Criteria and saying “Your bug isn’t included in the criteria so I didn’t do it” (the other end of a “cover your ass” attitude.)

8) Developers who have not totally mastered Test Driven Development will be tempted – even incentivised – to skip this practice in order to go faster. They may even go faster in the short run – building up “technical debt” – but in the long run will go far far slower.

9) Business/Customers conversely have no motivation to support development adoption of TDD or to invest in automated acceptance test (ATDD, BDD, SbE etc) of their own because, after all, the developers are committed.

Maybe I should say that I currently believe Estimates can work. I have sympathy with the #NoEstimates argument but I have clients where Estimates do work; one manager claims “to be able to bring a project in to the day” using estimates. So I have trouble reconciling #NoEstimates with experience.

Part of the #NoEstimates argument is that “estimates” are too easily mistaken for “commitments”, and when that happens teams cannot be expected to honour them – yet some people expect exactly that. Obviously if you remove commitment then the transmission mechanism is removed and estimates might still be useful.

While I’ve been suspecting much of the above for some time, it’s taken me a while to come to these conclusions. In part this is because I don’t see that many teams that actually do Commitment. Most of the teams I see are in the UK and I’ve always thought Commitment was a very American idea – it always creates images of American Football teams in my mind – “win one for the Gipper”.

Actually most teams I see are teams I have taught, so they don’t do it (they do some variation on Xanpan if I’m being honest). While I talk about Commitment I teach the Velocity protocol, and it is estimation and velocity that are baked into Xanpan. (I hope to be able to push out my notes on Xanpan very soon, so join the list.)
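Since I keep referring to the Velocity protocol, here is a minimal sketch of how filling an iteration by velocity might work – the numbers, the rolling three-iteration average and the greedy fill are all invented for illustration, not a definition of Xanpan:

```python
# Velocity protocol (sketch): capacity for the next iteration is derived from
# what was actually finished recently, not from a negotiated promise.
completed_points = [21, 18, 24, 20]        # points done-done in recent iterations
velocity = sum(completed_points[-3:]) / 3  # rolling average of the last three

backlog = [("story A", 8), ("story B", 5), ("story C", 8), ("story D", 3)]
planned, remaining = [], velocity
for story, points in backlog:              # greedily fill up to capacity
    if points <= remaining:
        planned.append(story)
        remaining -= points

print(f"velocity = {velocity:.1f}, planned: {planned}")
```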


Three backlogs?

Now I know some people dislike backlogs – queues, wait states, work we want done – and I buy the argument. But the Scrum Sprint (Iteration) and Product Backlog model actually fits for a lot of organizations.

Maybe it’s a temporary state before moving to continuous flow, continuous value, but it might also be a sensible state for many organizations. The Product Backlog is stuff they would like to do but haven’t gotten around to yet.

While I generally accept and teach the Product/Sprint backlog model (all the stuff we might do sometime in the future / the stuff we are focused on for the next two weeks) I keep thinking it’s wrong.

There should be Three Backlogs

1. Opportunity backlog: all the ideas which have been suggested ever and have been considered worthwhile for recording. Recording such ideas does not in any way commit anybody to actually undertaking them.

2. Validated backlog: items from the opportunity backlog which have been examined, researched and discussed enough to be considered valid candidates for future development.

3. Iteration/Sprint backlog: the work that will be attempted in the current iteration.

While the iteration/sprint backlog plays the same role as it ever did – setting the agenda for the next iteration – splitting the product backlog allows for a clear separation of “good ideas” and “validated ideas”. Moving from the former to the latter involves checking ideas with stakeholders, measuring them against the over-arching goal, considering the benefits in the market or to the organization, and perhaps conducting experiments to measure benefits.
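To make the model concrete, here is a minimal sketch in code – the class and field names are my own invention, purely illustrative. The point it captures: items only gain value (and later effort) estimates as they are promoted from one backlog to the next.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Item:
    title: str
    value_estimate: Optional[int] = None   # made when the item is validated
    effort_estimate: Optional[int] = None  # detailed only for iteration items

@dataclass
class Backlogs:
    opportunity: list = field(default_factory=list)  # epics, little breakdown
    validated: list = field(default_factory=list)    # story-sized, value known
    iteration: list = field(default_factory=list)    # task-sized, here and now

    def validate(self, item: Item, value: int) -> None:
        # Promotion is the point where research and valuing effort is spent.
        item.value_estimate = value
        self.opportunity.remove(item)
        self.validated.append(item)

idea = Item("Launch French version")
backlogs = Backlogs(opportunity=[idea])
backlogs.validate(idea, value=50_000)
print(backlogs.validated)
```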

This three backlog model naturally maps to the three planning horizons I described a while back, and to the commonly used Epic, Story, Task work breakdown used by teams:
  • Opportunity backlog contains big items with little breakdown, Epics. These may happen sometime in the longer term, over future quarters and years. They may appear on a roadmap or they may be more speculative.
  • Validated backlog items should be at the story size – small enough to be deliverable soon but demonstrating real business value. These items may be developed sometime in the next quarter so they appear on the quarterly plan.
  • Iteration backlog items are here and now, they are task sized and are in the current iteration.

There is no point in doing more work on any item until it moves to the next backlog, into the next planning horizon. At each stage some items will disappear, upon closer examination they will not be judged worthwhile.

Epics need not be broken down in their entirety before any work is undertaken. Ideally the first stories broken out of any epic would be experiments which could test technology options and, more importantly, market and client reaction.

For example the first stories for an epic entitled “Launch French version” might describe a series of data gathering experiments to assess the size of the market and opportunities. Translation, payment and such can wait, they might never need doing.

As for estimation:
  • Items in the Iteration backlog may well have detailed effort estimates arrived at from work breakdown if needed
  • Items in the Validated backlog probably have ballpark estimates and, very importantly, value estimates (how much is this item worth?)
  • Items in the Opportunity backlog might have effort scores taken from historical averages (why spend time estimating something that might not happen?) and can only move to the Validated backlog when a value estimate has been made and the item has been judged as worth doing relative to everything else

Three backlogs. What do you think? Good idea or silly?


Do it right, then Do the Right thing online

I was at the Nordic Developer Conference (NDC) in Oslo last week (well, me and about 1700 other people!). I took a risk, I spoke to the title “Do things right and then do the right thing” (now on SlideShare). I was deliberately challenging accepted management practice.

In many ways this was an experiment in itself; the presentation is based on a feeling I’ve had for a few years now – dating back to the Alignment Trap – but one I had never really sat down to argue in full. In part I was looking for people in Oslo to shoot down my argument – or support it! I’m glad to say that so far most of the comments I’ve had have been broadly supportive.

Basically my argument is this:

  • It is incredibly difficult to know “the right thing”
  • Therefore it is better to try something, get feedback (data etc.) and adjust ones aim; and repeat
  • However, in order to do this one needs an effective delivery machine
  • In other words: you need to be able to iterate.
Thus organizations need to “do things right” (be able to iterate rapidly) and then they can home in on the ultimate “right thing.”

I used the analogy of trying to hit a target with a gun. First you shoot, then you use what you learned to adjust your aim and shoot again. Again you use the feedback to refine your aim.

You might choose to use a machine gun, rapid fire, rapid feedback. Cover the target in approximate shots.

Alternatively you might use a sniper rifle. One perfectly aimed shot and bang! Hit first time.

While I’m sure many organizations would like to think they are using a sniper’s rifle (one bullet is cheaper than many) I’d suggest that many are actually using something with significantly poorer performance: an older weapon, one without sharpshooter sights, one which requires manual reloading, one which is prone to breaking.

As a result they try to make every shot count but they just aren’t very good at it.

While one might like to think that organizations make a rational choice between these different approaches I think it is more a question of history – or path dependency to use the economic term. You use the weapon you have used before. You use the weapon which your tools provide for.

I’m not saying this argument is universally true but I think it increasingly looks like the logical conclusion of Agile, Scrum and Lean Start-Up. I am also saying you need to try many times.

In technology our tools have changed: when teams worked with Cobol on OS/360 making every shot from your 1880 vintage gun count was important. But for teams working with newer technologies – especially the likes of Ruby, PHP and such on the web – then a trial and error approach might well be the best way.

Possibly the right answer is nothing to do with your organisation but rather a question of your competitors. You want to choose a weapon which will allow you to out-compete your competitors, perhaps through asymmetric competition.

Of course the key to doing this is to work fast, and fail fast, and fail cheap. If you take a long time to fail, or if it is expensive then this technique isn’t going to work.

After the presentation Henrik Ebbeskog sent me this blog post via Twitter which is a beautiful example of exactly what I’m talking about: You Are Solving The Wrong Problem.

Then I went to the PAM Summit conference in Krakow where James Archer included a wonderful quote in his presentation:

“Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everyone gets busy on the proof.” – economist J.K. Galbraith.

This adds another dimension to my hypothesis: when we invest a lot of time, energy and/or money in determining what the “right thing” to do is, it becomes more difficult to change our mind.

For example, suppose we invest a lot in building a new feature on our website, and suppose that in the week after launch the new feature performs poorly, or at least less well than other recent new features. Those who have invested a lot in arguing for the feature, those who feel closely associated with it, may be more prone to say “Give it another week, let’s get more data” while those who are more distant might be prepared to review what is happening sooner.

In order to accept “failure” we cannot invest too much of ourselves in any feature, shot or attempt.

I’m still not completely convinced by my own argument. The management doctrine “Do the right thing, then do it right” is so strong in me – and so logical. Although I can build a reasonable argument for “Do it right, then do the right thing” I’m still uncertain that I believe it.

What do you think?

Truly, I’d love to have more comments on this.



A role for project managers and business analysts in Agile?

I was in Krakow last week at the PAM Summit of the Project Management Institute (PMI) and the International Institute of Business Analysis (IIBA). I delivered a presentation entitled “Is there a role for Project Managers and Business Analysts in Agile?” – now online via SlideShare. Those of you in the London area can go one better: I’ll be repeating the presentation at Skills Matter next week. (It’s free, it’s 6.30pm on 24 June; sign up on the Skills Matter website.)

For those who can’t I’ll talk through the question a little…

It is helpful to reference the “Iron Triangle” or “Triangle of Constraints” or “Project Constraints Triangle” – call it what you will, here it is again:

Many readers may have seen a slightly different version before. My version has neither cost nor quality because I don’t believe these represent trade-offs:

  • Costs for a software development team are overwhelmingly wages: the more people you have, and the longer you have them for, the more money you spend. So Cost = People × Time. People and time are both on this triangle so you can calculate cost, or vice versa.

So my preferred version looks like this:

Which leaves Features/Functionality/“the what” as the only place where we negotiate.

Which makes answering the original question very easy for BAs. Business Analysts have skills around exactly this question. There are a number of ways a BA can help: perhaps as a proxy customer, perhaps as a Tactical Product Owner, perhaps as a detail guy, or perhaps working with testers. Every team should have one (almost).

For project managers things are decidedly more complex. Much of their traditional work around “when” is redundant, and since we are aiming for stable teams and sizing work to the time available, work around “who” is also gone. I can imagine, indeed I have seen, small teams dispense with project managers entirely. You can be successful without a project manager.

However for larger teams there is probably a role that needs filling. At a very basic level there is administration and reporting, there might be co-ordination tasks too, they might work with the BA/Product Owner around stakeholders, and when there is a client/supplier relationship both sides will probably want some managers “managing” the relationship.

But while there is management work to do I do not see a role for “projects.”

Successful software lives, it changes, it requires enhancements and adaptations. Only dead – unused – software doesn’t change. Developing a software product is like building a company: if people stop asking for things you are out of business.

Which conflicts with the PRINCE2 definition of a project: “A temporary organization that is needed to produce a unique and predefined outcome or result at a pre-specified time using predetermined resources.”

Successful software does not have a pre-specified end date, indeed it can be incredibly hard to determine when many projects actually began!

Successful software isn’t temporary and the organizations which support/service it shouldn’t be. They may grow or shrink with time but we should aim for stability.

And since Agile embraces change the outcome isn’t pre-defined either. Indeed since all successful software changes in ways which are difficult for the originators to see there are only short term outcomes.

To me it is obvious that software development does not, and never really has, conformed to the project metaphor. Indeed I think using the project metaphor seriously damages software:
  • It leads to endless, pointless, discussions about “when will it be done”
  • It leads to governance processes that attempt to finish things which are not finished
  • It leads to short term thinking over things like quality
  • It leads companies to disband successful, performing teams – a condition I have termed Corporate Psychopathy.

BAU – business as usual – isn’t a dirty word, it is the normal state of affairs. Supporting software, adding features, fixing bugs, enhancing products is Business As Usual, and we should be proud of that.

Then if we specifically look at Agile ways of working many of the traditional assumptions of project management are invalidated:
  • Teams are encouraged to self-manage: determine the details of the work to be done and decide how best to tackle it themselves
  • Agile teams are inclusive and non-hierarchical
  • Agile teams communicate peer-to-peer rather than through some centralised communications hub (i.e. a manager)

In short, Agile assumes a “Theory Y” way of working, not the “Theory X” which is implicit in too many project management texts.

And if you think I am radical then let me tell you I am a moderate, there are those who will go further. Look at my Scrum-Scrum-Scrum post last year and the discussion which followed, or watch Christin Gorman’s video “Adding a project manager to a Scrum team is like making cake with mayonnaise.”

The net upshot of all this is simple: Project Managers need to reinvent their role. And the reinvented role probably doesn’t include the word “project”.

For any software development team – especially one that wishes to be considered agile – the default choice is probably: no project manager. The onus is on the role holder to demonstrate how they add value to the team and to the wider organisation.



The real lessons of Lego (for software)

How many out there have heard this: “What we want is software which is like Lego; that way we can snap together the pieces we need to build whatever it is we want.”

Yes? Heard that?

Let’s leave aside the fact that software is a damn sight more complex than small plastic bricks; let’s leave aside too the fact that Lego makes a fine kids’ toy but on the whole we don’t use it to build anything we use at work (like a car, bridge or house); and let’s pretend these people have a point…

We start with the standard Lego brick:

Of course there are multiple colours:

And a few variations:

Which now allows us to snap together a useful wall:

Walls are good but to build anything more interesting we need some more pieces, maybe some flat pieces:

Or some thinner pieces, or some bigger pieces:

It might also help to have some angled pieces, you know for roofs and things, and remember the slant can go either way, up or down:

I think we’re heading for a house so we will need some doors and windows:

Personally I like wheels, I like things to move, and so do my kids. So we need some wheels – different size of course, and some means of attaching them to the other Lego blocks:

If we are building a car we need to be able to see out….

Umm… my imagination is running away, we need to steer, and how about a helicopter, and a ramp to load things onto the car/boat/plane/.…

Still, something missing…. people!

Lego is not homogeneous. When you say “Lego brick software” people are probably thinking of the first 2×8 block I showed. But to make anything really interesting you need lots of other bits. Some of those bits have very specific uses.

I’ve not even started on Lego Space/Star Wars, Harry Potter Lego or whatever this year’s theme is. Things get really super interesting when you get to Technic Lego.

But there are some things that every Lego enthusiast knows:
  • You regularly feel the need for some special part which doesn’t exist
  • You never have enough of some parts
  • You always make compromises in your design to work with the parts you have

The components themselves are less important than the interface. The interface stays consistent even as the blocks change. And over the years the interface has changed: sure, the 2×4 brick is the same, but some interfaces (e.g. wheels with metal pieces) have been removed (or deprecated?) while others have been added. There is an element of commonality but the interface has been allowed to evolve.
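In software terms – my own reading of the lesson, sketched with invented names – it is the snap-together interface that stays stable while the pieces behind it vary and specialise:

```python
from abc import ABC, abstractmethod

class Brick(ABC):
    """The 'stud' interface: every piece snaps together the same way."""
    @abstractmethod
    def snap(self) -> str: ...

class StandardBrick(Brick):
    def snap(self) -> str:
        return "2x4 brick snapped"

class WheelBase(Brick):
    def snap(self) -> str:
        return "wheel base snapped"  # a specialised part, same interface

# The builder depends only on the interface, never on a specific piece.
def build(pieces):
    for piece in pieces:
        print(piece.snap())

build([StandardBrick(), WheelBase()])
```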

So the next time someone says: “We need software like Lego bricks” remind them of these points.
