Story points 2 of 4 – Duarte's arguments

This blog entry follows directly from the previous one, Story Points – Journey’s Start.

Key to Duarte’s argument, and something I didn’t originally appreciate from his initial Tweets, is this: he is not saying story points are rubbish, forget about them. What he is saying is that it is simpler, and equally accurate, to just count the stories as atomic items. This is equivalent to saying “All stories are 1 point” and being done with it.

For a mature team with a good relationship with its stakeholders I could see this working well. However, for less mature teams (who have difficulty agreeing among themselves) or a team with bully-boy stakeholders (or bully-boy anyone else for that matter) I think being able to put a higher point score on the card serves as a useful warning mechanism.

Duarte says in his blog “the best predictor of the future is your past performance!” On this I couldn’t agree more. He then poses three questions – and answers – which I think are worth reviewing.

Q1: Is there sufficient difference between what Story Points and ‘number of items’ measure to say that they don’t measure the same thing?

Here he finds there is a close correlation. I’m not surprised here, in fact I would expect this to be the case. Teams are encouraged to write small stories, in fact Scrum almost mandates this because work should be completely done at the end of a sprint. In effect there is an upper bound placed on the size of a story.

Actually, I’m not so keen on this rule. I allow work to be carried from iteration to iteration but I only allow points to be scored when the work is done. Thus I encourage stories to be completed in an iteration but I don’t mandate it. One of the exercises I do with teams on my courses actually sets out to illustrate this point.

At the very least I would expect teams to settle on an “average story size” implicitly. Notice also that the correlation applies whether all stories are of size 1 or of size 2, 3 or any other number. It’s a correlation between two series of numbers.

However, given all this Duarte has a point: if your stories are clustered around an average size then you might as well count the stories.

Q2: Which one of the two metrics is more stable? And what does that mean?

Duarte’s analysis says that both stories and story points have similar standard deviation. Thus they are of similar stability. Since these two are closely correlated this isn’t a surprise. In fact, given the correlation, it would be a surprise if one was notably more stable.
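Duarte’s stability check is easy to reproduce for your own team: compare each series’ spread relative to its mean. A minimal sketch in Python, using invented sprint numbers rather than anyone’s real data:

```python
# Compare the stability of two velocity series: story points per sprint
# versus stories completed per sprint. All numbers are invented for
# illustration only.
import statistics

points_per_sprint = [21, 18, 24, 20, 23, 19]   # hypothetical
stories_per_sprint = [7, 6, 8, 7, 8, 6]        # hypothetical

def cv(series):
    # Coefficient of variation: standard deviation as a fraction of the
    # mean, so series on different scales can be compared fairly.
    return statistics.stdev(series) / statistics.mean(series)

print(f"points spread:  {cv(points_per_sprint):.2f}")
print(f"stories spread: {cv(stories_per_sprint):.2f}")
```

If the two series are highly correlated, as Duarte found, the two spreads come out close to each other, which is exactly his point.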

Q3: Are both metrics close enough so that measuring one (number of items) is equivalent to measuring the other (Story Points)?

Duarte’s data seems to measure the same thing – again, if they are closely correlated then this is exactly what you would expect. You can write out the equation:
        Story Points ~= (Average Points per Story) x (Number of Stories)

(The ~= is supposed to mean approximately equal. Strictly the multiplier is the average story size – the slope of the fit between the two series – not the correlation coefficient, which only measures how tightly they move together.)
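A quick numerical sketch of that relationship, using invented sprint data:

```python
# Invented sprint data: story points and story counts completed per sprint.
import statistics

points  = [21, 18, 24, 20, 23, 19]
stories = [7, 6, 8, 7, 8, 6]

# Pearson correlation: how tightly the two series move together (-1 to 1).
mp, ms = statistics.mean(points), statistics.mean(stories)
cov = sum((p - mp) * (s - ms) for p, s in zip(points, stories)) / (len(points) - 1)
r = cov / (statistics.stdev(points) * statistics.stdev(stories))

# The multiplier that converts story counts into points is the average
# story size, not r itself.
avg_size = sum(points) / sum(stories)

print(f"correlation r = {r:.2f}")
print(f"average points per story = {avg_size:.2f}")
print(f"predicted points for 7 stories ~= {avg_size * 7:.1f}")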

With this out of the way Duarte moves on to consider Mike Cohn’s claims for story points.

Claim 1: The use of Story points allows us to change our mind whenever we have new information about a story
Duarte says: Story Points offer no advantage over just simply counting the number of items left to be Done.

I agree here. I’ve long encouraged teams to move away from story pointing work in the distant future. Yes I encourage them to story point some stories in the backlog – say a few months work – and story point tasks – for the next iteration. But for stuff that is “out there” or has just arisen my advice is usually: just assign it your average story point value.

In other words, use your average story point value as the multiplier. When work gets close then estimate it traditionally – you might find the value changes. When it gets really close, break it down into tasks.
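As a sketch of that advice, assuming a made-up backlog: estimate near-term stories properly, assign the historical average to everything distant, and forecast from the total. All figures here are hypothetical.

```python
import math

# Near-term stories carry individual estimates (e.g. from planning poker);
# distant stories just get the team's historical average. Invented numbers.
near_term_estimates = [3, 5, 2, 8, 3]
distant_story_count = 20
average_points_per_story = 3.0     # assumed historical average
velocity = 20                      # assumed points per iteration

total = sum(near_term_estimates) + distant_story_count * average_points_per_story
iterations_needed = math.ceil(total / velocity)
print(f"forecast: {total:.0f} points, roughly {iterations_needed} iterations")
```

When a distant story moves closer, its average gets replaced by a real estimate and the forecast simply re-sums.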

Claim 2: The use of Story points works for both epics and smaller stories
Duarte says: there is no significant added information by classifying a story in a 100 SP category

Again I agree. To be honest I’m not a fan of Epics and while some of the teams I work with use them I often encourage teams to dump them. To me an epic is just a collection of stories around a theme.

Actually, what Cohn and Duarte are saying are not at odds here. Cohn doesn’t (seem to) make any additional claims. It’s just a scaling question.

Claim 3: The use of Story points doesn’t take a lot of time        
Duarte says: In fact, as anybody that has tried a nontrivial project knows it can take days of work to estimate the initial backlog for a reasonable size project.

Here I have issues with both Cohn and Duarte.
If you are estimating stories then it does take time. Fast as it is, even planning poker takes time. However there is also a lot of design and requirements discussion going on in that activity. Therefore I don’t see this as a problem. In fact I see it as an important learning exercise.

True, on a non-trivial project it will take time to estimate a large backlog. But a) that is valuable learning and b) I wouldn’t try. I’d either estimate it in chunks or I’d apply an average estimate to work which wasn’t going to happen anytime soon – see claim #2.

I deliberately delay estimation as long as possible to allow more information to arrive and because work will change. It might be changed out of all recognition or it might go away completely.

Claim 4: The use of Story points provides useful information about our progress and the work remaining
Duarte says: This claim holds true if, and only if you have estimated all of your stories in the Backlog and go through the same process for each new story added to the Backlog.

Again I agree. However I doubt the usefulness of the concept of “work remaining”. It’s only work remaining if you think you have a lump of work to do. In my experience work is always negotiable. It’s just that people don’t want to negotiate until they accept that they won’t get everything.

One of my clients has gone through the very expensive exercise of estimating all the work they might do. Earlier this year they realised they had 3000 points to do by Christmas. They also realised they had capacity for fewer than 1000. This brought home the fact that they couldn’t do everything – something many people on the project had long known or suspected. The company are still working through this issue but at least they are having the discussion now, in March and April, not September and October.
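The arithmetic behind that realisation is simple enough to sketch. The 3000 points come from the story above; the sprint count and velocity are my own assumptions, chosen only to make “capacity of less than 1000” concrete:

```python
backlog_points = 3000              # the client's estimated backlog
sprints_to_christmas = 18          # assumption: roughly fortnightly sprints
velocity = 55                      # assumption: points per sprint

capacity = sprints_to_christmas * velocity
shortfall = backlog_points - capacity
print(f"capacity: {capacity} points; shortfall: {shortfall} points")
```

However you pick the numbers, the gap is so large that no plausible velocity closes it – which is why the conversation had to become a negotiation about scope.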

Claim 5: The use of Story points is tolerant of imprecision in the estimates
Duarte says: there’s no data [in Cohn’s book] to justify the belief that Story Points do this better than merely counting the number of Stories Done. In fact, we can argue that counting the number of stories is even more tolerant of imprecisions

Again I agree. But then, if there is a high correlation between story points and stories then this is self-evident. And again, as I said before: we need to work with aggregates and averages.

Claim 6: Story points can be used to plan releases
Duarte says: Fair enough. On the other hand we can use any estimation technique to do this, so how would Story Points [be better than counting the number of stories]

Again, agreement, and given the correlation it’s self-evident.

Duarte goes on to give a worked example in which a project does not achieve the desired velocity and gets cancelled. His story makes no use of story points, simply stories. To be honest I’m missing something here. True, the stories in his story have no points, but I don’t see where that makes a difference. What he describes is exactly the way I would play the scenario, although I would have story points in the mix.

His conclusion: “Don’t estimate the size of a story further than this: when doing Backlog Grooming or Sprint Planning just ask: can this Story be completed in a Sprint by one person? If not, break the story down!”

This is interesting because while Duarte is working at the story level this pretty closely models the way I advise teams to work. I always tell teams:

  • “I’d like a story to be small, to fit in one iteration but that isn’t always the way.”
  • “In my experience stories need to be broken down, both so they can get done but also so you get flow and as part of a design exercise.”

I then have teams break down stories – which I call Blues – into (developer) tasks – Whites – following my Blue-White-Red process from a few years back. I then put points on Whites – at this point you probably start to see why I prefer the term Abstract Points to Story Points, because Whites aren’t stories.

When a team have a feel for points then I will have them put points – the same units, just bigger – on Blues. For Blues that won’t be done for a while I’m happy to assign averages, and for Blues that are further out I’m happy to leave them unpointed.

So, thank you for reading my analysis and staying all the way to the end.

My conclusion? I think I agree with Duarte’s analysis but not his conclusions. I think I actually disagree with Mike Cohn, but I’d have to go back and look at what he says himself.

I think the way I estimate with teams – the way I used to do it when I ran teams and the way I teach clients – differs more from the mainstream than perhaps I appreciated. My method grew from my interpretation of Kent Beck’s XP planning game and velocity, but I now think my approach has drifted from this. My experience of seeing what works and what doesn’t has refined my approach.

The approach I’ve ended up with has similarities with what Duarte describes but is also different.

Despite many authors’ attempts to describe Scrum/XP planning and estimating I still find myriad minor variations. Some improve things, some don’t. Until this moment I’ve always believed my approach only differed in minor ways. Now I’m thinking….

And what of Duarte’s “harmful” claim? Actually, although he uses the word in the title I don’t see any discussion of the harm story points cause.

Story points might be pointless but do they do any harm? I don’t really see it. They may be a waste of time, and there might be more effective ways of doing the same thing, but that’s not the same as harmful. Story points might mislead, but there is little evidence here so I’ll reserve judgement on that one.

To be continued….


Story points considered harmful? 1 of 4 – Journey's start

A few months ago Vasco Duarte, with a little help from Joseph Pelrine, started a discussion entitled “Story Points considered harmful.” They, or at least Vasco, have given this as a conference keynote and blogged about it.

(Warning: this is the first of 4 blog entries and it’s quite a long post. Plus I think there are two appendix blog entries to follow up.)

Now I’ll admit, when I first heard this argument I thought “Well I can guess where they are coming from” – I have sympathy with the argument, I’ve always considered story points as suspect myself. I also thought “But I don’t think they are right”. Specifically I know a team who use story points and claim to be able to schedule delivery “to the day.”

I’ve collected some data of my own from teams I’ve worked with, am working with, or am at least in contact with, and done a bit of amateur analysis. I’m also taking time to go over Vasco and Joseph’s arguments.

This is a big topic and it’s going to take me a while to get to the bottom of the data, what I think, and the pro and con arguments. So please forgive me, this is going to take a few, possibly long, blog articles to work through.

Let’s get a couple of things on the table to start with.

Firstly, I don’t believe story points are a Scrum technique. Yes, they have been subsumed into Common Scrum but I believe they originated with Extreme Programming. Where they originated isn’t really important because I believe story point estimation and tasking (i.e. estimating work to do with story points and then scheduling a number of story points) is at odds with Scrum Commitment.

I’ve blogged about this before, Two Ways to Fill an Iteration, so I won’t repeat myself. Suffice to say, in my book commitment and story pointing are alternatives.

By the way, the name Story Points comes from Mike Cohn, I prefer to call them Abstract Points, and I’ve heard others call them “Nebulous Units of Time”.

Second, I don’t believe story points can ever be stable if you don’t have a stable team. If you remove people from a team I expect it to slow down, if you add people to the team I expect it to slow down too – at least in the short term. In the longer term you might increase capacity but frankly I don’t know how long that will take.

A few years ago I worked with one team whose story/abstract points appeared to be random. When I adjusted for changes in team staffing the average was constant. That said, I don’t expect points to remain constant; they might do for a while but I expect them to fluctuate at the very least.

It goes without saying that I expect sprint/iteration length to be stable too.

Which brings us to the third thing: points, stories and projections only work at the average and aggregate level. They are a good predictor over several sprints but they offer no guarantees for the next sprint/iteration.
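A small simulation illustrates why: individual sprints are noisy, but the noise averages out over many sprints. The underlying rate and spread below are invented.

```python
# Simulate noisy sprint velocities around an underlying rate of 20 points.
import random

random.seed(1)
underlying_rate = 20
sprints = [random.gauss(underlying_rate, 5) for _ in range(12)]

# Any single sprint can miss the underlying rate badly...
worst_single_miss = max(abs(v - underlying_rate) for v in sprints)
# ...but the average over many sprints lands much closer.
average_miss = abs(sum(sprints) / len(sprints) - underlying_rate)

print(f"worst single-sprint miss: {worst_single_miss:.1f} points")
print(f"miss of the 12-sprint average: {average_miss:.1f} points")
```

This is why velocity-based projections are useful for release planning but say little about what will happen next sprint.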

Finally, for now, something I do agree with Duarte on: we can’t estimate. By “we” I mean humans. Last year I devoted several blog entries to the subject, Humans Can’t Estimate.


Agile: Where's the evidence?

A few weeks ago I was presenting at the BCS SIGIST conference – another outing for my popular Objective Agility presentation. Someone in the audience asked: “Where is the evidence that Agile works?”

My response was in two parts. First, although it sounds like a reasonable question I’ve come to believe it is a question asked by those who don’t believe in Agile, those who want to stall things. It is rarely a question aimed at a rational decision.

Second I said: let’s turn the question around, where is the evidence for Waterfall? – as far as I know there is none, although there are plenty of cases of failed projects.

Now this was a pretty flippant answer, I was in show mode and as someone later asked: “Do you really recommend we ridicule people who ask for evidence?”, to which the answer is certainly No.

So let me try and answer the question more seriously.

Let’s start with “Agile.” How do we define Agile? Scrum? XP? Where does Agile end and Lean begin, or vice-versa? Or, as I believe, is Agile a form of Lean?

We have a definition problem. Agile is poorly defined. I e-mailed a friend of mine who is undertaking a PhD in Architecture and Agile and asked him: Can you please point me at a literature review?

(Notice, if I was a serious researcher I would closet myself in the library for days or weeks and do the literature review myself. I’m not, and I need to earn money, so I didn’t.)

His answer: “that’s an interesting question because a lot of research simply assumes that agile (as a whole) is better, without any evidence for it.”

Michael also pointed out there is a context issue here. Embedded systems? Web development? Finance? Health care?

And how do you define better? Better architecture? Better requirements? Better code quality?

And Better compared to what? Chaos? Good waterfall? Bad waterfall? CMMI level 1? 2? 3? 4? 5?

You see, context.

Despite this, one study claimed Scrum resulted in productivity improvements of as much as 600% – Benefield, “Rolling Out Agile in a Large Enterprise”. I’ve even heard Jeff Sutherland verbally claim Scrum can deliver 1000% improvement. To be honest I don’t believe these figures. If they are possible then I think it says something about the chaotic state the organisations started in. In these cases my guess is that following any process or practice would be an improvement. Standing on one leg every morning would probably have generated a 50% improvement alone.

There is a trap for Agile here. Much traditional work has defined better as: on schedule/time, on budget/cost, with the desired features/functionality. But Agile doesn’t accept that as better; Agile negotiates over features/functionality and aims for business value. A report from Cranfield University a few years ago suggested that much traditional IT work failed to truly capture business value because people focused on: time, budget, features.

Then there is a question of bugs. Traditional development has been very accepting of bugs, Agile isn’t.

Maybe asking for evidence about Agile is aiming for too much. Maybe we should look at the practices instead. Here there is some evidence.

Over the years there have been various studies on pair programming which have been contradictory. Since most of these studies have been conducted on students you might well question the reliability.

Test Driven Development is clearer. As I blogged two years ago there is a study from Microsoft Research and North Carolina State University which is pretty conclusive on this: TDD leads to vastly fewer bugs.

Keith Braithwaite has also done some great work looking at the Cyclomatic Complexity of code, and there seems to be a correlation between test coverage and better (i.e. lower) cyclomatic complexity. Keith is very careful to point out that correlation does not imply cause, although one might hypothesise that test driven development leads to “better” design.

TDD and source code are relatively easy to measure. I’m not really sure how you would determine whether Planning Meetings or User Stories worked. Maybe you might get somewhere with retrospectives. Measure how long a team spends in retrospectives, see if the following iteration delivers more or less as a result.

For their book Organizational Patterns of Agile Software Development Coplien and Harrison spent over 10 years assessing teams. This led to a set of patterns which describe much of Agile software development. This is qualitative, or grounded, research rather than quantitative, but it is just as valuable.

At this point I probably should seclude myself in the British Library for a week to research each practice. Maybe I should, or maybe you want to call me a charlatan. Next year, maybe.

If we switch from hard core research to anecdotal evidence and case studies things become easier. As many readers know I’ve been working with teams in Cornwall for over 18 months. On my last visit we held a workshop with the leaders of the companies and software teams. Without naming names some comments stood out here:

  • “The main benefit [of Agile] was time to market… I don’t know how we would have done it without Agile”
  • “Agile has changed the way we run the company”
  • “It is hard to imagine a world without Agile”

That last company is now finding their source code base is shrinking. As they have added more and more automated tests the design has changed and they don’t see the need for vast swaths of code. Is that success? Each line of code is now far more expensive and they have less. (Heaven only knows what it does to function point analysis.)

If you want something a little more grounded there was a recent Forrester report (sorry, I can’t afford the full report) which said: “Agile enables early detection of issues and mid-course corrections because it delivers artefacts much sooner and more often. Finally, Agile improves IT alignment with business goals and customer satisfaction.”

Before this there was a Gartner report which said: “It’s a fact that agile efforts differ substantially from waterfall projects. It’s also a fact that agile methods provide substantial benefits for appropriate business processes.” (Agile Development: Fact or Fiction, 2006).

But I’m back to that nebulous thing “Agile.”

All in all I can’t present you any clear cut evidence that “Agile works”. I think there is enough evidence to believe that Agile might be a good thing and deserves a look.

What I haven’t done is look for, let alone present, any evidence that Waterfall works.

Frankly I don’t believe Waterfall ever worked. Full stop. Winston Royce, who defined “the waterfall”, didn’t believe it worked either, so why should I? (You have to read to the end of the paper.)

OK, sometimes, to save myself from a boring conversation or to cut to the chase, I’m prepared to concede that Waterfall worked in 1972 on developments using Cobol on OS/360 with an IMS database. But this isn’t 1972, you aren’t working on OS/360 and very few of you are using Cobol with IMS.

(I have seen inside a z/OS Cobol IMS team more recently and something not unlike waterfall kind of worked, but the company concerned found it expensive, slow and troublesome and wanted them to be “Agile.”)

Even if I could present you with some research that showed Agile, or Waterfall, did work, it is unlikely that the context for that research would match your context.

(Thus, if you do want to stall the Agile people in your company, first ask “Where is the evidence Agile works?”. When they produce it ask “How does this relate to our company? This is not our technology/market/country/etc. etc.”)

I think it might come down to one question: Is software development a defined process activity or an empirical process activity?

If you believe you can follow a defined process and write down a process to follow and the requirements of the thing you wish to build then waterfall is probably for you. On the other hand, if you believe the requirements, process, technology and a lot else defies definition then Agile is for you.

Finally, the evidence must be in the results. Your organisation, your challenges, your context, are unique to you. So my suggestion is: make your own evidence.

Set up two similar teams. Tell one to work Waterfall and one to work Agile and let them get on with it. Come back every six months and see how they are doing.

Anyone want to give it a try?

And if anyone knows of any research please please please let me know!

(Many thanks to Michael Waterman for a very thoughtful e-mail which filled in some of the missing pieces for this blog.)


Technical debt – developer moans

A few days ago I pushed out a couple of Tweets which got quite a reaction:

“I am fed up hearing developers complain of Technical Debt as if it has nothing to do with them. Who writes this stuff?”

and

“Devs say ‘look technical debt’ like ‘look its snowing’. As if it has nothing to do with them”

It was that second one that got a hail of re-tweets.

So I thought I should add a few words of explanation here.

First it is important to say that these tweets were not aimed at, or about, anyone in particular. I’ve visited a bunch of companies and delivered a lot of training in the last few months and the topic of technical debt comes up a lot. If I had anyone in particular in mind it was a developer I worked with four or five years ago, but his sentiments are common.

Second, I’m not sounding off about developers. I love developers, I used to be one. I’ve had the same feelings myself at times, speak to people who managed me way back when and I probably moaned about the same things. (Over 10 years ago I wrote High Church C++ where I agonised about the right way to write code.)

What I am taking aim at is an attitude which says: “There is this thing called Technical Debt, we have it, it makes our lives hard, we want it to go away, but we have nothing to do with creating the problem and we need a special solution.”

It sometimes seems to me that, having labelled the phenomenon, developers are too keen to use the label. Steve Freeman and Chris Matts have tried to recast the debate in terms of real options: “Bad code isn’t Technical Debt, it’s an unhedged Call Option”. Intellectually I agree with them, but I find the explanation required for an “Unhedged Call Option” is greater than the explanation required for the original problem, so I’m not sure what the metaphor adds to the debate.

The other problem with switching to an “unhedged call option” metaphor is that it perpetuates the Quick and Dirty Myth – the idea that doing a bodge-job now will pay back in shorter delivery time. In reality the reverse is the case; I gave some statistics on this in my blogs last year about Capers Jones’ book Applied Software Measurement.

Common reasons given for the existence of technical debt are:

  • It was the developers who were here before us
  • We don’t have the time to fix it
  • The managers made us do it, or the business demands this
  • It’s the Architects

(Notice the current developers are never to blame – and as much as I don’t want blame thrown around, that’s how the debate is too often framed.)

Anyway, none of these reasons really stand up to analysis, except perhaps the last one.

Regardless of what the developers who were here before did the current developers should be addressing the issues. It is the current developers who need to live in this code, if they don’t like it they should do something about it.

Sure, there may not be time to stop the world and fix everything, but you don’t need to. You just need to fix a little bit every time. As Uncle Bob Martin says in Clean Code, follow the Boy Scout Rule: “Leave the code slightly better than you found it.” Then let the Power Law work its magic.

Secondly, there is more time if you do it right. Building quality, bug-free systems is faster than the alternative. Forget the Quick and Dirty Myth; it’s Slow and Dirty or Quick and Right.

(Sometimes I meet developers who want to “stop the world” and re-write the system. I’ve also on occasion met managers who have bought into this argument and stopped the world. This is a seriously bad idea; it’s why Netscape no longer exists: developers were allowed to re-write Navigator, and while they were doing so Microsoft attacked.

I keep meaning to blog about “version 2”; now I must, so just watch this space.

Twice I’ve come across Managers who told the team they could re-write. In the first case the managers were lying. The developers, well one specifically, held a gun to their head and said “I’m resigning on 14 February if you don’t let me rewrite”. In that situation you should let them resign, these managers were scared of the developer. As far as I can tell things didn’t work out, the developer left the company not long after and the company was sold.

In the second case the Managers were either stupid or desperate. In either case within days of starting the “Technical Improvement Programme” (TIP) they found it wasn’t that easy and things started to revert to usual.)

Getting back on topic…

Unless Project Managers and Business Analysts sneak into the building at night and change code when developers aren’t looking then managers do not write code. Nor does the nebulous unit called “the business”. It is developers hands on the keyboard, “I was only following orders” is not an acceptable defence.

There is a professionalism issue here. Software engineers need to take their work seriously and create code they are proud of. This is not to say they should start gold plating things, but being a professional engineer means you don’t do shoddy work.

Architects may be the only defence I might accept: however this defence is only applicable when the organisation and architects are committing an even bigger sin, namely Architects who Don’t code, or Architects who don’t understand the system – usually these two go hand in hand.

If the architect(s) code then they are part of the development team; they see the same problems, they have the same complaints. Now, it is just possible that the architect does code and isn’t very good. Or rather, the architect’s understanding is different to that of the developers, which might point to issues of recruitment, the architect’s style or bad team dynamics.

The reason I might accept this defence is that it points to an even bigger problem for the organisation, and it does imply engineers are pulled in opposite directions.

Interestingly Ron Jeffries replied on Twitter, and said “Ward [Cunningham] originally defined tech debt as reflecting later learning, not available at the time of construction.”

I think it’s time we returned to this definition. It’s not about other developers, the business, architects or others creating technical debt for poor developers. It’s about developers improving their own understanding. It’s about the need for refactoring. And it most definitely is the developers’ own responsibility and professionalism.


Heresy: My warped, crazy, wrong version of Agile

I increasingly feel that the way I interpret Agile, the practices and the processes, is different to the rest of the world. Perhaps this is just self doubt, perhaps it is because I started doing Agile-like-things before reading about XP or Scrum, perhaps because my version has always been more informed by Lean, perhaps because I have never achieved Certified Scrum anything status, perhaps because I’ve never worked for ThoughtWorks, perhaps because I hold an MBA (and thus have an over-inflated opinion of myself), or perhaps I’m just wrong.

Yet clients of mine report some success (I even have a couple of case studies). Perhaps this was despite me rather than because of me. In the past I’ve described two of my own Agile systems – Blue-White-Red and Xanpan. These are my attempts to describe how I see development working without being constrained by other systems (sometimes I think of them as own-brand-cola methods).

Here is where I feel I differ from the Agile mainstream, or perhaps, just Scrum:

  • I don’t think it is wrong to carry work from one sprint/iteration to the next
  • I don’t believe in Scrum Masters; where Scrum Masters exists I think they are a kind-of Project Manager; and I believe Scrum Master Certification is a con (but I would, wouldn’t I?) that is potentially damaging the industry and Agile
  • I don’t believe team Commitment works, it’s too open to gaming
  • I don’t think Agile has cracked the requirements side of the business, we’re working on it, things are getting better
  • I don’t believe Agile has cracked relational database management systems; but I think much of the problem lies with the DBMS vendors
  • I don’t believe Agile Coaching is different enough from Agile Consulting to justify the name
  • I don’t really believe in offshore-outsourcing; OK, it can work but the companies that use it seem to be those who shouldn’t and those who are equipped to use it don’t need to
  • I don’t believe you can tell how long a piece of work will take (or how much it will cost) until you start doing it. And I believe it is wrong to pretend you can.
  • I don’t believe Scrum (specifically) really understands the requirements side; and I believe the Scrum Product Owner is a pale shadow of the Product Manager role which is (often) needed
  • I don’t believe in One Product Owner to Rule them all; on a large development effort the Product Owner role needs to be refactored (damn, I need to blog more about that)
  • I don’t believe all companies can, or will, make the transition to Agile; but that many of these companies will stagger on for years while more Agile competitors take their market
  • I don’t believe Agile will ever work in Cobol teams, or not Agile-as-we-know-it; anyway, Cobol teams which still exist are highly adapted to their environment.
  • I believe Agile is a dirty word in some circles and has been used and abused; but I don’t believe there is an alternative (at least not right now)
  • I don’t believe the Dreyfus model of learning is the right model for Agile, I think it makes work for consultants and creates Learned Dependency, I prefer the Constructivism model

It’s not all bad, there are some things I believe:

  • I believe Agile means “Better”
  • I believe the Agile Manifesto is a historic document like the US Constitution, and like the constitution we can read all sorts of meaning into it, depending on who you are you read different things. But there is no Supreme Court to arbitrate on whats-what. So Agile is “what I say it is” – or maybe, like “art is what artists do”, “Agile is what Agile advocates say it is”
  • I believe Agile is, and always has been, about flow; some people might not get it, and some in the Kanban crowd might have come to it late, but it’s always been there
  • I believe in breaking User Stories down – I mean, I’d love it if teams didn’t have to break them down and I aim to take them there one day, but it’s a long journey
  • I believe the Product Owner role is far more important than the Scrum Master
  • I believe “Project” is an accountancy term that has wrongly become attached to development work
  • I believe in the Planning Fallacy
  • I believe Developers are the centre of the process, ‘Work Flows Inward’ as the Pattern says
  • I still believe in Story Points, or rather, the things I call Abstract Points
  • And I believe Abstract Points should be assigned to tasks; I don’t believe in using different currencies for tasks and stories (i.e. don’t use hours for tasks and story points for stories)
  • I believe planning meetings, work break down and assigning points is as much about design as it is about work scheduling
  • I believe engineering quality is essential to achieving Agility
  • I believe in Software Craftsmanship and don’t think Agile will ever work if quality isn’t fixed
  • I believe every organisation has to define its own version of “Agile” – you can’t take Scrum (or any other method) off the shelf
  • I believe too much advice and consultancy can stifle an organisation, I believe in light-touch consulting to help companies find their own path
  • I believe that in many companies – corporate IT departments specifically – the right approach is to “do things right, then do the right thing”
  • I believe when you apply Agile thinking outside of IT it is Lean
  • I believe speaking out against Scrum is bad for the reputation and the Scrum Mafia will come after you in a dark alley

Gee, this starts to sound like Allan’s manifesto! Anyone want to sign? 🙂
– OK, the last point was a bit of a joke 🙂

I’ve linked to some blogs and such where I have previously discussed these items, and I might expand on some in future. Many of the issues raised here are complex and I really don’t have the time to explain them right now, so next time you see me ask me to discuss.


The Meter: Tom Gilb's greatest invention

If you’ve ever found the time to read Tom Gilb’s work – and I’m thinking specifically of Competitive Engineering here – you’ll know that his work is packed full of good ideas. So many good ideas in fact that it can be hard to read his book – something I think he admits himself in the opening pages of Competitive Engineering.

One of his ideas is that of a Meter. This always confuses me a little, even now…

You: A Metre you say?
Me: No, not a metric unit of distance, a Meter, as in a Gas Meter
You: What, how can a Gas Meter help software teams?
Me: No, the idea of measuring things; scrub that, say a Ruler
You: A ruler? Centimeters and inches?
Me: No, A yard stick
You: Distance again?
Me: No, A unit of measurement and a tool for measuring
You: Measuring what?
Me: Yes, Measuring what, that’s Tom’s great invention, we need to define measuring systems

Tom’s great invention is: define a mechanism for measuring the thing which is important to you.

OK, Tom takes the idea further with Planguage: he says we should define the meter, the starting value and the desired value. But forget values; it’s the measurement unit itself which I think is too frequently overlooked and worth more attention.

The thing is, the unit of measurement, and the means of measuring it, is implicit in a lot of work: it’s in Kaplan & Norton’s Balanced Scorecard, it’s in Beyond Budgeting, it’s in Lean Start-Up and elsewhere.

But, as far as I know, only Tom calls the Meter out as an idea in its own right. In fact, I suspect Tom might not have realised the significance of this.

Once you know what the right Meter is it becomes – relatively – easy to measure things. So you can measure the starting point, you can set a target, and you can measure how far you have got.

We are surrounded by measurement units and meters which already exist:

  • Currency: Pounds, Dollars, Euros, etc. etc.
  • Accounts: Profit, revenue, costs
  • Website hits: visitors, unique hits, conversions, etc. etc.
  • Time: seconds, minutes, days….
  • Distance: inches, miles, centimetres, meters and kilometres
  • Location: Latitude and Longitude
  • Weight:…..

And we tend to use these measuring units and their associated meters again and again.

New meters are not so common but they occur, e.g. Eric Ries in Lean Start-Up introduces a new meter: Cohort Analysis.

But – big but – if we use inappropriate measuring units and meters it gets hard, and it potentially goes wrong. (Another one of Eric’s points.)

Again, as far as I know, Tom is the only authority who says: define your own unit and define your own way of measuring it.

Now, as far as I can tell Tom is a master of this; the rest of us… well, most people don’t get it and try to use pre-defined measurements and Meters, and even those of us who “get it” struggle with it.

So, we need to start inventing new units of measurement and new ways of measuring things. We need to get good at both inventing these things and sharing them. History shows many of these measuring units are useful to other people.

Second, as more of these units are recognised we will have more mechanisms – and language – with which to implement both our own tools and tools like the Balanced Scorecard and Lean Startup.
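To make the idea concrete, here is a minimal sketch of a Planguage-style definition in Python. The field names and the example meter ("Support responsiveness") are my own illustration, not Gilb's exact notation: the point is simply that the scale (unit) and the meter (how you take the reading) are written down explicitly, alongside the starting and desired values.

```python
from dataclasses import dataclass

@dataclass
class Meter:
    """An illustrative Planguage-style measurement definition."""
    name: str        # the thing we care about, e.g. "Support responsiveness"
    scale: str       # the unit of measurement, e.g. "hours from ticket opened to first reply"
    meter: str       # how we actually take the reading, e.g. "weekly report from the ticket system"
    baseline: float  # measured starting value
    target: float    # desired value

    def progress(self, current: float) -> float:
        """Fraction of the journey from baseline to target covered so far."""
        if self.target == self.baseline:
            return 1.0
        return (current - self.baseline) / (self.target - self.baseline)

response = Meter(
    name="Support responsiveness",
    scale="hours from ticket opened to first reply",
    meter="weekly report from the ticket system",
    baseline=48.0,
    target=4.0,
)
print(round(response.progress(26.0), 2))  # 0.5 – halfway from 48 hours down to 4
```

Once the scale and meter are pinned down like this, measuring the starting point, setting a target and tracking progress all become mechanical.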


Corporate Psychopathy

The Oxford Dictionary of English on my Kindle defines:

Psychopathy n. mental illness or disorder.

Psychopathic is more interesting:

Psychopathic adj. suffering from or constituting a chronic mental disorder with abnormal or violent social behaviour

There is a recurring pattern of behaviour I have observed in corporate environments which I have taken to calling Corporate Psychopathy; maybe you’ll recognise it if I describe it.

  • A company brings together a team to do some work. At some date they decide the work is “done” and disband the team. They often call this work “a project” – projects by their definition have a start date and an end date, and involve a temporary organisation unit; i.e. a team is brought together at the start, they do some work, then the team is disbanded.

I think this state of affairs is crazy. So crazy I believe it is abnormal social behaviour.

A team is a social unit. When was the last time you dumped all your friends and formed a new group?

Think of the high performing teams you know: sports teams or successful political parties. They may change players from time to time, even key players, but the winning team usually stays together from one victory to the next.

By contrast, think of the teams who are less successful. How often do they change personnel? How often do they work together?

Anyone who has studied team dynamics will probably have come across Tuckman’s “Forming, Storming, Norming and Performing” model. The idea is that when you bring a team together they pass through each phase as they learn to work effectively together.

If you have an effective team – one that has passed through forming, storming and norming and is now performing – why would you break them up? Let alone break up one team and create a new team which has to pass through three phases to get to the productive fourth?

It doesn’t make sense.

Just to make it worse, when corporations break up teams many of the people leave forever. Companies which use contract staff release the contractors, who will go and work somewhere else. Even permanent staff may well find themselves released to a pool, or to previous roles, and the same people are unlikely to work together again. Knowledge evaporates in the process.

If that isn’t enough think of the admin overhead in managing all this.

It also makes the start of a piece of work incredibly difficult because you don’t know how effective a team will be or when they will break through into Performing. And if you are following a development method where you measure capacity (velocity) and benchmark against past performance you can’t use this method for the first few weeks.

It’s psychopathic: anti-social, performance reducing, expensive, undermines forecasting and increases risk.

Put it another way: It is plain stupid.

My solution is to keep the performing team together as much as possible and bring the work to the team. Have the team move from one piece of work to another together.

This parallels the way we timebox work. In timeboxing we fit the work to the time allowed – a two week sprint or a quarterly release. Similarly we should bring the work to the team.

Of course teams might not have very clear cut switch over dates. Perhaps they have to ramp-down one stream of work while ramping up another. More complex to manage.

Or perhaps a performing team splits into two – like cell division. One part continues working on the first piece of work while the second part starts something new. This way the team can carry the culture from one piece of work to another.

To do this it might be necessary to let the team grow organically, split the team, then grow both sides organically.

The point is: there are ways of managing this. You don’t have to split teams up and form them all anew.


Xanpan: refining the process

In my last blog entry I discussed a Xanpan board layout which allows for planned and unplanned work to co-exist in one workflow. Here is a picture of the board I had just put together with a team in Cornwall:

This might be the perfect board layout for the team forever. More likely it can be improved over time – in the words of Kevlin Henney, it is a Stable Intermediate Form.

I think of Xanpan boards like the Pompidou Centre in Paris: they externalise the plumbing – in this case the workflow and the work in progress. You can see the stuff that is normally hidden inside.

My rules for Xanpan boards are quite straightforward:

Stage 1:

  1. Model the current workflow on the board: avoid changing it too much to “how it should be” but you will probably need to modify the workflow a little in order to be able to model it.
  2. Make sure you talk through various scenarios for work to make sure you have a workflow that will, well, work. As I said in the original Xanpan blog post columns normally come in pairs: queue then work.
  3. Operate the board for at least one iteration (several weeks) and learn from it.
  4. Reflect and refine the board: in doing so you will modify the workflow, this should be an improvement.
  5. Operate the board some more and start gathering data, qualitative and quantitative, about the flow of work.

This alone might improve your team’s performance: you have visibility of the work in progress, you can reason about it, people are better informed, you will make better decisions. But this will not solve all your problems, nor is it the end state, it is just the beginning.

Stage 2: (Assuming you want to improve the throughput)

  • Think of the board as a pipeline: there are two ways to improve throughput, make the pipe bigger or make things move through the pipe faster.

First seek to move things through the pipe faster:

  1. Look at the blocks: why are they blocking or impeding? What can you do about removing them, not just on this occasion but permanently?
  2. Where are the queues building up? Can you rebalance the board – or rather how your people (and possibly other resources) are deployed – to even this out?

Continue reviewing the board and looking for improvements for ever. But if the time comes when you want a bigger pipe:

  • Add people to the team gradually over time: adding people slows down the existing team, the bigger the team the smaller the proportional slow down, above all else: avoid foie gras recruitment. Again this is likely to be a long term activity.

Finally:

  1. Look to level workflow: reduce the peaks and move work to the lows.
  2. Look to increase the value of the items moving through the pipeline – more and better business analysis and product management

Those are the rules of thumb. Of course there are exceptions, of course there are other issues, of course they don’t cover every eventuality.
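The pipeline framing above can be put in numbers. Little’s law – my addition here, not something from the original post, but a standard result in the Kanban world – says average throughput equals average work in progress divided by average cycle time. It shows exactly the two levers described: move things through faster (shorter cycle time) or make the pipe bigger (more WIP, which usually means more people). A minimal sketch:

```python
def throughput(avg_wip: float, avg_cycle_time_days: float) -> float:
    """Little's law: items finished per day = average WIP / average cycle time."""
    return avg_wip / avg_cycle_time_days

# A team with 6 cards in progress and an average cycle time of 3 days
# completes about 2 cards a day...
print(throughput(6, 3))   # 2.0 cards/day
# ...cutting cycle time (removing blocks, rebalancing queues) raises throughput:
print(throughput(6, 2))   # 3.0 cards/day
# ...as does a bigger pipe (more WIP, gradually more people):
print(throughput(9, 3))   # 3.0 cards/day
```

The numbers are invented for illustration; the point is that the board data gathered in Stage 1 gives you the WIP and cycle-time figures to reason with.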


Xanpan: planned and unplanned work

I’m on my way back from another visit to the Cornish Software Mines. I’ve been with four different companies this week, laid the first plans for Agile on the Beach 2012 conference and got to know Sebastian Bergmann who has been visiting the mines this week working with the PHP teams.

One of these companies is new to the Agile programme, we did training for the entire company last month and this week I was there to get them started for real. This meant we had to design their board layout and talk about workflow. Once again I found myself advocating Xanpan, the cross between Extreme Programming and Kanban.

The board layout – effectively the workflow – is one which recurs in Xanpan systems and one I would like to discuss here. (I’ll explain the layout here and in a second blog entry I’ll post a picture and talk about the future.)

The problem companies/teams face is: they have planned and unplanned work to do. Typically this is some project they are working to deliver but they need to work on support issues – bug fixes – too. I say that, but it is not just support issues; sometimes it is clients who shout very loud, or managers who can’t say no, or customers who refuse to wait a few days – indeed sometimes to make them wait would be the wrong thing to do.

The Scrum answer to this is to stamp on the unplanned work. The team is locked in – remember the commitment and the Sprint goal. Unplanned work is a problem the Scrum Master should stop – or it means declaring an Abnormal Termination of Sprint. Particularly in small companies this isn’t a very useful answer; the truth is things are more complicated than that, planet-Scrum is too simple.

The ultimate solution is to look at the reason these unplanned items are arising and try to address the root cause. In time it might well be possible to do that but right now we don’t want to block the team’s transition. Even in the long term this might be the way of the world for such a team.

The Kanban answer would be to go in the other direction: run all work as “unplanned” and just restock the to-do queue as needed. However this robs the team of the rhythmic certainty of a Sprint. Many of the teams I see need to move away from seat-of-the-pants decision making and knee-jerk reactions. Adopting regular iterations builds in decision making points and helps individuals understand their roles.

So: Xanpan, keep iterations/sprints but allow for unplanned work. To do this we design a board which begins with the three columns:

  • Planned
  • Unplanned
  • Prioritised (usually this queue is limited)

The next column is usually something like:

  • WIP: In development (usually WIP limited)

The team start Sprinting as usual: they hold a planning meeting, review Blue Cards (Business requests, User Stories) and if necessary break them down to White Cards (tasks). These then populate the Planned column. These blues are probably drawn from a traditional product backlog.

At any time work can be added to the Unplanned column. The column is unlimited.

A few minutes before the morning stand-up meeting around the board, the person with authority over the Prioritised column reviews and populates it. This could be the Project Manager, Business Analyst, Team Leader or someone else; the role and title don’t matter, what is important is that this person has the authority to do this. On one team this was the joint responsibility of the Project Manager and Business Analyst.

This person looks at the work still in the Prioritised queue, and they review the Planned Work and Unplanned work. From these sources they decide the priorities for today. When the team gather they can see what the priorities are. For the sake of simplicity it normally makes sense for the Prioritised queue to be limited and for the priorities to be ordered: 1, 2, 3, up to the limit.

If need be priorities can be changed during the day: maybe something even more urgent arrives – or goes away, maybe the prioritised queue empties, or some such.

Effort estimates are normally assigned to planned work during the planning meeting. For unplanned work some teams don’t bother adding estimates, some add a quick estimate from the person who picks up the card when they pick it up, and some put an effort estimate on when it is complete.

Importantly, the original source (planned or unplanned) is recorded on the card – maybe a different colour card, maybe a dot, a word, whatever. At the end of the Sprint the team can review how much planned and how much unplanned work was undertaken.

That’s it really: planned, unplanned and prioritised. The first two are effectively queues for the third. Somebody, or bodies, are responsible for balancing priorities.

It is possible that this workflow/board layout is actually a transitional layout. After running with this flow for a while there will be data on how much unplanned versus planned work actually occurs.
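The end-of-sprint review – how much work came from each source – can be sketched as a simple tally. The card records and field names below are hypothetical, just to show the shape of the data: each completed card carries its recorded source, and cards without an estimate count as one item.

```python
from collections import Counter

def sprint_split(cards):
    """Tally completed cards by their recorded source (planned or unplanned)."""
    totals = Counter()
    for card in cards:
        # cards without an estimate count as 1, matching teams that don't bother estimating
        totals[card["source"]] += card.get("points", 1)
    return dict(totals)

done = [
    {"title": "Blue: invoice export", "source": "planned", "points": 5},
    {"title": "White: export UI task", "source": "planned", "points": 2},
    {"title": "Urgent bug fix", "source": "unplanned"},
    {"title": "Client data patch", "source": "unplanned", "points": 3},
]
print(sprint_split(done))  # {'planned': 7, 'unplanned': 4}
```

A few sprints of this data is exactly what tells you how much capacity to reserve for unplanned work, and whether the transitional board layout needs revisiting.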


Retrospective Dialogue Sheets: feedback & updates 2 of 2

This is the second of two blog entries about the use of dialogue sheets. The first entry contained some simple statistics and findings from recent experiences and feedback from users. This entry discusses a few findings which deserve longer comment.

Those who use the sheets consistently report that they engage more team members (everyone gets to speak). Several people report that quieter team members are more likely to contribute in a dialogue retrospective than in a regular retrospective.

Another common finding is that energy levels are higher, there is more discussion and removing the facilitator changes the focus of the retrospective.

These are the results I hoped for when I created the sheets. Yet, to some degree I think these findings are simply the result of changing the format – doing something different. Perhaps a team that uses dialogue sheets for every retrospective will find that in time some of the energy is lost.

Right now I don’t know, I don’t have enough data. However, I have always said these are one technique and I encourage you to use them as one of several retrospective approaches.

A few people report that they don’t like the idea of removing the facilitator. This dislike has stopped them from trying the dialogue sheet. While I understand this fear I hope they will try them all the same. After all, it is the nature of Agile to experiment.

My advice: try a dialogue retrospective and see what happens, then decide if you want to try again.

One person did report that they felt that without a facilitator some issues were not explored as they might have been. I fully understand this finding although it does cause me to ask: were some areas explored which might not otherwise have been? Again this could be an argument for having some facilitator-less retrospectives and some with.

There is a strange case reported from a company I will not name. Here the management vetoed facilitator-less retrospective. My correspondent doesn’t understand why and I am a little baffled. This gets curious when you know the name of the company concerned. Someone else from the same company replied independently (different country) and has had success with the sheets. I’ve put them in touch and hope this will be resolved.

Interestingly, a previous correspondent told me that they kept a facilitator even when using the retrospective dialogue sheet. This hybrid could address both management concerns and the fear of missing topics. I think there is more experimentation to be done here, perhaps using the dialogue sheet to start the retrospective and then having a facilitator follow up. Or using a dialogue sheet retrospective to identify issues and then dealing with them in detail in following retrospectives.

Finally, two unexpected findings which carry lessons for those doing regular retrospectives.

All the retrospective sheets include Norm Kerth’s Prime Directive:

“Regardless of what we discover, we must understand and truly believe that everyone did the best job he or she could, given what was known at the time, his or her skills and abilities, the resources available, and the situation at hand.”

To retrospective facilitators this is a holy commandment, something we hold close to our hearts. Yet left alone it is interesting to see how groups react to it.

While one correspondent reported “Kerth’s prime directive is also in our company’s culture, therefore there is no need to remind people of it, it comes natural for us.” Another reported:
“One group poked fun at the “prime directive”, but the other didn’t”.

My own experience is similar. Some teams – usually those who are familiar with the directive – just accept it. One team commented that although they knew it they tended to forget it and it was a good reminder. However, several teams have agonised about the directive, they stall at what should be a routine reminder. Sometimes it is obvious that team members implicitly do blame individuals for problems they face.

I think this behaviour says something about the company, the culture and the team’s familiarity with conducting retrospectives. It is, perhaps, the one place where I would like a facilitator on hand. However, intervening at such an early stage in the sheet would compromise the approach.

Perhaps the lesson here is that the dialogue sheets work best when teams have trust, and accept the Prime Directive. But then, isn’t that true for all retrospectives?

The other lesson is: while retrospective facilitators hold Kerth’s directive close to their heart the same isn’t true for many others.

The second unexpected finding came at a company in the North West of England. I visited and observed a retrospective using dialogue sheets. There were over a dozen people so we divided into two groups each with a retrospective dialogue sheet.

Unusually the two teams used different sheets – I think one had a T1 and one had a T3, so a similar format but slightly different questions. The first finding here was that using different dialogue sheets not only works but can help add insights.

The second, much more significant and unexpected finding was a result of who joined each group. I thought I had divided the groups randomly but as it happens the majority in one group were developers and the majority in the other group were Business Analysts.

At the end of the exercise one of the Business Analysts commented: In a normal retrospective the developers dominate (there are more of them) and so the retrospective usually talks about issues of concern to developers, the issues of concern to BAs are not usually discussed.

It sounds obvious once you see it: the largest group will tend to dominate a retrospective.

However, separating them out, as we did here, gave both groups an opportunity to talk about their concerns. Now for the really interesting bit.

The two groups were retrospecting the same time period but they approached it from different points of view. Some of their conclusions were different but some were the same. To my thinking, when different people, coming from different starting points, come to the same conclusions, then those conclusions have added weight.

Overall then: Retrospective Dialogue Sheets are a success.
