My previous post on backlogs (Backlogs, #NoBacklog(s) and comfort blankets) generated a lot of attention, including a comment from Derek Jones – his is one of the blogs I read most often. I thought I’d post my reply as a full post, so here goes:
DJ: “Why is a backlog bad? Isn’t it better to have some idea of what work needs to be done, and at least it shows that work is waiting to be done.”
As I understand it, Mary’s comment that backlogs are a problem is based on inventory thinking in Lean. I think she was speaking in generic terms, saying: “Lean thinkers see backlogs as a problem, so maybe having a backlog is not a good thing.”
In a software process, backlog work requests are akin to supplies delivered and held in stock awaiting production. Although they don’t take up physical space (and therefore don’t incur storage costs), software requests do increase cognitive load because they occupy mental space – if only as something to worry about.
Part of the logic of Lean’s Just-in-Time approach is to “lower the water level” and make problems more visible. The same is true of a software request backlog: all those backlog items hide problems; sometimes items contradict one another, and sometimes, as I suggest in my post, they distract from the overall goal.
As for knowing the work that is coming, I’m not sure that is a good thing: again, this increases cognitive load, and when the backlog is running away its content is not a reliable indicator of what might happen in future. I’d also add that I’m not convinced software engineers do a better job by deliberately designing for the future; in my experience, an awful lot of code built “for future change” ends up bloated with unused options for a future that never happened, options which hinder the future that does happen.
Future plans can also distract from what is valuable and needed now. The more developed a plan for the future is, the harder it is to walk away from it when needs change. That is not an argument for no planning or no plans; it is simply to say that one has to balance both sides.
DJ: “Now, if the backlog just grows and grows, and random items are selected for implementation. That’s not good, but the problem is with how the backlog is being managed.”
Let me turn this around: I am not saying backlogs that are under control are a problem. If a team has a “tame backlog” which is not too large and is only growing at a pace noticeably slower than “velocity”, then everything is good. But such backlogs seem to be few and far between.
My conjecture is that many organizations have “runaway” backlogs, and in such environments a better solution would be to move to a just-in-time backlog generator and sideline the backlog. One could go a step further and say: even when the backlog is tame, it can be better to use a just-in-time backlog generator than a (semi-static) backlog.
DJ: “How do we know whether items in the backlog are being consistently prioritised or selected at random?”
We don’t. In my experience, large backlogs are seldom prioritised with anything more granular than MoSCoW rules (Must have, Should have, Could have, Would like to have – rather than the rules of spy tradecraft) – in which case 60% of items are rated high, or Must. Within those priority #1s there may be a second set of priorities at a more granular level. When this happens, the majority of the “musts” will be rated low; in effect they are “nice to haves”. Among the few genuine high priorities, the actual priority is not stable: “decibel” management means they are regularly leap-frogging one another to be number 1.
DJ: “The waiting time is the key. An exponential waiting time suggests randomness, or FIFO, a power law with exponent -1 suggests item selection based on consistent priorities. For details, see https://shape-of-code.com/2022/08/28/task-backlog-waiting-times-are-power-laws/”
Agreed, and I would suggest the behaviours which create that distribution also undermine the reactivity (i.e. agility) of the organization. If a team really was reactive then we would expect uniformly short lead times. Conversely, if a team really was adhering to a rational plan, roadmap or requirements document, then lead times would be longer but would also be uniform, because at some point X the stories had been captured, the work had been prioritised, and delivery was proceeding in a regular fashion.
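To make that concrete, here is a toy simulation – my sketch, with made-up arrival and service rates rather than data from any real team – of three selection disciplines working the same queue:

```python
import random

random.seed(42)

TICKS = 100_000
ARRIVAL_P = 0.95   # arrivals per tick, just below the service rate of 1

def simulate(policy):
    """Toy single-team queue: each tick one item arrives with probability
    ARRIVAL_P and one backlog item is completed, chosen by `policy`.
    Returns the waiting times (in ticks) of the completed items."""
    backlog = []   # (priority, arrival_tick); appends keep arrival order
    waits = []
    for tick in range(TICKS):
        if random.random() < ARRIVAL_P:
            backlog.append((random.random(), tick))
        if not backlog:
            continue
        if policy == "fifo":
            i = 0                                # oldest item first
        elif policy == "random":
            i = random.randrange(len(backlog))   # any item, at random
        else:                                    # "priority": loudest first
            i = max(range(len(backlog)), key=lambda j: backlog[j][0])
        _, arrived = backlog.pop(i)
        waits.append(tick - arrived)
    return waits

for policy in ("fifo", "random", "priority"):
    w = sorted(simulate(policy))
    print(f"{policy:9s} median={w[len(w) // 2]:5d} "
          f"p99={w[int(len(w) * 0.99)]:6d} max={w[-1]:6d}")
```

The expectation is that FIFO and random selection keep the tail of waiting times modest, while the strict priority discipline leaves a few low-priority items waiting a very long time – the heavy tail that a power law describes.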
Which raises the question: is a power law distribution of work-to-do a natural phenomenon which will always reassert itself, or an indicator of dysfunction?
A team following my Story Generator (aka Just-in-time Backlog) model would see average delivery times of less than half the super sprint duration because any undelivered items would be deleted at the end of the super sprint.
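A back-of-envelope simulation illustrates why – the sprint length, capacity and arrival rate below are invented for illustration. Because undelivered items die with the super sprint instead of carrying over, no delivered item can wait longer than the sprint itself, and with steady arrivals the mean wait lands well under half of it:

```python
import random
import statistics

random.seed(7)

SPRINT = 60    # super sprint length in working days (made-up number)
CAPACITY = 2   # items the team finishes per day (made-up number)

waits, deleted = [], 0
for _ in range(200):        # 200 simulated super sprints
    todo = []               # the just-in-time list; FIFO within the sprint
    for day in range(SPRINT):
        todo += [day] * random.choice((2, 3))   # ~2.5 new stories per day
        for _ in range(min(CAPACITY, len(todo))):
            waits.append(day - todo.pop(0))
    deleted += len(todo)    # whatever is left dies with the sprint

print(f"mean wait {statistics.mean(waits):.1f} days "
      f"(half the super sprint is {SPRINT / 2:.0f} days); "
      f"{deleted} items deleted at sprint ends")
```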
Derek Jones replied:

Thanks for the detailed reply.
There are solid economic reasons for minimising hardware items held in stock. The theory of optimum stock control was worked out in the 1930s: https://en.wikipedia.org/wiki/Economic_order_quantity
Just in time production requires everybody in the network to work together; you cannot be JIT if your supplier does not play ball.
I get the feeling that the use of Lean in software development is cargo-culting some of the processes. The cost of holding ‘stock’ is often effectively zero; and what about the supplier of new tasks – are they JIT?
Economic order quantity does not make much sense for software tasks (perhaps there are cases where it does).
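For readers who don’t follow the link, the classic EOQ result (stated here in the Wikipedia article’s notation) picks the order quantity that minimises the sum of ordering and holding costs:

$$Q^* = \sqrt{\frac{2DK}{h}}$$

where $D$ is the demand rate, $K$ the fixed cost per order and $h$ the holding cost per unit per period. Read against the previous point: with software’s holding cost $h$ effectively zero, $Q^*$ grows without bound, so the formula exerts no economic pressure to limit the ‘stock’ at all.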
The term ‘backlog’ is just plain wrong. It should be ‘Forlog’, because it is forward-thinking into the future, i.e., having some idea where things are headed next. It’s possible to think up reasons why too long a Forlog is a bad idea, but what about the benefits? A cost/benefit analysis is needed.
Re: distributions.
The output from all processes will have some form of distribution. If a distribution can be fitted to data, it tells us something about the processes that generated it.
A power law is produced by selecting from a priority queue (and perhaps other queueing processes); it is a signal about what is going on, and is not good/bad in itself.
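One crude way to eyeball which family a set of waiting times belongs to – a sketch, not the maximum-likelihood fitting used in the post linked above – is to check the straightness of the empirical complementary CDF on log-log versus semi-log axes: a power law is straight on the former, an exponential on the latter.

```python
import math
import statistics

def ccdf(samples):
    """Empirical complementary CDF: approximate P(wait >= x)."""
    xs = sorted(s for s in samples if s > 0)   # drop zeros for the logs
    n = len(xs)
    return [(x, (n - i) / n) for i, x in enumerate(xs)]

def straightness(points):
    """Pearson correlation of the points; close to -1 means a clean,
    straight, downward line."""
    xs, ys = zip(*points)
    return statistics.correlation(xs, ys)

def diagnose(waits):
    pts = ccdf(waits)
    loglog = straightness([(math.log(x), math.log(p)) for x, p in pts])
    semilog = straightness([(x, math.log(p)) for x, p in pts])
    print(f"log-log r={loglog:+.3f}, semi-log r={semilog:+.3f} "
          "(straighter log-log suggests power law; "
          "straighter semi-log suggests exponential)")
```

Feed `diagnose()` a list of waiting times (in whatever unit the tracker records) and compare the two correlations; a serious analysis should use a proper fit, but this separates the obvious cases.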