allan kelly

10 rules of thumb for User Stories

There is a universe somewhere in which my blog is always on topic, I have a theme and I always produce posts which accord with that theme. However, in this universe my mind flits around and, for better or worse, this blog carries what is on my mind at the time. Sometimes that is grand strategy, sometimes hands-on detail, sometimes the nature of digital work and sometimes frustration.

This week I’m dusting off my slides for next month’s User Stories workshop, so I thought I’d share my 10 rules of thumb for User Stories:

1 – Stories are usually written about a hands-on user, someone who actually puts their hands on the keyboard.

2 – If your story begins “As a user” then save the ink and delete the words: “as a user” adds nothing. “As a customer” isn’t a lot better. In both cases think harder and come up with a better “user”. Be as specific as you can.

3 – Stories which have systems in the user role should be rethought (“As Order Entry System I want the E-Mail System to send e-mail so that I can notify the humans”). Step back and think “Who benefits from System-E and System-F working together?” then rework the story with that person in mind, seeing the two as one combined system (e.g. “As a Branch Manager I want e-mail notifications sent when an order is entered.”)

4 – Stories should be big enough to deliver business value but small enough to be completed in the near future. I’m prepared to accept a maximum of 2 weeks but others would challenge that and say “2 days is the max.” (Small and valuable actually constitute my 2 Golden Rules, where I also discuss Epics and Tasks.)

5 – There is no universal right size for a story. Teams differ widely in what size works best for them (most efficient, most understandable, quickest to deliver, biggest bang for your buck…).

6 – In general the greater the gap (physical distance, cultural norms, history working in the domain, employment status, education level, etc. etc.) between the story writer (e.g. BA) and receiver (e.g. Tester or Coder) the more detail will be expected by one side or the other.

7 – If the User Story format “As a … I want to … So that …” isn’t readable then write something that is. Who, What and Why are really useful to know, and the standard format usually works well, but if it doesn’t then write something that does. There are no prizes for “the perfect story.”

8 – Beware stories about team members: stories which begin “As a Tester I …” or “As a Product Owner …” are red flags and should be questioned. Unless you are actually building something for people like yourselves (e.g. a programming team writing an IDE) then stories should be about end users and customers. Very, very occasionally it can make sense to write something for the team themselves (e.g. “As a Tester I want a log of all database actions so I can validate changes”) but question such stories before you accept them.

9 – Stories should be testable: if you can’t see how the story can be tested, think again. If you are asked to complete a story which you can’t test then simply mark it as done – if it can’t be tested then nobody can prove you haven’t done it. However, in most cases someone will soon report it as not working, and then you will know how to test it.

10 – Remember the old adage “Stories are a placeholder for a conversation”? Well, if you have the conversation all sins are forgiven. No matter what the flaws are, if you have the conversation you can address them.

Needless to say, there is more about these topics in the Little Book of Requirements and User Stories.

Now, just because I go around saying radical things like “Nuke the backlog” does not mean I want to ditch the wonderful who-what-why structure of user stories. You might throw away a lot of user stories, but when thinking about the work to be done in the post-nuclear supersprint then by all means: write user stories.


OKRs like it’s 2024 not 1974

It’s not 1970 any more; even 1974 was 50 years ago. I used to make this point regularly when discussing the project model and #NoProjects. Now, though, I want to make the same point about OKRs.

[Image: fashion in the 1970s]

The way some people talk about OKRs you might think it was 1974: OKRs are a tool of command and control, they are given to workers by managers, workers have little or no say in what the objectives are, the key results are little more than a to-do list and there is an unwritten assumption that if key results 1, 2 and 3 are done then the objective will be miraculously achieved. In fact, those objectives are themselves often just “things someone thinks should be done” and shouldn’t be questioned.

I’d like to say this view is confined to older OKR books. However, while it is not so common in more recent books, many individuals still carry these assumptions.

A number of things have changed since OKRs were first created. The digital revolution might be the most obvious, but digital is only indirectly implicated: it lies behind two other forces here.

First, we’ve had the agile revolution: not only does agile advocate self-organising teams, but workers, especially professional knowledge workers, have come to expect autonomy and authority over the work they do. This is not confined to agile; it is also true of the Millennial and Generation-Z workers who have recently entered the workforce.

Digital change is at work here: digital tools underpin agile, and Millennials and Gen-Z have grown up with digital tools. Digital tools magnify the power of workers while making it essential the workers have the authority to use the tool effectively and make decisions.

Having managers give OKRs to workers, without letting the workers have a voice in setting the OKRs, runs completely against both agile and generational approaches.

Second, in a world where climate change and war threaten our very existence, where supposedly safe banks like Silicon Valley Bank and Lehman Brothers have failed, and where companies like Thames Water have become a byword for greed over society, many are demanding more meaning and purpose in their work – especially those Millennials.

Simply “doing stuff” at work is not enough. People want to make a difference. Which is why outcomes matter more than ever. Not every OKR is going to result in reduced CO2 emissions but having outcomes which make the world a better place gives meaning to work. Having outcomes which build towards a clear meaningful purpose has always been important to people but now it is more important than ever.

Add to that the increased volatility, uncertainty and complexity of our world, and the ambiguous nature of many events, and it is no longer good enough to tell people what to do. Work needs to have meaning, both so people can commit to it and so they can decide what the right thing to do is.

In 2024 the world is digital and the world is VUCA; workers demand respect, meaning, and to be treated like partners, not gofers.

OKRs are a powerful management tool but they need to be applied like it is 2024 not 1974.


David v. Goliath, User Stories v. Use Cases

As I was saying: you forget about something, and then suddenly it’s everywhere. So it was the other day when I saw someone on LinkedIn asking:

        “Which is best, User Stories or Use Cases?”

Unlike story points, use cases are an alternative to User Stories, so this question at least makes sense. But, a bit like when your child asks “Who is best, Superman or Winston Churchill?”, you have to say “What do you want them to do? Feature in a comic book or lead a country?”

In many ways Use Cases are better: they are better thought out; originally part of UML, they have their own notation – they are almost a requirements analysis technique in their own right – and they are great for well-rounded requirements gathering.

User Stories on the other hand were invented in the heat of work and the originators never thought they would last. It was an interesting idea which just caught on. Somehow they became a standard part of agile, Scrum and even SAFe.

On that basis there is no competition, David v. Goliath, User Stories are the plucky David but Goliath holds all the cards. However…

Use Cases have an Achilles heel: all those diagrams, notation, specific language and the thinking behind them mean they require effort to understand. Typical Use Case training courses last two or three days. Contrast that with User Stories: my online primer next month will give you 50% of what you need to know in a couple of hours, and when I’ve run User Story training in the past one day has been enough.

For professionals – business analysts, product managers, etc. – that isn’t really a big deal. But it is a big deal when talking to customers, users and all those people who want the thing you are building. Techniques like Use Cases create a barrier between the experts, with their specialist notation, and the end-users. That is a big problem.

Like so much else it is a question of context. If you are building the control software for a nuclear power station – something which must be exact, will have very few users, changes slowly over time, and where you can’t “move fast and break things” – then go with Use Cases. But if you are building the customer self-service portal for an electricity company, go with User Stories.


Book now for the User Stories Primer – places limited.

Book early for the biggest discount, and use the code Blog20 for an extra 20% off.


Thanks to Kishorekumar 62 for the Use Case image, CCL license via WikiMedia Commons.


User Stories are not Story Points

Funny how some things fade from view and then return. That’s how it’s been with User Stories for me in the last few weeks – hence my User Story Primer online workshop next month.

A few days ago I was talking to someone about the User Story workshop and he said “Oh, I don’t use Story Points any more.” Which had me saying “Er, what?”

In my mind User Stories and Story Points are two completely different things used for completely different purposes. It is not even the difference between apples and oranges; it’s the difference between apples and pasta.

User Stories are a lightweight tool for capturing what are generally called requirements. Alternatives to User Stories are Use Cases, Persona Stories and things like the IEEE 1233 standard. (More about Use Cases in the next post.)

Story Points are a unit of measurement for quantifying how much work is required to do something – that something might be a User Story but it could just as easily be a Use Case or a verbal description of a need.

So it would seem that, for better or worse, User Stories and Story Points have become entangled.

The important thing about Story Points is that they are a unit of measurement. What you call that unit is up to you. I’ve heard those units called Nebulous Units of Time, Effort Points or just Points, Druples, and even Tea Bags. Sometimes they measure the effort for a story, sometimes the effort for a task or even an epic; sometimes the effort includes testing and sometimes not. Every team has its own unit, its own currency, and each team measures something slightly different. You can’t compare different teams’ units; you can only compare the results and see how much the unit buys you with that team.

To my mind User Stories and Story Points are independent of one another and can be used separately. But, it is also true that both have become very associated with Scrum. Neither is officially part of Scrum but it is common to find Scrum teams capture requirements as User Stories and estimate the effort with Story Points. It is also true that Mike Cohn has written books on both and both are contained in SAFe.

Which brings me to my next post, User Stories v. Use Cases.


(Images from WikiMedia on CCL license, Apple from Aron Ambrosiani and Pasta from Popo le Chien).


Big and small, resolving contradiction

Have I been confusing you? Have I been contradictory? Remember my blog from two weeks back – Fixing agile failure: collaboration over micro-management – where I talked about the evils of micro-management and working towards “the bigger thing”? Then, last week, I republished my classic Diseconomies of Scale, where I argue for working in the small. Small or big?

Actually, my contradiction goes back further than that. It is lurking in Continuous Digital, where I discuss “higher purpose” and also argue for diseconomies of scale a few chapters later. There is a logic here, let me explain.

When it comes to work, work flow, and especially software development, there is merit in working in the small and optimising processes to do lots of small things: small stories, small tasks, small updates, small releases, and so on. Not only can this be very efficient – because of diseconomies of scale – but it is also a good way to debug a process: in the first instance it is easier to see problems, and then it is easier to fix them.

However, if you are on the receiving end of this it can be very dispiriting. It becomes what people call “micro management” and that is what I was railing against two weeks ago. To counter this it is important to include everyone doing the work in deciding what the work is, give everyone a voice and together work to make things better.

Yet the opposite is also true: for every micro-manager out there taking far too much interest in the work, there is another manager who is not interested enough to consider priorities, give feedback or help remove obstacles. For these people all those small pieces of work seem like trivia, and they wonder why anyone thinks the work is worth their time.

When working in the small it’s too easy to get lost in the small – think of all those backlogs stuffed with hundreds of small stories which nobody seems to be interested in. What is needed is something bigger: a goal, an objective, a mission, a BHAG, an MTP… what I like to call a Higher Purpose.

Put the three ideas together now: work in the small, higher purpose and teams.

There is a higher purpose, some kind of goal your team is working towards; perhaps there is more than one goal, and they may be nested inside one another. The team moves towards that goal in very small steps by operating a machine which is very effective at doing small things: do something, test, confirm, advance and repeat. These two opposites are reconciled by the team in the middle: it is the team which shares the goal, decides what to do next and moves towards it. The team has the authority to pursue the goal in the best way it can.

In this model there is even space for managers: helping set the largest goals, working as the unblocker on the team, giving feedback inside the team and outside, working to improve the machine’s efficiency, and so on. Distributing authority and pushing it down to the lowest level doesn’t remove managers; like so much else, it just makes problems more visible.

Working in the small is only possible if there is some larger, overarching goal to be worked towards. So although these ideas can seem contradictory, the two are ultimately one.


Software has diseconomies of scale – not economies of scale

“Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.”

John Maynard Keynes

Most of you are not only familiar with the idea of economies of scale, you expect economies of scale even if you don’t know any economics. Much of our market economy operates on the assumption that when you buy/spend more you get more per unit of spending.

At some stage in our education — even if you never studied economics or operational research — you have assimilated the idea that if Henry Ford builds 1,000,000 identical black cars and sells 1 million cars, then each car will cost less than if Henry Ford manufactures one car, sells one car, builds another very similar car, sells that car and so continues. The net result is that Henry Ford produces cars more cheaply and sells more cars more cheaply, so buyers benefit.

(Indeed the idea and history of mass production and economies of scale are intertwined. Today I’m not discussing mass production, I’m talking Economies of Scale.)

[Image: milk cartons – software is cheaper in small cartons, and less risky too]

You expect that if you go to your local supermarket to buy milk then buying one large carton of milk — say 4 pints in one go — will be cheaper than buying 4 cartons of milk each holding one pint of milk.

Back in October 2015 I put this theory to a test in my local Sainsbury’s, here is the proof:

[Image: collage of milk prices – milk is cheaper in larger cartons]
  • 1 pint of milk costs 49p (marginal cost of one more pint 49p)
  • 2 pints of milk cost 85p, or 42.5p per pint (marginal cost of one more pint 36p)
  • 4 pints of milk cost £1, or 25p per pint (marginal cost of one more pint 7.5p) (January 2024: the same quantity of milk in the same store now sells for £1.50)
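
For anyone who wants to check the arithmetic, here is a minimal sketch (using the 2015 prices above, in pence) that reproduces both the per-pint and the marginal figures:

```python
# Reproduce the per-pint and marginal-cost arithmetic from the bullet points.
prices = {1: 49, 2: 85, 4: 100}  # pints -> total price in pence (Oct 2015)

prev_pints, prev_price = 0, 0
for pints, price in sorted(prices.items()):
    marginal = (price - prev_price) / (pints - prev_pints)
    print(f"{pints} pint(s): {price / pints:.1f}p per pint, "
          f"marginal cost per extra pint: {marginal:.1f}p")
    prev_pints, prev_price = pints, price
```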

(The UK is a proudly bi-measurement country. Countries like Canada and Switzerland teach their people to speak two languages. In the UK we teach our people to use two systems of measurement!)

So ingrained is this idea that when supermarkets don’t charge less for buying more, complaints are made (see The Guardian).

Buying milk from Sainsbury’s isn’t just about the milk: Sainsbury’s needs the store, the store needs staffing, it needs products to sell, and they need to get me into the store. All that costs the same whether I buy one pint or four. That’s why the marginal costs fall.

Economies of scale are often cited as the reason for corporate mergers: to extract concessions from suppliers, to manufacture more items for lower overall costs. Purchasing departments expect economies of scale.

But…. and this is a big BUT…. get ready….

Software development does not have economies of scale.

In all sorts of ways software development has diseconomies of scale.

If software was sold by the pint then a four pint carton of software would not just cost four times the price of a one pint carton it would cost far far more.

The diseconomies are all around us:

Small teams frequently outperform large teams, five people working as a tight team will be far more productive per person than a team of 50, or even 15. (The Quattro Pro development team in the early 1990s is probably the best documented example of this.)

The more lines of code a piece of software has, the more difficult it is to add an enhancement or fix a bug. Putting a fix into a system with 1 million lines of code can easily be more than 10 times harder than fixing a system with 100,000 lines.

Projects which set out to be BIG have far higher costs and lower productivity (per unit of deliverable) than small systems. (Capers Jones’ 2008 book contains some tables of productivity per function point which illustrate this. It is worth noting that the biggest systems are usually military and they have an atrocious productivity rate — an F-35 or A400M anyone?)

Waiting longer — and probably writing more code — before you ask for feedback or user validation causes more problems than asking for it sooner when the product is smaller.

The examples could go on.

But the other thing is: working in the large increases risk.

Suppose 100ml of milk is off. If the 100ml is in one small carton then you have lost 1 pint of milk. If the 100ml is in a 4 pint carton you have lost 4 pints.

Suppose your developers write one bug a year which will slip through test and crash users’ machines. Suppose you know this, so in an effort to catch the bug you do more testing. In order to keep costs low on testing you need to test more software, so you do a bigger release with more changes — economies of scale thinking. That actually makes the testing harder but…

Suppose you do one release a year. That release blue screens the machine. The user now sees every release you do crashes their machine. 100% of your releases screw up.

If instead you release weekly, one release a year still crashes the machine but the user sees 51 releases a year which don’t. Less than 2% of your releases screw up.
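
The arithmetic behind that claim is trivial but worth seeing. A minimal sketch, assuming exactly one bad release a year whatever the cadence:

```python
# One bad release a year, seen against different release cadences:
# the more releases, the smaller the share of them that fail.
for releases_per_year in (1, 4, 12, 52):
    print(f"{releases_per_year:>2} releases/year: "
          f"{1 / releases_per_year:.0%} of releases crash the machine")
```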

Yes I’m talking about batch size. Software development works best in small batch sizes. (Don Reinertsen has some figures on batch size in The Principles of Product Development Flow which also support the diseconomies of scale argument.)

Ok, there are a few places where software development does exhibit economies of scale, but on most occasions diseconomies of scale are the norm.

This happens because each time you add to software work the marginal cost per unit increases:

Add a fourth team member to a team of three and the communication paths increase from 3 to 6.

Add one feature to a release and you have one feature to test, add two features and you have 3 tests to run: two features to test plus the interaction between the two.
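
Both effects are simple combinatorics. A minimal sketch of the two calculations (it counts only pairwise interactions, so real releases are at least this bad):

```python
from math import comb

# Communication paths: every pair of team members is a path, n*(n-1)/2.
for n in (3, 4, 5, 15, 50):
    print(f"team of {n:>2}: {comb(n, 2):>4} communication paths")

# Tests per release: one test per feature plus one per pairwise interaction.
for features in (1, 2, 3, 10):
    print(f"{features:>2} feature(s): {features + comb(features, 2):>3} tests")
```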

In part this is because human minds can only hold so much complexity. As the complexity increases (more changes, more code) our cognitive load increases, we slow down, we make mistakes, we take longer.

(Economies of scope and specialisation are also closely related to economies of scale, and again, on the whole, software development has diseconomies of scope – it pays to be more specific.)

However, be careful: once the software is developed then economies of scale are rampant. The world switches. Software which has been built probably exhibits more economies of scale than any other product known to man. (In economic terms the marginal cost of producing the first instance is extremely high, but the marginal cost of producing an identical copy (production) is so close to zero as to be zero: Ctrl-C, Ctrl-V.)

What does this all mean?

Firstly, you need to rewire your brain: almost everyone in the advanced world has been brought up with economies of scale since school. You need to start thinking diseconomies of scale.

Second, whenever faced with a problem where you feel the urge to go bigger, run in the opposite direction: go smaller.

Third, take each and every opportunity to go small.

Fourth, get good at working in the small: optimise your processes, tools and approaches to do lots of small things rather than a few big things.

Fifth, and this is the killer: know that most people don’t get this at all. In fact it’s worse…

In any existing organization, particularly a large corporation, the majority of people who make decisions are out-and-out economies of scale believers. They expect that going big is cheaper than going small and they force this view on others — especially software technology people. (Hence Large companies trying to be Agile remind me of middle-aged men buying sports cars.)

Many of these people got to where they are today because of economies of scale, many of these companies exist because of economies of scale; if they are good at economies of scale they are good at doing what they do.

But in the world of software development this mindset is a recipe for failure and underperformance. The conflict between economies of scale thinking and diseconomies of scale working will create tension and conflict.


Originally posted in October 2015 and can also be found in Continuous Digital.

To be the first to know of updates and special offers subscribe — and get Continuous Digital for free.


Why I don’t like pre-work (planning, designing, budgeting)

You might have noticed in my writing that I have a tendency to rubbish the “Before you do Z you must do Y” type argument. Pre-work. Work you should do before you do the actual work. Planning, designing, budgeting, that sort of thing.

Why am I such a naysayer?

Partly this comes from a feeling that, given any challenge, it is always possible to say “You should have done something before now” – “You missed a step” – “If you had done what you were supposed to do you wouldn’t have this problem.” Most problems would be solved already, or would never have occurred, if someone had done the necessary pre-work.

There is always something you should have done sooner but, without a time machine, that isn’t very useful advice. Follow this line of reasoning and before you know it there is a great big process of steps to be done. Most people don’t have the discipline, or training, to follow such processes and mistakes get made. The bigger the process, the more likely it is to go wrong.

However, quite often, the thing you should have done can still be done. Maybe you didn’t take time to ask customers what they actually wanted before you started building but you could still go and ask. Sure it might mean you have to undo something (worst case: throw it away) but at least you stop making the wrong thing bigger. Doing things out of order may well make for more work, and more cost, but it is still better than not doing it at all.

Some of my dislike simply comes from my own preference. Like so many other people, I like to get on and do something: why sit around talking about something when we should be doing! I’m not alone in that. While I might be wrong to rush to action, it is also wrong to spend so long talking that you never act – “paralysis by analysis.” Add to that, when someone is motivated to do something it’s good to get on and do it, to build on the motivation. Saying “Hold on, before you …” may mean the moment is missed, the enthusiasm and motivation lost.

So, although there is a risk in charging in there is also a risk in not acting.

Of all the things that you might do to make work easier once you start “properly”, some will be essential and some will not. Some pre-work just seems like a good idea. One way to determine what is essential is to get on with the work and do the essentials when you get to them. Just-in-time.

For example, before you begin a piece of work, it is a really good idea to talk about the acceptance criteria – “what does success look like?” If you pick up a piece of work and find that there are no acceptance criteria you could say “Sorry, I can’t do this, someone needs to set criteria and then I’ll do it”, or you could go and find the right person and have the conversation there and then. When some essential pre-work is missing it becomes job number 1 when you come to do the work.

Finally, another reason I dislike pre-work is the way it interacts with money.

There are those who consider pre-work unnecessary and will not allocate money for it (“Software design costs time and money, just code.”) Instead of seeing pre-work as distinct from simply doing the work, treat it as all part of the same thing: rather than allocating a few hours for design weeks before you code, do the design in the first few hours of the work. By making the pre-work a just-in-time activity you remove the possibility that the work is cancelled, or changes, between the pre-work and the work itself.

My other gripe with money is the way, particularly in a project setting, pre-work is accounted for differently. You see this in project organizations where nobody is allowed to do anything practical until a budget (and a budget code) is created for the work. Yet the work that happens before then seems to be done for free: there is an unlimited budget for planning work which might never go ahead.

Again, rather than see the pre-work – planning, budgeting, designing, etc. – as something distinct that happens before the work itself, just make it part of the work, and preferably do it first.

Ultimately, I’m not out to ban pre-work: I can see that it is valuable and I can see that done in advance it can add more value. It’s just that you can’t guarantee it is done; if we build a system that doesn’t depend on pre-work being done first, then the system is more robust.


Reworking the risk equation

It is hard to argue with the old project management equation:

Risk exposure = Risk impact x Risk probability

Risk impact is often taken to be money, say $1,000,000, and probability a percentage, say 50%. So exposure is also in money: $500,000. There is something rather obvious about it. So, while I’ve long disliked the equation I’ve never taken it to task; I never quite put my finger on what was wrong, or perhaps I couldn’t really articulate a winning argument.

That changed last week when I took “Papa” Chris Matts’ dog for a walk – or rather, he walked the dog, I just tagged along and we discussed the state of the world, and the thing we call agile.

Chris has a better equation:

Risk exposure = Money invested x Time to payback

Probability is out: after all, if something fails then it all fails. You were never going to lose $500,000, although you might lose the whole million. Probability is something of a red herring.
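
To see the difference, here is a minimal sketch putting numbers to both equations. The $1,000,000 and 50% come from the example above; the nine-month payback window, and reading the result in “dollar-months”, are my own illustrative assumptions:

```python
# Contrast the two risk equations with illustrative numbers.
impact = 1_000_000        # potential loss, dollars (example above)
probability = 0.5         # chance of the loss occurring (example above)

classic_exposure = impact * probability
print(f"Classic exposure: ${classic_exposure:,.0f}")      # $500,000

invested = 1_000_000      # money spent before any payback
months_to_payback = 9     # assumed figure, purely for illustration
matts_exposure = invested * months_to_payback
print(f"Matts exposure:   {matts_exposure:,.0f} dollar-months")
```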

Chris’ insight here is that there is risk until the moment you start seeing payback, i.e. until the investment is proven to pay. In retrospect I should have realised this before. I read How Big Things Get Done a few months back and Bent Flyvbjerg has the same idea. He points out that the time of biggest risk starts when the big money gets spent and continues until the project is delivered. Flyvbjerg uses this logic to argue that long, detailed, upfront planning is good – because it is cheap and allows problems to be identified and avoided. Once you start the real work, do it as fast as possible; keep moving fast even if it costs you more, because until the window is closed everything is at risk. (A highly recommended book by the way.)

The lesson many people take from this is: spend a long time in planning and make sure you are certain about what you are doing before you spend the big money. I suggest a more important lesson is: once you start spending the money, get to the point of return as fast as possible. Slowing spending does not reduce risk or save money; it increases risk and reduces benefit.

I’ve pointed out before that one way to increase the return on investment (significantly) is to make sure work starts paying back sooner. For example, make sure that the team delivers something every month (which itself reduces risk because something – anything! – is actually being delivered) and have that thing earn some revenue, even if it is less than the full amount you want. When you do return on investment calculations correctly (i.e. account for time) then bringing forward the point where some (even a little) money is returned causes the ROI to jump.
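
A minimal sketch of that jump, using net present value as the time-aware ROI measure. All the numbers – the 10% discount rate, the monthly spend, the revenue figures – are made up for illustration; only the shape of the result matters:

```python
# Same nominal spend and revenue; only the timing of the revenue differs.
def npv(cashflows, annual_rate=0.10):
    """Net present value of (month, amount) cashflows, discounted monthly."""
    monthly = (1 + annual_rate) ** (1 / 12) - 1
    return sum(cf / (1 + monthly) ** m for m, cf in cashflows)

cost = [(m, -100_000) for m in range(12)]  # spend 100k a month for a year
big_bang = cost + [(12, 1_500_000)]        # all 1.5M revenue at month 12
incremental = cost + [(m, 125_000) for m in range(1, 13)]  # 1.5M from month 1

print(f"Big bang NPV:    {npv(big_bang):>10,.0f}")    # lower
print(f"Incremental NPV: {npv(incremental):>10,.0f}")  # higher, same cash
```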

Although it might seem too good to be true, the same logic will reduce risk. Both on paper and in practice.

Everyone who has ever argued for measuring and reducing cycle time will be happy right now. But Chris has a second point: today’s endeavours often involve multiple teams, and until all those teams deliver there is no return.

Think of it like ships in a convoy: they move at the speed of the slowest ship. None of the ships arrive until they all do; until then they are all at risk, and since they are moving slowly there is more risk for longer. It also means that reducing risk, speeding delivery and increasing ROI is going to depend on speeding up the slowest ship. If that sounds familiar it’s because it is what theory of constraints says: the system is constrained by its most significant bottleneck.


Nuke the Backlog in Digital Agencies

When I make my “Nuke the backlog” argument (use goals, not backlogs, to drive work: throw away the long-term backlog and generate the work you need to do from the goals) there will often be someone in the room who says:

“We are a digital agency, our clients come to us with a list of features, a backlog, and ask us to deliver it. We can’t throw away the backlog.”

This is a fair point, and if you are an agency for hire, this has long been the model: clients list features they want, agencies quote for each item, the cheapest wins the work and everyone is happy.

The idea that you can write down everything you want is baked into our procurement processes. It means that before you can ask the question “how long will it take?” or “how much will it cost?” you have to spend time and money saying what “it” is.

Although, if we are being honest, it is only after signing the contract that things go horribly wrong. The client never really knew what they wanted, the requirements gathering process didn’t satisfy them, and when they see something (a bit of solution or an invoice) they get new insights and “change their mind.”

If you haven’t read my tongue-in-cheek “Dear customer” letter now would be a good time to do so.

I am not disputing that this “feature factory for hire” model is the way a lot, most, of the industry works. Nor am I disputing that for your company right now, or even for yourself, to move away from this model is a big ask.

The first problem is the race to the bottom: if you are competing on price then there is always someone, somewhere, who will offer to do it for less. And if you cut yourself to the bone to win the work, you make a successful outcome less likely. Economists call this the “winner’s curse.”

Unfortunately, as an industry we have trained our clients to believe there is an unavoidable gap between awarding the contract and seeing the outcome. Which means, by the time the client realises something is wrong, they have spent so much money it is a career-ending decision to admit it is wrong.

So, what is the solution?

We need to change the rules of the game. Are you a tailor or an image consultant?

Is your digital agency competing on price? Are you just making suits?

If you want a better way you need to rethink how you sell, you need to work with objectives and outcomes. Throw the requirements document away and nuke the backlog. Think holistically, consider this an all-in process. You aren’t just building the features a client wants, you are helping them towards their business goals.

Now there are two problems here. First, this is a more difficult sell: you need to get involved sooner, and few people really know how to do it. (Everyone claims they can but they usually end up selling features.)

Second, not every client is ready for it. Clients have been conditioned to think technical types want big long documents – and so have their procurement people. So you have to say No to some clients. That is scary, especially for a small agency that lives one deal at a time.

The good news is: there are clients out there who are open to working in new ways. Offer up something different. Most agencies don’t; most agencies are racing to the bottom, so you have less competition.

Otherwise, you will be forever competing on price and racing to the bottom.


Are you a tailor or an image consultant?

The best tailor

You go to the best tailor in town, and you say: ‘I’m getting married, my betrothed is beautiful, make me look one million dollars, I don’t care what it costs.’

The tailor measures you very carefully then stands back. He rubs his chin, he is clearly troubled by something. ‘Tell me’ you say, ‘Please tell me.’

‘Sir’, he replies, ‘I don’t think it is my place to say.’

‘Tell me’ you beg, ‘I must look fantastic on the day.’

‘Well Sir, …’ he talks slowly and cautiously, ‘… if you really want to look $1,000,000 can I suggest you lose 5kg?’

What do you say?

Is the tailor a master of clothes who just makes beautiful clothes?
Or are they, because of their long experience making people look beautiful in clothes, an image consultant who can help you look beautiful?

This is the dilemma many a consultant finds themselves in. More importantly, it is the dilemma that I find again and again in corporate IT and digital agencies: is an agency a service which is expected to build what is requested without answering back?

Or is an agency a group of experts who know the best way to maximise benefit and gain value from digital investment?

Is your agency a tailor who is expected to produce something brilliant no matter what the request is?

Or are they people who, because they work with technology every day, have experience and knowledge to help their customers maximise their benefit? (Even if that means the original request should be reconsidered.)

I distinctly remember a senior IT manager at an American bank telling me that his department was definitely a service department: it was not his job to answer back to business requests. His job, and that of his team, was to build what was asked for. The implication being that even if his people knew the (internal) client was approaching the issue from the wrong point, and even asking for the wrong system, then his people should not correct them. They should just build what was asked for.

Perhaps this dilemma is most acutely felt by Business Analysts who are sometimes expected to just write down the orders that someone wants to give to IT. Some brave Business Analysts will challenge, and lead their customers to ask different questions.

The original metaphor came from Benjamin Mitchell – who didn’t recall it last time I asked him. As I recall it was part of a very deep conversation we had in Cornwall back in 2011 (the same conversation led to the “Abandon hope all ye who enter Agile” blog post.)

For me the story is about missed opportunities, about frustration and about what you might do to change the position. For Benjamin it is about asking “Do we all expect the same thing? – If you think they consider you a service group how can you confirm this?”

Either way the core problem is a mismatch of expectations. Your IT department might be a service department; that approach can work and it is how many big corporations operate. For a digital agency the question is more important: do they exist to do what the customer asks? “2kg of WordPress, certainly, sure” Or do they exist to advise the client on how to live digitally? – and perhaps build a WordPress site.

I’ve used this story many times since 2011, it comes up again and again. If anything I use it more today than in 2011 or 2014 when I first shared it online.

What has changed since then is not so much the story but the world around it. Digital technology dominates business, and requests to “build these 10 features” make less and less sense every year. The best way of organising and managing work is to work with a tailor who knows what they are talking about and can help both sides find the best solutions to meet a goal. Both sides need to respect each other.

I originally published this story in 2014. Nearly 10 years on it is even more relevant than when I first published it.
