Reflections on learning and training (and dyslexia)

[Image: a Nuremberg Funnel]

German readers might recognise the image above as a Nuremberg Funnel. For the rest of you: the Nuremberg Funnel represents the idea that a teacher can simply open a student's head and pour in learning.

First off, a warning: this blog entry might be more for me than for you, but it's my blog!

Still, I expect (hope) some of you might be interested in my thoughts and reflections on how training works, specifically "Agile training" sessions. More importantly, writing this down, putting it into words, helps me think, structure my thoughts and put things in order.

Shall I begin?

I’m not naive enough to think training is perfect or can solve everything. But I have enough experience to know that it can make a difference, especially when teams do training together and decide to change together.

One of the ways I've made my living for the last 10 years is by delivering "Agile training." I don't consider myself a "trainer" (although plenty of others do); I consider myself more of a "singer-songwriter" – it is my material and I deliver it. I've actually considered renaming my "Agile training" as "Agile rehearsal" because that's how I see it. I haven't because I'd have to explain this to everyone and, more importantly, while people do google for "Agile training" nobody searches for "Agile rehearsal."

Recently I’ve been prompted to think again about how people learn, specifically how they learn and adopt the thing we call “Agile”. One unexpected experience and one current challenge have added to this.

A few months ago I got to sit in while someone else delivered Agile training. On the one hand I accepted that this person was also experienced, had also had great success with what he did, also claimed his courses were very practical, and wasn't saying anything I really disagreed with.

On the other hand he reminded me of my younger self. The training felt like the kind of training I gave when I was just starting out. Let me give you two examples.

When I started giving Agile training I felt I had to share as much as possible with the attendees. I was conscious that I only had them for a limited time and I had so much to tell them! I was aiming to give them everything they needed to know. I had to brain dump… Nuremberg funnel.

So I talked a lot, sessions were long, and although I asked "Any questions?" perhaps I didn't give people enough time to ask, or myself enough time to answer – 'cos I had more brain dumping to do!

Slowly I learned that the attendees grew tired too. There came a point where I was still talking but they had ceased to learn. I also learned that, given a choice ("Would you like me to talk about Foobar for 30 minutes or have you had enough?"), people always asked for more.

Second, when I started out I used to include the Agile Manifesto and a whistle-stop tour of Lean. After all, people should know where this came from and they should understand some of the theory, right? But with time I realised that the philosophy of the manifesto takes up space and isn't really important. Similarly, Lean is very abstract and people have few reference points. To many (usually younger) people who have never lived "the waterfall" it can seem a crazy straw-man anyway.

Over the years I’ve tried to make my introductions to Agile more experiential. That is, I want to get people doing exercises and reflecting on what happened. I tend to believe that people learn best from their own experience so I try to give them “process miniatures” in the classroom and then talk (reflect) on the experience.

These days my standard starter 2-day Agile training course is three-quarters exercises and debriefs. My 1-day "Requirements, User Stories and Backlogs" workshop is almost entirely exercise based. I'm conscious that there is still more material – and that different people learn in different ways – so I try to supplement these courses with further reading; part of the reason behind printing the "Little Book of User Stories" was to supplement "Agile Reader" in this.

I'm also conscious that by allowing people to learn in different mediums, and to flip between them, they can learn more, and better.

My own thinking got a big boost when I learned about Constructivist learning theory. Perhaps more importantly I’ve also benefited from exploring my own dyslexia. (Reading The Dyslexic Advantage earlier this year was great.)

Why is dyslexia relevant here? Well two reasons…

Firstly, something I was told a long time ago proves itself time and time again: Dyslexics have problems learning the way non-dyslexics do, but the reverse is not true. What helps dyslexics learn helps everyone else learn better too.

Second: dyslexics look at the world differently and we have to construct our own meaning and find our own ways to learn. To me, Agile requires a different view of the world and it requires us to learn to learn. Three years back I even speculated that Agile is Dyslexic; as time goes by I'm more convinced of that argument.

So why am I thinking so much about all this?

Well, I've shied away from online training for a few years now – how can I do exercises? How can I seed reflection?

Now I’ve accepted a request to do some online training. I can’t use my existing material, it is too exercise based. I’m having to think long and hard about this.

One thought is to view "online training" as something different from "rehearsal training." That is, something delivered through the online medium might be more akin to an audio book: something that supplements a rehearsal. But that thinking might be self-limiting and ignore the opportunities the online medium offers.

The other thing is the commercial angle. As my training has become more experiential, and better at helping people move from classroom to action, it has actually covered less. The aim is to seed change; although the classroom is supplemented, the actual material covered in class is less – learn less, change more! That's a big part of the reason I usually give free consulting days after training.

In a commercial setting where there is a synopsis and a price tag the incentive is to list more and more content. One is fearful of missing something the buyer considers important. One can imagine a synopsis being placed next to a competitor's synopsis, and the one with the most content for the least price being chosen.

So, watch this space, I will be venturing into online training. To start off with I'm not sure who is going to be learning the most: the attendees or the presenter! (Oh, perhaps I shouldn't have said that, maybe I'm too honest.)

If you have any experiences with training (as a teacher or student), and especially online training, I'd love to hear about them. Please comment below or email me.


Friday thoughts on the Agile Manifesto and Agile outside of software


While I agree with the Agile Manifesto I've never been a great one for defining "Agile" in terms of it.

As time goes by I find the manifesto increasingly looks like a historic document. It was written in response to the problems of the software industry at the turn of the millennium – problems I recognise because I was there. I worked on the Railtrack privatisation: ISO 9000 certified, and so much paper you needed a train to move it. I worked at Reuters as they destroyed their own software capability with a CMM steamroller.

The manifesto is a little like Magna Carta or the US Constitution: you sometimes have to read into it what fits your circumstances. It was written about software, and as we apply Agile outside of software you have to ask "what would it say?" – the same way the US Supreme Court looks at the Constitution and interprets what it would say about the Internet.

Perhaps a more interesting question than “What is Agile?” is “Where does Agile apply?” or, even more interesting, “Where does Agile not apply?”

One can argue that since Agile includes a self-adaptation mechanism (inspect and adapt) – or, as I have argued, Agile is the embodiment of the Learning Organization – it can apply to anything, anywhere. By the same token it has universal applicability and can fix any flaws it has.

Of course it's rather bombastic to make such an argument, and quite possibly anyone making it hasn't thought it through.

So the definition of "Agile" becomes important – and since we don't have one, and can't agree on one, we're in a rather tricky position.

Increasingly I see "Agile" (and to some degree Lean too) as a response to new technologies and increasing CPU power. Software people – who had a particular problem themselves – had first access to new technologies (programmable assistants, email, instant messenger, wikis, fast tests and more) and used them to address their own issues.

The problems are important. Although people have been talking about "agile outside of software development" for almost as long as agile software development, it has never really taken off in the same way. To my mind that's because most other industries don't have problems which are pushing them to find a better way.

As technologies advance, and as more and more industries become "digital" and utilise the same tools software engineers have long had, those industries increasingly resemble software development. That means two things: other industries start to encounter the same problems as software development, but they also start to apply the same solutions.

Software engineers are the prototype of future knowledge workers.

Which implies that the thing we call Agile is the prototype process for many other industries.

“Agile outside of software” becomes a meaningless concept when all industries increasingly resemble software delivery.


Tax the data


If data is the new oil then why don’t we tax it?

My data is worth something to Google, and Facebook, and Twitter, and Amazon… and just about every other Internet behemoth. But alone my data is worth a really tiny tiny amount.

I'd like to charge Google and co. for having my data. The amount they currently pay me – free search, free email, cheap telephone… – doesn't really add up. In fact, what Google pays me doesn't pay my mortgage, but somehow Larry Page and Sergey Brin are very, very rich. Even if I did invoice Google, and even if Google paid, we are talking pennies at most.

But Google don't just have my data, they have yours, your Mum's, our friends', neighbours' and just about everyone else's. Put it all together and it is worth more than the sum of the parts.

Value of my data to Google < 1p
Value of your data to Google < 1p
Value our combined data to Google > 2p

The whole is worth more than the sum of the parts.

At the same time Google – and Facebook, Amazon, Apple, etc. – don’t like paying taxes. They like the things those taxes pay for (educated employees, law and order, transport networks, legal systems – particularly the bit of the legal system that deals with patents and intellectual property) but they don’t want to pay.

And when they do pay they find ways of minimising the payment and moving money around so it gets taxed as little as possible.

So why don’t we tax the data?

Governments, acting on behalf of their citizens, should tax companies on the data they harvest from their users.

All those cookies that DoubleClick put on your machine.

All those profile likes that Facebook has.

Sure there is an issue of disentangling what is “my data” from what is “Google’s data” but I’m sure we could come up with a quota system, or Google could be allowed a tax deduction. Or they could simply delete the data when it gets old.

I'd be more than happy if Amazon deleted every piece of data they have about me. Apple seem to make even more money than Google and they make me pay. While I might miss G-Drive I'd live (I pay Dropbox anyway).

Or maybe we tax data-usage.

Maybe it's the data users, the Cambridge Analyticas of this world, who should be paying the data tax. Pay for access: the ultimate firewall.

The tax would be levied for users within a geographic boundary, so these companies couldn't claim the data was in low-tax Ireland: the data generators (you and me) reside within state boundaries. If Facebook wants to have users in England then they would need to pay the British Government's data tax. If data that originates with English users is used by a company, no matter where that company is, then it needs to give the Government a cut.

This isn’t as radical as it sounds.

Governments have a long history of taxing resources – consider property taxes. Good taxes encourage consumers to limit their consumption – think cigarette taxes – and it may well be a good thing to limit some data usage. Anyway, that's not a hard and fast rule – the Government taxes my income and they don't want to limit that.

And consider oil – after all, how often are we told that data is the new black gold? Countries with oil impose a tax (or charge) on the oil companies which extract it.

Oil taxes demonstrate another thing about tax: Governments act on behalf of their citizens, like a class-action.

Consider Norway: every citizen of Norway could lay claim to part of the Norwegian oil reserves, and they could individually invoice the oil companies for their share. But that wouldn't work very well – too many people and, again, the whole is worth more than the sum of the parts. So the Norwegian Government steps in, taxes the oil and uses the revenue for the good of the citizens.

In a few places – like Alaska and the Shetlands – we do see oil companies distributing money more directly.

After all, Governments could do with a bit more money and if they don’t tax data then the money is simply going to go to Zuckerberg, Page, Bezos and co. They wouldn’t miss a little bit.

And if this brings down other taxes, or helps fund a universal income, then people will have more time to spend online using these companies and buying things through them.


MVP is a marketing exercise not a technology exercise

MVP… Minimally Viable Product

Possibly the most fashionable and misused term in the digital industry right now. The term seems to be used by one side or the other to criticise the other.

I recently heard another Agile Coach say: "If you just add a few more features you'll have an MVP." I wanted to scream "Wrong, wrong, wrong!" But I bit my tongue (who says I can't do diplomacy?).

MVP often seems to be the modern way of saying "The system must do…"; MVP has become the M in MoSCoW rules.

Part of the problem is that the term means different things to different people. Originally coined to describe an experiment ("what is the smallest thing we could build to learn something about the market?"), it is almost always used to describe a small product that could satisfy the customer's needs. But when push comes to shove that small product usually isn't very small. When MVP is used to mean "cut everything to the bone" the person saying it still seems to leave a lot of fat on the bone.

I think non-technical people hear the term MVP and think "a product without all that gold-plated software engineering fat that slows the team down." Such people show how little they actually understand about the digital world.

MVPs are not about technology. An MVP is not about building things.

An MVP is a marketing exercise: can we build something customers want?

Can we build something people will pay money for?

Before you use the language MVP you should assume:

  1. The technology can do it
  2. Your team can build it

The question is: is this thing worth building? And, before we waste money building something nobody will use, let alone pay for, what can we build to make sure we are right?

The next time people start sketching an MVP, divide it by 10. Assume the value is 10% of the stated value. Assume you have 10% of the resources and 10% of the time to build it. Now rethink what you are asking for. What can you build with a tenth?

Anyway, the cat is out of the bag; as much as I wish I could erase the abbreviation and name from collective memory, I can't. But maybe I can change the debate by differentiating between several types of MVP, that is, several different ways the term MVP is used:

  • MVP-M: a marketing product, designed to test what customers want, and what they will pay for.
  • MVP-T: a technical product designed to see if something can be built technologically – historically the terms proof-of-concept and prototype have been used here
  • MVP-L: a list of MUST HAVE features that a product MUST HAVE
  • MVP-H: a HiPPO MVP, a special MVP-L where the feature list is the Highest Paid Person's Opinion; unfortunately you might find several different people believe they have the right to set the feature list
  • MVP-X: where X is a number (1, 2, 3…); this derivative is used by teams who are releasing enhancements to their product and growing it. (In the pre-digital age we called this a version number.) Exactly what is minimal about it I'm not sure, but if it helps then why not?

MVP-M is the original meaning while MVP-L and MVP-H are the most common types.

So next time someone says “MVP” just check, what do they mean?


Utilisation and non-core team members


“But we have specialists outside the team, we have to beg… borrow… steal people… we can never get them when we want them, we have to wait, we can’t get them enough.”

It doesn't matter how much I pontificate about dedicated, stable, consistent teams, I hear this again and again. Does nobody listen to me? Does nobody read Xanpan or Continuous Digital?

And I wonder: how much management time is spent arguing over who (which individuals) is doing what this week?

Isn’t that kind of piecemeal resourcing micro-management?

Or is it just making work for “managers” to do?

Is there no better use of management time than arguing about who is doing what? How can the individuals concerned "step up to the plate" and take responsibility if they are pulled this way and that? How can they really "buy in" to work when they don't know what they are doing next week?

Still, there is another answer to the problem: “How do you manage staffing when people need to work on multiple work streams at once?”

Usually this is because some individuals have specialist skills or because a team cannot justify having a full time, 100%, dedicated specialist.

Before I give you the answer let's remind ourselves why the traditional solution can make things worse:

  • When a resource (people or anything else) is scarce, queues are used to allocate it, and queues create delays
  • Queues create delays in getting the work done – and are bad for morale
  • Queues are an alternative form of cost: in this case the price is paid in cost-of-delay
  • Queues disrupt schedules and predictability
  • While they wait people try to do something useful, but the useful thing is lower value and may cause more work for others; it can even create conflict when the specialist becomes available

The solution – and it is a generic solution that can be applied whenever some resource (people, beds, runways) is scarce – is this:

Have more of the scarce resource than is necessary.

So that sounds obvious I guess?

What you want is for there to be enough of the scarce resource that queues do not form. As an opening gambit, have 25% more of the resource than you expect to need: do not plan to use the scarce resource for more than 75% of the available time.

Suppose for example you have several teams, each of whom needs a UX designer one day a week. At first sight one designer could service five teams, but if you did that there would still be queues.

Why?

Because of variability.

Some weeks Team-1 would need a day and a half of the designer's time; sure, some weeks they would need the designer for less than a day, but any variability would cause a ripple – the "bullwhip effect". The disruption caused by variation would ripple out over time.

You need to balance several factors here:

  • Utilisation
  • Variability
  • Wait time
  • Predictability

Even if demand and capacity are perfectly matched, variability will increase wait time, which will in turn reduce predictability. If variability and utilisation are high then there will be queues and predictability will be low.

  • If you want 100% utilisation then you have to accept queues (delay) and a loss of predictability: ever landed at Heathrow airport? The airport runs at over 96% utilisation; there isn't capacity to absorb variability so queues form in the air.
  • If you want to minimise wait time with highly variable work you need low utilisation: why do fire departments have all those fire engines and fire fighters sitting around doing nothing most days?

Running a business, especially a service business, at 100% utilisation is crazy – unless your customers are happy with delays and no predictability.

One of the reasons budget airlines always use the same type of plane and one class of seating is to limit variation. Only as they have gained experience have they added extras.

Anyone who tries to run software development at 100% is going to suffer late delivery and will fail to meet end dates.

How do I know this?

Because I know a little about queuing theory. Queuing theory is a theory in the same way that Einstein's Theory of Relativity is a theory, i.e. it is not just a theory, it is proven – in both cases with mathematics. If you want to know more I recommend Factory Physics.

So, what utilisation can you have?

Well, that is a complicated question. There are a number of parameters to consider and the maths is very complicated – so complicated, in fact, that mathematicians usually turn to simulation. You don't need to run a simulation, though, because you can observe the real world; you can experiment for yourself. (Kanban methods allow you to see the queues forming.)

That said: 76% (max).

Or rather: in the simplest queuing theory models queues start to form (and predictability suffers) once utilisation is above 76%.
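
To see why, here is a minimal sketch (my illustration, using the closed-form result for the simplest M/M/1 model – random arrivals, one server, exponential service times – rather than a simulation). In that model a job's average queuing wait is utilisation/(1 − utilisation) service times:

```python
# Average queuing delay in the simplest single-server (M/M/1) model.
# An illustrative sketch: real teams are messier than this model.

def average_wait(utilisation, service_time=1.0):
    """Average time a piece of work waits in the queue.

    In an M/M/1 queue the wait is service_time * u / (1 - u),
    which blows up as utilisation (u) approaches 100%.
    """
    assert 0 < utilisation < 1, "at 100% the queue grows without limit"
    return service_time * utilisation / (1 - utilisation)

for u in (0.50, 0.76, 0.90, 0.95, 0.99):
    print(f"utilisation {u:.0%}: work waits {average_wait(u):5.1f}x its service time")
```

At 50% utilisation work waits one service time; at 76% about three; at 95% nineteen. The exact numbers belong to the simplest model – the shape of the curve is what matters.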

Given that your environment is probably a lot more complicated than the simplest models, you probably need to keep utilisation well below 76% if you want short lead times and predictability.

As a very broad rule of thumb: if you are sharing someone, don't commit more than three-quarters of their time.

And given that, a dedicated specialist doesn’t look so expensive after all. Back to where I came in.


Continuous Digital & Project Myopia


This seems a little back to the future… those of you who have been following the evolution of Continuous Digital will know the book grew out of the #NoProjects meme and my extended essay.

I think originally the book title was #NoProjects, then Correcting Project Myopia, then perhaps something else, and it finally settled down to Continuous Digital. The changing title reflected my own thinking, thinking that continued to evolve.

As that thinking has evolved the original #NoProjects material has felt more and more out of place in Continuous Digital. So I've split it out – Project Myopia is back as a stand-alone eBook and you can buy it today.

More revisions of Continuous Digital will appear as I refactor the book. Once this settles down I’ll edit through Project Myopia. A little material may move between the two books but hopefully not much.

Now the critics of #NoProjects will love this, because they complain that #NoProjects tells you what not to do, not what to do. In a way I agree with them, but at the same time the first step to solving a problem is often to admit you have a problem. Project Myopia is a discussion of the problem, it is a critique. Continuous Digital is the solution, and more than that.

Splitting the book in two actually helps demonstrate my whole thesis.

For a start, it is difficult to know when a work-in-progress, iterating, self-published, early-release book is done. My first books – Business Patterns and Changing Software Development – were with a traditional publisher. They were projects, with a start and a finish. Continuous Digital isn't like that: it grows, it evolves. That is possible because Continuous Digital is itself digital; Business Patterns and Changing Software Development had to stop because they were printed.

Second, Continuous Digital is already a big book – much bigger than most LeanPub books – and since I advocate "lots of small" over "few big" it makes sense to have two smaller books rather than one large one.

Third, and most significantly, this evolution is a perfect example of one of my key arguments: some types of “problem” are only understood in terms of the solution. Defining the solution is necessary to define the problem.

The solution and problem co-evolve.

In the beginning the thesis was very much based around the problems of the project model, and I still believe the project model has serious problems. In describing a solution – Continuous Digital – a different problem became clear: in a digital age businesses need to evolve with the technology.

Projects have end dates, hopefully your business, your job, doesn’t.

If you like you can buy both books together at a reduced price right now.


Definition of Ready considered harmful


Earlier this week I was with a team and discussion turned to "the definition of ready." This little idea has become more and more common in the last couple of years, and while I like the concept I don't recommend it. Indeed, I think it could well reduce agility.

To cut to the chase: “Definition of ready” reduces agility because it breaks up process flow, assumes greater role specific responsibilities, introduces more wait states (delay) and potentially undermines business-value based prioritisation.

The original idea builds on the "definition of done". Both definitions are semi-formal checklists agreed by the team and applied to pieces of work (stories, tasks, whatever). Before any piece of work is considered "done" it should satisfy the definition of done, so the team member who has done the work should be able to mentally tick each item on the checklist. Typically a definition of done might contain:

  • Story implemented
  • Story satisfies acceptance criteria
  • Story has been seen and approved by the product owner
  • Code is passing all unit and acceptance tests

Note I say "mentally" and I call these lists "semi-formal" because if you start having a physical checklist for each item, physically ticking the boxes, perhaps actually signing them, and presumably filing the lists or having someone audit them, then the process is going to get very expensive very quickly.

So far so good? – Now why don’t I like definition of ready?

On the one hand definition of ready is a good idea: before work begins on any story some pre-work is done to ensure it is "ready for development" – yes, typically this is about getting stories ready for coding. Such a checklist might say:

  • Story is written in User Story format with a named role
  • Acceptance criteria have been agreed with product owner
  • Developer, Tester and Product owner have agreed story meaning

Now on the other hand… even doing these means some work has been done. Once upon a time the story was not ready; someone, or some people, worked on the story to make it ready. When did this happen? Getting this story ready has already detracted from doing other work – work which was a higher priority because it was scheduled earlier.

Again, when did this happen?

If the story became “ready” yesterday then no big deal. The chances are that little has changed.

But if it became ready last week are you sure?

And what if it became ready last month? Or six months ago?

The longer it has been ready the greater the chance that something has changed. If we don't check and re-validate the "ready" state then there is a risk something will have changed and the work will be done wrong. If we do validate then we may well be repeating work which has already been done.

In general, the later the story becomes “ready” the better. Not only does it reduce the chance that something will change between becoming “ready” and work starting but it also minimises the chance that the story won’t be scheduled at all and all the pre-work was wasted.

More problematic still: what happens when the business priority is for a story that is not ready?

Customer: Well, Dev team, Story X is the highest priority for the next sprint.
Scrum Master: Sorry Customer, Story X does not meet the definition of ready. Please choose another story.
Customer: But all the other stories are worth less than X so I'd really like X done!

The team could continue to refuse X – and sound like an old-style trade unionist in the process – or they could accept X, make it ready and do it.

Herein lies my rule of thumb:

If a story is prioritised and scheduled for work but is not considered "ready" then the first task is to make it ready.

Indeed this can be generalised:

Once a story is prioritised and work starts then whatever needs doing gets done.

This simplifies the work of those making the priority calls. They now just look at the priority (i.e. business value) of work items. They don't need to consider whether something is ready or not.

It also eliminates the problem of when.

Teams which practise "definition of ready" usually expect their product owner to make stories ready before the iteration planning meeting, and that creates the problems above. Moving "make ready" inside the iteration, perhaps as a "3 Amigos" session after the planning meeting, eliminates this problem.

And before anyone complains "How can I estimate something that is not prepared?" let me point out that you can. You are just estimating something different:

  • When you estimate "ready" stories you are estimating the time it takes to move a well-formed story from analysis-complete to coding-complete
  • When you estimate an "unready" story you are estimating the time it takes to move a poorly formed story from its current state to coding-complete

I would expect the estimates to be bigger – because there is more work – and I would expect the estimates to be subject to more variability – because the initial state of the story is more variable. But it is still quite doable; it is an estimate, not a promise.

I can see why teams adopt definition of ready, and I might even recommend it myself, but I'd hope it was a temporary measure on the way to something better.

In teams with broken, role-based process flows a definition of done for each stage can make sense: the definition of done at the end of one activity is the definition of ready for the next. For teams adopting Kanban-style processes I would recommend this approach as part of process/board set-up. But I also hope that over time the board columns can be collapsed and the definitions dropped.
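
For what it's worth, here is a small sketch of that idea (the column names and policies are my own invention, purely illustrative): each column on the board has an exit ("done") policy, and a story is "ready" for a column exactly when the previous column's exit policy is satisfied.

```python
# Illustrative sketch: per-column policies on a Kanban-style board, where
# one column's exit ("done") policy doubles as the next column's entry
# ("ready") policy. Column names and policies are made up for illustration.

BOARD = [
    {"column": "Analysis",    "done_when": ["story has a named role",
                                            "acceptance criteria agreed"]},
    {"column": "Development", "done_when": ["unit and acceptance tests passing"]},
    {"column": "Review",      "done_when": ["product owner has approved"]},
]

def ready_policy(column):
    """'Ready' for a column is simply 'done' for the column before it."""
    names = [c["column"] for c in BOARD]
    i = names.index(column)
    return BOARD[i - 1]["done_when"] if i > 0 else []

print(ready_policy("Development"))  # -> Analysis's exit policy
```

Collapsing two columns then just means merging their policies – there is no separate "definition of ready" artefact to maintain.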


Documentation is another deliverable and 7 other rules


"Working software over comprehensive documentation." – The Agile Manifesto

Some have taken that line to mean “no documentation.” I have sympathy for that. Years ago I worked on railway privatisation in the UK. We (Sema Group) were building a system for Railtrack – remember them?

We (the programmers) had documentation coming out of our ears. Architecture documents, design documents, user guides, functional specifications, program specifications, and much, much more. Most of the documentation was worse than useless because it gave the illusion that everything was recorded and anyone could learn (almost) anything at any time.

Some of the documentation was just out of date. The worst was written by architects who lived in a parallel universe to the coders. The system described in the official architecture and design documents bore very little resemblance to the actual code, but since the (five) architects never looked at the code the (12) programmers could do what they liked.

Documentation wasn’t going to save us.

But I also know some documentation is not only useful but highly valuable. Sometimes a User Manual can be really useful. Sometimes a developers' "Rough Guide" can give important insights.

So how do you tell the difference?

How do you know what to write and what to forego?

Rule #1: Documentation is just another deliverable

Time spent writing a document is time not spent coding, testing, analysing or drinking coffee. There is no documentation fairy and documentation is far from free. It costs to write and costs far more to read (if it is ever read).

Therefore treat documentation like any other deliverable.

If someone wants a document let them request it and prioritise it against all the other possible work to be done. If someone is prepared to displace other work for documentation then it is worth doing.

Rule #2: Documentation should add value

Would you add a feature to a system if it did not increase the value of the system?

The same is true of documentation. If it adds value then there is every reason to do it, if it doesn’t then why bother?

Rule #3: Who will complain if the document is not done?

If in doubt identify a person who will complain if the document is not done. That person places a value on the document. Maybe have them argue for their document over new features.

Rule #4: Don’t write documents for the sake of documents

Don't write documents just because some process standard somewhere says a document needs to be written. Someone should be able to speak to that standard and explain why it adds more value than doing something else.

Rule #5: Essential documents are part of the work to do

There are some documents that are valuable – someone will complain if they are absent! User Manuals are one recurring case. I've also worked with teams that need to present documentation as part of a regulatory package to the Food and Drug Administration (FDA) and the like.

If a piece of work requires the documentation to be updated then updating the documentation is part of the work. For example, suppose you are adding a feature to a system that is subject to FDA regulation. And suppose the FDA regulation requires all features to be listed in a User Manual. In that case, part of the work of adding the feature is to update the user manual. There will be coding, testing and documentation. The work is not done if the User Guide has not been updated.

You don’t need a separate User Story to do the documentation. In such cases documentation may form part of the definition of done.

Rule #6: Do slightly less than you think is needed

If you deliver less than is needed then someone will complain and you will need to rework it. If nobody complains then why are you doing it? And what value is it? – Do less next time!

This is an old XP rule which applies elsewhere too.

Rule #7: Don’t expect anyone to read what you have written

Documentation is expensive to read. Most people who started reading this blog post stopped long ago, you are the exception – thank you!

The same is true for documentation.

The longer a document is the less likely it is to be read.
And the longer a document is the less of it will be remembered.

You and I do not have the writing skills of J.K. Rowling; few of us can hold readers' attention that long.

Rule #8: The author learns too

For a lot of documentation – and here I include writing, my own books, patterns, blogs, etc. – the real audience is the author. The process of writing helps the author, it helps us structure our thoughts, it helps us vent our anger and frustration, it forces us to rationalise, it prepares us for an argument, and more.

So often the person who really wants the document, the person who attaches value to it, the person who will complain if it is not done, and the person who will learn most from the document is the author.


Principles of software development revisited


Summer… my traditional time for doing all that stuff that requires a chunk of time… erh… OK, “projects” – only they aren’t well planned and they only resemble projects in the rear-view mirror.

Why now? Why summer? – Because my clients are on holiday too so I’m quiet and not delivering much in the way of (agile) training or consulting. Because my family is away visiting more family. Thus I have a chunk of time.

This year’s projects include some programming – fixing up my own Twitter/LinkedIn/Facebook scheduler “CloudPoster” and some work on my Mimas conference review system in preparation for Agile on the Beach 2018 speaker submissions.

But the big project is a website rebuild.

You may have noticed this blog has moved from Blogger to a new WordPress site, and attentive readers will have noticed that my other sites, allankelly.net and SoftwareStrategy.co.uk, have also been folded in here. This has involved a lot of content moving, which also means I've been seeing articles I'd forgotten about.

In particular there is a group of “The Nature of Agile” articles from 2013 which were once intended to go into a book. Looking at them now I think they still stand, mostly. In particular I’m impressed by my 2013 Principles of Software Development.

I'll let you read the whole article but here are the headlines:

Software Development Principles

  1. Software Development exhibits Diseconomies of Scale
  2. Quality is essential – quality makes all things possible
  3. Software Development is not a production line

Agile Software Principles

  1. Highly adaptable over highly adapted
  2. Piecemeal Growth – Start small, get something that works, grow
  3. Need to think
  4. Need to unlearn
  5. Feedback: getting and using
  6. People closest to the work make decisions
  7. Know your schedule, fit work to the schedule not schedule to work
  8. Some Agile practices will fix you, others will help you see and help you fix yourself

The article then goes on to discuss The Iron Triangle and Conway’s Law.

I think that essay might be the first time I wrote about diseconomies of scale. Something else I learned when I moved all my content to this new site is that the Diseconomies of Scale blog entry is by far my most read blog entry ever.

I’m not sure if I’m surprised or shocked that now, four years later, these still look good to me. I really wouldn’t change them much. In fact these ideas are all part of my latest book Continuous Digital.

I'm even starting to wonder if I should roll those Nature of Agile essays together into another book – but that's bad of me! Too many books….


What if it is all random?


What if success in digital business, and in software development, is random? What if one cannot tell in advance what will succeed and what will fail?

My cynical side sometimes thinks everything is random. I don’t want to believe my cynical side but…

All those minimally viable products, some work, some fail.

All those stand-up meetings, do they really make a difference?

All those big requirements documents, just sometimes they work.

How can I even say this? – I've written books on how to "do it right." I advise companies on how to improve "processes." I've helped individuals do better work.

And just last month I was at a patterns conference trying to spot recurring patterns and understand why they are patterns.

So let me pause my rational side and indulge my cynical side, what if it is all random?

If it is all random what we have to ask is: What would we do in a random world?

Imagine for a moment success is like making a bet at roulette and spinning the wheel.

Surely we would want to both minimise losses (small bets) and maximise wheel spins: try lots, remove the failures quickly and expand the successes (if we can).

I suggested "it's all random" to someone the other day and he replied: "It is not random, it's complex." And we were into Cynefin before you could say "spin the wheel."

Dave Snowden’s Cynefin model attempts to help us understand complexity and the complex. Faced with complexity Cynefin says we should probe. That is, try lots of experiments so we can understand, learn from the experiments and adjust.

If the experiment “succeeds” we understand more and can grow that learning. Where the experiment “fails” we have still learned but we will try a different avenue next time.

Look! – it is the same approach, the same result, whether complexity, Cynefin or just randomness: try a lot, remove failure and build on success. And hang on, where have I heard that before? … Darwin and evolution: random gene mutations which give benefit get propagated, and in time the others die out.

It is just possible that Dave is right, Darwin is right and I am right…

Today most of the world's mobile/cell telephone systems are built on CDMA technology. CDMA is super complex maths but it basically works by encoding a signal (a sequence of numbers, your voice digitised) and injecting it into a random number stream (radio spectrum); provided you know the encoding you can retrieve the signal out of the randomness. Quite amazing really.

Further, provided the number sequences are sufficiently different they are in effect random, so you can inject more signals into the same space.

That is why we can all use our mobile phones at the same time.
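
To make that concrete, here is a toy sketch of the principle (an illustration only – real CDMA is far more mathematically sophisticated): two users transmit over one channel at the same time, and each receiver recovers its own bits by correlating with its own code.

```python
# Toy direct-sequence spreading: two users share one channel simultaneously.
# Purely illustrative - real CDMA uses carefully designed codes, not this.
import random

random.seed(1)
CODE_LEN = 64

def make_code():
    # A pseudo-random spreading code of +1/-1 "chips".
    return [random.choice((-1, 1)) for _ in range(CODE_LEN)]

def spread(bits, code):
    # Each data bit (+1 or -1) is multiplied across the whole code sequence.
    return [b * c for b in bits for c in code]

def despread(signal, code):
    # Correlate each code-length chunk with the code; the sign gives the bit.
    bits = []
    for i in range(0, len(signal), CODE_LEN):
        corr = sum(s * c for s, c in zip(signal[i:i + CODE_LEN], code))
        bits.append(1 if corr > 0 else -1)
    return bits

code_a, code_b = make_code(), make_code()
bits_a, bits_b = [1, -1, 1, 1], [-1, -1, 1, -1]

# Both users transmit at once: the channel carries the sum of their signals.
channel = [a + b for a, b in zip(spread(bits_a, code_a), spread(bits_b, code_b))]

print(despread(channel, code_a) == bits_a)  # True: A's bits recovered
print(despread(channel, code_b) == bits_b)  # True: B's bits recovered
```

Each receiver's own code "lights up" against everything else, which averages away as noise.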

Put it another way: you walk into a party in London, in the room are Poles, Lebanese, Germans, Argentinians and the odd Brit. They are all talking in their own language to fellow speakers. Somehow you can hear your own language and join the right conversation. Everything else is random background noise.

Maybe the same is true in digital business and software development…

Perhaps it is all complex, but it is so complex that we will never be able to follow all the cause-and-effect chains; so complex that it looks random. Dave is right with Cynefin, but maybe there is so much complexity that we might as well treat it as random and save our time.

Back to CDMA and London parties, faced with apparent randomness there are useful strategies and signals can still be extracted.

Perhaps the way to deal with this complexity is not to try and understand it but to treat it as random. Rather than expend energy and time on a (possibly) impossible task accept it as random and apply appropriate strategies.

After all, if we have learned anything from statistical distributions it is that faced with actual and apparent randomness we can still find patterns, we can still learn and we can still work with, well, randomness.
