What we forget about the Scientific Method

I get fed up hearing other Agile evangelists champion The Scientific Method. I don’t disagree with them – I use it myself – but I think they are sometimes guilty of overlooking how the scientific method is actually practiced by scientists and researchers. Too often the scientific approach is made to sound simple. It isn’t.

First, let’s define the scientific method. Perhaps rather than calling it the “scientific method” it is better called “experimentation.” What the Agile Evangelists of my experience are often advocating is running an experiment – perhaps several experiments in parallel, but more likely in sequence. The steps are something like this:

  1. Propose a hypothesis, e.g. undertaking monthly software releases instead of bi-monthly will result in a lower percentage of release problems
  2. Examine the current position: e.g. find the current figures for release frequency and problems, record these
  3. Decide how long you want to run the experiment for, e.g. 6 months
  4. Introduce the change and reserve any judgement until the end of the experiment period
  5. Examine the results: recalculate the figures and compare these with the original figures
  6. Draw a conclusion based on observation and data
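The bookkeeping behind steps 2 to 6 can be sketched in a few lines of Python. All figures here are hypothetical, invented purely for illustration, and the metric is simply the percentage of releases that had problems:

```python
# Sketch of the experiment bookkeeping in steps 2 to 6.
# All figures are hypothetical.

def problem_rate(releases: int, problem_releases: int) -> float:
    """Return problem releases as a percentage of all releases."""
    return 100.0 * problem_releases / releases

# Step 2: record the current position (bi-monthly releases).
baseline = problem_rate(releases=3, problem_releases=2)

# Steps 3 to 5: after six months of monthly releases, recalculate.
after = problem_rate(releases=6, problem_releases=2)

# Step 6: draw a conclusion from the data, not from gut feeling.
hypothesis_supported = after < baseline
print(f"baseline {baseline:.0f}%, after {after:.0f}%, "
      f"hypothesis supported: {hypothesis_supported}")
```

Real experiments are, of course, messier; the point is only that steps 2 and 5 must produce comparable numbers so that step 6 is a comparison rather than an opinion.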

I agree with all of this, I think it’s great, but…

Let’s leave aside problems of measurement, problems of formulating the hypothesis, problems of making changes and propagation problems (i.e. the time it takes for changes to work through the system). These are not minor problems and they do make me wonder about applying the scientific method in the messy world of software and business, but let’s leave them to one side for the moment.

Let’s also leave aside the so-called Hawthorne Effect – the tendency for people to change and improve their behaviour because they know they are in an experiment. Although the original Hawthorne experiments were shown to be flawed some time ago the effect might still be real. And the flaws found in the Hawthorne experiments should remind us that there may be other factors at work which we have not considered.

Even with all these caveats I’m still prepared to accept that an experimental approach to work has value. Sometimes the only way to know whether A or B is the best answer is to actually do A and do B and compare the results.

But this is where my objections start… There are two important elements missing from the way Agile Evangelists talk about the scientific method. When real scientists – and I include social scientists here – do an experiment there is more to the scientific method than the experiment itself, and so there should be in our work too.

#1: Literature review – standing on the shoulders of others

Before any experiment is planned scientists start by reviewing what has gone before. They go to the library – sometimes to books, but journals are probably more up to date and often benefit from stricter peer review. They read what others have found and what others have done before: the experiments run and the theories devised to explain the results.

True your business, your environment, your team are all unique and what other teams find might not apply to you. And true you might be able to find flaws in their research and their experiments. But that does not mean you should automatically discount what has gone before.

If other teams have found consistent results with an approach then it is possible yours will too. The more examples of something working the more likely it will work for you. Why run an experiment if others have already found the result?

Now I’m happy to agree that the state of research on software development is pitiful. Many of those who should be helping the industry here – “Computer Science” and “Software Engineering” departments in universities – don’t produce what the industry needs. (Ian Sommerville’s recent critique on this subject, “The (ir)relevance of academic software engineering research”, is well worth reading.) But there is research out there, some from university departments and some from industry.

Plus there is a lot of research that is relevant but is outside the computing and software departments. For example I have dug up a lot of relevant research in business literature, and specifically on time estimation in psychology journals (see my Notes on Estimation and Retrospective Estimation and More notes on Estimation Research.)

As people used to dealing in binary, software people might demand a simple “Yes this works” or “No it doesn’t”, and those suffering from physics envy may demand rigorous experimental research, but little research of this type exists in software engineering. Much of software engineering is closer to psychology: you can’t conduct the experiments that would give these answers. You have to use statistics and other techniques and look at probabilities. (Notice I’ve separated computer science from software engineering here. Much of computer science theory (e.g. sort algorithm efficiency, P and NP problems, etc.) can stand up with physics theory but does not address many of the problems practicing software engineers face.)
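To illustrate “look at probabilities” rather than demanding a binary yes/no, here is a standard two-proportion z-test – textbook statistics, nothing specific to software engineering – run on hypothetical release figures:

```python
import math

def two_proportion_z(successes_a: int, n_a: int,
                     successes_b: int, n_b: int):
    """Two-sided z-test for a difference between two proportions.
    Returns (z statistic, p-value)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical figures: 12 of 30 releases had problems before the
# change, 6 of 30 after it.
z, p = two_proportion_z(12, 30, 6, 30)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Here the problem rate halved, yet p comes out around 0.09, so a strict 5% significance test would not accept the improvement – exactly the kind of probabilistic, unsatisfying answer this sort of data tends to give.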

#2: Clean up the lab

I’m sure most of my readers did some science at school. Think back to those experiments, particularly the chemistry experiments. Part of the preparation was to check the equipment, clean any that might be contaminated with the remains of the last experiment, ensure the workspace was clear and so on.

I’m one of those people who doesn’t (usually) start cooking until they have tidied the kitchen. I need space to chop vegetables, I need to be able to see what I’m doing and I don’t want messy plates getting in the way. There is a term for this: mise en place, a French expression which, according to Wikipedia:

“is a French phrase which means “putting in place”, as in set up. It is used in professional kitchens to refer to organizing and arranging the ingredients (e.g., cuts of meat, relishes, sauces, par-cooked items, spices, freshly chopped vegetables, and other components) that a cook will require for the menu items that are expected to be prepared during a shift.”

(Many thanks to Ed Sykes for telling me a great term.)

And when you are done with the chemistry experiment, or cooking, you need to tidy up. Experiments need to include set-up and clean-up time. If you leave the lab a mess after every experiment you will make it more difficult for yourself and others next time.

I see the same thing when I visit software companies. There is no point in doing experiments if the work environment is a mess – both physically and metaphorically. And if people leave a mess around when they have finished their work then things will only get harder over time. There are many experiments you simply can’t run until you have done the correct preparation.

An awful lot of the initial advice I give to companies is simply about cleaning up the work environment and getting them into a state where they can do experiments. Much of that is informed by reference to past literature and experiments. For example:

  • Putting effective source code control and build systems in place
  • Operating in two week iterations: planning out two weeks of work, reviewing what was done and repeating
  • Putting up a team board and using it as a shared to-do list
  • Creating basic measurement tools, whether they be burn-down charts, cumulative flow diagrams or even more basic measurements

You get the idea?
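As a sketch of the basic measurement tools in the list above: a burn-down chart is nothing more than remaining work recorded each day and compared against an ideal straight line. All figures below are hypothetical:

```python
# Minimal burn-down bookkeeping: remaining task-days recorded at the
# end of each day of a two-week (10 working day) iteration.
# All figures are hypothetical.

committed = 40.0  # task-days planned at the start
remaining = [40.0, 37, 35, 34, 30, 26, 22, 17, 12, 6, 0]  # days 0..10

def ideal_line(total: float, days: int) -> list:
    """Straight-line burn-down from `total` to zero over `days` days."""
    return [total - total * d / days for d in range(days + 1)]

ideal = ideal_line(committed, 10)
for day, (actual, planned) in enumerate(zip(remaining, ideal)):
    status = "ahead" if actual < planned else "on track or behind"
    print(f"day {day:2}: remaining {actual:5.1f}, "
          f"ideal {planned:5.1f} ({status})")
```

Nothing clever is happening: the value is in recording the numbers every day so the trend is visible to everyone.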

Simply tidying up the work environment and putting a basic process in place – one based on past experience, one which better matches the way work actually happens – can alone bring a lot of benefit to organizations. Some organizations don’t need to get into experiments, they just need to tidy up.

And, perhaps unfortunately, that is where it stops for some teams. Simply doing the basics better, simply tidying up, removes a lot of the problems they had. It might be a shame that these teams don’t go further and try more, but that might be good enough for them.

Imagine a restaurant that is just breaking even: the food is poor, customers sometimes refuse to pay, the service is shoddy so tips are small, staff don’t stay long, which makes the whole problem worse – a vicious circle develops. In an effort to cut costs managers keep staffing low, so food arrives late and cold.

Finally one of the customers is poisoned and the local health inspector comes in. The restaurant has to do something. They were staggering on with the old ways until now but a crisis means something must be done.

They clean the kitchen, they buy some new equipment, they let the chef buy the ingredients he wants rather than the cheapest, they rewrite the menu to simplify their offering. They don’t have to do much and suddenly the customers are happier: the food is warmer and better, the staff are happier, a virtuous circle replaces a vicious circle.

How far the restaurant owners want to push this is up to them. If they want a Michelin star they will go a long way, but if this is the local greasy spoon cafe what is the point? – It is their decision.

They don’t need experiments, they only need the opening steps of the scientific method, the bit that is too often overlooked. Some might call it “Brilliant Basics” but you don’t need to be brilliant, just “Good Basics.” (Remember my In Search of Mediocracy post?)

I think the scientific method is sometimes, rightly or wrongly, used as a backdoor to introduce change – to lower resistance and get individuals and teams to try something new: “Let’s try X for a month and then decide if it works.” That can be a legitimate approach. But dressing it up in the language of science feels dishonest.

Let’s have less talk about “The Scientific Method” and more talk about “Tidying up the Kitchen” – or is it better in French? Mise en place… come to think of it, doesn’t the Lean community have a Japanese word for this? Pika pika.


How do I make testing faster?

Earlier this week I was the guest of a large bank in the City – OK, Canary Wharf actually. They had their own little internal Agile conference. As well as myself, some of the usual suspects were on parade, as were some internal speakers. It was very enjoyable and, as usual, the C-list speakers were some of the most interesting.

Later in the day I found myself in conversation with two people concerned about software testing. They posed the question: “How do we make testing faster?” Specifically, how do we make SIT (“System Integration Testing”) and UAT (“User Acceptance Testing”) faster?

Now, these solutions are far from unique to this particular bank. Indeed, after listening to the question and situation I recalled several discussions I’d had, problems I’d seen and solutions I’d suggested before. (OK, bad consultant me: I should have enquired into the problem, understood the context fully, coached them to find their own solutions, etc. But we only had 30 minutes and I wasn’t being paid for a full consulting session – nor am I charging you for reading this! – so let’s not reinvent the wheel, let’s fill in with assumptions and give some general answers.)

The solutions we came up with were:

  1. More testing environments: it is shocking how often I hear testers say they do not have enough test environments. They have a limited number of environments and they have to time-share or co-habit. There are two answers: either increase the number of test environments or reduce the number of tests and testers. I’m sorry, I can’t give you a magic sentence that persuades the money people to buy you more environments. But now that we can virtualise machines there is very little excuse for not creating more test environments. The usual excuse is cost, but even here the costs are not that high. (Unless, that is, you need to buy a new mainframe.)
  2. Testers integrated & co-located with developers: if you want to go faster I’ve yet to find anything that beats face-to-face direct verbal communications. If you want/need people distributed then you should accept you will not be as fast as you could be. I’ve said it before and I’ll keep saying it: Testers are first class citizens, treat them as you would developers.
  3. Automate SIT: simple really – automate the hell out of testing. Easy to say, harder to do, but far from impossible. Since most UAT/Beta testing is exploratory in nature it is largely SIT that needs automation. And please, please, please: don’t spend money on test tools.
  4. Make UAT just UAT: Recognise that some of what passes as UAT is actually SIT and automate it. In large corporates an awful lot of what is called “User Acceptance Testing” isn’t “User Testing” – although it may well be “Acceptance Testing.” Look carefully at what happens in this phase: that which is genuine Acceptance Test can – indeed should – be specified in advance (of the coding), can be moved into the SIT phase and automated. The true User bit is going to be much more explorative in nature.
  5. Improve development quality with TDD and do it test first: one reason for long SIT and UAT phases is that they find bugs. One way to significantly reduce the number of bugs is to have developers practice Test Driven Development, i.e. build in quality earlier. This may mean coding takes longer in the first place but it will speed up the wider process. TDD is still quite new to many developers so you can speed up the process by giving them some help, specifically TDD coaching from someone who has done lots of this before. Until developers make the jump from “post coding automated unit tests” to “pre-coding test first development” you will not maximise the benefits of TDD, you will have more code in your system than you need and you will have a poorer overall design, and both of those make for more tests later on and potentially more problems. When tests are written before code TDD becomes a design activity.
  6. Reduce batch size: i.e. reduce the size of the release, and probably do more releases. Do less, do a better job, push it out more often, reduce risk. Think dis-economies of scale. Many large corporations use a large batch size simply because any release is so painful. This is completely the wrong approach, rather than running scared of releases take them head on.
  7. Done means done and tested in the iteration – at the end of the iteration it is releasable: when you reach the end of an iteration and declare work “Done” that means “Done ready for release (to customers).” Well, that should be your aim anyway and you should be working towards that point. If “Done” currently means “Done development, ready for SIT” then you should be working towards bringing SIT inside the iteration. If that looks hard right now then look again at items 1 to 6 in this list. And when you’ve moved SIT inside the iteration start on UAT. Until both those phases (and any other phases in the tail) are inside the iteration they can throw up problems (bugs) which will disrupt a future iteration which implies the work you said was Done is not Done.
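Point 5, writing the test before the code, can be sketched with Python’s built-in unittest. The function name and behaviour here are invented purely for illustration:

```python
import unittest

# Written FIRST: the test states the behaviour we want from a
# function that does not exist yet (red).
class ReleaseLabelTest(unittest.TestCase):
    def test_label_combines_version_and_date(self):
        self.assertEqual(release_label("2.1", "2014-09-04"),
                         "v2.1 (2014-09-04)")

# Written SECOND: the simplest implementation that makes the
# test pass (green). The test drove the design.
def release_label(version: str, date: str) -> str:
    return f"v{version} ({date})"

# Run the test suite programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ReleaseLabelTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The discipline is in the ordering: because the test exists before the code, only code the test demands gets written, which is how TDD becomes a design activity rather than an after-the-fact check.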

Now the catch. Money.

Some of these things require money to be spent. In a rational world that would not be a problem because there is plenty of evidence that increased software quality (by which I mean specifically fewer bugs and higher maintainability) leads to shorter schedules and therefore reduced costs. And as a plus you get more predictability. What’s not to like?

So rationally it’s a case of spend money to save money. Quality is free, as Philip Crosby used to say.

But unfortunately rational engineers often find it difficult to see rationality inside big corporations operating annual budgeting cycles. One of the recurring themes this week – although not one I think was given official time – was the constraints of the budgeting cycle. The rational world of the engineer and the rational world of the accountant and budgeteer seem a long way apart.

There is a solution, but it is a big solution which will scare many people. It is Beyond Budgeting, and that is why I was so keen to invite Bjarte Bogsnes – the author of Implementing Beyond Budgeting – to talk about Beyond Budgeting at this year’s Agile on the Beach conference. And it’s probably why Bjarte recently tweeted “Invitations to speak about Beyond Budgeting at Agile and HR conferences are now as frequent as Finance conference invitations. Wonderful!”



Do you know the way to Falmouth? (Agile on the Beach)

The Agile on the Beach 2014 conference is fast approaching and there is every sign that it will, again, be sold out. Which means that in the days and weeks before the conference I’m going to have several people asking for my advice on how to get to Falmouth. So here is some advice on getting to Falmouth and the conference – not everything you could know, but the main points I tell people when they ask. (If you want a quick summary skip to the end.)

Firstly, even some English people are stumped when they stop and consider how to get to Falmouth. It’s in Cornwall; it’s not as far west as it could be (that’s Penzance) but it is in a part of the country that many people have never been to. And if you are coming from outside the UK it’s even more likely that you need to stop and think.

So the first piece of advice is: plan in advance. Don’t leave it to a few days beforehand. You probably need a day just to get to Falmouth.

And that leads to the second piece of advice: if you can, plan to extend your stay – it is well worth it. The conference is Thursday and Friday; if you can, stay longer. I know speakers and attendees who have brought their significant other with them, or arranged for the significant person to arrive on the Friday.

Falmouth is a great little town to spend a day or two in. Better still St Ives is close by, I have been known to go all the way to St Ives just for an exhibition at The Tate gallery there. Then there is the rest of Cornwall to explore. And, if you look carefully there are great places to eat.

Let’s start with the most common case…

Falmouth by car

You can drive from London to Falmouth; it’s about 300 miles and five hours of driving – before you take any breaks. Personally I don’t recommend it. I’ve done it a couple of times and it’s not my idea of fun. Especially once you leave the M5 – the A30 seems to go on and on and on… If you do drive, allow plenty of time and plan on a couple of breaks.

Also, you are likely to be charged for car parking at the conference itself. The conference is held at Falmouth University and despite lots of talks with the University people they refuse to remove the car parking fee. As I recall this was £3 a day but I might be wrong. Hopefully they will relent but there you go.

Flying – from London or elsewhere

You can also fly. Actually you fly to Newquay (NQY) airport, from where you need to cover the 25 miles or so to Falmouth. You are probably looking at a taxi, which adds to the cost. Also adding to the cost is the third-world-like £5 “Airport Improvement Tax” which you need to pay on departure.

I don’t like NQY. Of all the airports I’ve been to I find the security staff the most intrusive. I know they have a job to do. I know they have to screen you. I know there are bad people in the world. But… of all the airports in the world I find the security here to be the biggest infringers of my personal space.

I once saw the security staff send a lady back to the check-in desk because she had a boarding pass issued by a code-share partner of the airline. They demanded a genuine FlyBe boarding pass and not a self-printed BA boarding pass. Maybe this is common but I’ve never seen it elsewhere.

Anyhow, nobody flies for fun these days, so maybe I’m just being a grumpy old man.

If you want to fly you probably want FlyBe from Gatwick, although FlyBe also have daily flights from Manchester. FlyBe have some flights from other places too, but these don’t run every day. At the time of writing EasyJet also list a flight from Liverpool and have operated from Southend before now, but again it’s not every day.

Internationally

If you are travelling internationally you may well be getting on a plane anyhow. Although Lufthansa has an occasional flight to Newquay you are probably going to have to connect somewhere. Gatwick is the obvious place but Manchester is just as good an option.

One thing to be aware of: if you are flying into Heathrow and need to get from there to Falmouth I recommend you switch to trains. Transferring between Heathrow and Gatwick is better avoided. On the other hand, getting from Heathrow to Paddington and picking up a train is really easy.

One minute please, I’ll talk about trains in a moment.

Another option is Exeter – Exeter also has an airport, with some flights to Amsterdam and other locations in the UK and Europe. Since Amsterdam is a hub this is a viable option for anyone coming from further afield.

From Exeter airport you could continue by train. I’m not sure how you get from the airport to Exeter train station, bus or taxi is my guess.

Last year some Dutch speakers flew to Exeter and hired a car to drive the second leg.

Train

Personally I always take the train. I live in London so this means getting to London Paddington and picking up a train to Penzance. But you don’t ride all the way: a few stops before Penzance is Truro, where you need to change.

Paddington to Truro is about 4.5 hours. At Truro change for the little Falmouth train, but get off at Penryn station (Penryn live departure boards); from there it is a short walk up the hill to the University campus.

Once I’m on the train at Paddington I can work, read, sleep – whatever I like. Unlike the plane, where you are queuing to get through security, queuing for coffee, queuing to get on the plane, squeezed in for a 40 minute flight and then queuing to get off, on the train you just sit.

If you are coming from somewhere else in the country you can pick up the Cross Country train that also goes to Penzance or pick up the train from Paddington at Reading.

If you’ve never done the journey before, pay special attention to the Dawlish section after you leave Exeter – the scenery is beautiful. And yes, the line has been rebuilt after last winter.

My big piece of advice here is: book your train tickets in advance. If you leave it until the last minute you could end up paying £300 for Second Class. If you book in advance you can get First Class for as little as £100. The extras in First Class aren’t up to much but you do get free water, tea and coffee (compared to First Class on the East Coast Mainline or Virgin West Coast, FGW First Class is poor).

Note also: FGW don’t have wifi onboard yet and 3G reception is poor after Exeter. The further west you go the weaker it is.

Some of the First Great Western trains have a “Travelling Chef” who does a good bacon sandwich. But determining which trains have a Travelling Chef is hard – FGW’s website makes you work for it. And sometimes the Chef doesn’t turn up: I’ve planned on a bacon butty before now only to find the Chef didn’t turn up for work that day.

Next piece of advice: going home, look at the sleeper – especially if you live in London. You leave Cornwall about 10pm and get to Paddington about 5am the next morning. I’ve done it for the last three years – although I’m probably not doing it this year. Again, FGW don’t make the sleeper that easy to find or book (no Cross Country this time).

The sleeper isn’t really that much more expensive – I think it cost me £100 last year – and it is fun. You don’t sleep all the way but you will sleep. It’s an experience.

A warning: the same trains which have the sleeper cars also have regular cars where you can sit – airline style – and try to sleep. If you don’t explicitly book the sleeper you have a seat, not a bed.

There you go, that is pretty much the advice I give to anyone who asks: How do I get to Agile on the Beach?

  1. Avoid driving
  2. Only fly if you are coming from abroad (or Scotland), and then try to connect somewhere other than Heathrow
  3. Take the train if you possibly can
  4. Book early and pay the extra for first class
  5. Think about getting the sleeper back

Do you know the way to Falmouth? (Agile on the Beach) Read More »

Tailor or an image consultant?

“Gentlemen, you are getting married. You go to the best tailor in town, and you say: ‘I’m getting married, my bride is beautiful, make me look one million dollars, I don’t care what it costs.’

The tailor measures you very carefully then he stands back. He rubs his chin, he is clearly troubled by something. ‘Tell me’ you say, ‘Please tell me.’

‘Sir’, he replies, ‘I don’t think it is my place to say’

‘Tell me’ you beg, ‘I must look fantastic for my girl’

‘Well Sir, …’ he talks slowly and cautiously, ‘… if you really want to look $1,000,000 can I suggest you lose 5kg?’

What do you say?”

This is the dilemma many a consultant finds themselves in. More importantly, it is the dilemma that I find again and again in corporate IT:

  • Is IT a service department which is expected to build what is requested without answering back?

Or

  • Is the IT department the expert in maximising benefit and value out of IT investment?

Is the tailor a master of clothes who just makes beautiful clothes? Or are they, because of their long experience making people look beautiful in clothes, an image consultant who can help you look beautiful?

Is your IT department a tailor who is expected to produce something brilliant no matter what the request is? Or are they people who, because they work with information technology every day, have experience and knowledge to help their (internal) customers maximise their benefit? (Even if that means the original request should be reconsidered.)

I distinctly remember a senior IT manager at an American bank telling me that his department was most definitely a service department: it wasn’t his job to answer back to business requests, his job was to build what was asked for. The implication being that even if his people knew the (internal) client was approaching the issue from the wrong angle, even asking for the wrong system, they should not correct them. They should just build what was asked for.

Perhaps this dilemma is felt most acutely by Business Analysts, who are sometimes expected to write down the orders that someone else wants to give to IT even if they know better. Fortunately some Business Analysts can go further and advise their “customers” on the bigger questions.

The original metaphor comes from Benjamin Mitchell. As I recall it was part of a very deep conversation we had in Cornwall back in 2011, one which also led to the “Abandon hope all ye who enter Agile” blog post. I’ve embellished and elaborated the story over the years.

I reminded Benjamin of this story in January – he’d forgotten about it; that’s how brilliant he is, he has so many great ideas he can’t keep track of them all. Once I’d reminded him he immediately made it better.

“How would you know?” he asked, meaning, “How can you determine what is expected of you?”

For me the story is about missed opportunities, about frustration and about what you might do to change the position. For Benjamin it is about asking “Do we all expect the same thing? – If you think they consider you a service group how can you confirm this?”

Either way the core problem is a mismatch of expectations. Your IT department might be a service department; that approach can work, it’s how many big corporations operate (in my experience).

Or the IT department could be the experts in technology, how to apply technology, how to frame problems so technology can help and how to maximise the return from technology.

Both are valid options.

But the real problem arises when someone in the department thinks the department should be doing more than the rest of the company expects of it; or when the rest of the company expects more from the department but the department sees itself as a service group.

Finally, go back to my series on the economics of software development, the supply and demand curve, and apply this story. If the IT department is seen as a tailor who shouldn’t answer back they will find it very difficult, if not impossible, to change the demand curve. Such a tailor can only work on the supply side.


False Projects

In my last post (Inconvenient Truths of Project Status Reporting) I used an expression which I think deserves highlighting and explaining:

        False Projects

False projects occur when we use the word “project” for work which is not really a project. For example:

  • There is no end date to the work, it goes on and on, rightly or wrongly.
  • The team is not broken up, it goes on and on, rightly or wrongly, i.e. the team is not a temporary structure.
  • The output of the work is not defined in advance, or might not even be defined.

Some might think I’m playing word games here but I think it’s important: we need to be clear about our terms. Please, hear me out. (This is really important when we talk about #NoProjects and #BeyondProjects. If this is a new idea to you see my Why Projects Don’t Make Sense post.)

PRINCE 2 (the UK project management standard) offers the following definition of “a project”:

“A temporary organization that is needed to produce a unique and predefined outcome or result at a pre-specified time using predetermined resources.”

This is the definition which is most commonly violated. PRINCE 2 helpfully offers a second definition of a project:

“A management environment that is created for the purpose of delivering one or more business products according to a specified business case.”

This seems like a pretty open-ended definition – none of the terminating conditions of the first, and that’s part of the problem. Potentially the two definitions describe very different things. Under this definition a false project would be a piece of work which is called “a project” but for which there is no specified business case or defined products.

Lest I be too UK centric, the American Project Management Institute offers this definition (I have highlighted the parts which concern me most):

“More specifically, what is a project?

It’s a temporary group activity designed to produce a unique product, service or result.

A project is temporary in that it has a defined beginning and end in time, and therefore defined scope and resources.

And a project is unique in that it is not a routine operation, but a specific set of operations designed to accomplish a singular goal. So a project team often includes people who don’t usually work together – sometimes from different organizations and across multiple geographies.”

As with the first PRINCE 2 definition the emphasis is on an activity that ends. In this definition the team “don’t usually work together” – but if a team stays together long enough, there comes a point when working together becomes normal.

An awful lot of software development work just doesn’t correspond to any of these definitions but we call it a project. And a lot of work – whether we call it a project or not – is managed by a Project Manager which kind of implies someone, somewhere, thinks of it as a project.

I call such work a False Project.

The problem is: when we use the language of projects – when some people assume the work conforms to one of the above definitions, specifically when they think there is an end, or a temporary arrangement – then we create a conflict. We create confusion. We create cognitive dissonance.

When we manage a piece of work as “a project” when it isn’t then at the very least we use the wrong language and tools. Worse still we create expectations that can’t be met. Part of the #NoProjects / #BeyondProjects logic is about fixing this conflict, closing this gap.

Personally I have a little hypothesis but I don’t know how to prove it – or disprove it. If someone has employment statistics on managers, middle managers and project managers for the last 30 years we might be able to find some supporting evidence.

My hypothesis is: starting in the 1980’s middle managers became unfashionable, many of them got laid off in mass downsizings. But that doesn’t mean all the management work went away, sure some did but some remained. I suspect the growth in projects and project management stems from this time. I suspect that much of what gets called project management is in fact good old fashioned junior or middle management dressed up so it might be added back into organizations which have previously rejected middle management or want “flat hierarchies” (and other such oxymorons).


Inconvenient Truths of Project Status Reporting

Long time readers of this blog may recall that I subscribe to the high-brow MIT Sloan Management Review. They may also remember that I’m never quite sure it is worth the rather high subscription fee. But once in a while an article comes along which is worth a year’s subscription. The “Pitfalls of Project Status Reporting” is this year’s article.

As the title suggests the article looks at why project status reporting, specifically for IT projects, so often fails to alert companies of the problems in project work. (Clearly the authors are unfamiliar with #BeyondProjects/#NoProjects but I suspect most of the same problems can be found in the false-projects so beloved of IT organizations.)

If you want the full story you will need to buy your own copy of the article (Amazon have it for £2.50) but for the rest of you here are the highlights, the 5 points the authors called "Inconvenient Truths", plus some of the text and a couple of comments from myself. The comments are all my own but the quotes – and the headlines in bold – are from the article, which is research not conjecture, something we could do with more of!

1. Executives can’t rely on project staff and other employees to accurately report project status information and to speak up when they see problems.

“[software project managers] write biased reports 60% of the time and that their bias is more than twice as likely to be optimistic.”

Basically people don’t like reporting bad news and worry that reporting it will reflect badly on themselves.

2. A variety of reasons can cause people to misreport project status: individual personality traits, work climate and cultural norms can all play a role.

“employees who work in climates that support self-interested behaviour are more likely to misreport than employees who work in climates based on ‘rules and code’.” Why do I think of banks and bonuses when I read this?

3. An aggressive audit team can’t counter the effects of project status misreporting and withholding of information by project staff.

“Once auditors are added to the mix… a dysfunctional cycle that results in even less openness regarding status information.”

“Diminished trust in the senior executive who controlled their project was associated with an increase in misreporting.”

More reporting can make things worse: if auditors arrive, people don't feel they are trusted, and guess what? They might not show the auditors the dirty washing and might tend to paint things as better than they are.

4. Putting a senior executive in charge of a project may increase misreporting.

“research suggests that the stronger the power of the sponsor or the project leader, the less inclined subordinates are to report accurately.”

I find this suggestion very worrying because some people in the Scrum community insist that the Product Owner must be an executive (“the real product owner”) who has real power to secure the resources and changes to make the project a success. This research suggests that having a strong, senior Product Owner could make things worse.

5. Executives often ignore bad news if they receive it.

I am reminded of one client where my own attempts to raise a red flag have gone nowhere. On my final visit to the client I spoke with the senior architect to again voice both my concerns over the work and the failure of anyone to listen. Unfortunately he had the same problem. He saw the same problems but couldn't find anyone willing to listen.

My initial thought on all of this: it is all the more reason to base reporting on deliverables, i.e. what has actually been delivered, working and in use. Rather than reporting on proxy "status" criteria, report actual progress, actual deliverables.



Ameba Teams and Amoeba Management

I’m probably going to upset some people with my spelling here but, as far as I can tell, there are two ways to spell the unicellular lifeforms most of us learn about in school: Amoeba and Ameba. I believe Amoeba is the more, shall we say, classical spelling but Ameba is also acceptable. I tend towards Ameba because I’m lazy. On this occasion it’s useful to have two ways to spell the same word.
What, you might be asking, has all this to do with software development?
Ameba Team
For a while now I’ve been talking about the concept of Ameba Teams. Specifically I talked about this in my Conway’s Law and Continuous Delivery presentation (SlideShare and an earlier version recorded on Vimeo). In my model the team is a self-contained entity:

  • It contains all the skills necessary for undertaking its task: developers, analysts, testers, whatever.
  • Ideally the team extends all the way into the business, there is no them-and-us divide.
  • The team is responsible for its output.
  • The team crosses the finishing line together – developers don’t win while testers lag behind, analysts don’t win while developers struggle to keep up, they win or lose together.
  • Team and management should aim to minimise – even eliminate – dependencies on other teams.
  • Work flows to the team.
  • Teams start small and grow as they are seen to work.
  • Initially teams may start understaffed, even with key skills missing, a Minimal Viable Team (MVT).
  • A team contains more than 1 person (name me a sport where team size is 1, please). That probably means a team must have at least 3 people, but applying MVT the minimum size for a team is 2. (With the anticipation that it will either grow or die.)
  • The team may work on more than one product or project at one time – actually the concept of project should be avoided if possible. See #NoProjects – why projects don’t make sense and #NoProjects becomes #BeyondProjects.
  • Everyone aims for team stability: yes people occasionally leave and join but we aim for a stable team.
  • The team can grow when there is more work and contract when there is less. Growing takes time if it is not to undercut the team's productive capacity, so it needs to be done steadily. Shedding staff can be done quickly, but it is difficult to recover capability after the event.
  • The team's past performance is used as a benchmark (velocity) to judge how much work the team can take on and even to forecast end dates.

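That last bullet – using past performance as a benchmark – amounts to a simple calculation. A minimal sketch of the idea (the function name, the numbers and the two-week iteration length are all illustrative assumptions, not something from this post):

```python
import math
from datetime import date, timedelta

def forecast_end(remaining_points, past_velocities,
                 iteration_days=14, start=date(2014, 6, 2)):
    """Forecast finish dates from a team's historic velocity.

    Uses the best, average and worst velocities the team has actually
    achieved to give an optimistic, expected and pessimistic date.
    """
    def finish(velocity):
        # How many whole iterations are needed at this velocity?
        iterations = math.ceil(remaining_points / velocity)
        return start + timedelta(days=iterations * iteration_days)

    average = sum(past_velocities) / len(past_velocities)
    return {
        "optimistic": finish(max(past_velocities)),
        "expected": finish(average),
        "pessimistic": finish(min(past_velocities)),
    }

# 100 points remaining, three past iterations of 20, 25 and 30 points
forecast = forecast_end(100, [20, 25, 30])
```

Giving a range rather than a single date is the point: the team's own history, not anyone's optimism, sets the bounds.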
We start small and grow to work with, not against, Conway’s Law. If you start with a larger team, with more skills and roles defined in anticipation of what the team must do, then the software design is set by the team's designers.
Although I don’t say “Ameba” explicitly this is the thinking behind teams in Xanpan – and I promise to say more about Ameba teams in “Xanpan Volume 2: Management Heuristics”, which I’ve now started work on.
Yes there is a tension, a friction, here: I say you want a fully staffed team but I then tell you to have an MVT. Life isn’t easy! You start with an MVT and if they can produce something useful you decide what other skills they need and add them when you find the need.
Ameba Team
I’m happy for teams to grow relatively large, say up to 15 people. Because:

  • Larger teams can handle greater variability in workload and tasks and still have useful data.
  • It is easier to justify full time specialists (e.g. UXD) on a larger team.
  • Larger teams aren’t so susceptible to unexpected events (e.g. people leaving).
  • Larger teams lose less of their capacity when they have a new person joining them.

Once a team reaches a critical mass, say 14 people, then the team – like the creature – should split in two. The two resulting teams need not be equal sizes but each should be standalone. Each should contain all the skills necessary to continue its work.
Ameba Team Split
My thinking on team splits comes from reports of Hewlett-Packard I read a long time ago (and I’ve lost the reference). When a successful HP unit hit a particular size the company would split it in two so that each unit was manageable.
The tricky bit is finding where to split the teams. We need to respect Conway’s Law. Ideally each resulting team would have its own area of responsibility. So if a large team looks after three similar products they might split into two teams, say 5 and 9 people each with the smaller team taking responsibility for one product while the larger team retains responsibility for the other two.
What you must absolutely avoid is splitting the team so that one team now depends on the other. For example: one team is the UI team and the other the business logic. Shipping anything requires both to complete work. That means co-ordination, it means complication, it makes work for managers to do, it means expense and it means moving at the pace of the slowest team.
The ideas behind Ameba Teams have formed over the years in my mind. In fact I started using the expression several years ago. So when, nearly two years ago, I came across someone else describing Amoeba Management I was a little crestfallen. Fortunately the ideas are compatible and I’ve recently gone back and re-read the article. It therefore seems the right time to bring it to the party. (I’m sure the article has influenced my thinking since I originally read it too.)
Amoeba Management: Lessons from Japan’s Kyocera appeared in the Sloan Management Review in late 2012 – you can buy the article from Amazon too. You can find more information about it both on Wikipedia and on Kyocera’s website. And there is a book I’ve yet to read: Amoeba Management: The Dynamic Management System for Rapid Market Response.
According to the Kyocera website the objectives of Amoeba Management are:

  1. Establish a Market-Oriented Divisional Accounting System: The fundamental principle for managing a company is to maximize revenues and minimize expenses. To implement this principle throughout a company, the organization is divided into many small accounting units that can promptly respond to market changes.
  2. Foster Personnel with a Sense of Management: Divide the organization into small units as necessary, and rebuild it as a unified body of discrete enterprises. Entrust the management of these units to amoeba leaders in order to foster personnel with a sense of management.
  3. Realize Management by All: Realize “management by all,” where all employees can combine their efforts to participate in management for the development of the company, as well as work with a sense of purpose and accomplishment.

They look very compatible with everything in Agile to me, they look compatible with Conway’s Law and Continuous Delivery too.
Some other highlights from the MIT Sloan Management piece

  • “Kyocera is structured as a collection of small, customer focused business units”
  • “Each unit, generally made up of between five and 50 employees, is expected to operate independently and to develop its own ways of working with other amoebas to achieve profitable growth.”
  • “Like other decentralized management systems, amoeba management is designed to spur market agility, customer service and employee empowerment.”

There are also some hints at Kyocera's accounting systems. They use a simplified system for transparency and leave details to teams. While this is a long way from Beyond Budgeting it does suggest, like Beyond Budgeting, that sophisticated long-range budgets get in the way of actual performance.
The article does point out that Amoeba Management, indeed any management approach, depends on the context. This is an important point that is often missed by Agile folk, although one may forgive them as long as they confine their belief to software teams, where there is a lot of shared context.
Rather than go on about Amoeba Management right now I'll let you read about it yourself – although that will cost you: the article and the book. I promise to return to this subject in the near future.
Right now let me conclude with two thoughts.
Ameba Teams (my idea) have a lot in common with Amoeba Management (Kyocera's). My thinking has been influenced by Amoeba Management but frankly I don't know enough about it (yet) to speak with authority, so I retain the right to define Ameba Teams in my own terms.
While Agile and Lean people have been watching Toyota (the company that gave us Karōshi, death by overwork) another Japanese company has some interesting ideas we should consider.


Loose ends: Continuous Delivery & Conway’s Law

A few weeks ago I did a presentation entitled “Conway’s Law and Continuous Delivery” – although it was also at some point entitled “Conway’s Law and Organisational Change” or possibly “Conway’s Law and Change for Continuous Delivery” – to the Pipeline Conference in London.

The presentation was very well received at the time and I spent most of the next hour talking with people about it. One tester came to me later and said “Thank you, I understand what I’m doing now, I’m testing the organisation.”

Since then the presentation has had a lot of interest on Twitter and on SlideShare – at one time it made the front page of SlideShare (my 15 minutes of fame I guess).

I should say I took a slight liberty with the “organisational change” bit. I didn’t so much describe how to bring about organisational change as describe how the organization might (even should) change to align itself with Conway’s Law. Sorry about that, I don’t have a potted “how to change your organisation” recipe (although if you want to call, I’m available for hire!)

Actually the Pipeline conference was the second time I’d made this presentation. The first outing was for the London Continuous Delivery group and hosted at the Financial Times. If you missed these two outings the good news is that the FT delivery was recorded and is available on Vimeo. This presentation is also on SlideShare but isn’t significantly different from the Pipeline version.

(Personally I feel the first was the better of the two presentations but others have told me the second was better; perhaps what the second lacked in raw freshness it more than made up for in being a little more polished and debugged.)

I’m really grateful to Matthew Skelton, Steve Smith and Chris O’Dell, who are the people behind both London Continuous Delivery and Pipeline. Although I’ve known about the idea of continuous delivery for a while, I now know much more and I’m really excited by it.

They have also furnished me an opportunity to revisit Conway’s Law. Nearly 10 years ago I ran a workshop exploring Conway’s Law at EuroPLoP 2005. Both my co-organizer and I learned a lot, in fact the fuss about our write up rambled on for a couple of years – I’m not going to reopen that here and now. (The full report “What do we think of Conway’s Law now?” is available for download from allankelly.net.)

Ever since then one of my mental models has been that the organisation structure is the software architecture. I may no longer change code but by changing organizations I am practicing software architecture.

Right now my head is buzzing with thoughts on Continuous Delivery, Conway’s Law, Team Structure, Scaling Agile and how this all fits with the Beyond Projects / No Projects agenda. I feel that if I can just make sense of all this I’m on the verge of some great breakthrough!


Software Developers: prototype of future knowledge workers?

‘‘Knowledge work is not easily defined in quantitative terms, . . . To make knowledge work productive will be the great management task of this century, just as to make manual work productive was the great management task of the last century.’’ Peter Drucker (1969)

In recent months I have repeatedly been reminded of an argument, or perhaps a hypothesis, I advanced in my 2008 book Changing Software Development. In the introduction chapter I suggested that software developers might be regarded as the prototype of future knowledge workers. I republished “Prototype of Future Knowledge Workers” as a blog post yesterday.

I’ve been reminded of this in a number of casual conversations recently, like the one I had with Laurie Young (@wildfalcon), again when rereading some Peter Drucker and because of media pieces like one in the Financial Times (“Do it like a software developer” by Lisa Pollack) suggesting that companies with successful software development teams might have something to teach other companies, and also by the topics submitted to the Agile on the Beach conference.

Let me explain, firstly let me explain why I make this argument, then let me explain the significance – and with reference to Agile – and finally, for completeness let me quote from the book…

Software Developers – programmers, testers, and everyone else – are what academics would call knowledge workers. We work with our brains rather than our brawn. We generate value because of what we know and what that knowledge allows us to do.

There are lots of other knowledge workers – teachers, lawyers, doctors – and during the twentieth century the number of knowledge work professions and the number of knowledge workers grew.

In many ways software developers are no different, but in one very important way they are very different. Software developers are able to create new tools for machines which allow them to change the way they do knowledge work.

Think about some of the tools your common knowledge worker uses during the typical day:

  • E-mail: invented by Ray Tomlinson, a programmer
  • Spreadsheets: invented by programmers Dan Bricklin and Bob Frankston
  • The Web: invented by programmer Tim Berners-Lee
  • Wiki sites: invented by programmer Ward Cunningham

Even if Twitter, FaceBook, video conferencing, Voice over IP and countless other knowledge sharing and collaboration technologies were not invented by programmers I will bet good money on one common factor: software developers, programmers, had early, if not first, access to these technologies. That is very, very significant.

Let's be clear:

  • Unlike other knowledge workers software developers have the ability to create new tools which can change their work.
  • Software developers have early, if not first, access to tools which benefit knowledge work.

One might also add that people working with technology welcome and embrace new tools and technologies (on the whole) so the barriers for technology change are lower.

This means those working in software technology – and particularly those developing software – have more experience and the longest history of using these tools, and consequently are likely to be the best adapted to working with them.

Thus if you want to see how new technologies, new tools, will change knowledge work look at those who create the tools and those closest to them. This shouldn’t be a surprise, after all knowledge work is about managing and using information, and these technologists create the tools we use to store, retrieve, manage, manipulate, combine, and share information.

(And there is a difference between knowledge and information, but I'm not going into that here; it's in the book.)

By itself this argument is interesting and perhaps good for developer self-esteem. We could leave it there but… it also raises a question:

If software developers are the prototype of future knowledge workers what does it suggest will happen next? Specifically, how can this insight help answer the challenge posed by Peter Drucker above?

As a step towards answering those two questions let us rephrase the question:

  • What are software developers doing today which might benefit other workers?
  • Or to put it in Drucker’s language: what can managers of non-software staff learn from software teams which will make their teams more productive?

One way would be to look at the tools software developers are using today which might benefit other groups.

Version control springs to mind but version control has existed for at least 30 years and hasn’t caught on with the masses. We could continue to examine the toolset, one of my favourites is Markdown. Or we could look at the way software developers are managed. This was the thrust of the FT piece which looked at personnel management (I refuse to say HR) practices.

But there is something else… Agile itself.

One of the reasons I believe Agile came about was that the tools we use changed. Cobol gave way to Java, which gave way to Ruby. OS/360 gave way to VMS, which gave way to Windows (or was it Unix?), which gave way to Unix derivatives. One might say paper gave way to the web and wikis, telephones gave way to VOIP, telegraphs gave way to Twitter.

Our tools changed, which allowed – even forced? – our processes to change. As knowledge workers outside of IT embrace these tools it's possible that their working practices will change in a similar way. My favourite example is a presentation given by Kate Sullivan at Agile on the Beach two years ago in which she described how the legal department at Lonely Planet adopted Agile.

There are several Agile practices which are easier to imagine outside of software than others. For example:

  • Visual boards: lots of places!
  • Standup meetings: examples abound and usually predate Agile, e.g. bar staff, Japanese office workers, NATO commanders (in Bosnia and probably elsewhere)
  • Test driven working: consider many of the business examples from Lean Start-Up
  • Pair programming: airline pilots and surgeons have been doing this for a long time but Kate added Pair Lawyering
  • Iterations: although I’ve yet to get a concrete example most news organizations clearly operate in daily iterations

During the second half of last year I was involved with a client whose project was to write documentation. Specifications, to be precise, but not specifications for software – there was an element of that but it was not the aim. We used iterations, visual boards and other techniques. We even used a walking skeleton. It worked. We delivered to schedule.

We in software have developed something good. Something that is spreading beyond software. Now I wonder, what's next?


The Prototype of Future Knowledge Workers

The following is an excerpt from my 2008 book “Changing Software Development: Learning to be Agile”. I’ve been thinking about this suggestion a lot recently and have a blog post in the works, hence I thought now would be a good time to share this. It also means I can reference this post in the one that comes next… and who knows, I might even rustle up a few sales for the book!

The Prototype of Future Knowledge Workers

Highlighting IT workers as knowledge workers allows us to learn from the existing body of knowledge on the subject. IT workers are not alone; they are knowledge workers and there’s much to learn from other knowledge workers, and from research and literature about knowledge work in general. There’s no need for IT managers (and writers) to re-invent the wheel.

Yet, in another way, the existing literature, research and experience can’t help IT workers and their managers. This is because IT workers, and software developers in particular, are at the cutting edge of knowledge work. In many ways, they’re the prototype of the future knowledge worker; they’re pushing the boundaries of twenty-first century knowledge work.

This occurs because, to paraphrase Karl Marx, software developers control the means of production. Modern knowledge work is enabled by and dependent on information technology: e-mail for communication, web sites for distribution, databases for storage, word processors for writing reports, spreadsheets for analysis – the list is endless! These technologies are created by software developers and used by legions of knowledge workers worldwide. The key difference between software knowledge workers and the others is that other knowledge workers can only use the tools that exist. If a tool doesn’t exist, they can’t use it. Conversely, software developers have the means to create any tool they can imagine.

Consequently, it was a programmer, Ward Cunningham, who invented the Wiki. Programmers Dan Bricklin and Bob Frankston invented the electronic spreadsheet. Even earlier, it was another programmer, Ray Tomlinson, who invented inter-machine e-mail. This doesn’t mean that non-programmers can’t invent electronic tools. Others can invent tools, but for programmers the barriers between imagining a tool and creating the tool are far lower.

Lower barriers mean that programmers create many more tools than other types of worker. Some tools fail, while others are specific to a particular problem, organization or task in hand, but when tools do work it is programmers who get to use them first. In addition, because IT people have had Internet access for far longer than any other group, the propensity to use it to find tools and share new tools is far greater. So tools such as Cunningham’s Wiki were in common use by software developers years before they were used by other knowledge workers.

Early Internet access has had other effects too: IT workers were early adopters of remote working, either as individual home workers or as members of remote development teams; IT people are far more likely to turn to the Web for assistance with problems and more likely to find it, because IT information has been stored on the Web since the very beginning.

The net effect of these factors and others means that software developers are often the first to adopt new tools and techniques in their knowledge work. They’re also the first to find problems with such tools and techniques. Consequently, these workers are at the cutting edge of twenty-first century knowledge work; they are the prototype for other knowledge workers. Other knowledge workers, and their managers, can learn from the way in which IT people work today, provided that we recognize these workers as knowledge workers.

From “Changing Software Development: Learning to be Agile” by Allan Kelly, 2008.
