Scrum is the new XP

Another insight I had last week: Scrum is the new Extreme Programming. Or rather Scrum is the new Agile Software Development.

Before there was Agile Software Development there was Extreme Programming – or just XP. XP got all the attention and coverage. Then the term Agile was adopted by Kent Beck, Jim Highsmith, Martin Fowler and others as an umbrella term for XP, Scrum, FDD, Crystal, etc. etc.

But for a long time many people – outside the inner sanctum of the Agile community – equated Agile with XP; the two terms were interchangeable.

That period came to an end about two years ago: XP stopped getting all the fuss and Agile became something in its own right. (Still, the terms XP, Agile, Scrum, Lean, etc. continue to confuse the uninitiated.)

But now Scrum is supreme. Scrum is everywhere. Scrum certification seems essential for one’s career. Now it seems the terms Scrum and Agile are interchangeable. Scrum is Agile and Agile is Scrum.

Either this means Scrum has won – the Scrum Alliance has done an excellent job of marketing Scrum and it now has the respectability that “extreme” anything could never have.

Or it is the start of the end of Scrum.

I expect the Scrum is Agile and Agile is Scrum trend to grow for the next year or two. It will be at least that long before Agile is again a free term. (All the more reason then to adopt Agile Engineering as the new banner.)


Agile Engineering

I had the pleasure to meet Ryan Shriver last week at Tom Gilb’s seminar. Ryan had some very interesting ideas, experience and suggestions about Agile Software Development and Tom’s EVO process.

In fact, Ryan had independently arrived at many of the same conclusions I had about Agile, management and the future of development. He also gave me a new term, a new piece of the jigsaw which I think helps make sense of the world: Agile Engineering.

Ryan doesn’t claim to have invented the term but he did define it using seven points:

1. Deliver Value
2. Quantify Qualities
3. Reduce Feedback Cycles
4. Explore Alternatives
5. Last Responsible Moment
6. Front-to-Back
7. Test-Driven Development

This list will look familiar to anyone who knows about Agile Software Development but it is subtly different and allows for greater scope in the engineering, management and development of software systems.

Like me Ryan has seen the limitations of Agile Software Development and, like me, sees the need to go further. As it originated – in XP, in Scrum, etc. – Agile Software Development doesn’t go far enough to address requirements, design, large systems, and a whole bunch of other things you have to do to manage software development.

This is where Agile Engineering comes in: it includes Agile Software Development but is about more than just Agile Software Development.

Ryan Shriver has some presentations online which should expand on this. (He had a busy week last week, as well as presenting at Tom Gilb’s London seminar he also presented at Agile ITX in Reston, Virginia.)

So, Agile Software Development is Dead, long live Agile Engineering!


Off balance sheet risk

I spent most of last week at Tom Gilb’s seminar on Risk and Uncertainty. Tom expects everyone at this seminar to contribute, so, having heard several very competent speakers present some very thought-provoking ideas, I was, for the first time in a long time, a little bit nervous about giving my own presentation.

As it was, my presentation – entitled Risk, Uncertainty and Money – was well received. Funny thing was, I had a much better understanding of what I was trying to communicate after I finished presenting! I expect a revised version of this session to resurface at some other conference in the near future.

A few days before the seminar I heard about a new risk management technique being employed at one London bank. The bank is asking its suppliers to take on the risk of system development. Nothing new there, you might say; the ability to take on such risk is exactly why Logica, EDS, Accenture and co. exist and keep merging – you need a big company, with a big balance sheet, to take on these risks.

But this bank had, as I understand it, three novel twists:

• The bank wanted the suppliers to take on the risk of errors made by the system after deployment. Not just the risk of development but the risk of using the system.

• The bank wanted the suppliers to accept unlimited liability – so an insurance policy wasn’t going to be enough.

• The bank was managing this project themselves and the suppliers were not Accenture or Logica or similar but IT contractors. Such contractors are individuals, trading as self-employed professionals or, more usually, limited companies with one employee.

Taken together, what this means is: part of the operational risk of building and using the system is now carried not by the bank but by the IT contractors. So the bank doesn’t need to carry this on the “risk balance sheet”. If something goes wrong the contractors will pay up.

Except, the contractors are individuals. If the bank comes looking for £10m they aren’t going to be able to pay. The bank could get them declared bankrupt but it isn’t going to get its £10m.

So the bank isn’t offloading the risk; it is still taking the full risk, but by a sleight of hand it can claim it isn’t. This seems like a dangerous development: the bank is not only deluding itself but potentially deluding its investors and regulators about the risks it is taking on.


Agile failure modes: Potemkin Agile

One of the recurring themes in this blog is my attempt to capture “Agile failure modes”. For example:

• Failures in Agile processes?
• More Agile failure modes
• Agile failure modes – again
• And some in Reflections on XP Day

One I should have added a few months ago comes courtesy of Steve Freeman: Potemkin Agile.

Characteristics of Potemkin Agile include:
• “Show” teams in the organisation follow Agile but they are isolated examples and do not represent the organisation as a whole (other teams are as dysfunctional as ever)
• Teams who go through the motions of Agile without any real understanding
• Teams who do Agile, deliver software, get a big tick but the end product doesn’t deliver value

(I know Steve reads this blog so if I’ve missed anything, or misrepresented it I’m sure he’ll respond.)

The last one on this list is classic goal-displacement. People come to see “following the process” as the objective, not adding value. It’s been documented as happening in traditional development methodologies (specifically SSADM), I’m sure I’ve seen one project manager mistake following PRINCE 2 for success, and I’m sure it happens with Agile methods like XP and Scrum too.

The more prescriptive the approach, the methodology, the greater the chance of this happening. My solution: don’t have a methodology, or at least, roll your own. If you create your own you’re free to keep improving it.

That may sound radical and scary to some but it means you can focus on what you are really trying to achieve and then work out how to do it.


Q&A: Where do testers fit in?

I got the following question in my mail box a few days ago:

“I was reading the Powerpoint deck you prepared for the East Anglia Branch of the BCS and wondered what your thoughts were around Testing.

How should testers tackle verbal requirements – as you say Requirements shouldn’t be a document but Testers rely on such documentation as check lists for their testing?

I would be interested as I find myself in an ‘Agile’ environment and this issue takes up a lot of our time. Any advice would be great.”

I think this is an interesting question, and I think we often ignore the role of Testers in Agile software development. This is for two reasons: first, Agile came from developers, hence it ignores Testers just as it ignores Product Management. Second, Developers don’t really want to admit the need for Testers. Developers always think they can do it right and that nobody needs to check on them. Call it arrogance if you like.

Consequently testers have problems like this. So I’m happy to try and suggest some answers.

(By the way, if anyone else out there reading this blog has a question please e-mail me the question, I’m more than happy to answer, especially if it gives me bloggable material.)

The short answer to this question is that Testers need to be involved in the same dialogue as the Developers and Product Managers. It is not so much the lack of a document that is the problem as unclear requirements, requirements that are too specific, or Testers being cut out of the loop altogether.

People focus on the document because, despite its imperfections, it is the only thing that is available. The deeper problem is often that nobody pro-actively owns the business need. We somehow assume developers know best.

(I’m using the term Product Manager here. In most organizations this person goes by the name of Business Analyst, Product Manager, Product Owner (in Scrum) or even Project Manager. Whatever they are called, it is the person who decides what should be in, and how it should be.)

If we break the problem down it comes in three parts:
1. What needs testing?
2. How should it perform?
3. How should it be tested?

The planning/scheduling system should supply the “what to test” part. If you are using a visual tracking system what is being worked on should be clear, and it should be clear when it is ready for test. (I’ve described this in my Blue-White-Red process.)

If Software Testers are having problems knowing what to test it is usually a problem with the project tracking. In Blue-White-Red each piece of work is represented as a card; the card journeys across the board and can’t finish its journey until a Tester approves. Testers might demand to see unit tests, see evidence of a code review or pair programming, and may try the work themselves.

I think my correspondent is having a problem with the second question. After all, if you don’t have a written document how do you know how the software should perform? As I’ve said before, written documents are flawed – see Requirements are a Dialogue not a Document from last year.

Testers need to be part of this dialogue, they need to be sitting there when the Product Manager and Developer discuss the work so they can understand what is required at the end. If the work is contentious, or poorly understood, they may take notes and confirm these with the Product Manager later. They may also devise tests to tell when these requirements are met.

The Tester’s role is in part to close the loop; it follows from the Product Manager’s role, not the Developer’s. They need to understand the problem the Product Manager has, and which the Developer is solving, so they can check the problem is solved.

If Testers are having a problem knowing how something should be then it is probably a question of Product Ownership and Management. When this role is understaffed or done badly people take short-cuts and it becomes difficult to know how things should be. One of the reasons many people like a signed-off document is that it ensures this process is done – or at least, it ensures something is done. (I’ll give an example later.) But freezing things for too long reduces flexibility and inhibits change.

Finally, the “how should it be tested” question: well, that is the Testers’ speciality. That is their domain, so they get to decide that.

That’s the basic answer but there is more that needs to be said – sorry for the long answer.

In my experience there are four levels of testing. Most organizations have more than one, two is the norm, and having all four is probably overkill.

User Acceptance Testing (UAT)
Performed by Software Testers and Business Analysts, and/or Customer representatives. Formal test cycle using release candidates. Software is released with ‘release notes’ listing changes, additions, fixes, etc. Seldom found in ISVs (Independent Software Vendors) which produce shrink wrap software, conversely may be a regulatory requirement in environments like banking. Always conducted in dedicated UAT environment.

System / Integration Testing
Performed by Software Testers to ensure software works as expected when working with existing software and other systems. Usually, but not always, working with release candidates although these may not be as formal as in UAT. Usually conducted in dedicated system test environment.

Functional / Embedded / Close support Testing
Performed by Software Testers working closely with development teams. Tests any sort of build, usually from the (overnight) build environment, or potential release candidates, or even from developers machines. Helps to keep developers honest and bring testing skills and considerations right into the development team.

Developer / Unit Testing
Performed by developers on their own work before turning it over to anyone. Traditionally manual – perhaps using formal documents and “condition response” charts. Increasingly automated with tools like JUnit and Aeryn.

Notes:
1. I’ve deliberately excluded regression testing here; although it is important it is not a level in its own right. It is an activity that happens within one or more of these levels.
2. Different organizations use different names and different descriptions for these levels. I’m not concerned what they are called, these are the levels I see.
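Developer-level testing of the sort described above is easy to sketch. Here is a minimal example in Python’s unittest style rather than JUnit or Aeryn; the `discount` function is a hypothetical stand-in for real production code:

```python
import unittest

def discount(price, rate):
    """Apply a fractional discount, rejecting nonsense rates."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class DiscountTests(unittest.TestCase):
    def test_applies_discount(self):
        self.assertEqual(discount(100.0, 0.25), 75.0)

    def test_rejects_bad_rate(self):
        with self.assertRaises(ValueError):
            discount(100.0, 1.5)
```

Run with `python -m unittest`. The point is that the checks live next to the code and can be re-run on every build, which is what keeps developers honest.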

My correspondent didn’t mention what sort of testing they were doing. If they are working at the UAT or System Test level they should be getting Release Notes from the development team to tell them what has been changed – not how it should work but what to look for.

One of the tenets of Agile software development is higher quality. If the code the developers produce is of higher quality then there is less rework to be done, so there is less disruption and overall productivity rises. This is the old Philip Crosby Quality is Free argument. Thus the overall approach is to inject more quality at the lower levels, i.e. Developer and Embedded Testing rather than UAT and System Test.

Good developers have always tested their own work. This has taken different forms over the years but is usually called Unit Testing. Nobody has ever recommended that developers slap some code down and pass it to testing. This might be what has happened sometimes but nobody has ever recommended it.

In compiled languages no developer would ever try to claim his work was finished until it at least compiled. And neither should they try to release it unless they have made some effort to ensure it does what it was supposed to.

What Agile software development does is suggest a set of practices – like TDD, pair-programming, code reviewing, etc. – which improve the developer-testing side of the work in order to reduce the amount of work required later.

When this works code quality improves, Software Testers get to pass more work and raise fewer issues, and the whole process works more smoothly. Testers can play a role here by keeping the developers honest; they should be asking: was the code reviewed? Have unit tests been written? Have the tests been added to the build scripts?

The pure-testing part of a Software Tester’s work will reduce over time, allowing them to focus on the real Quality Assurance aspect of their work. Unfortunately, in the software world the terms “Testing” and “Quality Assurance” are too often confused and exchanged.

So, as quality increases the amount of testing should reduce, and the amount of Quality Assurance increases – which again highlights the link between the Product Manager and the Tester.

One way in which Quality Assurance should work, and where Testers can help, is by defining, even quantifying, the acceptance criteria for a piece of work. Again this picks up from Product Management.

For example:
In many “traditional” environments the Product Manager comes to the developers and says: “Make the system easier to use.” The Developers respond by putting in more menu items and other controls and the Tester has to test that the items exist and work.

What should happen is: the Product Manager comes to the Developers and Testers and says: “Users report the system is difficult and we would sell more if it was easier to use.” All three groups have a discussion about why the system is difficult to use, perhaps there is some research and user observation. Product Managers assess how many more sales there are to be had, Testers determine how they would measure “ease of use” – how they would know the system was easier to use – and the Developers come up with some options for making it easier.

Knowing how many more sales there are to be had determines how much effort is justified, and the test criteria help inform the selection of options.
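The Testers’ measure of “easier to use” can be made concrete. Here is a sketch, with a hypothetical criterion and made-up session times – the point is that the pass/fail rule is agreed and quantified before development starts:

```python
def meets_usability_target(session_times, limit_seconds=120, required=8):
    """True if enough observed sessions finish the task within the limit.

    Hypothetical criterion: at least 8 of 10 new users complete
    checkout in under 120 seconds.
    """
    passed = sum(1 for t in session_times if t < limit_seconds)
    return passed >= required

# Made-up timings from ten observed user sessions (seconds)
observed = [95, 110, 130, 88, 102, 115, 99, 140, 105, 97]
print(meets_usability_target(observed))  # 8 of the 10 sessions are under 120s
```

With a criterion like this, “easier to use” stops being a matter of opinion and becomes something the Testers can measure before and after the change.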

As Agile development is introduced into an environment with legacy code it takes more time for this effect to come through. Even if Developers start producing higher quality code there are still a lot of legacy checks that need to be performed manually. Plus, as Developers’ productivity increases there is more to check.

The answer is that Testers need to bring in their own mechanisms for improving consistency and quality. This often takes the form of automated acceptance tests – perhaps using FIT or Selenium.
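Tools like FIT work from tables of example inputs and business-agreed outputs. The same idea can be sketched in plain Python; `calculate_fee` and the figures are hypothetical stand-ins:

```python
# A FIT-style, table-driven acceptance check: each row pairs an input
# with the business-agreed expected result.
def calculate_fee(order_value):
    """Hypothetical rule: free delivery on orders of 50 or more."""
    return 0.0 if order_value >= 50 else 4.99

acceptance_table = [
    # order value, expected delivery fee
    (49.99, 4.99),
    (50.00, 0.0),
    (120.0, 0.0),
]

for order_value, expected in acceptance_table:
    actual = calculate_fee(order_value)
    assert actual == expected, f"fee for {order_value}: got {actual}"
print("all acceptance rows passed")
```

Because the table is data, the Product Manager or Tester can add rows without touching the code, which is exactly the appeal of FIT-style tools.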

In the longer term quality all round should increase and – this is the scary bit for software testers – the need for testers should reduce.

If all software was perfect the need for Software Testers would go away. The presence of Software Testers shows that something is not right. The same could be said about many other industries which employ testers; still, the reality is that testing is a multi-billion pound industry.

So, should Software Testers be worried about their jobs when Agile software development comes along? Some are. I remember one Test Manager who liked the ideas behind Agile but couldn’t bring himself to embrace it. He saw it as a threat to his job and to his people.

But no, I don’t think Software Testers should be worried about their jobs.

Even as quality increases there will be a need for some qualification that things are good. There are some things that still require human validation – perhaps a GUI design, or a page layout, or printer alignment.

In legacy systems it will be a long time before all code is unit tested; in fact I think it is unlikely ever to happen, simply because it doesn’t need to. The power law tells us that only part of the system needs to be covered by automated tests. Still, the rest of the system needs checking, so there is work for Testers.

As Testers work more closely with Business Analysts and Product Managers their knowledge of and insight into the business will grow, allowing them to add more value – or even to move into Business Analysis or Product Management themselves if they wish.

And if this weren’t enough the success of Agile should make more work for everyone. Agile teams are more productive and add more business value. Therefore the organization will succeed, therefore there will be more business, therefore there will be more work to do.

The summary of the long answer is: Agile does mean change for Software Testers. It means putting more emphasis on quality and less on testing. I think it was Shigeo Shingo of Toyota who said there were two types of inspection [testing]:
• Inspection to find defects is waste [so should be eliminated]
• Inspection to prevent defects is essential.

Software Testers need to reconsider their work as the prevention of defects, not the detection of defects.

I hope that answers the question. Without knowing more about the organizational set-up it’s hard to be more specific. And I apologise for the length; in Blaise Pascal fashion, I was working fast.

If anyone else has a question just ask!


Kanban: efficient or predictable, you decide

I have written a little about the Kanban method of software development in the past. And thanks to the comments from Karl and Wayne, plus a few side conversations with other people, I have a better understanding of what is going on. (Although I’ve still to see it in action.)

Well, after much thinking I’ve come to a conclusion; I’ve worked out why it is different from other Agile methods like XP, Scrum and my own Blue-White-Red. And this is it – drum roll…

• The focus of most Agile methods is to bring order to the development process, one way they do this is by trading efficiency for predictability.
• Kanban – it seems to me – makes the trade in the opposite direction: predictability is reduced to increase efficiency.

I’m not saying Agile methods don’t care about efficiency, they do. And I’m not saying Kanban ignores predictability; it has a different method of delivering it. And I’m not saying Kanban isn’t part of the Agile/Lean family, it is.

But I am saying there is a trade-off. And there has to be a trade-off. It’s not very nice to say but it is true.

It is true and can be proved mathematically through queuing theory. I can bore you silly with queuing theory but I’ll leave that to one side for now – if you want to know get in touch with me or read another site.

In summary, queuing theory explains why in any production system you can have predictable deliveries up to about 76% utilisation. Once your system utilisation goes beyond that, predictability is lost.

Just to be clear: you can have predictability or efficiency but not both.

The way most Agile processes work, the highest priority items are more-or-less guaranteed to be done in the next time-boxed iteration, whether that be a week, two weeks, a month or whatever. Lower priority items might, or might not, get done. But you can’t squeeze a quart into a pint pot; if you do, bad things happen. People don’t sit around for 24% of the time but there is a buffer.

These processes reduce maximum theoretical throughput in order to increase predictability. There is nothing wrong with that, remember our 76% rule. Management might not like it but they are up against mathematics.
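The shape of this trade-off falls out of the simplest queuing model. In an M/M/1 queue the mean time a job spends in the system is the service time divided by (1 − utilisation), so waits grow explosively as utilisation climbs – the effect behind the figure quoted above. A small sketch:

```python
def mm1_time_in_system(utilisation, service_time=1.0):
    """Mean time in an M/M/1 system: W = service_time / (1 - utilisation)."""
    if not 0 <= utilisation < 1:
        raise ValueError("utilisation must be in [0, 1)")
    return service_time / (1.0 - utilisation)

for rho in (0.50, 0.76, 0.90, 0.95, 0.99):
    w = mm1_time_in_system(rho)
    print(f"{rho:.0%} utilised -> work takes {w:5.1f}x its service time")
```

At 50% utilisation work takes twice its service time; at 99% it takes a hundred times. Predictable delivery dates and a fully loaded team really are mathematically incompatible.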

In Kanban the team remove (some of) the time-box constraints, planning processes and such that create predictability and go all out for efficiency. If you like, they take a gamble: they aim to complete the work before there is need for predictability.

That, in a nutshell, is it. However Kanban does have one twist in the tail. As I understand it, teams then apply statistical methods and observation to calculate the flow through the system. This can return some degree of predictability.
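A sketch of how that statistical predictability can work: collect cycle times for finished cards and quote a percentile as the forecast. The data here is made up:

```python
import math

def nearest_rank_percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample such that pct%
    of all samples are at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

cycle_times = [2, 3, 3, 4, 4, 5, 5, 6, 8, 13]  # days per card, hypothetical
print("85% of cards finished within",
      nearest_rank_percentile(cycle_times, 85), "days")
```

Instead of promising “this item will be in the next iteration”, the team can say “85% of items like this finish within N days” – a probabilistic forecast derived from observed flow rather than from planning.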

One more thing to note. All Agile methods, including Kanban and Lean are actually more efficient than traditional methods for another reason: quality improvement leads to less (re)work.

All Agile-family methods place a high emphasis on avoiding rework; they are prepared to invest more in up-front quality in order to reduce tail-end (bug) fixing. The net effect is to reduce the total amount of work. So, while Scrum and Blue-White-Red may trade some efficiency for predictability, they do so and are still more efficient than traditional approaches.

So it all comes down to choosing what is more important: outright efficiency or predictability. Then set up your process appropriately.

Having written these words I can already hear senior managers responding: but we want both efficiency and predictability, why should we settle for one or the other? We need both, we are different.

Managers are brought up on these kinds of contradictions: they are taught “the magic of and”, to implement “simultaneously tight and loose controls” and so on. In many cases these are good ideas and putting the contradictions together leads to creativity, but in this case it won’t work.

Still, we all live in the real world, so if you really need efficiency and predictability you can have it. The magic ingredient is the illusion of control. Just do this and all things are possible – especially those “when will it be ready?” and “why can’t IT deliver?” conversations.


Could the Credit Crunch be a software fault?

The Financial Times is reporting that the credit rating agency Moody’s has uncovered a bug in the software it uses to value credit derivatives. As a result some instruments were rated four notches higher than they should have been. And Moody’s shares are down 15% as a result.

Think about this. If these instruments had the correct rating people would have put a lower value on them to reflect the risk. And banks would have had a different value at risk so maybe they would have seen problems coming sooner.

How come Moody’s didn’t notice this sooner? Well, it seems they did but they didn’t want to face up to it – management not wanting to hear what engineers say? Sounds like the Challenger disaster again.

What happened to code reviews? What about Unit testing?

Let me guess: ‘this is the real world, we don’t have time for that’. The financial industry is built on software but banks haven’t really internalised that yet. The Moody’s bug is not an isolated instance.

Story 1: I was working at an investment bank in the late 1990s, on a risk system with the quants. They had branched the code base from the main developers months before – and when I say branched, I mean it was put on another server so there was no way back.

The bank was silly enough to lose its development team – five of them left within a few weeks of one another. I was asked to take over the main code base and support the main overnight risk assessment system. There was a list of bugs they wanted fixing, most of which I’d fixed in the other code base, but before I could do so I needed to build the live system.

The two code bases on two different servers were different – not just diverged but contradictory. The code left on the developers’ machines was different again. And being C++ there were a lot of #ifdefs, and I couldn’t tell which ones were on when the live system was built.

I had no way of building the system to produce the same numbers as the live one.

I asked for help but nobody really wanted this news. I could build a system, several versions of it in fact, but none of them produced the same figures as the live system. And nobody had any time to help me work out what the figures should be, or even tell me how I could work them out.

The bank’s risk analysis software was effectively a random number generator.

Story 2: I heard this at SPA this year from “Bill” – a pseudonym. I think it was a derivatives or risk system but I’m not sure. Bill had been examining some figures one evening and they didn’t look right. The more he looked the more wrong they looked.

Bill found a developer and together they looked at the code. And in the code there was a sign error. Numbers which should be positive were negative and vice versa. They made a fix, and reported up. It went all the way up the chain and fast. The fix was signed off and in production very quickly.

I didn’t find out if the bank ever reviewed the numbers, or how long the error was live but, it was quite possible the bank was insolvent on several occasions without knowing it.

Moral? Increasingly our financial markets are built on systems which are not only difficult to understand but which we know contain errors. So what happens now?

Most likely Moody’s will survive – even if their shares fall. The moment the word “code” is used management will tune out; nobody wants this news, so let’s move on.

But it is just possible the SEC, FSA and other regulators will get involved. This could be serious and they might demand something is done about it. Not just at Moody’s but elsewhere – clean up financial systems!

Now this is a more interesting scenario because two things might happen.

Firstly, this could make Sarbanes-Oxley regulation look like a walk in the park. What if the SEC introduced its own coding standards? What if all software needed to be formally approved?

This route would be a field day for the CMMI and ISO guys, the high priests of formal methodology, and for document management systems. They would have so much work… and banks’ software would grind to a halt.

It would also be good news for vendors of shrink-wrapped software. If developing your own costs so much more, and gets you tied up in regulation, then it is no longer cost effective – so buy, don’t build.

There is another option, far less likely I’ll admit but…

What if you had an audit trail? You could build the system for any date at the push of a button.

What if code reviews did happen? Or what if code was pair programmed?

What if all source code was unit tested? And the tests kept? And the tests re-run regularly?

What if writing code became so difficult, expensive and risky we had to reduce our use of code? Well then we’d have to really code the right thing – improve our requirements.

Do you see where I’m going? Improving our code base, even regulating it, doesn’t have to be SarBox all over again. Many of the Agile methods are surprisingly well suited to this.

We will see. In the meantime, next time someone tells you you are wasting time writing a unit test remind them that Moody’s lost 15% of its value because of a bug and may have endangered a fair few banks.


Do we really learn from failure?

It’s an often-heard expression: “We learn from our failures”. Particularly when you’ve just failed at something: “Well, put it down to experience.”

I’ve always had my doubts: if we can learn from failure can’t we also learn from success? But how often do you hear people say: “Great success! Now what did you learn?”

All too often we don’t stop, examine our failures, take time to reflect and actually do the learning from them. We’ve failed, failure is painful, we want to put it behind us, forget about it. Maybe we could make time but do we really want to? Who wants to dwell on what has gone wrong? Naturally, after a failure we are defensive, and when we are defensive our learning processes aren’t at their most effective.

In the IT world failure isn’t an absolute. It’s not like a football match, where one side goes home knowing they scored more goals than the other and therefore “won.”

If an IT project delivers late is it automatically a failure? What if it succeeds in the other goals?
What if it is late, and costs more, but delivers more value to the business?

I often tell a story I found when I was writing my MBA dissertation. For this project I interviewed a variety of software developers about past projects. One guy, let’s call him Paul (because that’s his name), said: ‘The project was a great success’.

He went on to describe how the technology was good, the product did what the customer wanted and everyone was pleased with the outcome. Then I said: ‘You say the project was a success, but before that you said it took 12 months to complete, and how it was originally estimated to take 3 months. Some people would say a project that was nine months late on a three month schedule was a failure’

To which he replied: ‘I never thought of it like that’

Failure is ambiguous. So how do we know when and what to learn?

In Wednesday’s FT there is a piece by David Story in which he looks at some statistics. He discusses the failure rate of entrepreneurs. If we learn from our failures we should expect entrepreneurs who have failed in business to be more successful if they launch another business. But it turns out the chances of a business succeeding are the same whether the founder is a first-time entrepreneur or an entrepreneur with a failure behind them.

You are as likely to succeed with your first business as you are with the second or third. The odds don’t improve.

The only thing that does change is that you try again. If you never launch a business venture you can’t have a successful one. If you fail once you can’t have a success unless you try again. He likens it to buying tickets in the lottery.

If we don’t learn from our failures, and we don’t learn from our successes, and often we don’t even know what was a success and what was a failure, then how are we to learn?

We have to buy more lottery tickets. We have to institutionalise our learning, we have to create routines to learn and set our learning triggers.

Do we really learn from failure? Read More »

Product Management conference

On Tuesday I attended a mini-conference about Product Management. In fact, this conference billed itself as “The UK’s first conference on Product Management” which might actually be true.

I’ve written before that I believe Product Management is a much misunderstood role in the UK, and that I think it is an appreciation of this role that makes Silicon Valley companies so much more successful than their UK peers. Well, this conference might be a sign that things are changing. The fact that it existed at all, that it had sponsors, and that it was full is a good sign. However, I still came away with a feeling that only a few companies “get it.”

Practically the conference was well organised and free. It was sponsored by Telelogic (part of IBM) and Tarigo – a company I’ve not heard of before but I’m glad to say offer Product Management training in the UK. As with any vendor sponsored conference it was in some ways a marketing exercise by these two companies, but that said, the marketing wasn’t too heavy and they didn’t ram their products down your throat.

However, the conference was pretty tightly controlled: no questions until the end and no involvement of the attendees. Quite different to my usual type of conference, but then ACCU, SPA and PLoP just aren’t ordinary conferences – I’m spoilt.

So what did I learn?

Well I had my enthusiasm for Product Management warmed up again – it would be good to work as a Product Manager again!

Tarigo presented some estimates they had done of the number of Product Managers in the UK. Much to my surprise they believe there are about 90,000 to 110,000 Product Managers in the UK, which equates to one Product Manager for every nine developers.

On the face of it this is positive. I had no idea there were so many! And the ratio of 1:9 isn’t too far adrift of my own recommendations on what you need (1:3 for innovative, rapidly changing products, 1:7 for less innovative, more stable products). But Tarigo has included Business Analysts in their figures.

I need to write a full entry on Business Analysts and Product Managers. In some ways the role is similar, in others it is different. I think most BAs could benefit from learning about Product Management and I think that is why Tarigo included them. But, on the other hand the roles are different. In the Q&A session I asked the panel about this and they agreed the roles are different.

So I think Tarigo’s 1:9 figure is optimistic.
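As a back-of-the-envelope check (my arithmetic, not Tarigo’s), the 90,000–110,000 estimate combined with a 1:9 ratio implies a UK developer population in the region of 810,000–990,000 – which gives a sense of how much weight the figure is carrying:

```python
# Back-of-the-envelope arithmetic (mine, not Tarigo's): what developer
# population does the 1:9 Product Manager-to-developer ratio imply?
pm_low, pm_high = 90_000, 110_000  # Tarigo's estimate of UK Product Managers
devs_per_pm = 9

implied_devs = (pm_low * devs_per_pm, pm_high * devs_per_pm)
print(implied_devs)  # (810000, 990000)
```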

And to prove the role is misunderstood I spoke to people at the conference who had nothing to do with technology. They are a different type of Product Manager again. Looking at the attendee list and their companies I think more fell into this category than I realised.

I also got some ideas for my growing business pattern language and confirmation of some of the things I’ve been writing in my latest set of patterns.

I learned a little about Telelogic’s product – I didn’t try too hard – and it confirmed another pattern I’m seeing: specialist database products. I first noticed this when I came across ChimSoft – chimney sweep software – honest, as far as I can tell they are for real.

What I see is this: databases are wonderful things, they can do anything! But to use them effectively you need to program them. In any sector – chimney sweeping, product management – there are people who need a database but don’t have the skills to programme FoxPro or Excel, let alone Oracle or DB2. Therefore, there is a market in specialised databases for specific sectors. And exploiting it is nine-tenths good product management, because there really isn’t any new technology here.

Extend this argument upwards, to more complex domains, and you arrive at CRM and ERP systems. Both of these are in many ways databases with fancy utilities. But again, this is another blog entry.

Back to the conference.

Finally, perhaps the most surprising thing at the conference was the reaction to one session about Agile Software Development. Unfortunately the speaker didn’t link it up with Product Management very well so it was basically a “This is Scrum” talk but the audience got very very interested. Most of the Q&A session was about Agile in fact.

So why?

Well I think there were several reasons.

Firstly most of these people had not been exposed to the ideas behind Agile before. This just goes to show how things have yet to take off.

Second: the Agile story is very compelling if you know a little about software development. Agile promises products on time, with quality and everyone is happy.

The problem with the Agile story is: unless you know a little about software development already, it doesn’t sound any different to anything else. Abstract the message for those who don’t know software development (i.e. senior managers) and you get: improved productivity, reduced time to market, increased business value, etc. etc. At which point are we talking about Agile or Rational Rose? Or .NET? Or BoundsChecker? Or outsourcing to EDS? Or India?

I have more insights from the conference – so it was worth going to – but they are destined for my current crop of patterns.

Product Management conference Read More »

Make strategy like you make software?

 

There is an interesting piece in the latest issue of the MIT Sloan Review entitled: Should you build strategy like you build software?

I can imagine some managers’ initial reaction: What? IT is such a total disaster, why would we want to make strategy the same way?

After all, we are regularly told that 70% of IT projects fail, and a few months ago the same journal ran a piece damning software development and specifically ERP systems: The Trouble with Enterprise Software

So why would anyone want to copy what IT does?

Well, believe it or not, there is something interesting here. What the author, Keith R. McFarland, argues is that many of the practices and techniques used in Agile software development can be applied to strategy formation and execution. McFarland focuses on techniques such as small iterations, collective ownership, overlapping phases, direction changes (i.e. refactoring), organising around people not tools, and abolishing big up-front design.

To some degree I think he’s a little behind the curve. The myth about long range / strategic planning in companies was exploded a long time ago. Sure, some companies still do it, but that doesn’t mean it works. Henry Mintzberg’s Rise and Fall of Strategic Planning explains why it just doesn’t work and why it’s a myth.

It is not only in software development that managers and companies have suffered from the Illusion of Control; it occurs in strategy formation and planning too. Strategy formation is an emergent process, in the same way that software design is emergent.

Big strategic planning has been out of fashion for a while, but it’s far from clear what should replace it. McFarland seems to be suggesting the answer is Agile Strategy.

There is an irony in McFarland’s work – and that of Donald Sull at LBS: while business thinkers, and some businesses, are looking to Agile Software Development for a paradigm for devising strategy, too many businesses are still resisting Agile development or finding it impossible to implement.

Agile development is still largely a bottom-up change exercise. Not enough senior managers are embracing and backing Agile, they cling to the illusion of control. Too many people still say ‘Agile is unproven’. The fact that McFarland and Sull are prepared to embrace these ideas for strategy formation shows that Agile ideas are valid.

Regular readers will know that I have been arguing for some years now that there are close parallels between business strategy and software development. This argument is embedded throughout Changing Software Development. It is most clearly seen in the early chapters where I discuss the knowledge and learning aspects of development work. (The argument is also embedded in my MBA Dissertation “Software Development as Organizational Learning” if you want a less polished – more academic – version.)

And I have written in Overload about it – An Alternative view of design (and planning).

Most directly I’ve blogged about it on and off, specifically last August in Agile software and strategy and Thoughts on strategy and software development.

But this is more than just a parallel: for companies which use a lot of technology, software and strategy are increasingly converging. Ultimately your software is your strategy – so much so that I sometimes imagine software code as liquid strategy.

There are two forces at work here. Firstly, as much IT – including software – becomes commoditised (Nicholas Carr’s argument), what remains is strategic. Your word processor is a commodity, but your inventory management system is strategic. If your inventory management system is a commodity, then your customer management system is strategic, and so on. Because IT is so varied, and software so capable – limited only by your imagination – it is used to embed our knowledge.

Which is our second force: what we know. We embed our knowledge in our code so our organisations can operate: whether it is the Galileo booking system, Google’s AdWords or Unilever’s ERP system, the capabilities and limitations of our IT systems are also the capabilities and limitations of our organisations.

As Cynthia Rettig argued in her Sloan Review piece, the limitations imposed by an ERP system impose costs on organisations and limit what they can do. That in turn limits strategy options.

At the same time software systems open up opportunities and create strategy options for companies. We see this most directly with companies like Microsoft and Amazon but it is also true of companies we might not expect. Companies like FedEx depend on software in order to be able to execute strategies like delivering packages on time.

Banks are at the cutting edge of this convergence; unfortunately that doesn’t mean they do it well, but they are converging. Go to the financial centre of London or New York and you’ll probably find more software developers than investment bankers. Trading derivatives, managing risk and countless other banking activities would be impossible today without the software.

So where does this leave us?

Companies which embrace Agile in software development can learn a new way of working which will help them elsewhere – specifically in strategy formation.

Companies which embrace Agile will be better positioned as software and strategy converge.

No company has to embrace Agile; you can do it the old fashioned way – see IT as a cost, outsource it, run it like a train timetable. But those companies which embrace Agile will win. The more you embrace it, the more you win.

 

Make strategy like you make software? Read More »