Follow up 2: Personal retrospective and strategy

This is the second of two follow-ups replying to comments on the blog. What’s interesting about this one is that it’s from someone I don’t know, or rather someone who has chosen to hide their name. Still, they have a great memory, because they’ve linked something I said a few days ago with something I said last year. Now there’s paying attention! And it’s good for me because it helps me improve my own understanding.

Anyway, this comment asked how I reconciled my comments in the personal retrospective entry, where I said I should have done more, quicker, with my comment last year (the last thing your people need is a strategy) where I suggested managers should spend time observing before acting. Good catch, whoever you are.

So, at the risk of sounding like a politician trying to wriggle out of a difficult question, I’ll try to explain, and perhaps I’ll shift my position a little. There is no simple answer, so please bear with me…

To some degree it is a question of context. One size does not fit all, and it is important to look at the problems facing an organization before acting. The trouble with the ‘solutions’-oriented world is that you risk applying the wrong solution if you are not careful. Just as it is wrong to apply a predetermined solution before looking at the problem, it is wrong to use a predetermined time frame before acting.

Which leads to the question: how do you know what the context is? Well, you have to look around, observe what is happening and talk to people. I still believe the people on the ground want to do the best job possible and most likely know what needs doing. So I think it’s more important to talk to the people doing the tasks than to those who nominally control what is going on.

If the people doing the tasks and those in leadership positions aren’t seeing the same problems, and aren’t trying to go in the same direction, then their positions need to be reconciled. That might mean explaining to the leadership why their model of the world is wrong. Or it might mean explaining to the workers that the company doesn’t want that.

The key point, both ways round, is not to assume you know all the answers: keep an open mind yourself. Collectively the team will know the answers, and one person may know more than most. I see a manager’s role as unlocking those ideas.

Going back to my personal retrospective post, one thing I didn’t mention was that during my first week at the client I held a reverse retrospective with the development team. This was a little like a project retrospective, but since there was no single project to review, and others on the team were also new, we had nothing to retrospect about. Instead we looked forward; I call it a “future-spective.”

I gathered the team together for an afternoon and we talked about problems we faced and the problems the management saw. We also talked about improvements we could make and created a long list of things we could do. We then collectively prioritised them and worked out what actions were needed.

One way I could go faster in future is to implement the top priorities on that list sooner. I realised when I did my personal retrospective that it took us months to implement some of our ideas; as a result it took longer to see the benefit, and it prevented us from moving on to other initiatives.

In future I’ll probably use the future-spective technique again, but I’ll implement the suggestions sooner. And I’ll go further: I allowed myself to get caught in the ‘no time to improve’ trap in places, and after three or four future-spectives I stopped doing them.

As my anonymous commenter pointed out, there is quite a gap between the three-month period I talked about last year and the one-week period I talked about recently. To be honest that gap surprises me. I’m not sure why I said three months last year; it does seem quite long given what I learned in my personal retrospective. However, I’m also surprised that it only took me a week to diagnose so many issues. So perhaps one week is a minimum and three months a maximum; ask me again after I have some more data points.

Again it’s a matter of context. I’d like to think it is always less than three months, but I’ll be surprised if it’s ever as short as one week. In general it is better to delay making decisions until you are sure of the situation, but once those decisions are made it is better to act quickly.

One way to ensure that decisions are acted on quickly is to involve more people in making them. Although having more people involved may delay the decision, when one is made those people share the decision and share the responsibility for acting on it.

I still stand by my comment that when you are new the last thing your people need is your strategy (I actually stole that idea from Lou Gerstner in Who Says Elephants Can’t Dance?). However, you do need your own personal strategy. You need to know what you are doing and what you are aiming for.

That personal strategy needs a short-term element (meet, watch and listen), a medium-term element (find problems and opportunities, get people’s ideas, get agreement) and a long-term element (do whatever it is you were asked to do). And your personal strategy needs to tell you when to wait, watch and learn – something my friend Klaus Marquardt refers to as ‘proactive waiting’.


Follow up 1: Software as an asset

I don’t know how many readers this blog has, but I don’t think it’s many. In fact I tend to think I know them all. Well, that’s an illusion I can be pretty certain to dispel. I know the ones I know because they comment on the blog, or they tell me they read it, but by definition I can’t know who I don’t know – if you get my drift. If I now find someone who reads my blog who I didn’t know read it, I still don’t know how many readers I have, I just know more of them.

What is gratifying is that although the reader base may be quite small I get quite a few comments. And sometimes I get comments by e-mail rather than as comments on the blog. So it was that Mick Brooks e-mailed to point out an ambiguity in my blog posting, Software as an Asset. I wrote:

“Your software is worth $10,000,000. Next year, with inflation and depreciation the software may only be worth $9,000,000. So there is an incentive to invest up to $1,000,000 in improving the software, increasing functionality, or code quality, or making it easier to use.”

And Mick asked:

“I don’t understand why there is an incentive to invest *up to* $1,000,000 – why not twice that, or more?”

Well, he’s right; my thinking was sloppy. I skipped several steps and jumped to a conclusion. The nature of blogging is that it is meant to be fast – what you think of now, not too polished. Anyway, let me think about this a bit more slowly…

What I was trying to say is: if your software depreciates by $1,000,000 over the course of a year, there is a strong incentive to invest in it. At the moment software isn’t carried as an asset on the books; it is worth $0 this year and $0 next year. So why invest in it?

Now suppose we wish to keep the software worth $10,000,000 each year. We have to do something to add 10% in value just to stand still. Therefore anything we can do that costs less than $1,000,000 is worth considering. What we really want is to spend as little as possible to increase the value by as much as possible, i.e. to maximise our rate of return.

There are two possible ways to value the software. First, remember I imagined a tool that could value our software? Well, what we need to know is what parameters that tool looks at; then, by targeting those parameters, we can increase the value.

For example, from a product perspective: if adding some new functionality to the system will add value, we might do that. Or, from a programmer perspective: if improving the code quality – increasing cohesion and reducing coupling, say – will increase the value, we could do that. In this way the parameters the tool uses will guide us in what changes to make.

The second way we might value the improvements to the software is to look at the effort expended in enhancing it. It is already possible to capitalise the cost of research and development projects, i.e. show expenditure on R&D as an investment on the balance sheet. So, we could argue that spending $100,000 on a programmer to make changes to the software increases the value of the software by $100,000.

We have to be careful with this line of argument because we could end up counting every bug fix as an asset. WorldCom did something similar with their telephone network, and look what happened to them.

Whichever way we value the software, the trick is to find the way of increasing the value as much as possible while paying as little as possible – the rate of return. Once you know this you can compare such a project against others in your organization. So, if spending $100,000 to improve the software results in an extra $1,000,000 of asset on the balance sheet, it has paid back ten times.

Conversely, if spending $5,000,000 on a replacement piece of software results in an asset worth $10,000,000 (and the old software is binned), the return is only two times.
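To make that comparison concrete, here is a trivial sketch of the payback arithmetic, using the figures from the two examples; `rate_of_return` is just an illustrative helper, not a real accounting measure:

```python
def rate_of_return(investment, asset_value_created):
    """Crude payback multiple: asset value created per unit spent."""
    return asset_value_created / investment

# Improve the existing software: spend $100,000 for $1,000,000 of asset value.
improve = rate_of_return(100_000, 1_000_000)

# Replace it: spend $5,000,000 for a $10,000,000 asset (the old one is binned).
replace = rate_of_return(5_000_000, 10_000_000)

print(improve)  # 10.0 - paid back ten times
print(replace)  # 2.0  - paid back only twice
```

Crude as it is, this is the comparison that becomes possible once the software appears on the balance sheet at all.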

But doing nothing – letting the software stand still – looks less attractive than it did when the software was not counted as an asset.

Thus, valuing software as an asset gives us a new way of thinking about how we manage the lifetimes of our software.

Back to Mick’s point: my example was a little brief. The real test is whether we can see the value of our software asset increasing faster than other assets. If it is, then it is worth investing in – whether that is a $100,000, a $1m or a $2m investment.


Personal retrospective

As I said in my previous blog entry, I recently finished a major assignment with a start-up client. The company had a lot of problems when I started, and the development group had its share of them. I think I’ve left them in much better shape than when I arrived.

Last Friday I took some time to reflect on what it was I did and how I did it. A personal retrospective, if you like. This took place between me and my word processor; I just wrote and wrote… I tried to understand what I did right, that is, what I would do again. I tried to understand what I did wrong, things I would avoid in future. And most of all I tried to understand what I would do differently.

To help me I had my personal journal, a diary of events, thoughts and reflections I’d kept during the assignment. The entries were perhaps not as frequent as I would have liked: it started well, but after the first few weeks I became very erratic until close to the end. At the time the journal was useful for thinking things through in my own head, understanding what was happening and finding courses of action.

What I found striking was that in the first week I had diagnosed most of the major problems with the development group and the company. Some of these I managed to fix; some remained problems when I left – you can’t fix everything, I’m afraid. But it was striking how much I had worked out so soon.

I’m normally sceptical about the expression ‘hit the ground running’; I’m not sure anyone really does it. But in this case I’m glad to say I was up and running within a week of getting there.

So what were my big insights from this personal reflection? Two, really. First, although I consider myself a good communicator (think of this blog, think of the book, etc. etc.) I really could have communicated even more, particularly upwards.

My second finding: I wish I’d done things faster. There are a few decisions I held off making for weeks which, in retrospect, worked out for the best – I should have made them sooner.

Any note to myself for the next assignment needs to read: be fearless, go faster.


Software as an asset

An interesting piece from Monday’s Financial Times: Study urges IT valuation rethink. What this study says is: IT is an asset, therefore companies should value that asset and show it on the books. This isn’t such a new idea – some people have been arguing it for a while – but INSEAD and Micro Focus have written a report and got some publicity for the idea. (See also the Micro Focus press release.)

I have to say I agree with the central argument being made here. IT systems are an asset to a business, they allow the business to operate, they cost money to develop and without them there would be a big hole in the organization. Companies count machines, buildings and even brands as assets but not IT systems. In most companies if the IT system disappeared tomorrow it would need to be replaced and that would cost money.

Actually, companies do carry some IT elements as assets – hardware. Mainframes, PCs, printers etc. are bought, counted as assets and depreciated over time. So when we talk about putting IT on the balance sheet we are really talking about the software.

Now this has some interesting consequences – not all of which my younger self would be happy to hear. Suppose for the moment software is considered an asset, what happens?

Well, firstly you get an increase in your balance sheet. This is good: it shows the company is worth more. It also means that software can be depreciated over time, as with hardware; we can aim to write it off. This might make it easier to end-of-life a system after a period of time. Given that software deteriorates over time, this could be a good thing.

However, it also means you can’t dispose of it overnight. How many developers, faced with a poor system, cry: “We can’t maintain it! We need to re-write it”? How many managers accept this argument? If software is an asset, this decision changes significantly.

Neither could you throw away failed projects so easily. Many companies bury failed projects: rather than admit to an embarrassing failure they simply don’t, and lose the costs in general expenditure. However, if you are building an asset and suddenly it disappears, then you will have some explaining to do.

For both these reasons (and others) putting software on the balance sheet will make it more difficult to get rid of existing systems. This is the bit my younger self would hate.

When I was still programming I saw lots of really bad software. I always wanted to re-write it, and sometimes I did. However, rewriting is no panacea. It is often (usually) better for the business to continue with the old system. But in doing so you have to improve it; you have to move it in the right direction or else it will become impossible to work with.

This is what interests me now: how can a business maintain legacy systems? This is the challenge I now embrace as a manager.

Now, if we start counting software as an asset there will be more incentive to maintain the software and keep it alive for longer. This means we will invest in the software to preserve the asset. Of course, we’re going to need tools to value the software – INSEAD suggest one approach, and my guess is Micro Focus can sell you a product to do so.

So what is the net effect?

Well, I think this approach would allow better reasoning about the software life-cycle. If a piece of software is only going to be used once then it isn’t going to be an asset, and we treat it as such. However, if we are going to count it as an asset, we can decide whether we are going to depreciate it over time and plan in advance for retirement or replacement. Or, if we are going to maintain it and keep (or increase) its value on the books, then we need to invest in it. Either way we get to reason about the future of the software more logically.

Now imagine we had a tool that assesses software value. Point it at the source code control system, put in some economic numbers and bingo! Your software is worth $10,000,000. Next year, with inflation and depreciation, the software may only be worth $9,000,000. So there is an incentive to invest up to $1,000,000 in improving the software: increasing functionality, or code quality, or making it easier to use. That changes the game.

(I’m sure Micro Focus, who specialise in tools for legacy (and particularly COBOL) systems are well aware of this. I for one hope we hear more about this idea.)

We tend to think our software will have a short life. But we already know that some software systems are 20, 30 or 40 years old. (Remember the millennium bug?) These are major engineering achievements.

We take it for granted that major engineering achievements are still there. The Forth Rail Bridge was opened in 1890; nearly 120 years later it still works, and it is an asset to the nation. St Pancras station in London was reopened yesterday after refurbishment; originally opened in 1877, it is still an active part of the economy 130 years later, and an asset to National Rail.

On this basis there is every reason to believe that software being written today will be in use in 2127. Isn’t it about time we started treating it as such?


Blue White Red – a simple Agile process

My article from last month’s ACCU Overload is now on my website. This piece describes a simple Agile process which I have used successfully. I don’t claim it introduces anything new or radical. I don’t even claim originality: it derives from Scrum and XP. All I claim is that it has worked for me and my teams, and has been used in multiple organizations.

What I think is important about Blue White Red is that it shows how you can start to roll your own Agile-like process. Start simple, with what works for you.

Read the full piece here.


Book review: The Innovators Dilemma

I’m just coming to the end of The Innovator’s Solution (Christensen and Raynor, 2003) – OK, I admit it, I can’t be bothered to read the last couple of chapters. Like so many books, the first few chapters are very interesting, then they tend to get a little boring. I suppose this is because the first few contain the ideas while the later chapters are more about implementation. Still, I always make a point of reading the last chapter, as this usually restores some of the feel of the early chapters.

That said, it’s a good and interesting book. I suppose more people have heard of Christensen’s earlier book, The Innovator’s Dilemma. So why did I choose to read this one and not the earlier one? Well, I guess because it was a) newer, and b) it seemed to offer a solution where the other offered a problem. Having read it, I wonder if I would have been better off reading the first book after all. That’s not to say this isn’t a good book.

Clayton Christensen is most famous for giving the world the theory of disruptive technologies. Such technologies are ones which, well, disrupt the current market: current market leaders fall, distribution channels change and small companies get to grow fast. All good stuff.

What gets less publicity is the alternative to disruptive technologies, namely sustaining technologies. These are technologies that can be used by the current market incumbents to maintain their market. In this book Christensen and Raynor suggest that in many instances companies can choose to use a new technology as disruptive or sustaining, it depends in part on how you package and market it, what market you are in now and which market you want to be in.

They also suggest that in an established market the incumbents will always win a straight fight with new competition. The only chance a new entrant has is to use their technology as a disruptive influence. Conversely, incumbents need to use technology (maybe the same technology) to sustain their position. So if you are a new entrant don’t bother trying to build a better mousetrap, and if you are an incumbent don’t bother trying to disrupt the market.

What is nice about this book – although it does make it longer in places – is that the authors understand that a different context needs a different solution. Unlike many books which say ‘faced with problem X you should do Y’, these authors say: if you face problem X and you are an incumbent then do Y; if you are a new entrant, do Z.

The formula for a new entrant with a disruptive technology seems quite simple. Enter the market at the low end and offer a cheaper product than the current incumbents. Rather than compete head on, target people who don’t buy the current products (non-competition) with a cheaper, less functional product. Sell to people who don’t buy the current product or are over-served by it. Then, as your technology improves, expand this base. Quite often the current incumbents will move out of your way because they don’t want the low end of the market – that is, if they notice you at all!

Of course I summarise and miss out many of the details but you get the idea. If you want to know more then read the book!

Interestingly, much of the book’s key message is shared with Crossing the Chasm: they have the same idea but come at it from different perspectives. The key idea is: start with a small niche and grow it.

The advice in Crossing the Chasm is for small high-tech companies who wish to increase sales of their product. Moore suggests you define your own niche, dominate it, then grow it. In this case the new technology is assumed to be superior – in some way – to the incumbents’.

The advice in Innovator’s Solution is aimed at existing, probably large, corporations who want more innovation. Here the niche is always at the low end of the market, it is always a niche defined by cost and price. In this case the technology is superior through a lower cost structure, therefore price can be lower. Over time the niche is expanded by moving the product up market.

Most of the small software companies I see are so concerned with getting a sale, sometimes at any price, that the idea of niche marketing is foreign to them. Sales (and any marketing) tend to be a shotgun affair, where any sale at all is what counts.

Indeed the whole idea of marketing is pretty foreign. Increasingly I see the existence of well functioning marketing as the key differentiator between successful and not successful technology companies. When I say marketing I mean both outbound marketing – telling people you have a product, advertising, etc. etc. and (the lesser known) inbound marketing – understanding your customers, knowing their problems. For both you need to define your target market.

Trouble is: you don’t need a marketing department to start a high-tech company. You do need techies – scientists and engineers – to design and build the product. You do need sales people: someone to get out and sell the thing that has been built. If you lack either of these you don’t have a company. So you will always find engineers and sales people in any company.

But you don’t need marketing. You can carry on with the sales and engineers for quite a while without marketing. But if you want to grow you either need marketing or you need a lot of luck.

This is a mistake made by many, many software companies. Which might just explain why Crossing the Chasm, The Innovator’s Dilemma and The Innovator’s Solution sell so well.


Feeling sorry for EDS – businesses that don't know what they want

It is not often I feel sorry for EDS. Early in my career I had contact with EDS and I was not impressed. Since then nothing I have heard about them has caused me to change my opinion. So it is pretty unusual that I ever give them the benefit of the doubt, let alone feel sorry for them. But this time maybe I do…

Just at the moment EDS are being sued by BSkyB – part-owned by News Corporation – for failure to deliver on a project. In 2000 BSkyB agreed to pay EDS £48m to develop a new ‘state of the art’ customer service system. Unfortunately the project collapsed in 2003; BSkyB completed the project without EDS at a cost of £265m and are now suing them for £709m. The case is being covered by the FT if you want to know more.

Basically EDS claim that BSkyB didn’t know what it wanted. It seems the requirement was simply: a ‘state of the art’ customer service system to provide BSkyB competitive advantage over their competitors. BSkyB respond that EDS didn’t follow any recognisable project planning system.

I don’t know the rights and wrongs of the case; what I do know is that businesses frequently don’t know what they want. I’ve written about this several times before (Software requirements and strategy and Requirements: A dialogue not a document, for example). This case involves EDS and BSkyB so it is high profile, but the same debate is acted out between thousands of IT providers and their customers every week. Because those names are less well known, and because those companies can’t afford to go to court, we never hear about them and some agreement is reached.

This happens internally too. Companies with their own IT groups often get into this mess: the business can’t tell the IT people what they want. Perhaps it is because I’m in London, but most of the stories I hear involve banks. Sometimes being part of the same company can help, and sometimes it makes things worse.

Rather than ask ‘whose fault is it?’ we need to ask ‘how can we fix it?’

Well there are two parties here and both need to be involved.

The business side needs to set the overall goals – ‘build a state of the art customer service system to produce competitive advantage’. And the IT side – whether in-house or outsourced – needs to implement what was asked for and no more. But in between there is a massive, massive gap.

• What does a state of the art customer service system look like?
• What is competitive advantage in customer service?
• How do you avoid spending so much on ‘state of the art’ that you lose competitive advantage?

These aren’t easy questions to answer. Neither side can answer them alone, and neither side can expect the other to answer them. You need co-operation; you need an ongoing dialogue.

Business needs to be able to articulate what it wants but it is wrong to think it can articulate everything up front. It can set goals but in achieving those goals there are thousands, indeed millions, of options and decisions to be made. The devil really is in the detail. Business and IT need to make choices together.

Making those decisions requires people in the right roles (business analysts or product managers), IT people who understand business goals, business people who understand how IT is created and how to recognise the benefits, and a process for continuing to talk and make decisions.

In this case it is wrong to think BSkyB can just throw the problem ‘over the wall’ to EDS. And it is wrong of EDS to expect fully defined requirements documents up front. But, the nature of competitive business encourages people to take this approach.

I don’t know how EDS and BSkyB structured this project, but here is some advice:

• Start small: IT projects are often far too big. If you can find the real benefit then stick to delivering just that; keep it small. Inside every large project is a small one struggling to get out.
• Requirements have to be driven by the customer, who needs to set business objectives and vision. However further elaboration needs to come from close collaboration between the business and the supplier.
• If you intend to build a ‘state of the art’ system, and gain competitive advantage then know your enemy, know what state of the art means and know what other systems do.

Finally, make sure you are building your ‘state of the art’ system for the right reason. Will a state of the art system really deliver competitive advantage? Maybe you don’t need ‘state of the art’; maybe you just need to use an ‘average’ system better. Maybe you don’t need to build a system at all; maybe you can just take one off the shelf. This won’t be cheap – especially if you have to customise it – but it will be a lot cheaper than starting from scratch.


Something is stirring in the office software market

As a PC user there was only one game in town for office software: Microsoft Office. Yes, there was other software, but very few people used it. Everyone used Word, Excel, etc., and if you used anything else people thought you were strange.

Since moving to the Mac I’ve discovered the hegemony doesn’t exist here. I’ve discovered people who use NeoOffice, and as Mark points out in his comment today, there is Apple software too.

Funny thing is, I first met Mark when we both worked for Quadratron, a company which produced UNIX-based office automation software. In fact the company had made a good job of missing the PC market and realised it needed to catch up. Somehow Mark, Adrian, myself and a couple of others weren’t going to make it happen, and once it became clear the company had no future we jumped ship.

But now it appears the office automation market might be on the move again.

Yesterday I used the Google spreadsheet and word processor for the first time. In fact I was using them in collaboration mode with Till Schuemmer as we started the process of organising EuroPLoP 2008. They worked well – despite my very slow ADSL connection – and I was impressed. However, shortly after we finished, the ADSL connection collapsed again, so while Google Apps might be the future they are not the present.

Also this week I learned that IBM is launching, or maybe re-launching, an office automation suite called Symphony. Lotus Symphony was an integrated office package in the days of 8MHz 80286 CPUs with 640K of RAM. It worked, but not well enough to make much of a mark.

Why has IBM done this?
Will Google Apps (or ADSL) ever be stable enough to challenge desktop software?
Will NeoOffice or Apple displace Microsoft on the Mac?

I don’t know the answers to these questions but I do see the start of a change.


Agile failure modes – again

Back in April and May I discussed failure modes in Agile software development. Well, I’ve found another. That is to say, I’ve found another way in which an Agile development practice can go wrong. I’ve seen this before, but I didn’t recognise it as a failure mode; I saw it as a little local difficulty. Now I’ve seen it a second time I understand it a bit better. The trouble with this failure is that at first it looks a lot like what should be happening.

One of the core ideas behind Lean and Agile is improving the quality of code. Improving code quality enables several other benefits. Improved quality means fewer bugs, so future development is less disrupted because there is less rework. Improved quality means less time spent in test at the end of a development, so short iterative cycles become possible. And improved quality means you can deliver (or just demo) at any time. So quality improvement is important.

One way in which we seek to improve quality is through automated unit testing, usually undertaken as Test Driven Development or Test First Development. (There is a whole host of reasons why this can be difficult, but that is another blog entry.) With automated unit tests, fine-grained functionality can be tested very rapidly, which provides for rapid regression testing and increased confidence in new code.

This is where the failure comes in.

We actually want the tests to fail if we change something, because that prevents errors slipping into the system. So we add a test for new code, write the new code, and by the time we check the code in it is finished and the tests pass. But if something later changes which breaks the code, then we want the test to fail in order to catch the error that has been introduced. In this case we want the test to be fragile, so that it catches anything which is now wrong.

The problem occurs when the test is fragile in the wrong way. In the case I’ve just observed, the tests were fragile when data in the database changed. Previously when I saw this, the tests were fragile around COM. Either way, the test was not sufficiently isolated from its environment.

True dyed-in-the-wool Agile/TDD practitioners will immediately jump in to say: you shouldn’t do it that way. And they are right. These fragility points should be addressed, whether by stubbing code out, using mock objects, or by tearing down and restoring some element, such as a database.
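To make the isolation idea concrete, here is a minimal sketch in Python using the standard library’s `unittest.mock`. The function and its query are hypothetical, invented purely for illustration: a test that ran this logic against a real database would break whenever the stored data changed, while the mocked version fails only when the logic itself changes.

```python
from unittest import mock

# Hypothetical code under test: it talks to a database through a `db`
# object. A test that used a real database would be fragile in the
# wrong way, failing whenever the data changed rather than the code.
def premium_customer_names(db):
    """Return the names of customers whose balance exceeds 1000."""
    rows = db.query("SELECT name, balance FROM customers")
    return [name for name, balance in rows if balance > 1000]

def test_filters_by_balance():
    # The database is replaced by a mock object, so the test is
    # isolated from its environment: it supplies known data and only
    # fails if the filtering logic itself is broken.
    db = mock.Mock()
    db.query.return_value = [("Alice", 1500), ("Bob", 200)]
    assert premium_customer_names(db) == ["Alice"]
    db.query.assert_called_once_with("SELECT name, balance FROM customers")

test_filters_by_balance()
```

The same shape works for the COM case: wrap the external dependency behind an interface you control, and substitute a stub or mock in the tests.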

As a consequence, instead of the tests helping you validate code and move faster, they become a hindrance. The need to update and change the tests regularly adds to the workload and complexity. Instead of the tests showing everything is good, they add to the maintenance burden: the exact opposite of what was desired.

The problem is that those trying to take the Agile/TDD approach have good intentions, but they don’t have the experience. Yet how do you get the experience without doing it?

Well, there are three answers here. First, the organization should have invested in training for these people. Second, the organization could have hired someone with experience to help them, perhaps as a part-time coach or perhaps as a full-time co-worker. And third, the management could have helped the teams learn faster and better, and encouraged them to reflect more on what they were doing.

The trouble is that Agile development is often introduced bottom-up, so people do just jump in. And even where management introduces it, such training and coaching can be expensive – that is, if you know who to call and where to buy it in the first place. (If you don’t know who to call then let me know, I know people who do this stuff.)

So although the ‘just jump in’ / ‘just do it’ approach is well intentioned and can get you started, it can also start building up bigger problems further down the line. If you know what to look for you can spot the problems and – at least try to – take corrective action. If you don’t, then it is quite likely you’ll give the whole Agile experiment a bad name.

Agile failure modes – again

Book review: Crossing the Chasm

I wish I had read Crossing the Chasm (Geoffrey Moore, 1999) sooner. Not only does it give some good advice to technology (specifically software) companies, it also explains a lot about the technology and software marketplace. I guess I’ve known the basic argument Crossing the Chasm makes, so I never felt compelled to read it until recently. However, I’ve been missing out on so much more.

The basic premise is that for any technology product the buyers can be divided into five types. Initially the product sells to Innovators, then Early Adopters. While these groups will buy the product early, they are quite small. Many companies sell to these segments and it looks like success, but they are only addressing a fraction of the potential market.

The challenge facing a technology company is to break out of these two groups and sell to the much bigger Early Majority and then the Late Majority. These groups are so much bigger that they represent the big money. (There is also a small Laggard group bringing up the rear, but this is another small market so it’s not worth bothering with.)

The problem facing companies is that selling to the Early Majority segment requires a different approach and a different strategy to selling to the Innovators and Early Adopters. In fact, while selling to these two groups can be profitable, continued success here can harm your attempt to break into the Early Majority. This is where the Chasm comes from: Moore says that the gap between the first two groups and the third is a chasm, and it is absolutely massive.

What Moore does is explain this problem and set out a solution for overcoming it. The solution will be familiar to anyone who has followed my arguments on product management. The argument is also familiar from a bunch of other books, though its familiarity can probably be traced back to Moore anyway. And the solution is…

To decide who you want to be your customers. To understand their problems. To craft a product that directly addresses your customers’ problems. To designate yourself the market leader in the field and make your solution utterly compelling.

It might seem an odd idea to declare yourself the market leader but what Moore says is: find a niche with your customers, define the niche so that you are the only people in it, then the niche is yours. It doesn’t matter if the niche is small to start off with – in many ways the smaller the better. The trick is to win one niche, one vertical market, then move onto the next. Over time you roll up the markets one by one.

I’ve worked with many software technology companies and I’ve seen many of them try to do this: some successfully, most less so. Some of them knew they were following Moore’s advice, but I think most of them were following it by default; i.e. they found their product filled a niche but didn’t set out to do that. In these cases the companies have trouble moving from one niche to the next.

The main problem I see with companies following Moore’s advice is the need to say no. To select and dominate a niche, companies need to focus on that niche. Most companies find it impossible to say no to a potential sale. The company’s software can be used for many things, so the idea of turning a potential sale down is just not possible. Consequently they end up being all things to all men. Sure, they take the sale for a good reason – it’s money – but the short-term fix damages the long-term strategy.

And there you go. There is the link. You need a strategy. You need your sales force and your development efforts to follow the strategy: not to sell outside the niche, and to develop products with features which directly address the niche. Once again this is the role of product management. You need someone to do this.

So, if you are in the tech business, and especially if you are trying to create a successful technology company, read this book. It is well written and easy to read. It might not describe the strategy you end up following, but you should at least understand it. If you don’t follow Moore’s strategy you should know why not, and why your strategy is right for your company in your market.
