Final roundup of facts from Capers Jones

In two previous entries I’ve reported some interesting statistics and findings – possibly facts – from Capers Jones’ book Applied Software Measurement (see Software Facts – well, numbers at least and More Facts and Figures from Capers Jones). I want to finish off with a few notes from the later chapters of the book.

On packaged software

  • Modifications to existing COTS packages – I assume he includes SAP and Oracle here – are high-risk with low productivity
  • When changes exceed 25% it may be cheaper to write from scratch. (I’m not sure what it is 25% of: the total size of the package, its function points, or its features?)
  • Packages over 100,000 function points (approximately 1m lines of code) usually have poor bug removal rates, below 90%

On management

  • Jones supports my frequent comment that when redundancies and downsizing come, Test and QA staff are among the first to be let go
  • Projects organised using matrix management have a much higher probability of being cancelled or running out of control
  • Up to 40% (even 50%) of effort is unrecorded in standard tracking systems

On defects/faults

  • “Defect removal for internal software is almost universally inadequate”
  • “Defect removal is often the most expensive single software activity”
  • Without improving software quality it is not possible to significantly improve productivity
  • “Modifying well-structured code is more than twice as productive as modifying older unstructured software” – when he says this I assume he doesn’t mean “structured programming” but rather “well designed code”
  • Code complexity is more likely to be because of poorly trained programmers rather than problem complexity
  • Errors tend to group together, some modules will be very buggy, others relatively bug free
  • Having specialist, dedicated maintenance developers is more productive than giving general developers maintenance tasks. Interleaving new work and fixes slows things down
  • Each round of testing generally finds 30-35% of bugs; design and code reviews often find over 85% (see the calculation after this list)
  • Unit testing effectiveness is more difficult to measure than other forms of testing because developers perform it themselves before formal testing cuts in. From the studies available it is a less effective form of testing, with only about 25% of defects found this way.
  • As far as I can tell, the “unit testing” Jones has examined isn’t of the Test Driven Development type supported by continuous integration and automatic test running. Such a low figure doesn’t seem consistent with other studies (e.g. the Nagappan, Maximilien, Bhat and Williams study I discussed in August last year.)
  • Formal design and code reviews are cheaper than testing.
  • SOA will only work if quality is high (i.e. few bugs)
  • Client-server applications have poor quality records, typically 20% more problems than mainframe applications
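
To see what those percentages imply, here is a back-of-the-envelope calculation. The assumption that each round removes a fixed fraction of whatever defects remain is mine, not Jones’, but it shows why stacking test rounds is a slow way to reach high removal rates:

```python
# Back-of-the-envelope cumulative defect removal efficiency (DRE),
# assuming each round removes a fixed fraction of the defects still present.
def cumulative_dre(rates):
    """Fraction of original defects removed after applying each round in turn."""
    remaining = 1.0
    for rate in rates:
        remaining *= (1.0 - rate)
    return 1.0 - remaining

# Six rounds of testing at ~30% each still falls short of 90%:
print(cumulative_dre([0.30] * 6))          # ~0.88

# One formal review (~85%) plus two test rounds gets further, faster:
print(cumulative_dre([0.85, 0.30, 0.30]))  # ~0.93
```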

Documentation

  • For a typical in-house development project paperwork will be 25-30% of the project cost, with about 30 words of English for every line of code
  • Requirements are one of the chief sources of defects – thus measuring “quality” as conformance to requirements is illogical

Agile/CMM/ISO

  • There is no evidence that companies adopting ISO 9000 in software development have improved quality
  • Jones considers ISO 9000, 9001, 9002, 9003 and 9004 to be subjective and ambiguous
  • Below 1,000 function points (approximately 10,000 lines of code) Agile methods are the most productive
  • Above 10,000 function points the CMMI approach seems to be more productive
  • I would suggest that as time goes by Agile is learning to scale, pushing that 1,000 figure upwards (I’ve sketched Jones’ thresholds in code after this list)
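
For what it’s worth, here is a toy encoding of those thresholds. The 10-lines-of-code-per-function-point ratio is the one implied by the figures quoted in this post; real “backfiring” ratios vary widely by language, so treat this as a sketch, not a rule:

```python
# Toy encoding of the size thresholds quoted above. The 10 LOC-per-function-
# point ratio is the one implied by this post's figures; real backfiring
# ratios vary widely by language.
LOC_PER_FP = 10

def suggested_approach(function_points):
    if function_points < 1_000:    # roughly 10,000 lines of code
        return "Agile methods tend to be most productive"
    if function_points > 10_000:   # roughly 100,000 lines of code
        return "the CMMI approach seems more productive"
    return "middle ground - no clear winner in Jones' data"

for fp in (800, 5_000, 25_000):
    print(fp, "FP /", fp * LOC_PER_FP, "LOC ->", suggested_approach(fp))
```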

Jones also makes this comment: “large systems tend to be decomposed into components that match the organizational structures of the developing enterprise rather than components that match the needs of the software itself.”

In other words: Conway’s Law. (See also my own study on Conway’s Law.) It’s a shame Jones missed this reference; given how well the book is referenced on the whole I’m surprised.

Elsewhere Jones is supportive of code reuse; he says successful companies can create software with as much as 85% reused code. This surprises me – generally I’m skeptical of code reuse. I don’t disbelieve Jones, I’d just like to know more about what these companies do. It has to be more about the organisational structure than just telling developers: “write reusable code”.

Overall the book is highly recommended although there are several things I would like to see improved for the next revision.

First, Jones does repeat himself frequently – sometimes with exactly the same text. Removing some of the duplication would make for a shorter book.

Second, as noted above, Jones has no numbers on how automated unit testing, i.e. Test Driven Development and similar, stacks up against traditional unit testing and reviews. I’d like to see some numbers here. Although, to be fair, that depends on Jones’ clients asking him to examine TDD.

Finally, Jones is very, very keen on function points as a measurement tool. I agree with him: lines of code is silly, and the arguments for function points are convincing. But I’m not convinced his definition of function points is the right one, primarily because it doesn’t account for algorithmic logic.

In my own context, Agile, I’d love to be able to measure function points. Jones rails against Agile teams for not counting function points. However, counting function points is expensive. Until it is cheap and fast Agile teams are unlikely to do it. Again, there is little Jones can do directly to fix this but I’d like him to examine the argument.

I want to finish my notes on Jones’ book with what I think is his key message:

“Although few managers realize it, reducing the quantity of defects during development is, in fact, the best way to shorten schedules and reduce costs.”

Humans can't estimate tasks

As I said in my last blog entry, I’ve been looking at some of the academic research on task time estimation. Long, long ago – well, 1979 – two researchers, Kahneman and Tversky, described “The Planning Fallacy.”

The Planning Fallacy is now well established in academic literature and there is even a Planning Fallacy Wikipedia page. All the other literature I looked at takes this fallacy as a starting point. What the fallacy says is two-fold:

  • Humans systematically underestimate how long it will take to do a task
  • Humans are overconfident in their own estimates

Breaking out of the fallacy is hard. Simply saying “estimate better” or “remember how long previous tasks took” won’t work. You might just succeed in increasing the estimate to something longer than the task takes to do, but you are still no more accurate.

Curiously, the literature does show that although humans can’t estimate how long a task will take, the estimates they produce do correlate with actual time spent on a task. In other words: any time estimate is likely to be too small, but relative to other estimates it is good.
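
A toy simulation of my own makes the point. All the numbers here are invented – a 30% systematic bias plus some noise – purely to show that estimates can be consistently too low and still rank tasks well:

```python
import random
import statistics

random.seed(1)

# Invented data: actual task durations, and estimates that are
# systematically ~30% too low but otherwise track the actuals.
actuals   = [random.uniform(2, 40) for _ in range(1000)]          # hours
estimates = [0.7 * a * random.uniform(0.8, 1.2) for a in actuals]

mean_error = statistics.mean(e - a for e, a in zip(estimates, actuals))
corr = statistics.correlation(estimates, actuals)  # Python 3.10+

print(f"mean error: {mean_error:+.1f} hours (systematic underestimate)")
print(f"correlation: {corr:.2f} (estimates still rank tasks well)")
```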

Second, it seems the planning fallacy holds retrospectively. If you are asked to record how long you spend on a task you are quite likely to underestimate it. There seems no reason to believe retrospective estimation is significantly more accurate than future estimation.

Something else that comes out of the research: psychologists and others who study this stuff still don’t completely understand what the brain is up to; some of the studies contradict each other, and there are plenty of subtle differences which influence estimates.

Third, although we don’t like to admit it, deadlines play a big role in both estimating and doing. If a deadline is mentioned before an estimate is given, people tend to estimate within the deadline. People are also quite good at meeting deadlines (assuming no outside blocks, that is), partly because estimating then doing is largely about time management, i.e. managing your time to fit within the estimate.

While deadlines seem like a good way of bringing work to a conclusion, it doesn’t seem a particularly good idea to base deadlines on estimates. Consider two scenarios (sketched in code after the list):

  • Simple estimate becomes a deadline: If we ask people to estimate how long a piece of work will take they will probably underestimate. So if this estimate is then used as a deadline the deadline may well be missed.
  • Pessimistic estimate becomes deadline: If people are encouraged, coerced, or scared into giving pessimistic estimates the estimate will be too long. If this estimate is then used as a deadline work is likely to be completed inside the time but there will be “slack” time. The actual time spent on the task may be the same either way but the total elapsed (end-to-end) time will be longer.
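
Here is that sketch. It assumes the real work takes the same time in both scenarios and that, per Parkinson’s Law, work expands to fill the deadline – both assumptions of mine, with purely illustrative numbers:

```python
# Two deadline scenarios, assuming the real work takes 10 days either way and
# work expands to fill the deadline but cannot shrink below the real effort.
REAL_WORK_DAYS = 10

def elapsed_days(deadline_days):
    return max(deadline_days, REAL_WORK_DAYS)

optimistic_deadline  = 8    # a typical underestimate becomes the deadline
pessimistic_deadline = 15   # a padded estimate becomes the deadline

print("optimistic: ", elapsed_days(optimistic_deadline), "days - deadline missed")
print("pessimistic:", elapsed_days(pessimistic_deadline), "days - met, with 5 days slack")
```

The work done is identical; only the elapsed, end-to-end time differs.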

I’ve written my findings down in full, albeit somewhat roughly, together with a full list of the articles I examined in depth. “Estimation and Retrospective Estimation” can be downloaded from my website.

I would love to spend more time digging into this subject but I can’t. Anyone else want to?

Apology, correction and the Estimation Project

Over the last few months I’ve been thinking about how people perform estimates when developing software. I’ve taken time – not enough – to look through some of the research on estimation and try to understand how we can improve estimates.

I’ve written a long (11 pages, over 6,000 words) and rough essay recording some of my findings. You can download “Estimation and Retrospective Estimation” from my website. I don’t intend to publish the essay anywhere else, but I expect some of the findings to be included in future pieces. In this blog entry and the next I want to report some of the motivation and findings.

To start, a correction and an apology.

In January Jon Jagger sat in on one of my Agile training courses and asked me about two references I appeared to cite. I have been unable to support those references with the research behind them, and I must apologise to Jon and others who heard me say I could.

Now if someone – me or anyone else – goes around saying “there is evidence for this” or “this is backed by research” they should be able to back it up. I have failed on this count: I can’t track down my references, and I should not have claimed there is research I can’t cite.

I’m going to repeat my claims now, for two reasons. First, I’m hoping that someone out there will be able to point me at the information. Second, if you’ve heard me say either of these things with a claim that research backs them up, well, I apologise to you too. And if you hear anyone else say similar things, please pin them down and find the original source.

The first thing Jon pulled me up on was in the context of planning poker, which I normally introduce as part of one of my course exercises. Jon heard me say: “Make your estimates fast, there is evidence that fast estimates are just as accurate as slow ones.”

Does anyone have any references for this?

I still believe this statement; however, I can’t find any references for it. I thought I had read it in several places but I can’t track them down now. I’m sorry.

Thinking about it now, I think I originally heard another Agile trainer say this during a game of planning poker a few years ago. So it’s possible that others have heard this statement and someone has the evidence.

Turning to my second dubious claim, I said: “I once saw some research that suggests we are up to 140% out in judging actuals.” It is true that I read some research that appeared to show this – I think it was in 2003. However, I can’t find that research so I can’t validate my memory, and therefore the claim is not really valid.

By “actuals” I mean: the actual amount of time it took to do a piece of work.

Again, does anyone know the research I have lost the reference to?

And again, I apologise to Jon and everyone who has heard me state this as fact. It may well be true but I can’t prove it and should not have cited it.

Why did I say it? Well, I find a lot of people get really concerned about the difference between estimates and actuals – whether some task was completed in the time estimated. To my mind this is a red herring; actuals only get in the way.

This is where things start to get interesting. I decided to look again at the research on estimation and specifically recording “actuals.” The first thing I learned was that what we call “actuals” is better called “retrospective estimation.”

The “Estimation and Retrospective Estimation” essay records my journey and findings in full. In my next blog entry I’ll record some of my conclusions.

Why Quality Must Come First & a comment on the 'System Error' report

On Monday night I presented “Why Quality Must Come First” at Skills Matter in London; the slides are now online (previous link to my website) and there is a podcast on the Skills Matter website.

This was a revised and updated version of my talk at the Agile Business Conference last October. There are two key arguments:

  • High quality is essential to achieve Agile: if you have poor quality you can’t close an iteration, you can’t mark work done, and schedules will be unpredictable.
  • High quality (fewer bugs) is the means to achieve shorter delivery times: better and faster are not alternatives, they are not even complementary, they are cause and effect.

That second point is one that I don’t think is appreciated enough; in fact I think a lot of people believe that you can deliver faster if you turn down the quality. Many years ago I worked with a great developer called Roy. One day Roy went to our manager, and the conversation went something like this:

Roy: Would you rather have the software sooner with bugs or later without?
Manager: Sooner with bugs

Both Roy and the Manager were seeing a trade-off that does not exist.

Resetting this broken mental model, unlearning this trade-off, is one of the biggest challenges facing our industry.

Read my lips: High quality software is faster to develop. Invest in quality and you will finish sooner.

This is also one of the reasons I prefer the XP approach over the Scrum approach. It is also one of the reasons I was disappointed with the “System Error” report from the UK Institute for Government last week.

The reason the report got so much attention was its recommendation that the UK Government adopt Agile and iterative development in IT projects. By the way, the report is from a think tank: it does not mean the UK Government will adopt Agile, or that the UK Government wants to adopt Agile, just that some “thinkers” suggest it should.

I haven’t had a chance to do anything more than skim the report yet but what struck me was the lack of discussion about quality. The report continues the belief that quality is an effect not a cause.

The report lays out four principles for Agile projects:

  • Modularity
  • Iterative approach
  • Responsiveness to change
  • Putting users at the core

There is one big thing missing here: Quality.

Without high quality you can’t take an iterative approach because nothing is ever finished.

Without high quality you can’t be responsive to change because nothing is deployable and over time crud builds up which slows you down.

Without quality your users will not trust you, they will not believe you have changed, and they will waste a lot of their time on your bugs. Pushing bad, buggy software on to users is not a sign that you have put them at the core.

Leaving aside the fact that Modularity is one of the most over-used and consequently meaningless words, how can anything buggy be modular? Do you want to (re)use a buggy module? Bugs are side effects, not something you want in modular code.

The Agile 10 Step Requirements Model

I’ve been talking about the Agile 10 Step Requirements Model for a year or so now. It has appeared in a few conference presentations and will be at the centre of my ACCU 2011 conference presentation. But I’ve not written about it.

Now I have. This month’s ACCU Overload has a short overview of the model. I say short because each time I’ve sat down to write the model out it has got very long; this was a deliberate attempt to keep it short.

You can read it for free in Overload, or I’ve put the article itself on my website: The Agile 10 Step Model.

Many thanks to Ric Parkin and the Overload team.

How does Agile relate to CMM Level 5?

A question in my mail box: “How does Agile relate to CMM Level 5?”

As I started to tap out the answer I thought: this might as well be a blog entry. So here it is.

Think of CMM, or rather CMMI which replaced CMM about 10 years ago, as a ruler. It is a way of measuring software development effectiveness. Is your development process 1cm good? 2cm good? or 5cm?

CMMI doesn’t say how you should do development, only how you should judge effectiveness. Agile is a way of doing things – as opposed to “the state of Agile” which is a measure of effectiveness.

You can use Agile to be any CMMI level you like; CMMI doesn’t care how you do it. Watts Humphrey (the father of CMM) used to say something along the lines of “Work on your quality and processes first, the levels will come by themselves.” That statement is very Agile. Unfortunately some organisations get it the wrong way around. I was at Reuters when they imposed CMM and destroyed a big chunk of their development capability.

The late Watts Humphrey did issue some process recommendations in the form of the Personal Software Process and Team Software Process.

That’s the easy bit.

“The state of being Agile” might, or might not, conflict with Level 5 CMMI simply because different things are considered important. You might be CMMI level 5 and decidedly unAgile.

Ironically, CMMI level 5 might mean you have to investigate Agile, because being level 5 means you are “self optimising”. If an organisation is at level 5 and hasn’t looked at Agile it should, because Agile may help it improve.

Paradoxically, being level 5 itself makes it harder to improve. The risk of change is a lot greater because the organisation has more to lose – and probably lots of procedures to update, making any change expensive.

CMM(I) tends to be associated with a certain way of doing things. Partly this is historic: when CMM(I) appeared Waterfall was the dominant model; NASA was the first organization to reach level 5 and (at the time) it was very Waterfall based; and CMM(I) tends to be more common in military work, which is also big and paperwork intensive.

So, some people believe that CMM(I) means following a very structured, heavy, Waterfall process. It doesn’t have to mean this but historically the two do tend to coincide.

The Software Engineering Institute issued a report in 2008 which discussed how CMMI and Agile could work together: CMMI or Agile: Why not embrace both! I don’t agree with everything in the report but I do agree with the general tone. CMMI and Agile are not alternatives and can be complementary.

Offshoring: not as simple as it's often reported

A couple of footnotes to my last blog post on Capers Jones’ book, Applied Software Measurement. One of the points I noted was Jones’ suggestion that rising prices and costs in India and elsewhere mean IT offshoring will lose its financial benefits from about 2015 onwards.

Jones, by the way, later says that Indian and Chinese outsourcers are aiming to compete on quality, not price, in the long run. So we can expect to see the nature of offshoring change.

Shortly after posting that blog entry I noticed a piece in The Economist on the Indian ID programme, Identifying a billion Indians (27 January, subscription required). The Indian Government has embarked on a scheme to give unique identity numbers to the entire population.

The thing that first got my attention was not that this is an IT based scheme (no surprise there) but who is doing it. Not Tata (TCS), not Wipro, not Infosys. It is Accenture and L-1 of the US and Morpho of France. I’m sure much of the work is being done in India but the Indian Government has given the work to foreign firms. Let’s give a round of applause to the Indian Government for not feeling compelled to give work to local firms.

And then let’s note that offshoring/outsourcing goes both ways. It’s not all about the US and Europe losing out to India or China. It goes the other way too.

The second thing that got my interest was this: the work is split between these three firms 50%, 30%, 20%. Progress is reviewed regularly and work reallocated, so the most effective firm during any one period gets the bulk of the work next time. That is a very enlightened way of working and gives the firms real incentives to do their best. It also shows flexibility with contracts.
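
As I read it, the mechanism works something like the sketch below. The effectiveness scores are invented; only the 50/30/20 split and the rank-and-reallocate idea come from The Economist piece:

```python
# Rank-and-reallocate, as I understand the Indian ID programme's contracts:
# each review period the best-performing firm gets the biggest share of the
# next period's work. The scores here are invented; the shares are not.
SHARES = [0.5, 0.3, 0.2]   # best performer first

def reallocate(effectiveness):
    """effectiveness: firm -> score for the period just reviewed."""
    ranked = sorted(effectiveness, key=effectiveness.get, reverse=True)
    return dict(zip(ranked, SHARES))

print(reallocate({"Accenture": 0.72, "L-1": 0.81, "Morpho": 0.65}))
# -> {'L-1': 0.5, 'Accenture': 0.3, 'Morpho': 0.2}
```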

Separately, some US readers might be pleased to learn that the Financial Times reports the big Indian outsourcers plan to do less work with US firms. European readers won’t be so happy to learn that Europe will replace the US as their target.

The reason: not competition, not prices or quality. US Visa prices.

More facts and figures from Capers Jones

I continue my reading of Capers Jones’ Applied Software Measurement, as discussed a few entries ago in Software Facts, and I’d like to report some more of Jones’ findings.

These numbers are very insightful but, and it’s a but Jones acknowledges, the data is very shaky. As Jones says, “software measurement resembles an archeological dig. One sifts and examines large heaps of rubble, and from time to time finds a significant artifact.”

He readily admits that there is not really enough data to draw solid conclusions (less than 1% of the data needed is available) but this is where we are at. This is as good as it gets. Before anyone rushes to say “Capers Jones is wrong”, ask yourself: “Do I have better data to base conclusions on?”

I find two weaknesses in Jones’ work. Firstly, his reliance on Function Points. I agree with him that lines of code is not a good measure, but I have some doubts about function points, for two reasons:

  • They do not incorporate algorithmic complexity
  • Automating function point counting is difficult and yields only an approximation. True function point counts are therefore expensive, which makes it difficult to calculate them often

But, function points, for all their faults seem to work.

My second issue with Jones’ work concerns his assumption of the waterfall model. Yes, he acknowledges Agile – he even says it is the most productive approach in some circumstances – but all his data and assumptions are cut through with waterfall. I suppose this is only natural: it was (is) the dominant model in the industry for 30 years or more.

OK, on with some numbers and conclusions, again these are almost entirely US based but are probably very similar elsewhere….

Productivity

  • Jones repeatedly states and shows how quality and productivity are related. The most productive teams have the lowest bug counts and shortest schedules.
  • The three biggest costs on a software project, in order: Rework (Bug fixing), paperwork and meetings & communication. Sometimes managerial costs are actually greater than coding costs.
  • Jones states as a little-known fact that “excessive staffing may slow things down rather than speed things up”
  • Inspections (code, design, requirements) are the most efficient known ways of preventing problems.
  • Studies show that some developers are 20 times more productive than others, and some make 10 times as many errors as others.

Costs

  • Paperwork (requirements, technical specs, etc.) on the largest systems can be larger than one person’s ability to read during an entire career. (Think about that: you join a new project at 21, by the time you hit retirement you haven’t finished reading the documentation.)
  • Most corporate effort (time and money) tracking systems are incorrect and miss between 30% and 70% of the real effort.
  • Thus this data is essentially useless for benchmarking, creates cost overruns because it is so inaccurate, and is actually dangerous because it puts future projects at risk when used for estimates – see the sketch after this list. (Before you question this statement go and check: does your tracking system measure unpaid overtime?)
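
The arithmetic of why this is dangerous is simple. With invented numbers – say the tracking system captures only 60% of the real effort – calibrating the next estimate from recorded “actuals” bakes in an overrun:

```python
# Why under-recorded actuals poison future estimates (invented numbers).
real_effort     = 1000        # person-hours actually spent on the last project
capture_rate    = 0.6         # the tracking system records only 60% of it
recorded_effort = real_effort * capture_rate    # 600 hours in the database

# A similar-sized project estimated from the recorded "actuals":
estimate  = recorded_effort   # 600 hours planned
shortfall = real_effort - estimate
print(f"planned {estimate:.0f}h, will need ~{real_effort:.0f}h "
      f"-> {shortfall / estimate:.0%} overrun baked in")    # 67% overrun
```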

Projects

  • “More than half of large projects are scheduled irrationally” – a delivery date is set external to the development group without reference to capabilities.
  • Over 50% of large (more than 10,000 function points) projects are cancelled and the rest are almost always late and over budget.
  • In 2008 at least 25% of new projects used some elements of Agile.
  • Maintenance is now the dominant form of software development and is likely to stay that way.
  • There are several Y2K like problems likely to occur in the next 40 years.
  • Even waterfall projects overlap phases – typically 25% of any phase is incomplete when the next starts. This rises to 50% when the handover from design to coding is considered.
  • It is even difficult to determine when a project really starts – less than 1% have certain start dates – and 15% have ambiguous end dates.

Outsourced & Offshore

  • Outsourced software development companies are generally better than their clients and have a better record of delivering large systems.
  • Offshore work tends to be less productive than onshore work in the US – that does not mean it is not cost effective, but it is less productive in terms of function points per man/time period
  • Inflation and other cost changes mean that by 2015 the economic advantages of offshoring development work are likely to be gone. (Considering that it takes time to transfer work and offshore teams to become productive it is likely that very soon it will not make sense to send work overseas.)
  • Worryingly Jones reports that 75% of companies have deficient requirements practices. Since offshore work is particularly dependent on good requirements I would expect offshore projects to be more troublesome in this regard.

Standards and Quality

  • Jones finds no solid evidence that ISO 9000 improves quality or the ability to deliver; if anything the opposite, since it increases the cost of projects by increasing the amount of paperwork. Ironically, some ISO 9000 advocates have defended the approach to him on the grounds that ISO 9000 is not designed to improve quality! (Sounds like many people have been misled over the years.)
  • Military systems are the largest systems in the world, some over 350,000 function points. They are also the most expensive and usually paperwork driven.
  • Department of Defense (US) standards do not seem to improve software quality, they have sometimes been impediments and have certainly raised costs.
  • Microsoft, Oracle and SAP are singled out as having very poor quality control.
  • Interestingly he also suggests that once 25% of a large COTS application needs to be customised it is less expensive to build the same application from scratch.

Jones says that as of 2008 there were 700 known programming languages, with one a month appearing. The average life of a language is 15 years but the average life of a major software system is 20 years. These figures might be a little misleading – C is now nearly 40 years old while other languages survive only a few years – but the point is clear: there are lots of dead and orphaned languages out there (over 500, Jones thinks).

As noted before, most schedules are planned around an externally determined date. Jones gives this as the number one reason for project failure; number two is excessive schedule pressure, which is very similar. Later he suggests such schedules are so bad as to constitute professional malpractice.

This is interesting. On the one hand businesses need those deadlines. On the other, if projects were left to run their natural duration would the failure rate be so high?

He also comments that US development teams tend to be strong on technical skills but are let down by weak management and organisational issues. Later in the book he attributes much of this to western management’s preoccupation with cost control, which he contrasts with eastern concern for quality.

This argument seems reasonable but I’m not sure it stands up to analysis. Although it might explain why Scrum can be “successful” even when technical practices are ignored.

In my experience US and European teams don’t always have the technical skills they should have – certainly, talk to Steve Freeman or Jason Gorman and you will hear some horror stories. And while India has more CMM certified organisations than anywhere else, my personal experience is that, although India has some very good developers, the IT bubble has attracted not just second rate developers but third rate ones as well. It’s also worth pointing out that Japan isn’t a software powerhouse; Toyota even struggle to apply Lean thinking to software.

Jones believes, from his studies, that management is the greatest contributor to success and failure at the project and company level. In some ways that is reassuring, it shows that there might be something in good management.

Interestingly, Jones also supports one observation I have frequently made myself: when redundancies occur Test and QA staff are usually the first to be let go, and this is counterproductive in anything except the very short run.

I’ll continue reading and maybe blog some more data. Applied Software Measurement has a tendency to repeat itself and I wish Jones used more graphs for his data, but I still think it is worth reading and I recommend it.

Scrum + Technical practices = XP ?

My last blog post (Scrum has 3 Advantages Over XP) attracted a couple of interesting comments – certainly getting comments from Uncle Bob Martin and Tom Gilb shows people are listening. And it’s Tom’s comment I want to address.

But before I do, lest this come across as Scrum bashing, I should say that I think Scrum has done a lot of good in the software industry. Yes, I had my tongue in my cheek when I wrote “Scrum has 3 advantages over XP”, but if Scrum hadn’t made “Agile” acceptable to management it is unlikely that XP would ever have taken Agile this far.

I suppose I’m a little like Uncle Bob: it’s a bit of a surprise to me that Scrum Certification has come to mean so much to the industry when really it means so little. I’m also worried that it brands poor people “certified” and good people “uncertified”, and stifles innovation.

Second, the ideas of the Scrum originators are good. And let’s not forget that while Schwaber and Sutherland have become celebrities in the community, there were five authors of the original “Scrum: A Pattern Language for Hyperproductive Software Development” (Beedle also co-authored one of the early Scrum books, Devos still works as a Scrum Trainer, while Sharon seems to be Scrum’s Fifth Beatle – where are they now?).

Getting back to the point. Tom Gilb commented: “Sutherland did say in a speech that Scrum works best with XP”. Chris Oldwood also says he’s heard Jeff say this.

In fact, I think I’ve heard Jeff Sutherland say something similar – heck, not just similar, the same – and reported it in this blog. And Jeff’s not alone: I’ve heard the same thing from other Scrum advocates, trainers, and cheer-leaders.

But this view is not universal. There are Scrum trainers who don’t agree – specifically, those who train Scrum and say XP practices like TDD and on-site customer should not be used. And as I sarcastically commented previously, part of the attraction of Scrum to management types is that it doesn’t delve into the messy world of coding.

Let’s just note: there are multiple interpretations of what is, and is not, Scrum. Nothing wrong with that, it just complicates matters.

Back to XP though….

There are, broadly speaking, two parts to Extreme Programming (XP). The technical practices (Test Driven Development, Continuous Integration, Refactoring, Pair Programming, Simple Design, etc.) and the process management side (Iteration, Boards, Planning Meetings, etc.)

It is these technical practices that Jeff, and others, are saying Scrum software development teams should use. Fine, I agree.

Look at the other side of XP, the process side. This is mostly the same as Scrum: do a little translation – things like Iteration to Sprint – and it is the same.

(OK, there are a few small differences. I just checked my copy of Extreme Programming (old testament) and to my surprise stand-up meetings aren’t there; I think many XP teams do stand-ups but they weren’t originally part of XP. The book does include “40 Hour Work Week”, which isn’t present in the Scrum Pattern Language or “Agile Project Management with SCRUM”, but I’ve heard Scrum advocates talk about sustainable pace. I’m still not sure how you marry this with commitment but I’ve written about that before. However, these points are nit-picking.)

Essentially Project Management XP style is Scrum.

This shouldn’t be a surprise. In the mid-90’s when Scrum and XP were being formed and codified many of these people knew one another. If nothing else they saw each other’s papers at PLoP conferences. The Scrum Pattern language was presented at PLoP 1998, Episodes (the basis for XP) was at PLoP 1995, and both draw on the work of Jim Coplien.

I have every reason to believe that XP and Scrum didn’t appear in isolation. There was cross-fertilisation. I have also been told there was competition.

If you accept that Scrum and XP project management are close enough to be the same thing, and you accept Jeff Sutherland’s recommendation (namely do XP technical practices with Scrum) then what do you have?

  • XP = Technical Practices + Scrum Style Project Management
  • Jeff Sutherland says effective development teams = Scrum + Technical Practices from XP

Think about that. Even talk about it, but don’t say it too loudly. For all the reasons I outlined before, XP isn’t acceptable in the corporation. Scrum is. Therefore, we must all keep this little secret to ourselves.
