Solving the Problem of Large Project Failure

Measuring Progress

If not documents, then what? Two important kinds of outputs of a slice of a software development process are shown in Figure 1: (1) knowledge and understanding on the part of the system builders, and (2) documentation of that understanding.


Figure 1: Milestones defined as measurable increases in knowledge.

Documents are merely evidence that a person has performed certain intellectual activities. For example, a test plan is evidence that a test planner has enumerated the tests that need to be done, and explained their rationale. However, one does not know if test planning is actually complete (has sufficient coverage) unless someone credible and impartial assesses the plan. That is, the plan needs to be verified.

Progress should be measured through tangible outcomes whenever possible, or through independent assessment when there are no tangible outcomes. The outcomes or the assessment are the credible indicators of progress, not the documents. For example, how do you know whether a design is robust enough to proceed with development? The assertion that a design document has been completed is not a reliable indicator, because it is well-known in software development that designs evolve substantially throughout implementation.

How, then, can one tell whether proceeding with development will be productive, or will instead lead to extensive rework and perhaps even to scrapping a first attempt at building the system? Prototypes are useful for this purpose, and so the successful completion of prototypes that address critical design issues is a better indicator of readiness than the completion of a design document. In any case, progress should be seen in terms of the creation of actionable knowledge, not artifacts.

The Scaling Problem

As projects scale, the effects of a document-centric process become more prominent, because those who create the documents tend to be less available to answer questions. Teams create documents and pass them on to other teams, and the original teams are often redeployed to other activities; they might even be located at a separate site. Programmers, testers, and others are expected to pick up the documents and work from those alone. It is as if someone sent you a book of calculus and said, “Here, build a program that implements this.” No wonder large projects tend to fail. Due to pressure to optimize the deployment of resources, large projects tend to consist of many disjointed activities interconnected by the flow of documents. But since documents are information rather than knowledge, and are therefore not actionable, these flows tend to be inadequate.

Agile methods have been extended to large projects. For example, see Scott Ambler’s article Agile and Large Teams. Ambler is the Agile Practice Lead for IBM/Rational and tends to work on very large projects. The basic approach is to decompose the project into sub-projects, define interfaces between the associated sub-components, and define integration-level tests between these sub-components. This is very much a traditional approach, except that documents are not used to define all of this ahead of time. Instead, the focus is on the existence and completeness of the inter-component test suites, on keeping interfaces simple, and on allowing interfaces (including database schemas) to evolve while keeping the inter-component tests up to date.
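
To make the idea of inter-component test suites concrete, here is a minimal sketch in Python. The component names, functions, and fields are hypothetical and do not come from Ambler’s article; the point is simply that the contract between two sub-components lives in an executable test suite that both teams keep up to date as the interface evolves, rather than in an up-front interface document.

    # A minimal sketch of an inter-component ("contract") test.
    # All names and fields below are hypothetical, for illustration only.
    import unittest

    # Hypothetical component A: the ordering sub-component exports order records.
    def export_order(order_id: int) -> dict:
        return {"order_id": order_id, "total_cents": 4999, "currency": "USD"}

    # Hypothetical component B: the billing sub-component consumes those records.
    def compute_invoice_total(order: dict) -> int:
        return order["total_cents"]

    class OrderBillingContractTest(unittest.TestCase):
        """Pins down the fields the billing component relies on.

        If the ordering team evolves the record schema (renames a field,
        changes units), these tests fail, forcing both teams to update the
        contract together instead of discovering the drift at integration time.
        """

        def test_exported_order_has_fields_billing_needs(self):
            order = export_order(42)
            self.assertIn("order_id", order)
            self.assertIn("total_cents", order)
            self.assertIsInstance(order["total_cents"], int)

        def test_billing_can_total_an_exported_order(self):
            order = export_order(42)
            self.assertEqual(compute_invoice_total(order), order["total_cents"])

    if __name__ == "__main__":
        unittest.main()

In a real project each side would live in its own codebase and the contract tests would run in a shared integration build; the design choice is that the tests, not a document, are the authoritative statement of the interface.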