Restoring Respect for the Computing Profession

We’ve made insecurity and unreliability the norm. It’s true, and as an industry we can no longer deny it.

You can look at studies such as the one performed at the National Institute of Standards and Technology (NIST) in 2002 that identify defect removal as a major cost of software deployment. Or you could simply take a look at pop-tech culture and observe that our buggy software is creating new dictionary entries: spam, phishing, and pharming are only a sampling.

Are bad apps so prevalent that we have resorted to assigning amusing monikers to our failure to protect our users? Is this a situation that any self-respecting software professional can be proud of?

The answers are clearly “Yes” to the first and a resounding “No” to the second. Investigating why this is and what we might do about it is one of the most worthwhile tasks that our industry can undertake.

In fact, it may be the very thing that saves us from creating the next generation of security holes and quality problems that plague our users.

The Off-Target Past

Past attempts at writing secure and reliable code have been decidedly front-loaded. The focus of software development practices has been on specification, architecture and development: the early parts of the software development lifecycle. The intuition was that we needed to focus on preventing defects because “quality cannot be tested in.”

This concept was so intuitively pleasing that many software construction paradigms picked up on it as early as the 1970s: structured analysis/structured design, clean room, OOA/OOD/OOP, and aspect-oriented programming are some examples.

Software defects continued and so did the process communities’ ill-fated attempts to squash them: design by contract, design patterns, RUP, and yes, oh yes, there were more.

Finally, we woke up and realized that such front-loaded processes simply don’t work. The realization that we cannot specify requirements and plan tests in advance when reality changes too fast to predict hit our industry square in the face.

And we answered with more methodologies: extreme (spell it whatever way you wish) programming and agile development took center stage. Progress? Hmm, well, the jury is still out, but we are not holding out much hope.

You see, the problem with all of these methodologies is that they teach us the right way to do things.

Now granted, many industries have figured out the right way to do things. Artists study Picasso, Rembrandt and the many other masters of their craft. Musicians have no lack of masters to study: Beethoven, Handel, Mozart and Bach are only a few. Architects can study the pyramids, the Taj Mahal and, for that matter, Frank Lloyd Wright.

All these professions have existed for long enough that there are many, many examples of people getting it right so that those wishing to follow in their footsteps and master the craft have examples to study.

But it is our sad misfortune (and grand opportunity) to be in the software game so early that no such examples of perfection or inspiration exist. If they did, we’d be studying these “classic” programs so that the new generation of programmers could learn the discipline from those who went before them.

On To Better Ideas

Is it even possible to construct a software development methodology without prior knowledge of how to do software right? We say no and the evidence we present is that software is getting no better.

Indeed, we would argue that the complexity of the systems we build is far outpacing the small advances offered by any of the current menu of development methodologies.

Throw them all away and face the fact that we have no idea how to build a high-quality software system of any reasonable size.

When pop-tech culture stops naming our bugs and the other headaches we create for our users, that may be an indication that we are progressing. But until then, we need a better plan.

We cannot study success in an environment where only failure exists. So we propose, instead, that we study failure and build our development processes rear-loaded.

Let us explain what we mean: there is no clearer indication of what we are doing wrong than the bugs we write, fail to detect and then ship in our products. But all of the past methodologies treat bugs as something to avoid, something to hush up.

This is unfortunate, and we propose that we stop treating bugs as a bad thing. We should embrace our bugs as the only sure way to guide them to extinction. There is no better way to improve than by studying the very thing that makes our industry the laughingstock of engineering disciplines.

We should be studying our bugs.

IT Entomology

We propose starting with bugs and working backwards toward a process that just might work.

Here’s how we think we should proceed:

Step 1: Collect all the bugs that we ship to our customers (paying special attention to security vulnerabilities). Instead of treating them like snakes that might jump out and bite us, consider them corporate assets.

After all, they are the surest indication of our broken processes, misdirected thinking, and mistakes that we have made. If we can’t learn from what we are doing wrong, then shame on us. If we refuse to admit that we are doing wrong, then we have a bigger problem.

Step 2: Analyze each of these bugs so that we a) stop writing them, b) get better at finding them, and c) understand how to recognize when they occur.

Step 3: Develop a culture in our organizations in which every developer, tester and technician understands every bug ever written.

Step 4: Document the lessons learned. This becomes the basis for a body of knowledge about the bugs we write and the basis for a new set of methodologies that are aimed squarely at preventing our most egregious mistakes.
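The lessons-learned document of Step 4 could take many forms. As an illustration only, here is a minimal Python sketch of a shipped-bug record treated as a corporate asset; the field names are our own invented assumptions, not part of any standard schema:

```python
from dataclasses import dataclass

# Hypothetical record for one shipped bug. Each field maps to one of
# the steps above; the names and example values are illustrative only.
@dataclass
class ShippedBug:
    bug_id: str     # identifier in the bug-tracking system
    fault: str      # the coding mistake that introduced it (Step 2a)
    symptom: str    # how it manifested for users (Step 2c)
    missed_by: str  # why testing let it through (Step 2b)
    lesson: str     # what goes into the body of knowledge (Step 4)

# A sample entry in the growing body of knowledge.
knowledge_base = [
    ShippedBug(
        bug_id="BUG-001",
        fault="unchecked buffer length on user input",
        symptom="crash on overlong form field; exploitable overflow",
        missed_by="no test exercised inputs beyond the UI's soft limit",
        lesson="exercise every externally reachable input boundary",
    ),
]

print(len(knowledge_base), "bug(s) recorded:", knowledge_base[0].bug_id)
```

The point of a structured record is not the particular fields but that every shipped bug leaves a searchable trace instead of being quietly closed.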

We can do this by questioning our bugs.

We think the following three questions are a good start and will teach us a great deal about what we are doing wrong. For each bug we ship, we should ask ourselves:

What fault caused this bug in the first place?

The answer to this question will teach developers to better understand the mistakes they are making as they write code.

When every developer understands their own mistakes and the mistakes of their colleagues, a body of knowledge will form inside our development groups that will reduce mistakes, help guide reviews and unit tests and reduce the attack surface for testers.

The result will be better software entering test.

What were the failure symptoms that would alert us to the presence of this bug?

Remember that we are proposing to study bugs that shipped, so the assumption is that each bug either slipped by undetected or was found and deliberately left unfixed.

In the former case, testers will create a body of knowledge and tools for better isolating buggy behaviors from correct behaviors; in the latter, the entire team will learn to agree on what an important bug really is.

The result will be better software shipping to our customers.

What testing technique would have found this bug?

For those bugs that were totally missed in test, we need to understand what test would have found the failure and helped us diagnose the fault. Now we are adding to the testing body of knowledge with tests that actually work to find important bugs.

The result will be more effective tests and a shorter test cycle.
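Once each shipped bug has answers to the three questions, those answers become data a team can aggregate. As a hedged sketch (the fault-category labels are assumptions for illustration, not a taxonomy from this article), a team might tally fault categories to decide which mistake to attack first:

```python
from collections import Counter

# Hypothetical fault categories assigned during bug analysis; in
# practice these would come from the team's own shipped-bug records.
fault_categories = [
    "input-validation", "off-by-one", "input-validation",
    "race-condition", "input-validation", "off-by-one",
]

# Tally the categories; most_common(1) surfaces the mistake the
# team ships most often, i.e., the first target for prevention.
tally = Counter(fault_categories)
worst, count = tally.most_common(1)[0]
print(f"Most common fault: {worst} ({count} shipped bugs)")
# → Most common fault: input-validation (3 shipped bugs)
```

Even a tally this crude turns a pile of closed bug reports into a ranked list of process failures, which is exactly the rear-loaded guidance the three questions are meant to produce.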

Since we cannot possibly understand how to do software right, let’s understand how we’re doing it wrong and simply stop doing it that way. The resulting body of knowledge will not tell us what to do to develop software; it will tell us what not to do.

Perhaps we can follow this rear-loaded process using our existing front-loaded methodologies and meet somewhere in the middle.

Now that’s what we call progress toward a discipline we can all be proud of.

Let the celebration of bugs begin!

Herbert Thompson is director of Security Technology and Research at Security Innovation. Dr. Thompson trains software developers and testers at the world’s largest software companies on security techniques and can be contacted at [email protected].

Founder of Security Innovation, James Whittaker is recognized in business, government and academic circles around the world as a leading authority on software testing. A prolific author and speaker, he has written dozens of papers and articles, and is a frequent keynote speaker for industry and corporate conferences. Contact him at [email protected].