Quality Quest: Mission Impossible – Meeting Software Testing Objectives

By Linda G. Hayes

I was rendered speechless when a fellow professional said, in all seriousness, that she was going to discard the majority of her regression tests because they had failed to find errors. After I recovered my composure (and my voice), I asked why she was considering such a thing, to which she confidently replied, “Well, so-and-so says tests that don’t find problems aren’t worthwhile.”

As it happens, this crazy claim turns out to be based on the earliest and most commonly quoted definition of software testing. Published in Glenford Myers’ 1979 book, The Art of Software Testing, the definition states: “The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product.”

Based on this definition, I can see where my colleague and her informant got the idea that tests that find no errors have no value. I can also see why software testers might rival dentists for the highest depression and suicide rates of any profession.

Proving a Negative

Simply finding errors is an unacceptable purpose for software testing. This approach requires software testers to prove a negative: that there are no more errors to find. To demonstrate this, they would have to know how many errors there were to begin with and where those errors were. If we knew that, we would not need to test; we would just fix the errors.

Furthermore, if you don’t know how many errors exist, how do you know when you are finished testing? How can you measure your tests’ effectiveness? And does this mean that as you contribute to the overall improvement of the software development process, leaving fewer errors to find, your effectiveness as a tester declines as well?

Proving the Pointless

Another reason this “no errors, no value” definition is dangerous is that it lends credence to the idea that all software errors are created equal. It presumes that finding an error, regardless of what or where it is, is valuable. This belief leads testers to invest valuable time and resources concocting obscure, random, and meaningless situations in the hope of “catching” the programmer in a mistake. All the while, the testers are eschewing the most basic and obvious tests, assuming they will work. But what if they don’t?
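To make the point concrete, here is a minimal sketch in Python using pytest. The parse_date function and the myapp.dates module are hypothetical, invented purely for illustration; the point is that the plain, everyday case deserves a test before any exotic one does.

# test_parse_date.py -- hypothetical example; parse_date() and the
# myapp.dates module are assumptions for illustration only.
from datetime import date

import pytest

from myapp.dates import parse_date  # hypothetical function under test


def test_parses_ordinary_iso_date():
    # The most basic, obvious case: the input real users send every day.
    # If this breaks, everything downstream breaks with it.
    assert parse_date("2024-03-15") == date(2024, 3, 15)


def test_rejects_obviously_bad_input():
    # Still a common, realistic failure mode, not an obscure trick case.
    with pytest.raises(ValueError):
        parse_date("not a date")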

Ironically, the true meaning of the term “regression testing” is to look for software functionality that used to work but no longer does, i.e., the software has “regressed.” But, based on Myers’ definition, there is no point in running a test that has found no errors, so once a software function works, it is immune from further testing. Yet existing functionality that stops working poses the greatest risk, since it is already in use. New functionality that doesn’t work may be irritating, but it is probably not devastating.
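By way of illustration, here is a minimal regression test sketch in Python; the invoice_total function and the myapp.billing module are hypothetical assumptions, not anything from the column. The test asserts behavior that already works, which is exactly what makes it valuable.

# test_regression_totals.py -- hypothetical illustration of a regression
# test: it pins down behavior that works today so that any future change
# that breaks it is caught immediately.
from myapp.billing import invoice_total  # hypothetical function under test


def test_invoice_total_still_matches_known_good_result():
    # Prices are in integer cents so the comparison stays exact.
    line_items = [("widget", 2, 999), ("gadget", 1, 2450)]
    # This test "finds no errors" today -- and that is the point.
    # It documents working behavior and fails only if the software regresses.
    assert invoice_total(line_items) == 4448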

Proving Progress

To give credit where credit is due, more recent authors have improved upon the “no errors, no value” definition of testing. In Software Test Automation, written by Mark Fewster and Dorothy Graham in 1999, the purpose of software testing is “to give increased confidence in those areas of the product that work and to document issues with those areas of the product that do not work.” Notice that this definition introduces the value of establishing what does work as well as what doesn’t.

Similarly, the most recent glossary of standards from the British Computer Society Specialist Interest Group in Software Testing (BCS SIGIST) defines testing as “the process of exercising software to verify that it satisfies specified requirements and to detect errors.”