Whether we know it or not, we’ve probably all been duped by a well-crafted IT survey at one time or another.
This is not to say all surveys are bad or useless. But sometimes results can be misleading. Leading questions can skew responses toward a certain result, and a faulty data-collection methodology can make a survey’s results unrepresentative of the larger truth.
‘Are you still using Windows?’, for example, is a leading question with an assumption built in, i.e., that you use, or at least once used, Windows in the first place. The statistical results from such a question are not particularly meaningful because it doesn’t take other operating environments into account.
Vendor “surveys” in particular are notoriously skewed, says Joshua Greenbaum, a principal at Enterprise Applications Consulting. Siebel Systems is an example of a company that uses customer satisfaction surveys to great effect, quoting numbers from leading questions in its quarterly earnings reports as proof that its customers rate the company very highly. (See Greenbaum’s Datamation column on this subject.)
“I have a survey research background, and what I found out is that to do anything that is methodologically pure is virtually impossible in this market,” he says.
Even IT research firms, which live and die based on reputation, still have an underlying incentive to sell services to customers. The more problems uncovered, the more services can be sold, says Greenbaum. In Greenbaum’s experience, 90% of business surveys are methodologically flawed.
“And that may be charitable,” he says. “But to paraphrase Sir Winston Churchill, ‘Democracy is the worst system possible unless you consider the alternatives.’ The point: … in 90% of cases, this is the best data you can get. Some data is generally better than no data.”