CIO Update Q&A with CheckFree

Nine years ago CheckFree, after a buying binge involving three competitors, decided it needed a better, integrated solution to manage the billions of dollars in daily transactions that banks and other financial institutions depend on the company to handle.

After going live with its new Genesis transaction engine, everything was going well from a transactional standpoint. But over the past few years CheckFree’s management decided the business was becoming commoditized, and the company had better figure out a new role for itself if it wanted to remain relevant to its customers.

Providing insights and information on its customers’ customers, it was decided, was the key to the company’s future. Only one problem: the data the company had so meticulously collected over 25 years was in such disarray, and of such poor quality, that it was essentially useless.

This is when CIO Kevin McDearis stepped in with a data stewardship initiative to turn things around.

CIO Update caught up with McDearis to talk about the problems the company encountered because of its data-quality issues and what he and his team have done to remedy this all too common problem.

CIO Update: Kevin, can you explain what the basic problems were?

McDearis: Because those three (acquired) platforms were so different and none of them had incredible scale characteristics … we built a fourth platform, and that was the Genesis platform.

So, long story short, we were in a hurry to get three platforms merged into one and did a lot around ‘How do you make sure you can deliver high-quality transactions?’, and didn’t do a lot around, on a go-forward basis, how do you make sure that changes to the business model, and the resultant changes to data or business-rule modifications, are consistent.

And over the last few years the other thing that came out of this was that reporting (and kind of ancillary services downstream) were the last things we thought about instead of the first, which, we’ve learned, is where they should have been. It’s taken us 10 years to figure that out.

One of the struggles was that the Genesis platform has thousands of tables and tens of thousands of data elements and business rules, but you only need a few hundred to commit a transaction accurately.

So that means there are tens of thousands of pieces of business data out there that we saw value in collecting … and yet, because that data doesn’t have to be accurate to complete a transaction, it’s not very accurate.

What kind of data were you collecting?

One of the things we struggled with early on was that we couldn’t provide very good information for the billing system, or even just analytics around (customer) behavior patterns, to the bank, or to us, or to any of our partners for that matter, around who’s using what products.

And so as we delivered this platform, we realized, probably five years ago, that the sum of what we are at CheckFree is not really a transaction service so much as where we’re headed is customer insights.

Our biggest assets are data and helping our customers (banks, financial institutions) understand their customers, and we can’t do that if we’re sitting on top of terabytes’ worth of data that would be of higher business value to us and our customers if it were more accurate.

So that spawned a series of initiatives to look at how we begin to manage our data more effectively. Our first initiative was successful on the technology side but not so much on the business side … We made it through the first four years of Genesis without a really good data dictionary; it was in a Word document, impossible to share, and never up to date.

And so we built this data dictionary that really tracked tables, databases, and business rules, and the valid values for those columns. That began to help.
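A metadata dictionary of the kind McDearis describes can be pictured as one record per column, tying the column to its table, its business rule, and its set of valid values. The sketch below is illustrative only; the names and schema are hypothetical, not CheckFree’s.

```python
from dataclasses import dataclass, field

# Illustrative data-dictionary entry: one record per column, carrying its
# owning table, the business rule in plain language, and the valid values.
@dataclass
class ColumnEntry:
    table: str
    column: str
    business_rule: str
    valid_values: set = field(default_factory=set)

    def is_valid(self, value) -> bool:
        # An empty valid_values set means the column is unconstrained.
        return not self.valid_values or value in self.valid_values

# Hypothetical entry for a payment-status column.
status = ColumnEntry(
    table="payments",
    column="status",
    business_rule="Status must be a recognized payment lifecycle state.",
    valid_values={"pending", "settled", "returned"},
)

print(status.is_valid("settled"))   # True
print(status.is_valid("unknown"))   # False
```

Keeping the dictionary in a queryable structure like this, rather than a Word document, is what makes it shareable and checkable against live data.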

What did you do about this problem?

At the same time we rolled out this metadata dictionary for our databases, we … started a project called the Enterprise Process Model, where we said that if we want to be successful as a company we need to understand our high-level business processes.

So we came up with this kind of circular diagram which laid out 14 major high-level business processes, and we really began a concerted Six Sigma effort around designing our business processes for success.

And part of that meant understanding what data each process creates and what data it consumes. And one of our core principles around this whole concept of enterprise management was, ‘If your process creates the data, you own the data.’

And for each of these 14 high-level processes there was (an executive) who owned that process. And because that rule was there … we actually began to get really good traction around, ‘Hey, if I’ve got business rules changing for this piece of data, who do I ask?’ Well, then we could answer that question according to that principle.
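The ownership rule amounts to a two-step lookup: from a data element to the process that creates it, then from that process to its owning executive. A minimal sketch, with entirely hypothetical process and role names:

```python
# Hypothetical mappings: each data element points to the process that creates
# it, and each process points to the executive who owns it.
CREATED_BY = {
    "payment_status": "Process Payments",
    "customer_profile": "Enroll Customers",
}
PROCESS_OWNER = {
    "Process Payments": "VP, Payment Operations",
    "Enroll Customers": "VP, Customer Onboarding",
}

def who_do_i_ask(data_element: str) -> str:
    """Answer the stewardship question: if business rules change
    for this piece of data, who do I ask?"""
    creating_process = CREATED_BY[data_element]
    return PROCESS_OWNER[creating_process]

print(who_do_i_ask("payment_status"))  # VP, Payment Operations
```

The point of the rule is that the second table stays small (14 processes, 14 owners), so every data element resolves to exactly one accountable executive.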

So our next step was creating a well-documented set of roles, and a hierarchy of responsibilities around those roles, for our data stewardship program … and it all began to fit together.

What was the business case for doing all this?

For us … quality is a huge strategic differentiator over our competitors. We literally set the standard for availability for our bill payment and our e-billing service. Nobody can come close to our quality.

But in the early days we didn’t have the quality we have today, and one of the things we began to look at was why not: ‘Well, we don’t have consistent quality because we don’t have consistent processes,’ and that led to, ‘Well, another big reason we don’t have consistent quality is we don’t have consistent data management either.’

So we actually have, nowadays, very rigorous SLAs that carry penalties if we have extended outages. And if you work it out for Six Sigma, it means you can have a one-minute problem every ten thousand minutes, which is not that much time.
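As a rough check on that figure (the ratio is McDearis’s, not the formal Six Sigma defect rate), one minute of trouble per ten thousand minutes works out to 99.99% availability, or an annual downtime budget of under an hour:

```python
# Arithmetic behind "a one-minute problem every ten thousand minutes".
ALLOWED_DOWN_MINUTES = 1
PER_MINUTES = 10_000

availability = 1 - ALLOWED_DOWN_MINUTES / PER_MINUTES  # 0.9999
minutes_per_year = 365 * 24 * 60                       # 525,600

print(f"Availability: {availability:.2%}")  # 99.99%
print(f"Annual downtime budget: "
      f"{minutes_per_year * (1 - availability):.1f} minutes")  # about 52.6
```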

So that really drove a lot of our focus on process, which then enabled our focus on data. A good number of our problems, even in the last 12 to 18 months, are still our bad data practices catching up with us.

What has been the result of all your efforts around data quality?

We see ourselves five years from now being less a transaction service provider and more an information service provider, and if our data isn’t of the quality to enable that, then we actually feel we’ll end up just being a big player in a commodity market, and we’ll experience price compression that will negate our growth goals.

If you think about it, as banks continue to merge and technology continues to evolve, it’s going to be cheaper … for them to move the money themselves. But the real value that CheckFree provides is knowing the cheapest way to route it. So we see ourselves as being able to use that information to provide value on how to make connections efficiently.

We also see ourselves as being able to provide cross-sell, up-sell and profiling revenue opportunities for billers and financial institutions as well.

How big a problem is the data quality issue in large companies in general?

This is something that people, especially large, operations-oriented companies, are struggling with today because, by the same token, there’s a whole host of compliance issues around data quality as well, around managing your data.

I’d really say it’s one of the more burning issues, especially if you operate large databases with lots of customer data.

I think most of the problems that companies experience today, at any level, really stem from the fact that we didn’t have an appreciation for data as an asset in the early days. So we didn’t try to understand why we needed it; we just collected it anyway.

If you want to deliver on customer expectations, you have to start with understanding very well what business processes … you are automating; what data you need to do it; what applications will do that; and the infrastructure underneath it.

And if you handle those four specific areas, in that order, you can deliver scalable, highly available systems with excellent data quality to your customers.