How to Invest in Advanced Analytics

Like just about everything in the so-called “IT Service Management Industry,” the term analytics will provoke many different definitions depending upon whom you ask. For some, it falls purely into that rarefied realm of IT data warehousing, which is, nonetheless, on the rise. For others, it may refer to a predictive self-learning algorithm that just made service delivery a whole lot easier by assimilating multiple data streams and discovering disruptive anomalous patterns across them. For still others, it might mean “if/then” project planning that can consume multiple inputs and spit out a likely outcome in terms of cost, benefit and effort.
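To make that last definition concrete, here is a minimal sketch of “if/then” planning: feed a scenario’s inputs in, get a rough cost/benefit/effort outcome back. Every figure and field name here is an illustrative assumption, not any vendor’s actual model.

```python
# Hypothetical "if/then" scenario planner: all inputs are made-up figures.

def project_outcome(scenario):
    """Roll a scenario's inputs up into cost, benefit and effort figures."""
    cost = scenario["headcount"] * scenario["monthly_rate"] * scenario["months"]
    benefit = scenario["incidents_avoided"] * scenario["cost_per_incident"]
    effort = scenario["headcount"] * scenario["months"]  # person-months
    return {"cost": cost, "benefit": benefit, "effort": effort}

# "If we staff two people for six months, then..." versus doing nothing.
scenarios = {
    "automate_event_triage": {
        "headcount": 2, "monthly_rate": 12000, "months": 6,
        "incidents_avoided": 400, "cost_per_incident": 350,
    },
    "status_quo": {
        "headcount": 0, "monthly_rate": 0, "months": 0,
        "incidents_avoided": 0, "cost_per_incident": 350,
    },
}

for name, scenario in scenarios.items():
    outcome = project_outcome(scenario)
    print(name, outcome, "net:", outcome["benefit"] - outcome["cost"])
```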

For the purposes of this column, I am considering all of the above as analytics, along with more pervasive capabilities such as effective cross-domain event correlation and case-based reasoning for knowledge management at the service desk. There’s a reason for this all-encompassing approach to analytics in IT service management (ITSM) from a management software investment perspective: the industry, the market and more mature adoption strategies are all evolving to approach analytics as a tiered set of layered and complementary investments.
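As a rough illustration of one of those capabilities, here is a hedged sketch of cross-domain event correlation, reduced to its simplest form: events from different domains are grouped when they arrive within a short window. The field names, the 60-second window and the sample events are assumptions for illustration only.

```python
# Toy cross-domain event correlation: bucket events by time window and
# flag buckets that span more than one domain as candidate shared incidents.
from collections import defaultdict

events = [
    {"t": 100, "domain": "network", "resource": "router-3", "msg": "link flap"},
    {"t": 112, "domain": "server",  "resource": "web-1",    "msg": "high latency"},
    {"t": 118, "domain": "app",     "resource": "web-1",    "msg": "timeouts"},
    {"t": 700, "domain": "server",  "resource": "db-2",     "msg": "disk full"},
]

WINDOW = 60  # seconds; correlate events landing in the same bucket

groups = defaultdict(list)
for e in sorted(events, key=lambda e: e["t"]):
    groups[e["t"] // WINDOW].append(e)

for bucket, grouped in groups.items():
    domains = {e["domain"] for e in grouped}
    if len(domains) > 1:  # a cross-domain incident candidate
        print(f"possible shared root cause around t={bucket * WINDOW}:")
        for e in grouped:
            print(f"  [{e['domain']}] {e['resource']}: {e['msg']}")
```

Real correlation engines go much further, of course, folding in topology and dependency data rather than time alone.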

Five years ago, EMA researched various analytic capabilities across 44 service management vendors and found that most of those we targeted supported multiple analytic approaches in their solutions. For instance, 32 vendors supported correlation (as in event correlation) and just as many (although not all the same vendors) supported anomaly detection. Only eight of the vendors queried supported case-based reasoning, but the survey didn’t specifically target service desk vendors. Twenty vendors claimed to support data mining and OLAP, although most of these supported data mining as it applies to versatile query-based reporting rather than true OLAP-cube-driven data warehouse analytics. Oh, and back then five claimed support for neural networking, six claimed support for fuzzy logic and four claimed support for chaos theory — somewhat scary terms that you’re not likely to see in marketing literature today.

I should stress that these analytics and vendors crossed many different domains and functions, from performance and availability management, to asset management and financial planning, to configuration management and capacity planning.

Analytics, or at least intelligence, can conceivably apply at almost every layer of a logical management architecture: from data collection, to data normalization and assimilation, to relationship modeling and topology building, to analytics proper (if there is a single analytics layer, this is it), to automation, visualization and reporting. However, what’s really worth noting is that the industry is gradually evolving towards a paradigm that will liberate every tool from having to be a champion at every stage of this process. Some tools, for instance, excel in data collection; others in data assimilation and reconciliation. And while most tools or solutions excel at a few of these stages, almost none do a great job at all of them.
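One way to picture that layered architecture is as a pipeline of pluggable stages, so no single tool has to excel at all of them. The sketch below uses stage names that mirror the text; the implementations are placeholder stand-ins, not any product’s actual design.

```python
# A layered management pipeline: swap in the best-of-breed tool at each stage.

def collect():
    # stage 1 - data collection (imagine tool A's agents feeding this)
    yield {"src": "agent", "cpu": "93%", "host": "web-1"}

def normalize(records):
    # stage 2 - normalization/assimilation: coerce units, reconcile names
    for r in records:
        r["cpu"] = float(r["cpu"].rstrip("%")) / 100
        yield r

def analyze(records):
    # stage 3 - the analytics layer proper: flag anomalous records
    for r in records:
        if r["cpu"] > 0.9:
            r["anomaly"] = True
        yield r

def visualize(records):
    # stage 4 - visualization and reporting
    for r in records:
        print(r)

# Chain the best tool at each stage rather than one tool for everything.
visualize(analyze(normalize(collect())))
```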

Instead, they are increasingly designed to optimize as a family — including across brands — so that the best data collectors can inform the best analytic and modeling capabilities, which in turn drive automation, visualization and analysis. And no, I’m not dreaming. It’s happening at something of a snail’s pace, but it’s happening nonetheless.

Back in 2002, EMA projected a next-generation architectural model for service management that envisioned a future of “federated data stores supporting cooperative analytic engines,” much as a superhighway can support better cars driving at faster speeds. We believed it would take 30 years.

If anything, the pace has accelerated, thanks in part to the advent and improvement of CMDBs towards more dynamic, federated systems; application discovery and dependency mapping; cross-domain automation (sometimes called runbook automation); and, most recently, a significant uptake in analytics of almost all kinds. And while standards have predictably failed to deliver a quick-fix Holy Grail, growing support for Web services across the industry is taking at least a few bites out of the dread challenge of solution integration.
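To suggest why Web services ease integration, here is a hedged sketch of pulling data from two management tools over plain HTTP and JSON rather than through bespoke proprietary interfaces. The endpoint URLs are hypothetical placeholders, not real product APIs.

```python
# Hypothetical Web-services integration: one generic loop per tool,
# instead of a custom adapter per product. URLs are placeholders.
import json
from urllib.request import urlopen

ENDPOINTS = {
    "monitoring": "http://monitor.example.com/api/events",    # placeholder
    "service_desk": "http://desk.example.com/api/incidents",  # placeholder
}

def fetch(name, url):
    """Pull one tool's data; every tool speaks the same HTTP+JSON."""
    with urlopen(url) as resp:
        return name, json.load(resp)

for name, url in ENDPOINTS.items():
    try:
        tool, data = fetch(name, url)
        print(tool, "returned", len(data), "records")
    except OSError as err:
        print(name, "unreachable:", err)
```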

Although there’s a lot more to say about analytics, for now I’d like to wrap up with some high-level best practices to help you sort through and optimize this increasingly valuable jigsaw puzzle.

Sorting the claims

Don’t take claims of the miraculous at face value. Try to understand enough about how the value was delivered so that you can actually focus and optimize your investments.

Vendor claims for benefits tend to be through-the-roof, and this vice is often magnified when analytics come into play. Most vendors can show you case examples that seem to bear their claims out, but if you don’t take a peek under the hood, you may still be riotously misled: Does the analytics solution primarily gather its own data? If so, how, where and what data? Or does it collect data from multiple sources? Is it predictive or reactive?
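That last question, predictive or reactive, is worth a concrete illustration. The sketch below contrasts a reactive static threshold with a predictive check against a learned baseline; the data, threshold and three-sigma rule are all illustrative assumptions.

```python
# Reactive vs. predictive, in miniature: the hard limit stays quiet while
# the baseline check flags trouble early. All numbers are made up.
from statistics import mean, stdev

history = [0.41, 0.39, 0.43, 0.40, 0.42, 0.44, 0.40]  # learned baseline
current = 0.58

# Reactive check: a static threshold, typical of simpler tooling.
HARD_LIMIT = 0.90
if current > HARD_LIMIT:
    print("reactive alert: hard limit exceeded")

# Predictive check: flag values far outside the learned baseline, even
# though the hard limit is nowhere near being breached yet.
baseline, spread = mean(history), stdev(history)
if abs(current - baseline) > 3 * spread:
    print(f"predictive alert: {current} deviates from baseline {baseline:.2f}")
```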

Understanding the high-level logic of analytic design is more than an elite curiosity; it’s the only way you can meaningfully invest in complementary solutions that will work together. And that brings up the next recommendation.

Get products that work together. While siloed analytic solutions that do just one or two things well have value and have been around for a long time, being able to share analytic insights across a community of solutions can significantly magnify the value of your investments. This is true at both ends of the process: assimilating data from other tools can bring great value as well.
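One common-sense pattern behind “working together” is mapping each tool’s native output into a shared event schema, so any downstream engine can consume insights from any source. The tool names and field mappings below are hypothetical, not real product APIs.

```python
# Adapters normalize two hypothetical tools' records into one schema.
from collections import Counter

def from_monitoring_tool(raw):
    return {"source": "monitor", "resource": raw["host"],
            "severity": raw["sev"], "summary": raw["text"]}

def from_service_desk(raw):
    return {"source": "desk", "resource": raw["ci"],
            "severity": raw["priority"], "summary": raw["title"]}

shared_feed = [
    from_monitoring_tool({"host": "web-1", "sev": 3, "text": "latency spike"}),
    from_service_desk({"ci": "web-1", "priority": 2, "title": "users report slowness"}),
]

# Any analytic engine can now reason over both feeds at once, e.g. noting
# that monitoring and the service desk implicate the same resource.
counts = Counter(e["resource"] for e in shared_feed)
print("flagged by more than one tool:", [r for r, n in counts.items() if n > 1])
```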
