Sourcing: Keep it Simple and Standard

In a traditional global IT enterprise, different business units, geographic regions, and functional entities each do things their own way. Service providers, meanwhile, have generally accommodated these specialized requirements. After all, the customer is always right, right?

This mindset is changing as business organizations gradually accept the proposition that a great majority of their IT requirements can be addressed through standard services. In other words, CIOs are realizing that, most of the time, they don’t need 20 different flavors of service delivery when vanilla will do. Vendors are recognizing that standardized delivery allows them to benefit from significant economies of scale.

This growing emphasis on standard services is making the longstanding concept of “utility computing” or usage-based pricing increasingly feasible. Under a utility model, businesses pay internal or external providers only for IT resources consumed, rather than for infrastructure and equipment. Top-performing client organizations and service providers are making significant progress in applying this concept to IT delivery and consumption.

Benefits of utility computing

A standard services delivery model that fully leverages utility computing can deliver significant benefits. Compass data shows that traditional improvement initiatives drive incremental efficiency gains within the existing operational environment, and typically produce annual savings of 10 to 20 percent. Transitioning to a utility-based model and a standard service platform, by contrast, can produce savings of 40 percent or more.

In other words, rather than improving performance within the context of the existing model of operations, IT leaders are using standard services to raise the bar and define a new standard of performance.

This “new way of doing things” is being driven less by technology than by a fundamental shift in the commercial terms of IT service delivery. In an infrastructure utility (IaaS) model, for example, rather than paying a service provider a fixed price X per server, a client pays a usage-based price Y per CPU minute. The service provider is now incentivized to deliver that CPU resource as efficiently as possible, and no longer has a financial stake in delivering more servers.

The client, meanwhile, no longer has to worry about having too many or too few servers, gains transparency into how the business consumes IT resources, and can make more informed decisions about consumption priorities.
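The commercial shift described above can be sketched with a simple cost comparison. All prices, server counts, and utilization figures below are illustrative assumptions for the sake of the arithmetic, not figures from Compass data:

```python
# Hypothetical comparison of per-server vs. per-CPU-minute billing.
# Every number here is an illustrative assumption.

SERVERS = 100
PRICE_PER_SERVER_MONTH = 500.0    # fixed fee per provisioned server (assumed)
PRICE_PER_CPU_MINUTE = 0.02       # usage-based fee (assumed)
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200

def fixed_cost(servers: int) -> float:
    """Traditional model: pay for provisioned capacity, used or not."""
    return servers * PRICE_PER_SERVER_MONTH

def utility_cost(servers: int, avg_utilization: float) -> float:
    """Utility model: pay only for CPU minutes actually consumed."""
    consumed_minutes = servers * MINUTES_PER_MONTH * avg_utilization
    return consumed_minutes * PRICE_PER_CPU_MINUTE

# At 25% average utilization -- common in over-provisioned estates --
# the utility model bills only the consumed quarter of capacity.
print(f"fixed:   ${fixed_cost(SERVERS):,.2f}")
print(f"utility: ${utility_cost(SERVERS, 0.25):,.2f}")
```

Under these assumed rates, the fixed model bills the full estate regardless of use, while the utility bill scales with consumption; note that at high utilization the incentive reverses, which is exactly why pricing mechanisms must be negotiated jointly, as discussed below.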

Put simply, the move towards standard services drives efficiency from the delivery side, and improved demand management from the consumption side.

Operational constraints

While the concepts behind standard services and utility computing are straightforward, implementing the model presents some formidable challenges. Specifically, a successful utility computing initiative must overcome the deeply ingrained inefficiencies and operational constraints that characterize most existing client enterprises as well as service provider organizations.

For example, clients often dictate a specific solution or approach, in many cases for no better reason than “that’s the way that things have always been done.” Rather than push back to impose process discipline, service providers often accommodate the unique requirements. The net result is that, rather than leveraging standard capabilities across multiple client environments, each service provider account team becomes a silo focused on a particular client’s particular needs.

In many cases, both the client and service provider recognize the inefficiency of the status quo, but lack the incentives to drive significant change. Practices that promote standard service delivery include external benchmarks that encourage internal comparison and competition, as well as identifying constraints and quantifying their impact on performance. Benchmarks can also expose the cost of unique requirements and allow the business to subject them to value-based reviews. Similarly, service levels can be gauged against business requirements.

In addition, a variety of “tensioning” techniques can encourage account and delivery teams to provide standard services rather than custom solutions. For example, service provider account teams should be compensated on margin rather than total revenue. Clearly defined demand management controls can drive standardization by creating the need for greater transparency into billing and consumption. This, in turn, builds awareness of cost drivers. Service provider supply teams can be incentivized to correlate service definitions to market offerings, rather than to client specifications.

At a higher level, a successful initiative requires that both the client and service provider organizations understand and accept the concept of a standard IT services delivery model and its characteristics. Accordingly, all parties must agree on and work jointly toward defining pricing mechanisms that create incentives to drive efficiency and eliminate operational constraints.

In planning the change initiative, a baseline analysis of the existing environment is necessary to quantify the current state and the “size of the prize” of an optimized environment. With the target identified, the roadmap to the standard delivery model can be charted. The implementation plan begins with a detailed analysis of existing constraints and inefficiencies; around this analysis, the services framework and pricing mechanisms that eliminate those constraints and enable utility delivery can be built.

Further, the plan should ensure that potential benefits are delivered throughout the change process. This allows the implementation to be self-funding, as the benefits realized along the way can be invested in driving further efficiency and improvement.

Max Staines is North America President of Compass Management Consulting.