Speed Bumps on the Road to True Utility Computing

By Max Staines of Compass Management Consulting

Discussions of cloud computing benefits often turn to the virtues of dynamic, usage-based pricing, where IT resources are delivered on demand in response to changing business requirements. Users pay only for what they consume, and don’t have to worry about insufficient capacity or idle resources. The cloud, the argument goes, will finally deliver on the longstanding promise of pay-by-the-drink utility computing.

In reality, the benefits of utility computing have less to do with the cloud or with technology in general, and more to do with the commercial terms by which IT resources are supplied and consumed. Similarly, the obstacles to true utility computing have less to do with technology solutions and more to do with economic structures and incentives; with organizational processes; and with the complexities of how IT is measured, reported, and priced.

Organizational dynamics

Let’s start with usage-based pricing, the foundation of utility computing: use more IT, pay more; use less, pay less. While appealing in theory, in practice this means less control and predictability for both client organizations and service providers. Financial officers might like the idea of lower costs when demand for IT wanes, but when it spikes … not so much.

Service providers, meanwhile, must report expected revenue streams to shareholders, who are notorious for not liking surprises. And while vendors are happy to help clients develop and implement mutually beneficial growth strategies, they’re less receptive to talk of shrink strategies.

This dynamic will change over time, as the need for scalability and flexibility increasingly trumps predictability. Revenue-generating applications and development environments that need to scale up and down quickly will drive utility models, as will compute-intensive applications that can run processes in parallel across multiple machines (today’s “grid” applications).

Among service providers, a true utility model will be most attractive to new entrants aggressively seeking market share. Players such as Microsoft, Amazon, and Google can afford to adhere more closely to the utility model, but so far they still attempt to lock in customers by requiring applications to be re-architected for their platforms.

In the near term, traditional outsourcers will be incentivized to reduce financial risk, reassure investors, meet sales targets, and recognize revenue. As such, they’ll continue to require long-term contracts with minimum volume levels that limit their risk on the “save as you shrink” side of the equation.
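
The effect of such a contractual floor is easy to see in a back-of-the-envelope calculation. The sketch below is purely illustrative; the parameter names, unit rate, and volumes are assumptions, not terms from any real outsourcing contract.

```python
def monthly_charge(units_used, unit_rate, minimum_units=0):
    """Usage-based bill with an optional contractual minimum volume.

    Hypothetical illustration: names, rates, and volumes are
    assumptions, not terms from any real contract.
    """
    # The minimum-volume floor protects the provider's revenue
    # when the client's consumption shrinks.
    billable_units = max(units_used, minimum_units)
    return billable_units * unit_rate

# Pure pay-per-use: savings track shrinking demand.
print(monthly_charge(600, 0.25))                      # 150.0
# With a 1,000-unit minimum, the same shrinkage yields no savings.
print(monthly_charge(600, 0.25, minimum_units=1000))  # 250.0
```

In other words, a client whose demand falls 40% below the committed volume pays exactly as if it hadn’t shrunk at all, which is the “save as you shrink” risk the minimum is designed to eliminate for the provider.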

Operational constraints

In addition to the economic challenges, a successful utility computing initiative must overcome the deeply ingrained inefficiencies and operational constraints within most existing client enterprises and service provider organizations.

On the client side, these constraints can include requirements for a specific solution or approach, resulting in sub-optimal performance. Clients may, for example, require weekly reports on incidents and problems when monthly reports are more than adequate. Misaligned service levels can be a constraint, as can legacy systems, multiple architectures, and inadequate asset management.

These inefficiencies and constraints are then duplicated on the service provider side. Rather than push back and impose process discipline, service providers often take a “customer is always right” attitude or (from a more cynical perspective) a “we get paid by the hour” attitude. Regardless, instead of leveraging standard capabilities across multiple client environments, each service provider account team becomes a silo focused on one client’s particular needs.

When that happens, the economies of scale that a utility model requires can’t be achieved.

All of these problems, moreover, can create confusion about roles and responsibilities, both within the client organization and between the client and service provider team. This results in duplicated effort and further inefficiencies.

To break this impasse, both parties need meaningful incentives to overcome the inertia of the status quo and drive transformation of how IT service is delivered.

Various “tensioning” techniques can encourage the implementation of standard services rather than custom solutions. For clients, chargeback and demand-management mechanisms can introduce transparency into billing and consumption. For service providers, rewarding account teams for growing margin rather than total deal size encourages efficient delivery. Vendor teams can be further incentivized to map service definitions to market offerings rather than to client specifications.
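
At its simplest, chargeback just allocates a shared IT cost to business units in proportion to metered consumption, which is what makes usage visible to the people who drive it. The sketch below is a minimal illustration under that assumption; the unit names and figures are invented, and real chargeback schemes layer in fixed fees, tiered rates, and service-level premiums.

```python
def chargeback(total_cost, metered_usage):
    """Allocate a shared IT cost across business units in proportion
    to metered consumption -- the simplest form of chargeback.

    Illustrative only: unit names and figures are hypothetical.
    """
    total_usage = sum(metered_usage.values())
    # Each unit's share of cost equals its share of consumption.
    return {unit: total_cost * used / total_usage
            for unit, used in metered_usage.items()}

bill = chargeback(90_000, {"sales": 400, "finance": 100, "r_and_d": 500})
print(bill)  # {'sales': 36000.0, 'finance': 9000.0, 'r_and_d': 45000.0}
```

Once each unit sees a bill that moves with its own consumption, demand management has something to work with: the incentive to question marginal usage sits with the consumer rather than with a central IT budget.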