Finding the Business Case for Grids


Until recently, grid computing was an academic tool, used as a cost-effective means for tackling computationally intensive problems. In disciplines like particle physics and meteorology, when problems involved computations that could run in parallel, grids provided what were essentially cheap supercomputers.

Most enterprise applications, though, are designed for centralized processing, and couldn’t easily be diced up and distributed. This partially explains why grids have been slow to take off, despite the hype. Even so, Steve Tuecke, CEO of Univa, a grid software startup, believes that grid computing is a case of Internet history repeating itself.

“As with the Internet, it was first adopted by academics. Then, it moved into large, leading-edge companies before finally gaining wider adoption,” he said.

Wider adoption happened after compelling applications, namely email, took off.

Thus far, Tuecke’s vision is panning out, with grids slowly making their way from academia into large enterprises. Financial institutions, energy companies, and insurance firms have found that they, like academic institutions, have large problems that can be broken down into smaller computation problems and distributed.

If grids stick to this supercomputer model, though, they will eventually hit an adoption wall. As adoption grows and the upstart grid industry prepares its assault on the mid-tier, the concept of grid computing is changing, evolving to encompass more than just distributed computations.

Three Types of Grid

According to Frank Gillett, principal analyst at Forrester Research, many potential enterprise customers are confused by what “grid computing” means. How can you determine whether or not a grid will help your organization if you don’t even know what a grid is?

Forrester separates grids into three categories:

1) Compute grids: the supercomputer model, using linked servers and desktops to create a mainframe on the cheap.

2) Resource grids: grids that pool not only distributed processors but other IT resources as well. Think next-generation data centers.

3) Data grids: grids that distribute database information and storage.

Data grids are receiving less hype than compute and resource grids. They allow organizations to distribute applications that rely on huge amounts of data or that access large databases. Rather than replicating information from site to site, data-intensive applications like ERP and CRM can be distributed, allowing data to be shared, managed, and secured across sites.

While this is a useful computing model, most current enterprise grids are compute grids. If grids do gain widespread adoption, though, most bets are being placed on resource grids as the driver.

Hartford’s Compute Grid

Insurance and investment giant The Hartford is a good example of this. The Hartford adopted a compute grid after it ran into a scalability problem, but it’s looking ahead to resource grids of the future.

As the largest seller of variable annuities in the world, The Hartford needed a way to measure the risks involved with these policies. The Hartford’s variable annuities include a living benefit, which assures policyholders that they will at least be able to recoup their principal from the annuity if the principal is withdrawn slowly over time.

The risks involved here are complex, since they combine both traditional insurance risks and financial market risks. To protect itself from losing money on these annuities, The Hartford must hedge the risk of guaranteed returns, making sure that no matter what happens in the stock market they are covered.

Their equity market hedging application required huge numbers of computations. Not only must the program account for a client base of nearly two million, but it must also consider the many possible permutations of financial market behavior.
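The Hartford's actual model is proprietary, but the shape of the problem explains why it distributes so well: each simulated market path is independent of every other, so the work can be split across any number of grid nodes. As a minimal sketch, here is a toy Monte Carlo estimate of the insurer's exposure on a guaranteed-principal annuity. All figures, the 20-year horizon, and the return parameters are invented for illustration, and `multiprocessing` stands in for real grid middleware:

```python
import random
from multiprocessing import Pool

def simulate_scenario(seed):
    """Simulate one market path for a slow-withdrawal annuity and
    return the insurer's shortfall, if any. Illustrative numbers only."""
    rng = random.Random(seed)
    principal = 100_000.0          # assumed initial annuity principal
    balance = principal
    withdrawn = 0.0
    for year in range(20):         # assumed 20-year withdrawal horizon
        # Assumed market return: ~6% mean, 15% volatility per year.
        balance *= max(0.0, 1 + rng.gauss(0.06, 0.15))
        withdrawal = min(balance, principal / 20)
        balance -= withdrawal
        withdrawn += withdrawal
    # The living benefit means the insurer covers any gap between the
    # guaranteed principal and what the policyholder actually recovers.
    return max(0.0, principal - (withdrawn + balance))

def expected_shortfall(n_paths=10_000, workers=4):
    """Fan the independent scenarios out across worker processes."""
    with Pool(workers) as pool:
        shortfalls = pool.map(simulate_scenario, range(n_paths))
    return sum(shortfalls) / n_paths

if __name__ == "__main__":
    print(f"Expected hedging cost per policy: {expected_shortfall():.2f}")
```

Because no scenario depends on another's result, scaling out is just a matter of giving each node a different slice of the seed range. That same independence is what made the application a natural first workload for a compute grid.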

“There were alternatives to grids, such as using a mainframe,” said Chris Brown, director of advanced technologies at Hartford Life, a subsidiary of The Hartford. “But the alternatives were expensive, and the ability to scale with a grid was much better.”

While many enterprises choose grids to lower operational costs, Brown said that The Hartford focused on scalability. Even so, Brown noted that you can’t overlook the cost equation. “It wasn’t the principal reason for building our grid, but versus a more conventional solution, the grid has saved us millions of dollars.”

The hedging program was originally run on a server cluster, but adding capacity in a cluster format involved a more manual, and more expensive, process of scaling. With a grid in place, new servers and eventually other IT resources can be added in an almost plug-and-play fashion, and The Hartford avoids sinking money into excess capacity that may or may not be needed.

Since the grid market is still rather nascent, The Hartford built its own, following the compute-grid model pioneered in academic circles and opting for the University of Wisconsin’s open-source Condor grid software.
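Condor (now HTCondor) jobs are described in plain-text submit files that tell the scheduler what to run and how many copies to queue. A hypothetical submit description for a hedging worker might look like the following; the executable name and argument are invented for illustration:

```
# hedge.sub -- hypothetical Condor submit description
universe   = vanilla
executable = hedge_worker
arguments  = --scenario-block $(Process)
output     = out.$(Process)
error      = err.$(Process)
log        = hedge.log
queue 500
```

Submitting this queues 500 instances, each receiving a different `$(Process)` number, which is how an embarrassingly parallel workload gets carved into independent pieces across the pool.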

Currently, The Hartford is running a server-based compute grid, but Brown noted that desktops will be brought into the mix soon. Further out, he is contemplating the resource-grid model, considering tying in more and more IT resources, and if resource grids gain traction as an IT model, the company may even explore offloading certain internal functions to service providers.

Resource Grids

Much of what resource grids promise can be summed up by two words: efficiency and flexibility.

Existing data centers tend to be built using the silo approach: excess capacity sits underutilized, and when a business priority shifts, it is difficult to redeploy resources. With resource grids built on such technologies as blade servers, open-source operating systems (OS), and modular storage, resources can be ramped up as needed, while capacity previously devoted to outdated applications can be shifted to more pressing needs on the fly.

“The shared usage model—allowing distributed users to access different applications and resources—makes a lot of sense,” said Univa’s Tuecke, “but you need a way to mediate that, to prioritize users and applications, to centralize security, to manage how data is accessed and stored.”

Companies like Univa, Platform Computing, and even IBM are exploring software solutions to make resource grids enterprise-friendly.

Developing what is essentially a grid OS is certainly a step in the right direction, but is that enough to lure more enterprises?

Joe Clabby, vice president and practice director at Summit Strategies, a market research firm, doesn’t think so. “Grid vendors are starting to do a good job of making middleware transparent and easy to deploy,” he said, “but where are the applications?”

A grid OS (or middleware) will attract large organizations that can devote IT resources to porting existing applications to grids or to writing new applications for grids. However, the mid-tier won’t be enticed until there are plenty of easy-to-manage applications that are grid friendly.

The Cost Hurdle

The cost effectiveness of grids, it has been argued, could pull in the mid-tier, but the cost equation isn’t even close to being solved. Specifically, many enterprise applications are priced on a per-CPU basis. When you move those applications to grids, you may save on infrastructure, but you’ll get buried by the licensing fees.

Even so, Tuecke argues that applications are already beginning to emerge and that the pricing models will be worked out soon enough.

“When you look at IT operations, you see that hardware and software costs are dropping, but overall IT expenditures are still going up,” he said. “Why? It’s expensive to deploy and manage those IT resources.”

Grids make IT management more cost effective and flexible, and that may soon be enough to justify their adoption.