Like most enterprises, Southern Farm Bureau Casualty Insurance (SFBCI) has production servers and servers for testing. And like most enterprises, this means the company’s computing resources are not being optimally deployed.
“The test servers sit idle much of the time, but when they’re needed, they’re really needed,” said Kenneth McCardle, the company’s assistant vice president for Information Systems.
The solution to this problem goes by many names: on-demand computing, grid computing and utility computing are the most common. Simply put, all refer to software and hardware systems that dynamically allocate computing and storage resources based on need and usage, rather than relying on rigid and often inefficient static allocations.
McCardle said his company is just sticking its toes in the on-demand world, but the benefits have already become apparent.
“On many occasions so far, there have been servers I didn’t need to buy,” McCardle said. “And the ROI grows as the shop grows and as the need grows. This has to be the wave of the future.”
A Growing Need
On-demand computing is a relatively new and still-evolving area and can be difficult to define precisely, according to Nancy Hurley, a senior analyst specializing in on-demand computing for the Enterprise Strategy Group.
“The basic premise is that when applications need additional compute power, or if they do not require the compute power they’ve been assigned, the environment automatically assigns or unallocates that power and re-assigns it to another application,” Hurley said. “It’s a very dynamic environment.”
To implement on-demand computing, you don’t install a single instance of hardware or software, but rather a variety of hardware and software, she noted.
“An on-demand environment is very aware of application usage and the needs of applications — that’s a software function,” she noted. “But from there, it triggers actions to give more (or less) compute power to applications, and that’s partly a hardware function. It’s an extremely integrated orchestration of computing hardware, storage hardware and networking, as well as software to monitor and manage the process and each individual element within the enterprise.”
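The monitor-then-reallocate cycle Hurley describes can be pictured as a simple control loop. The sketch below is purely illustrative — the class, function and threshold names are hypothetical, not any vendor’s API — but it captures the basic premise: reclaim capacity from idle applications and re-assign it to overloaded ones.

```python
# Illustrative sketch of an on-demand reallocation pass.
# All names and thresholds here are hypothetical, for explanation only.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    allocated_units: int   # compute units currently assigned
    utilization: float     # fraction of the allocation actually in use

def rebalance(apps, high=0.85, low=0.30):
    """Unallocate units from underused apps; re-assign them to busy ones."""
    pool = 0
    for app in apps:
        if app.utilization < low and app.allocated_units > 1:
            app.allocated_units -= 1          # reclaim idle capacity
            pool += 1
    # Hand the freed capacity to the most heavily loaded applications first.
    for app in sorted(apps, key=lambda a: a.utilization, reverse=True):
        if pool and app.utilization > high:
            app.allocated_units += 1
            pool -= 1
    return apps

apps = [App("billing", 4, 0.10), App("claims", 2, 0.95)]
rebalance(apps)   # billing gives up a unit; claims receives it
```

A production environment would of course drive the same decision from live telemetry and enforce it through hardware and virtualization layers, but the allocation logic is the software half of the “integrated orchestration” Hurley refers to.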
This level of monitoring and automated allocation can save not only money, as McCardle pointed out, but a lot of time. For instance, when adding a new application, it could take days to find server capacity, install the software and re-allocate resources. On-demand computing can reduce that process to hours — or less, said Hurley.
Because of this complexity, on-demand systems are being championed by vendors such as IBM, Hewlett-Packard and Computer Associates, which already have their fingers in many parts of the enterprise infrastructure, Hurley noted. Even so, no one vendor can supply all the pieces for a total on-demand environment.
That complexity, and the diversity of vendors, is one big reason why Southern is sticking its toe in the water slowly, McCardle said.
“There are several stages and we’re just at the beginning,” McCardle said. “Our first stage is monitoring the network and detecting downtime. Stage two was automating recovery when there’s a failure instead of just paging somebody. The third step is proactive — you might determine what a problem is before it’s a problem. Say, you can determine if a drive is at 80% of capacity or if a server has consistently high CPU cycles.”
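The stage-three checks McCardle mentions — a drive at 80% of capacity, a server with consistently high CPU — amount to threshold tests over monitoring data. A minimal sketch, with hypothetical host names and illustrative thresholds:

```python
# Sketch of proactive "stage three" checks: warn before a condition
# becomes an outage. Hosts, limits and sample data are illustrative.
def check_host(name, disk_used_pct, cpu_samples, disk_limit=80, cpu_limit=90):
    """Return warnings for conditions that aren't failures yet."""
    warnings = []
    if disk_used_pct >= disk_limit:
        warnings.append(f"{name}: disk at {disk_used_pct}% of capacity")
    # "Consistently high" here means every recent sample is over the limit.
    if cpu_samples and all(s >= cpu_limit for s in cpu_samples):
        warnings.append(f"{name}: CPU consistently above {cpu_limit}%")
    return warnings

print(check_host("db01", 83, [95, 97, 92]))
```

Stage four, as McCardle defines it next, is wiring the output of checks like these into automatic resource shifts rather than alerts.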
McCardle defined the fourth stage as automatically shifting resources. “We’re already detecting when these things are full. Stage four is automating that process.” To do that, his company is using on-demand modules that are part of Computer Associates’ Unicenter.
Using the software to make more efficient use of its test servers convinced McCardle that on-demand computing is worthwhile. In this case, the point wasn’t to allocate test servers for use in a production environment — that wouldn’t be good policy, he said. But because the test servers were being used far more efficiently, the company was able to buy or replace fewer of them. That made it obvious that the company should expand its deployment of on-demand computing to the production side.
“Just today, somebody put in a request for a new application and we told them we didn’t need a new server — we could take a slice off an old server,” he said. “We saved a lot in hardware costs.”
He noted that the benefits extend well beyond acquisition costs and time savings.
“It’s the floor space, the utilities it takes to run (a server) and the administration — somebody going in there and setting it up,” he said.
On-demand computing should significantly reduce those expenses, and he hasn’t even started thinking about applying the benefits of dynamic allocation to storage.