One of the biggest energy consumers in most IT departments is the datacenter. How much power is used, and how much can be saved, depends on each company’s compute and storage requirements and on how far along it is in adopting new, energy-saving technologies and datacenter designs.
Most datacenter power, about 50%, goes toward running compute cycles on servers, blades, or mainframes; roughly 25% goes toward cooling, and the remainder is spread among things like air handlers and lighting.
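As a rough illustration of that split, here is a minimal sketch applying the article’s percentages to a hypothetical 1 MW facility (the 1 MW figure is an assumption for the example, not a number from the article):

```python
# Rough illustration of the power split described above, applied to a
# hypothetical 1 MW (1,000 kW) datacenter. The percentages come from the
# article; the 1 MW total is an assumed figure for illustration.
TOTAL_KW = 1000

split = {
    "compute (servers, blades, mainframes)": 0.50,
    "cooling": 0.25,
    "other (air handlers, lighting, etc.)": 0.25,
}

for load, share in split.items():
    print(f"{load}: {share * TOTAL_KW:.0f} kW")
```

Under these assumptions, compute alone accounts for 500 kW of the hypothetical megawatt, with another 250 kW spent just keeping that compute cool.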
What this means in real-world terms, according to Noah Horowitz, senior scientist with the Natural Resources Defense Council, is that worldwide, about 50 large power plants are dedicated to supplying just servers and air conditioners with power. “And that number is growing exponentially as data needs and storage increase.”
This is because the need for datacenters is, once again, on the rise, said Dave Driggers, CTO of Verari, a maker of high-density blade server clusters, rack-optimized servers, and software solutions. After the dot-com bust, datacenter capacity, like bandwidth, was cheap. But all those cheap CPU cycles have since been absorbed, and new datacenters are getting more and more expensive to build and provision.
“We were just with AT&T, and they talked about their hosting business as a cash cow, a money-making machine,” said Driggers. “They bought quite a few of the companies that were on the chopping block for hosting, and now they say that group is printing money.”
To offset the high cost of building new datacenters, which, according to Driggers, is 10x what it was just five years ago, most CIOs are looking to maximize the output and capacity of the ones they already have. To do this, they need to get more compute power into (and out of) the same physical space.
Luckily, vendors have stepped up to meet this need with new products: more powerful chipsets, multi-core processors, blades, network-attached processing, datacenter management software, virtualization, and rack-mounted liquid cooling, to name a few.
“Basic datacenter design has been pretty much static for the last 20 years … from a cooling and power perspective,” said HP’s Ron Mann, director of Engineering, Enterprise Infrastructure Solutions.
“You try to get as much as you can in that rack because that raised floor, that cooling space, is at a premium. That’s why 1U servers came about, that’s why smaller drives came about, that’s why blades came about, because you want to maximize your compute based on the square footage of datacenter space you have available.”
In real terms, this means most datacenters are consuming more power than they did just a few years ago, when a typical rack drew about two to three kilowatts. Today, most racks draw between seven and 10 kilowatts. That means more heat, and more power both to run the servers and to cool them.
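To put those rack figures in perspective, here is a back-of-the-envelope sketch of the annual electricity bill per rack, then versus now. The kilowatt ranges are the ones cited above; the $0.10/kWh rate and the assumption of continuous 24/7 operation are illustrative, not from the article:

```python
# Back-of-the-envelope annual energy cost per rack, then vs. now.
# Rack draws (2-3 kW then, 7-10 kW today) are from the text; the
# $0.10/kWh rate and constant 24/7 duty cycle are assumptions.
HOURS_PER_YEAR = 24 * 365   # 8,760 hours
RATE_PER_KWH = 0.10         # assumed electricity price, USD

def annual_cost(kw):
    """Annual electricity cost for a constant draw of `kw` kilowatts."""
    return kw * HOURS_PER_YEAR * RATE_PER_KWH

then_low, then_high = annual_cost(2), annual_cost(3)
now_low, now_high = annual_cost(7), annual_cost(10)

print(f"then:  ${then_low:,.0f} - ${then_high:,.0f} per rack per year")
print(f"today: ${now_low:,.0f} - ${now_high:,.0f} per rack per year")
```

Even before counting the extra cooling load that the added heat demands, the per-rack power bill roughly triples under these assumptions.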
On the plus side, this is being done in the same amount of space, which, from a maximization perspective, is exactly what CIOs are after: more computing power from existing facilities.