But as the demand for faster, more reliable computing continues to increase unabated, new energy-saving solutions will have to be employed to sustain that growth. This is because inefficiencies generate every datacenter's worst enemy: heat.
Otherwise, many companies may find they can no longer squeeze any more compute cycles out of their existing infrastructure and will have to build new capacity or buy it, at a premium, elsewhere.
Fortunately, solutions are available to begin this process. New, high-efficiency server power supplies are coming online that offer up to 90% efficiency, said Peter Panfil, VP of Power Engineering for Liebert Solutions, an Emerson Network Power company. Google, for example, just announced an initiative to get 90%-efficient, 12V power supplies into home computers and its own line of servers.
Today's server power supplies are typically only about 70% efficient. In other words, only 70% of the energy a server draws is turned into useful work. Eighty-percent-efficient units are available, but buyers have to specify them when ordering new servers. Unfortunately, existing servers cannot be retrofitted with more efficient power supplies.
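To make the efficiency numbers concrete, here is a rough back-of-the-envelope sketch (not from the article; the 350 W component load is an invented example) of how supply efficiency translates into wasted power:

```python
# Hypothetical illustration: power drawn from the wall vs. power
# actually delivered to server components, at two PSU efficiencies.

def wasted_watts(load_watts, efficiency):
    """Wall-side input power minus useful output power."""
    input_watts = load_watts / efficiency
    return input_watts - load_watts

# A server whose components need 350 W of DC power (assumed figure):
print(round(wasted_watts(350, 0.70)))  # 150 W lost as heat at 70% efficiency
print(round(wasted_watts(350, 0.90)))  # 39 W lost at 90% efficiency
```

Every watt lost in conversion shows up as heat the cooling plant must then remove, which is why the supply upgrade pays twice.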
The goal, of course, is to reduce heat so that more and more computing can be done in the same space. To do this, Panfil recommends his clients take simple steps such as clearing air-duct obstructions, getting rid of excess cabling, and using hot-aisle/cold-aisle setups.
“There’s things you can do today in your datacenter to improve efficiency without doing heroic things,” said Panfil. “What we talk to folks about is to prioritize the efficiency improvements: A 10-percent reduction in the IT power consumption is bigger than a 10-percent reduction in the cooling and it’s bigger than a 10-percent reduction in power.”
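Panfil's prioritization has a simple arithmetic basis: a cut in IT load cascades, because less IT power means less heat, and less heat means less cooling. The toy model below uses invented numbers (a 100 kW IT load and 0.5 kW of cooling per kW of IT) purely to illustrate the cascade:

```python
# Hypothetical model of why a 10% IT-power cut beats a 10% cooling cut.
# All figures are illustrative assumptions, not from the article.

IT_KW = 100.0         # assumed IT load
COOLING_PER_IT = 0.5  # assumed: 0.5 kW of cooling per kW of IT heat

def total_power(it_kw, cooling_factor=COOLING_PER_IT):
    """Total facility draw: IT load plus the cooling it requires."""
    return it_kw + it_kw * cooling_factor

baseline    = total_power(IT_KW)                        # 150 kW
cut_it      = total_power(IT_KW * 0.90)                 # IT cut cascades to cooling
cut_cooling = IT_KW + IT_KW * COOLING_PER_IT * 0.90     # cooling-only cut

print(baseline - cut_it)       # 15.0 kW saved by a 10% IT reduction
print(baseline - cut_cooling)  # 5.0 kW saved by a 10% cooling reduction
```

Under these assumptions, the same 10% cut saves three times as much when applied to the IT load, which is the ordering Panfil describes.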
HP's Mann agrees. He counsels clients to look at the problem holistically, because you can't change one thing without affecting another. "You can't just look at it from a chip perspective or a compute perspective."
That's why HP offers services like a thermal inspection to see where heat is being generated and where cooling is being lost. It is also offering a new class of servers, c-Class blades, that use "power capping" software to throttle back chip power when it isn't needed. This makes them up to 40% more efficient than conventional chipsets running in always-on mode.
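The power-capping idea can be sketched in a few lines: scale a server's power budget with its utilization instead of running it flat out. The wattage figures and the linear ramp below are invented for illustration; HP's actual firmware works differently:

```python
# Toy sketch of power capping: budget watts proportional to utilization.
# FULL_POWER_W and IDLE_FLOOR_W are assumed values, not HP specifications.

FULL_POWER_W = 400   # assumed draw at full load
IDLE_FLOOR_W = 160   # assumed floor when throttled back

def power_cap(utilization):
    """Return a power budget in watts for a CPU utilization in [0, 1]."""
    utilization = max(0.0, min(1.0, utilization))
    return IDLE_FLOOR_W + (FULL_POWER_W - IDLE_FLOOR_W) * utilization

print(power_cap(1.0))  # 400 W at full load
print(power_cap(0.0))  # 160 W when idle, 40% below the always-on draw
```

With these assumed numbers, an idle server capped at 160 W draws 40% less than one left at its 400 W always-on level, mirroring the efficiency gain the article cites.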
Azul Systems, for example, isn't in the energy-saving business, but, sensing an opportunity, it is offering its network-attached processing (NAP) solution as a way to help cut costs. The company's 11U, 16-processor (384-core) on-demand appliance draws just 2.7 kilowatts of power. For datacenter operators, this means they can have a huge reserve of backup compute power available just when they need it most, without having to keep a bunch of servers sitting idle.
“The reason that the compute utilization is so low across the datacenter today is people have no idea how much compute they really need at a given moment for a given application,” said Azul’s COO and co-founder, Scott Sellers. “So the only thing they can do is throw more servers at it.”
Virtualization is also playing an important part by allowing admins to run more applications on fewer servers. Software that optimizes server loads and power usage is also available. Couple this with facilities improvements, such as smart air handlers that maximize air movement, and you begin to put together a viable, leaner, more energy-efficient space that will meet your future needs, at least for now.
“Today’s datacenters, most of them run less than 50% efficient,” said Verari’s Driggers. “So the majority of the power that is going in there is being wasted. The best way to improve the performance of datacenters is through conservation, by not wasting.”