Tomorrow’s Data Center Today

As companies continually look to get more data-crunching ability from the same real estate, the main ingredient of future data centers won’t necessarily be greater processing power or more servers, but improved heat dissipation.

Today, data centers are being pushed to the limit. Blade servers, more “pizza-boxes” (1-U and 2-U rack-mounted servers) crammed into server racks, and the higher-performing (and higher power-consuming) processors in these devices all add up to data centers that can crunch ever-increasing volumes of information.

Over the last four years, for example, data center power consumption has gone up, on average, about 25%, said Christian Belady, a distinguished technologist with Hewlett-Packard. At the same time, processor performance has gone up 500%. “That tells you, you are getting a hell of a lot more transactions per watt.”
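As a rough sanity check on that point (the arithmetic below is a back-of-the-envelope reading of Belady’s figures, not the panel’s own math): a 25% rise in power is a 1.25x multiplier, and “up 500%” most likely means a fivefold increase, though strictly read it could mean sixfold. Either way, transactions per watt improved roughly four- to five-fold:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Power up 25% is a 1.25x multiplier. "Performance up 500%" is read here
% as a fivefold increase; the strict sixfold reading is shown alongside.
\[
\frac{\text{performance multiplier}}{\text{power multiplier}}
  = \frac{5}{1.25} = 4
\qquad \text{or} \qquad
\frac{6}{1.25} = 4.8
\]
% Either reading puts the gain in transactions per watt at roughly
% four- to five-fold over the four-year span Belady cites.
\end{document}
```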

But this increased processing power comes at a price: heat. And a hot data center is a down data center, said members of an expert panel at Liebert’s Incredible Universe users’ show in Columbus, Ohio, on Wednesday.

Panelists included HP’s Belady; Robert Ogden of Lowe’s; Mike Fluegeman of data center design firm Syska Hennessy Group; Peter Panfil of Emerson Network Power (Liebert’s parent company); and Kevin Shinpaugh, director of cluster computing at Virginia Tech (and the man who put together VT’s new Apple G5-based supercomputer).

To counter the heat problem and continue the current trend towards more powerful processors, data center designers are resurrecting an old technology from the mainframe days: liquid cooling.

But to effectively bring liquid back into data centers, standards will need to be developed that allow competing companies to supply existing customers with solutions that don’t require expensive custom retrofits or lock them in to a single vendor, said Belady.

“We’re at a crossroads,” said Belady, “that, if the [data center] industry comes together and aligns on a roadmap, we could lick this problem. There’s no reason we shouldn’t be commoditizing the data center. It actually makes the market bigger for everyone.”

Every panelist agreed that standardization is an important factor going forward, and most said their organizations are actively working to promote more standardized solutions for customers. However, these are early efforts and it may be some time before they bear fruit.

While it is more costly to use liquid (either water or refrigerants) to cool server racks and, in some cases, the servers themselves, liquid is a far more efficient medium for removing heat than the air-circulation systems of today: up to 5,000 times more efficient, said Belady. And the more efficiently heat is removed, the more efficient and expandable the data center becomes.
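The order of magnitude is plausible even from textbook physics: the volumetric heat capacities of water and air alone differ by a factor of a few thousand. The values below are standard room-condition figures, not numbers the panel cited; Belady’s “up to 5,000 times” presumably also folds in flow rates and heat-exchanger design:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Volumetric heat capacity = specific heat x density, at room conditions.
\[
\text{water: } 4180\ \mathrm{\tfrac{J}{kg\,K}} \times 1000\ \mathrm{\tfrac{kg}{m^3}}
  \approx 4.2 \times 10^{6}\ \mathrm{\tfrac{J}{m^3\,K}}
\]
\[
\text{air: } 1005\ \mathrm{\tfrac{J}{kg\,K}} \times 1.2\ \mathrm{\tfrac{kg}{m^3}}
  \approx 1.2 \times 10^{3}\ \mathrm{\tfrac{J}{m^3\,K}}
\]
% Per unit volume and per degree, water carries roughly 3,500 times more heat:
\[
\frac{4.2 \times 10^{6}}{1.2 \times 10^{3}} \approx 3500
\]
\end{document}
```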

By switching to “extreme density cooling and power” solutions, which include liquid cooling, operators of maxed-out data centers will be able to cram more processing power into existing real estate, eliminating the need (in some cases) to either expand facilities or build new ones, said Panfil, vice president of UPS Engineering at Emerson.

Many data centers today already employ some form of liquid cooling, but it is not the dominant form of heat dissipation. And the answer to data center productivity and longevity lies in some combination of existing air-cooling systems and liquid systems, the panelists said.

Many data center designers (and their clients) would like to build for a 20-year lifecycle, yet the reality today is that most cannot realistically plan beyond two to five years, the panelists said. But with the introduction of more efficient heat-dissipation technologies and the standards to support them, many in attendance Wednesday believe a 20-year lifecycle is not only feasible but probable.