by Chuck Tatham of CiRBA
The green movement and the “greening” of the data center attracted some attention for a time but never really caught on as a meaningful trend. Having a greener data center is admirable from a corporate responsibility perspective, but in many cases power is of interest due to more pressing operational concerns. It’s not unusual to hear about data centers that simply can’t access more power for growth, or organizations for which power costs consume too large a portion of IT spend.
If solving these problems also makes an organization more “green”, then great, but the prime concern is the core issue at hand: a power constraint can force the construction of a new data center at a cost of tens of millions of dollars.
The challenge in planning or managing power consumption is that, historically, it has been dealt with at a macro level. An organization may have a sense of the actual power consumed at the facility level (as reported by the utility) or at the server rack level but, generally, not beyond these high-level points of aggregation.
The issue is that the rate of power consumed relates to how much “work” is being performed by the equipment housed in the data center. Servers, storage and networking gear all consume more or less power depending on how heavily they are utilized. To get to the root of power issues it has become necessary to understand consumption at the server level.
Power should be considered part of the general infrastructure capacity food group along with CPU, IO, RAM and storage. Historically, this hasn’t been possible without investment in specialized plug-level power monitoring, and even then there was no correlation between actual workloads and consumption; pretty crude.
There are many systems that can track server utilization from a CPU perspective. Combined with manufacturers’ specifications on power draw, it is possible to estimate power consumption as a function of CPU behaviour. For example, if a server is operating at 50% of its CPU capacity, its power consumption will fall somewhere between its idle draw and its maximum draw.
A number of companies have come up with calculations that can get you fairly close to the actual draw. This kind of estimate, although rough, can go a long way toward understanding power needs at a workload level.
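As a rough sketch of how such a calculation might work, one common approximation is a linear interpolation between a server’s idle and peak draw. The wattage figures below are hypothetical, and real models from vendors are more sophisticated:

```python
def estimate_power_watts(cpu_util, idle_watts, max_watts):
    """Estimate instantaneous server power draw from CPU utilization
    using a simple linear interpolation between idle and peak draw."""
    if not 0.0 <= cpu_util <= 1.0:
        raise ValueError("cpu_util must be between 0.0 and 1.0")
    return idle_watts + (max_watts - idle_watts) * cpu_util

# Hypothetical server: 200 W at idle, 400 W at full load.
# At 50% CPU utilization the estimate is 300 W.
print(estimate_power_watts(0.5, idle_watts=200, max_watts=400))  # 300.0
```

Note that the idle figure matters: a server doing nothing still draws a substantial fraction of its peak power, which is why simple “percent of max” estimates tend to undercount.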
Intel’s latest server architectures can now report real-time power consumption data. This data can be tracked over time alongside other capacity metrics such as CPU, IO and RAM, providing a direct correlation between server utilization and power.
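These interfaces typically expose power as a cumulative energy counter rather than an instantaneous wattage, so average power over a sampling interval is derived from two readings. A minimal sketch, assuming a RAPL-style counter in microjoules that can wrap around (the readings below are made up):

```python
def average_power_watts(energy_uj_start, energy_uj_end, interval_s,
                        counter_max_uj=2**32):
    """Compute average power over an interval from two readings of a
    cumulative energy counter in microjoules, handling one wrap-around."""
    delta_uj = energy_uj_end - energy_uj_start
    if delta_uj < 0:              # counter wrapped between readings
        delta_uj += counter_max_uj
    return (delta_uj / 1_000_000) / interval_s   # microjoules -> joules -> watts

# Two hypothetical readings taken 10 s apart: 1,500 J consumed -> 150 W average.
print(average_power_watts(2_000_000_000, 3_500_000_000, 10.0))  # 150.0
```

Sampled periodically, these averages become a power time series that can sit alongside CPU, IO and RAM metrics in the same capacity database.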
Whether you choose to use estimated power consumption or invest in servers that report it directly, you will need to make sense of the data and factor it into decisions. Analytics that combine server behaviour and key metrics with power data and business policies can ensure that your hardware choices, workload layouts and even time-of-day placements are fully informed by the realities of power consumption.
Having this kind of visibility can help you avoid costly data center builds and maximize the density of what you already own. Understanding the demands of mission-critical workloads also allows for DR planning that prioritizes applications to fit on a subset of infrastructure while you work to bring everything back to normal.
The pinnacle of sophistication in power management is placing workloads by time of day, such that during off-peak periods you can turn some servers off entirely for meaningful stretches of time and save money, and some trees.
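To illustrate the idea, off-peak consolidation can be framed as a bin-packing problem: squeeze the remaining workload demand onto as few servers as possible and power down the rest. The sketch below uses a first-fit-decreasing heuristic with made-up capacity units; real placement engines weigh many more constraints (RAM, IO, affinity, business policy):

```python
def consolidate(workloads, server_capacity):
    """First-fit-decreasing packing: place off-peak workload demands onto
    as few servers as possible so the remainder can be powered down.
    Returns the number of servers that must stay on."""
    servers = []  # each entry is the remaining free capacity on one server
    for demand in sorted(workloads, reverse=True):
        for i, free in enumerate(servers):
            if demand <= free:
                servers[i] = free - demand
                break
        else:
            servers.append(server_capacity - demand)  # power on another server
    return len(servers)

# Hypothetical off-peak demands (capacity units) against servers of size 10:
print(consolidate([6, 5, 4, 3, 2], 10))  # 2 servers stay on; the rest sleep
```

With ten servers of this size running during the day, the same demand fits on two at night, leaving eight candidates for power-down until the morning peak returns.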
Chuck Tatham is SVP of operations and business development at CiRBA, a data center intelligence analytics software provider that determines optimal workload placements and resource allocations required to safely maximize the efficiency of Cloud, virtual and physical infrastructure.