The Path to Software-Defined Power
Fulfilling the potential of green computing in the datacenter requires more than innovative temperature-taming techniques such as liquid or air cooling. The IT equipment itself has to operate with greater efficiency, and part of that could mean incorporating an additional software layer to optimize power usage.
Last month, we spoke with Clemens Pfeiffer, the CTO of Power Assure, about the state and declining usefulness of certain metrics (especially PUE) in evaluating the efficiency of datacenters. Pfeiffer recently joined us again to discuss how datacenters can use the notion of software-defined power, under a broader software-defined datacenter (SDDC) umbrella, to control and optimize power usage within a facility.
For Pfeiffer, there are two key aspects to implementing a software-defined power infrastructure atop the datacenter. The first is software-defined cooling, which would, according to Pfeiffer, “allow for dynamic adjustments of cooling capacity based on the actual heat output of IT equipment under variable load conditions.”
This software would need an underlying monitoring system to feed it. However, for those facilities interested in examining metrics such as PUE, that monitoring system already exists to a certain extent. With that said, more intricate measuring protocols would be ideal, such as that of the Google datacenter, which monitors and models air and heat flow, as discussed here in March.
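The closed loop Pfeiffer describes, matching cooling capacity to measured IT heat output rather than provisioning for peak load, can be sketched roughly as follows. This is an illustrative sketch only: the function names, the 10 percent safety margin, and the step size are assumptions, not details from Power Assure's product; a real system would read from and write to the facility's monitoring and building-management APIs.

```python
# Illustrative sketch of a software-defined cooling control loop.
# All names and constants here are hypothetical stand-ins for a
# facility's monitoring/building-management interfaces.

def required_cooling_kw(it_power_kw: float, overhead: float = 1.1) -> float:
    """Nearly all IT power is dissipated as heat, so target cooling
    capacity tracks the measured IT load plus a small safety margin
    (10% assumed here)."""
    return it_power_kw * overhead

def adjust_cooling(it_power_kw: float, current_cooling_kw: float,
                   step_kw: float = 5.0) -> float:
    """Move cooling capacity toward the target in bounded steps,
    instead of statically oversizing it for worst-case load."""
    target = required_cooling_kw(it_power_kw)
    delta = target - current_cooling_kw
    if abs(delta) <= step_kw:
        return target          # close enough: snap to target
    return current_cooling_kw + (step_kw if delta > 0 else -step_kw)
```

Run periodically against live telemetry, a loop like this lets cooling capacity fall when IT load falls, which is the efficiency gain static provisioning leaves on the table.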
The second portion is a software-defined power solution. This part is a little more intricate on the IT side, as it has to identify which applications are likely to induce significant power costs, potentially before they actually run. According to Pfeiffer, software-defined power would ideally “migrate applications from one datacenter to another and provide power grid integration to intelligently determine the most reliable configuration for datacenters at any given time.”
The idea is to develop a system that would automatically draw upon power usage information from all units within a datacenter and even across multiple datacenters to determine and optimize job scheduling in a power usage context. Further, applications would ideally be moved from one location to another to satisfy power optimization constraints.
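The placement decision at the heart of that idea, weighing each datacenter's power price, overhead, and headroom before scheduling a job, can be sketched as below. The site attributes, price figures, and function names are all hypothetical for illustration; the system Pfeiffer describes would draw on live grid-integration data and facility telemetry rather than static values.

```python
# Illustrative sketch of power-aware job placement across datacenters.
# Site fields and cost model are assumptions for the example, not a
# description of Power Assure's actual implementation.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    price_per_kwh: float    # current grid price at this location
    pue: float              # facility overhead multiplier
    free_capacity_kw: float # headroom available for new load

def job_cost(site: Site, job_kw: float, hours: float) -> float:
    """Energy cost of a job: IT draw scaled by facility overhead
    (PUE), run time, and the local price of power."""
    return job_kw * site.pue * hours * site.price_per_kwh

def place_job(sites, job_kw: float, hours: float):
    """Pick the cheapest site with enough headroom, or None if no
    site can absorb the load."""
    feasible = [s for s in sites if s.free_capacity_kw >= job_kw]
    return min(feasible, key=lambda s: job_cost(s, job_kw, hours),
               default=None)
```

For example, a 40 kW job would land at a site with cheap power and a low PUE even if a closer site has more headroom, and the same logic, run continuously, is what would trigger the cross-facility migration Pfeiffer describes.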
“If you have it automated and can shift applications from one location to another, then you can tie it in with all the information that is out there,” Pfeiffer said in explaining the advantages of shunting applications across or among facilities.
According to Pfeiffer, one of the major roadblocks preventing the adoption of software-defined power and cooling lies in the disconnect that frequently exists between executives and IT engineers. As Pfeiffer noted in a previous conversation, executives are more likely to rely on a single metric to tell the whole story.
In his example, he mentioned the dilemma of an engineer who installed IT equipment that reduced the overall power consumption yet increased the PUE of the facility, a scenario which would not please the executives. In favoring total power efficiency, the addition of a software layer changes the reporting paradigm, making it harder to achieve “good” results simply by playing to one number.
As such, no facility has yet fully implemented a software-defined power paradigm. However, Pfeiffer noted that three facilities on the financial services side will be instituting such a system within the next twelve months. The idea is for it to spread to other dedicated datacenters, such as those in the oil and gas industry, over the next two to three years.
“We are at the point where we are doing the implementation of software-defined power,” Pfeiffer said.
Performance issues aside, as datacenters, both public and private, grow more numerous, their environmental impact must be reduced. A software-defined solution encourages efficiency improvements in all datacenters, not just those in the most temperate locales.