Are Clouds the Fastest Path to Green Computing? 

Public and private clouds have been promoted as paradigms of efficiency across a number of dimensions, not the least of which is energy efficiency and the associated carbon footprint. But is it true? That's what a recent study by the Natural Resources Defense Council (NRDC) and WSP Environment & Energy LLC (WSP) set out to determine. What they discovered revealed a lot about how to make datacenters green, or at least greener.

More specifically, the study, which can be read in its entirety here, wanted to see if on-premise computing was as efficient as cloud computing from an energy and carbon footprint point of view. The intended audience is small and medium-sized organizations (SMOs), since they are the ones with the most to gain from sprucing up their datacenters. The Googles and Facebooks of the world have already figured a lot of this out, albeit empirically.

What the study concluded was that, in general, clouds did offer the most efficient environment, but there were some exceptions. And it was these exceptions that offered the most interesting revelations about how to shrink a datacenter's energy and carbon budget.

The study used publicly available data from existing research and took into account such things as datacenter Power Usage Effectiveness (PUE), server utilization, server refresh, virtualization rates, and carbon emission measurements (kilograms of CO2 equivalent per kilowatt-hour). Four application domains were considered: office productivity, content management, business admin, and utilities. As it turns out, the type of application didn't matter much compared to the five key metrics listed above.
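
To see how these metrics fit together, here's a quick back-of-the-envelope sketch in Python. The input values (a 10 kW server room, a PUE of 2.0, an emission factor of 0.5) are purely illustrative assumptions, not figures from the study.

    # Rough model: facility energy = IT energy x PUE, and
    # emissions = facility energy x grid emission factor (kg CO2e per kWh).
    def annual_footprint(it_load_kw, pue, emission_factor):
        hours_per_year = 24 * 365
        it_energy_kwh = it_load_kw * hours_per_year
        facility_energy_kwh = it_energy_kwh * pue   # adds cooling and power distribution overhead
        emissions_kg = facility_energy_kwh * emission_factor
        return facility_energy_kwh, emissions_kg

    # Illustrative 10 kW server room with middle-of-the-road assumptions:
    energy_kwh, co2_kg = annual_footprint(it_load_kw=10, pue=2.0, emission_factor=0.5)
    print(f"{energy_kwh:,.0f} kWh/year, {co2_kg:,.0f} kg CO2e/year")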

With regard to the computing environment, the comparison was more granular than just cloud and non-cloud. It actually studied five different types of facilities: on-premise (not virtualized), on-premise (virtualized), colocation (not virtualized), private cloud, and public cloud. Each facility environment was further broken down into worst case, average and best practice subtypes.

Public clouds that reflected best practice deployments yielded the best efficiencies – a datacenter PUE of 1.1, a 70 percent server utilization, a refresh period of one year, a 12:1 virtualization ratio, and a carbon emission factor of just 0.268 kg of CO2 equivalent per kilowatt-hour. At the other end of the spectrum were the worst case versions of non-virtualized on-premise datacenters: a PUE of 3.0, a paltry 5 percent server utilization rate, a refresh period of 5 years, and an emission factor of 0.819.
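
Plugging those two sets of numbers into the same kind of back-of-the-envelope model shows how the factors compound. The 400-watt server and the assumption that useful work scales linearly with utilization are mine, not the study's, so treat the resulting ratio as a rough illustration only.

    # Compare CO2 per unit of useful work for the two extremes quoted above.
    # Assumes a 400 W server and that useful output scales with utilization.
    def kg_co2_per_unit_of_work(pue, utilization, emission_factor, server_kw=0.4):
        facility_kwh_per_hour = server_kw * pue
        useful_work_per_hour = utilization          # arbitrary "work units"
        return facility_kwh_per_hour * emission_factor / useful_work_per_hour

    best = kg_co2_per_unit_of_work(pue=1.1, utilization=0.70, emission_factor=0.268)
    worst = kg_co2_per_unit_of_work(pue=3.0, utilization=0.05, emission_factor=0.819)
    print(f"worst case emits roughly {worst / best:.0f}x the CO2 per unit of work")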

None of that should be surprising. The large public clouds, exemplified by operations like Google, Amazon, Facebook and others, can take advantage of their economies of scale, and have the resources and financial incentive to squeeze the maximum compute from their hardware. On-premise set-ups, on the other hand, are at the mercy of local budgets, small-scale computing setups, and often minimal expertise.

What is surprising, though (or maybe not), is that best practice on-premise datacenters delivered nearly the same results – a PUE of 1.5 and 60 percent server utilization – as the top public clouds. Likewise, best practice private clouds delivered similar numbers.

As it turned out, the most striking driver of datacenter efficiency was virtualization. As the study points out, it doesn't take much more energy to run a fully loaded server than a partially loaded one. So if you can decrease the number of servers by consolidating applications onto fewer boxes, it is bound to make a big difference in power usage.
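
A simple linear power model makes the point. The idle and peak wattages below, and the 12 lightly loaded applications, are assumed for illustration; only the general shape – high idle draw, modest growth with load – comes from the consolidation argument above.

    # Simple linear server power model: power rises only modestly with load,
    # so many lightly loaded servers waste far more energy than a few busy ones.
    def server_power_watts(utilization, idle_w=200, peak_w=400):
        return idle_w + (peak_w - idle_w) * utilization

    apps, per_app_load = 12, 0.05    # 12 apps, each needing ~5% of one server

    # Before: one app per physical server.
    before_w = apps * server_power_watts(per_app_load)

    # After: all 12 apps consolidated onto a single virtualized host.
    after_w = server_power_watts(apps * per_app_load)

    print(f"before: {before_w:.0f} W, after: {after_w:.0f} W "
          f"({before_w / after_w:.1f}x reduction)")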

Higher ratios of applications per server mapped quite well to higher server utilization rates. For example, a ratio of 10:1 or higher seemed to be required to get above the 50 percent utilization mark. It's worth noting that even the best non-virtualized on-premise and colo setups only managed to keep their servers running at 25 percent capacity.

Carbon emissions, though, seemed to be aligned with PUE and server refresh. Not surprising – older and less efficient systems invariably resulted in higher carbon emissions. Low server utilization rates did not necessarily lead to high emissions, although the lowest utilization numbers did correspond to the highest emissions. The results here seemed a bit muddled, since the source of electricity can have a big influence on CO2 emissions – coal, nuclear, hydro and wind obviously have very different carbon profiles – and these weren't correlated in the study.

For on-premise, non-virtualized datacenters, the authors recommended virtualization as the low-hanging fruit. Shortening refresh cycles and improving the facility's PUE also have significant value, together offering a 10 to 30 percent boost in efficiency. But if you're after minimizing the carbon footprint, move your apps to a public cloud running off hydroelectric power, e.g., in the Pacific Northwest. That can reduce CO2 by nearly a factor of 50.
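
The factor-of-50 figure makes sense once you remember that emissions scale linearly with the grid's carbon intensity. A rough check lands in the same ballpark before any PUE or utilization gains are even counted; note that the hydro-heavy grid factor of 0.02 kg CO2e per kWh below is an assumed value, not one taken from the study.

    # Emissions scale linearly with the grid's carbon intensity, so the choice
    # of grid can dwarf the 10 to 30 percent gains from refresh and PUE tuning.
    # 0.819 is the study's worst-case emission factor; 0.02 kg CO2e/kWh for a
    # hydro-dominated grid is an assumed illustrative value.
    worst_case_grid = 0.819
    hydro_heavy_grid = 0.02
    print(f"grid switch alone: roughly {worst_case_grid / hydro_heavy_grid:.0f}x less CO2 per kWh")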

Missing from the study is the cost involved in implementing these changes. Things like the ROI of a forklift upgrade of your server room are left for the reader to discover. Likewise, there are no price comparisons of moving to the cloud versus implementing the changes in-house. That would require a study of much broader scope and ambition.

EnterpriseAI