The temperature's rising in online brokerage Scottrade Inc.'s data center — and that's a good thing. Letting the room run warmer has allowed the St. Louis-based company to reap significant energy savings while increasing reliability.
Six months ago, CIO Ian Patterson hired the engineering firm Glumac to construct a computational fluid dynamics (CFD) model of Scottrade's data center. The model provided a complete picture of the facility's thermal airflows.
Samuel Graves, chief data center mechanical engineer at Glumac, oversaw the effort. “Much can be learned from a thermal CFD model, and going forward, the model becomes an excellent tool to help determine the effectiveness of potential solutions,” he says.
As is the case in many large data centers, Scottrade was overcooling the room. The solution: Fix the airflow problems and hot zones in its hot aisle/cold aisle configuration and turn up the computer room air conditioning (CRAC) unit's thermostat. That sounds scary, but Patterson says implementing the recommendations cut power consumption by 8% and improved equipment reliability — all without affecting the performance of the data center.
Power and cooling infrastructures are a large piece of the data center's overall operating cost. The hard dollar savings from some fairly straightforward changes were “significant,” Patterson says.
Scottrade didn't reap those savings by retrofitting an old, poorly designed facility. Quite the contrary: Patterson achieved the efficiency gains in a state-of-the-art, 34,000-square-foot data center that Scottrade had opened in 2007. Nor were the cost benefits limited to power and cooling bills: Scottrade also reduced the load on its backup power systems and cut the number of backup batteries it needed.
The savings that Scottrade achieved are actually on the low side, says Graves. “Scottrade was already doing a lot of things right,” he adds, noting that Glumac has seen some data centers that achieve a 25% decrease in cooling costs when tuned properly.
The CFD model identified three key areas for improving efficiency. First, it found that a “thermocline,” or plane of warmer air, was floating in the upper half of the data center space. That hot layer started at a height of about five and a half to six feet and extended all the way to the 10-foot ceiling, so the equipment mounted near the top of Scottrade's racks sat inside the hot-air layer.
The second issue was the configuration of the racks themselves. Not all racks were fully populated, but equipment was always concentrated at the top of the racks, where it was subject to those higher temperatures. In fact, says Patterson, the hottest-running servers tended to be mounted at the top, where cooling efficiency was lowest. To address that, Scottrade had lowered the CRAC system temperature settings, overchilling the rest of the room.
“Scottrade was running the overall data center temperatures colder than necessary to keep the temperatures at the top of the racks within acceptable ranges,” Graves explains.
Finally, the balance between the heat load produced by the server racks and the quantity of air supplied to the cold aisles was out of whack. Engineers redistributed the perforated tiles on the aisle floor so that the airflow delivered to each rack matched the heat it was putting out. “A thermal balance was noticed immediately,” says Graves.
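For readers who want a feel for the arithmetic behind that balance, here is a minimal sketch using the standard sensible-heat rule of thumb for air; the rack wattages and the 20-degree temperature rise are hypothetical examples, not figures from Scottrade or Glumac.

    # Rough sizing sketch: how much cold-aisle airflow a given rack heat load calls for.
    # Standard sensible-heat approximation for air near sea level:
    #   heat (BTU/hr) ~= 1.08 * airflow (CFM) * temperature rise (deg F)
    # The rack wattages below are invented examples, not Scottrade's actual loads.

    WATTS_TO_BTU_PER_HR = 3.412
    SENSIBLE_HEAT_FACTOR = 1.08  # BTU/hr per CFM per deg F

    def required_cfm(rack_watts: float, delta_t_f: float) -> float:
        """Airflow needed to carry away rack_watts with a delta_t_f rise across the rack."""
        return rack_watts * WATTS_TO_BTU_PER_HR / (SENSIBLE_HEAT_FACTOR * delta_t_f)

    for watts in (2000, 5000, 10000):  # hypothetical per-rack loads, in watts
        print(f"{watts} W rack, 20 F rise: {required_cfm(watts, 20):.0f} CFM")

A 5,000-watt rack with a 20-degree rise, for instance, works out to roughly 800 CFM of supply air in this simplified model — the kind of number that tells you how many perforated tiles belong in front of it.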
Achieving Balance
Air conditioning systems perform most efficiently when the temperature differentials are higher, so Glumac implemented changes that made the cold aisles colder and the hot aisles a few degrees warmer. “We weren't optimizing the heat-to-cooling ratio that the AC units needed. You have to get that balance,” Patterson says.
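A back-of-the-envelope sketch of that principle, with purely illustrative numbers: for a fixed heat load, the airflow required falls in proportion to the temperature rise across the equipment, and the fan affinity laws put fan power roughly proportional to the cube of airflow, so a few extra degrees of differential can translate into a noticeably smaller fan-energy bill. (Real CRAC units with fixed-speed fans won't capture all of that, of course.)

    # Why a wider hot-aisle/cold-aisle differential helps: same heat, less air to move.
    # For a fixed heat load, required airflow scales as 1 / delta-T, and the fan
    # affinity laws put fan power roughly proportional to airflow cubed.
    # Numbers here are illustrative only.

    def relative_fan_power(delta_t_old_f: float, delta_t_new_f: float) -> float:
        """Fan power at the new delta-T relative to the old, for the same heat load."""
        airflow_ratio = delta_t_old_f / delta_t_new_f  # CFM needed scales as 1/delta-T
        return airflow_ratio ** 3                      # affinity law: power ~ flow^3

    print(relative_fan_power(15, 20))  # ~0.42, i.e. roughly 58% less fan energy in theory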
To address the thermocline problem, Glumac engineers adjusted the CRAC system by raising the air-return intakes by one and a half to two feet. That pushed the hot layer above the tops of the racks, providing a better thermal environment for the equipment located there.
Once the airflow balance was achieved in the aisles, engineers turned their attention to what was inside the racks. “There's an optimal temperature point where you want your chips running,” says Patterson. With that in mind, Scottrade reorganized the racks, moving power-hungry servers lower to balance the heat distribution within the racks.
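One way to picture that reshuffle is a simple bottom-up placement by power draw, so the hottest boxes sit where the supply air is coolest. The server names and wattages below are invented for illustration; this is not Scottrade's inventory or its actual procedure.

    # Illustrative rack reshuffle: highest-draw servers go in the lowest slots,
    # where the cold-aisle supply air is coolest. All names and wattages are made up.

    servers = {"db-01": 450, "app-02": 320, "web-01": 210, "cache-01": 180}  # watts

    # Sort hottest-first; slot U01 is the bottom of the rack.
    placement = sorted(servers.items(), key=lambda kv: kv[1], reverse=True)

    for slot, (name, watts) in enumerate(placement, start=1):
        print(f"U{slot:02d} (counting from the bottom): {name}, {watts} W")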
It also helps that Scottrade's new data center uses energy-efficient servers. The 1U and 2U Dell PowerEdge models it has chosen use low-voltage processors, variable-speed fans that speed up and slow down with the processing load, and high-efficiency power supplies. (Those units came with VMware virtualization software embedded on internal flash, making setup easier.) “It draws less energy, and it keeps the internal temperatures in the boxes cooler,” Patterson says.
Newer and Hotter
But there's another advantage to newer servers that data center managers may overlook: They can run reliably at higher operating temperatures than the previous generation of equipment could, which means server racks can be allowed to run warmer.
“Data center operators who take advantage of these higher-temperature capabilities can gain significant energy efficiencies in their cooling infrastructures,” says Graves.
Those changes “improved our power consumption, our air conditioning costs, and reduced our total costs of running our business,” says Patterson.
Scottrade needs low latency to fulfill its commitment to completing trades quickly, and the firm relies on the highest possible server performance to support split-second transactions for its customers. Fortunately, the redesign required no compromises: Moving to a warmer data center didn't reduce performance or shorten the life of the computing equipment, Patterson says. Instead, the changes improved reliability by keeping equipment within its optimal operating ranges.