New Cooling, Computing Advancements Could Mean Savings For Data Centers

A key way to save on data center costs is to save on power, so those building and running data centers are constantly looking for ways to improve efficiency, most notably in cooling, which tends to be the biggest power draw.
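
As a rough illustration of why cooling dominates the efficiency conversation, here is a minimal sketch of the power usage effectiveness (PUE) arithmetic for a hypothetical facility. Every load figure in it is an assumed, illustrative number, not anything reported at the panel.

```python
# Rough PUE arithmetic for a hypothetical facility.
# All load figures below are illustrative assumptions, not reported numbers.

it_load_kw = 10_000        # power delivered to servers, storage and network
cooling_kw = 3_500         # chillers, air handlers, pumps, cooling towers
other_overhead_kw = 1_000  # power distribution losses, lighting, offices

total_facility_kw = it_load_kw + cooling_kw + other_overhead_kw
pue = total_facility_kw / it_load_kw
cooling_share_of_overhead = cooling_kw / (cooling_kw + other_overhead_kw)

print(f"PUE: {pue:.2f}")                                              # 1.45
print(f"Cooling share of overhead: {cooling_share_of_overhead:.0%}")  # 78%

# If cooling improvements shave 1,000 kW of continuous load, the yearly
# energy saved is simply that load times the hours in a year.
hours_per_year = 8_760
kwh_saved = 1_000 * hours_per_year
print(f"Energy saved per year: {kwh_saved:,} kWh")                    # 8,760,000 kWh
```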

The challenge is either to find new and better ways to cool facilities or to develop server equipment that can run at higher temperatures than previous generations. Those in the industry are exploring both.

"Everybody in this room has felt the increased need to conserve power and be more efficient," QTS Vice President of Development and Strategic Procurement Lane Anderson said at Bisnow's recent Data Center Investment Conference & Expo, West in San Jose.

Most of the innovations are in cooling, he said. The challenge is making advancements that can be scaled up and rapidly deployed.

Just how cooling is handled can depend on location as well as the type of data center.

One of the best ways to lower costs is to use water for cooling, but heavy water use is a tough sell in the Bay Area, where drought is a constant undercurrent even during the rainy season. In a place like Portland, it may be less of an issue, Anderson said.

"It's a great way to save power; it's a great way to reduce your carbon footprint, but then ... there's conflicting directions about what your customer wants," he said.

Much of the decision-making, then, depends not only on what makes good technological sense, but also on what fits with a client's corporate ethos, region of the country and the needs and desires of the end user, he said.

"We just listen," Anderson said. "If it's a hyperscale, if it's an end user that says 'this is my purpose-built facility,' we'll just reflect their needs."

Some of the cutting-edge cooling technology has moved toward the use of more water, or of specialized liquids. As more is demanded of servers for different applications and faster processing, such as artificial intelligence, streaming video or gaming, they generate more heat. The same is true as data centers become more densely configured, packing more equipment and heat into the space. Eventually a point is reached where air cooling alone isn't a good option.

Experts at the event discussed chilled water systems, evaporative cooling, delivering liquid directly to the chip to cool it down and what some see as the future of data center cooling: immersion.

Immersion is already starting to be used by cryptocurrency miners and for other high-demand applications. Immersion cooling goes beyond water, most likely submerging equipment in a dielectric fluid that both insulates and cools it, the panelists said.

"There's a fair amount of consensus that the end game is using immersion cooling," Google Machine Learning Infrastructure Project Manager Marc Bhuyan said.

QTS' Lane Anderson, RagingWire Data Centers' Kevin Dalton, Intel's Joseph Jankowski, Oracle's Josh Zhou and Turner Construction's Peter Kangas, who moderated

Some of the innovation in data centers is coming from companies like Intel, which both makes server equipment and runs its own internal data centers.

Intel Corp. has turned a repurposed wafer fabrication site in Santa Clara into a data center hub that handles the company's internal data demands. Intel Construction and Facilities Engineering Manager Joseph Jankowski anticipates tripling the size of the hub in the coming years, and a key part of that growth is figuring out how to run the data center most efficiently.

"We're pushing the envelope on how much wattage per SF we can put in these spaces," he said. "Because we are the chip maker, we know how far we can push these servers."

If servers can still perform well in a warmer environment, that reduces the need for cooling and keeps energy costs down.
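
To make that reasoning concrete, the sketch below is a hypothetical illustration, not anything presented at the event, of how raising the allowable server inlet temperature expands the hours when outside air alone can carry the cooling load. The temperature bins and the 5-degree approach are invented assumptions.

```python
# Illustrative only: how a higher allowable server inlet temperature expands
# the hours when outside air can do the cooling work instead of chillers.
# The temperature bins below are invented for illustration, not climate data.

# (outdoor dry-bulb temperature in deg F, hours per year at that temperature)
temperature_bins = [
    (50, 2000),
    (60, 2500),
    (70, 2200),
    (80, 1500),
    (90, 560),
]

APPROACH_DEG_F = 5  # assumed rise between outdoor air and server inlet


def economizer_hours(max_inlet_deg_f):
    """Hours when outside air (plus the approach) stays under the inlet limit."""
    return sum(hours for temp, hours in temperature_bins
               if temp + APPROACH_DEG_F <= max_inlet_deg_f)


for limit in (77, 85):
    share = economizer_hours(limit) / 8760
    print(f"Inlet limit {limit}F: {economizer_hours(limit):,} free-cooling hours "
          f"({share:.0%} of the year)")
```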

A lot of servers do have the capability to operate at higher temperatures, but the conservative nature of the business means many companies do not have much room to experiment, particularly companies like RagingWire Data Centers that provide co-location services to multiple clients. Those data center providers are constrained by differing service level agreements, server designs and requirements.

If designers are still creating server fans meant to operate at 77 degrees Fahrenheit, that is going to limit how that center can be operated, RagingWire Senior Vice President Kevin Dalton said.

"Until the server manufacturers decide to change how they design their equipment, we can't react to it," he said.

Jankowski conceded that in some ways it is easier for Intel to innovate, make changes and take some risks.

"We have a captive customer and we have to satisfy their requirements, but other than that we have a pretty wide-open book on what we do and that allows us to innovate," he said.

PayPal's Shawn Tugwell, DPR Construction's Tejo Pydipati, Arup's Bruce Myatt, EPI Mission Critical's Jason Mayer, Google's Marc Bhuyan and Ascent's Drew Osborn, who moderated

Technology that works at higher temperatures could end up solving the power crunch long term. It could move data centers beyond the debate over types of cooling to, perhaps someday, needing no cooling at all, which makes chips that can endure hotter temperatures an interesting prospect, Arup Data Center Business Leader Bruce Myatt said.

While companies like Intel and operators of hyperscale or edge facilities can push the envelope to capture power savings, those in the co-location business have to consider the environment their customers need. Some have to handle different spaces for different customers in very different ways when it comes to the heat those customers will tolerate or the cooling they demand, panelists said.

That means a big part of the puzzle is flexibility.

"You have to be very careful about how you not only select the cooling strategy you want to use, but you also have to be very careful about how you control it," Myatt said. "It's very easy to get out of control and all of a sudden you have equipment problems."

That flexibility also extends to the system itself. With advances that increase density or the heat generated by the server processors, cooling systems need to be replaced every three to five years in a building built to last for 20. That demands a flexible system that is also easy to service and replace as configurations and requirements change, DPR Construction Project Manager Tejo Pydipati said.

Costs will continue to rise, and cities will continue to see power and water as potential revenue sources through fees, which led Myatt to speculate that data centers may eventually need to treat their own wastewater and generate their own power independent of the grid.

"The data center of the future is probably going to have its own power plant, water treatment plant and be its own small independent community that takes care of its own needs," he said.