
AI Is Making Older Data Centers Obsolete, Yet Upgrades Are Rare

Older data center inventory is being rendered obsolete by the high-powered computing equipment needed for artificial intelligence and other high-performance applications. But few providers are retrofitting older data centers to support these technologies. 

An AI-generated image of an abandoned data center.

The past 18 months have seen a significant change in the way new data centers are designed.

The newest information technology equipment, particularly the high-performance processors used for generative AI and other emerging technologies, uses far more power per square foot and produces far more heat than the servers that have traditionally lived in data centers. These higher rack densities, as they are called, require fundamentally different cooling systems, power infrastructure and other supporting equipment than what is available in all but the newest data centers.

This evolution of IT hardware renders growing swaths of unused data center inventory effectively obsolete — yet few data center providers are choosing to retrofit these legacy facilities to support the higher densities tenants demand.

The infrastructure to support rising densities can’t just be shoehorned into older data centers, industry experts say. And retrofits carry significant challenges, costs and risks that have led providers to ditch outdated inventory in favor of new development. 

“It's a challenge to take a legacy data center that was designed for 5 to 10 kilowatts per rack and then start utilizing it with these much higher densities that we know we’ll need going forward,” Dave Meadows, technology director at Stulz USA, a manufacturer of data center cooling systems, said March 27 at Bisnow’s DICE Southeast event at the JW Marriott Atlanta Buckhead. 

“There are some in-between steps before looking at greenfield facilities designed specifically for high density, but these are just Band-Aids, not long-term solutions.” 

While AI requires massive amounts of computing power per square foot, rack densities were on the rise even before Big Tech’s AI arms race took off in late 2022. Between 2017 and 2020, the average power consumption by a rack of IT equipment in a data center climbed from 5.6 KW to 8.4 KW, with analysts estimating that densities would climb to five times that level (roughly 42 KW per rack) by next year.

Those estimates now seem conservative. The rapid adoption of graphics processing units for generative AI and other high-performance computing has driven average densities upward faster than ever before.

Now, many racks housing the processors used for generative AI demand around 200 KW.

Typical densities vary across different segments of the data center landscape. Major cloud providers, which are aggressively pivoting their business models toward AI, are buying up the lion’s share of the newest, most energy-intensive processors, while corporate tenants in enterprise data centers are earlier in the adoption curve for these technologies.

There is widespread consensus that, across all segments of the industry, legacy data centers aren't up to the task of supporting rack densities that will only continue to climb in the years ahead.

“Five years ago, we were going from 2 KW racks to 10 KW racks, and now we're talking multiples of that, and it’s only going to go exponential from here,” Hani Noshi, senior program manager for project delivery at Microsoft, said April 3 at Bisnow’s DICE Southwest event at the Marriott Phoenix Chandler. “The [data centers] that got us to this point are not going to get us there.”

The inability of many older data centers to handle higher-density workloads stems primarily from their power management and cooling systems. Most data halls were designed to support a narrow range of voltages, with cabling and other power equipment that can’t even be connected to today’s state-of-the-art IT gear.

Adapting these systems for AI and other high-performance processing means a comprehensive redesign of the hall’s entire electrical infrastructure. 

Cooling is often an even greater challenge. Traditionally, data centers have effectively been giant air-conditioned boxes. But more power per square foot means more heat per square foot, and legacy cooling systems are largely unable to keep ambient temperatures around the newest processors within the equipment's operational limits.

Many firms have taken marginal steps to improve the ability of older data centers to handle rising densities, installing more sophisticated airflow systems that deliver targeted cold air directly to equipment racks. Some firms are using AI to model airflow and patterns of equipment usage to adjust where certain IT racks are placed to get as much cooling as possible out of older systems.

Miratech’s Jim McDonald, XYZ Reality’s Dan Fleming, Microsoft’s Hani Noshi, Doxel’s Saurabh Ladha and Exyte’s Abhishek Bardhan at Bisnow’s DICE Southwest event April 3 in Phoenix.

Even firms employing these techniques acknowledge these are half measures that will only temporarily stave off a facility’s obsolescence. Updating cooling systems to make legacy data center space competitive with new construction requires a truly comprehensive retrofit, particularly as the highest densities increasingly necessitate sophisticated liquid cooling systems in which refrigerant is pumped directly to each processor. 

Such retrofits are massive, costly undertakings. They often require not only the complete replacement of a facility’s plumbing systems but also the reinforcement of the building itself, because the weight of newer cooling equipment often exceeds the structural limits of legacy data centers. 

It’s a route that few data center operators are pursuing. 

“No one is going to put in liquid cooling or retrofit the entire data center,” Cosme Garcia, senior director for data center operations at ADP, said at DICE Southeast. “There’s issues with taking already-built architecture that was built 10 or 25 years ago and trying to squeeze as much cooling out of it as possible because it’s not as easy as just saying you’re going to replace chillers. All the plumbing in the entire building is really difficult and expensive to change.” 

Developing new data centers from the ground up is often the path of least resistance compared to retrofitting unused legacy inventory, experts said. Building a new facility designed around higher-density applications is simpler and more predictable than shoehorning new systems into existing architecture, and it avoids the risks to existing IT equipment that come with retrofits. New construction can also be done at a scale that maximizes return on investment. 

It’s also easier to get financing for new construction than for a retrofit, said Sam Prudhomme, chief commercial officer at data center equipment manufacturer Accelevation. 

“It's more expensive to retrofit than it is to go build a new million-square-foot campus because there's so much institutional money flooding the space,” Prudhomme said. “It's just a cleaner business case to make to institutional capital.”

As providers choose not to upgrade older, unused inventory, the result has been what Prudhomme calls “ghost towns”: swaths of outdated, largely unleasable data center space left to collect dust.

Prudhomme estimated there are millions of square feet of this effectively abandoned inventory, largely within older multitenant colocation data centers. 

“A lot of this is where the old retail colocation bread and butter used to be … facilities designed to accommodate any and all comers because that’s what the market was for the first 12 to 15 years of this industry,” Prudhomme said. “We're building all this stuff that won't come online for the next 24 to 36 months, and there's millions of square feet of usable space that no one's paying attention to.”

These digital infrastructure ghost towns may not sit abandoned forever. Key data center markets face dwindling supplies of developable land and power and new regulatory hurdles that make new construction more difficult. Prudhomme predicted an eventual tipping point where retrofits become a stronger business case, particularly if providers upgrade pockets of outdated inventory scattered across various geographies and aggregate them into a single offering. 

But that day may be far off. 

“What has to happen is we have to reach the point where building a data center is hard — like way harder than it is now,” he said. “We haven't reached that point of diminishing return on new projects. It's going to take a minute till we get there.”