Retrofits Of Older Data Centers Are Not Part Of Developers' AI Playbook, Even As Inference Grows
Despite growing demand for AI inference, the application of trained models to real-world tasks, data center developers and tenants still show little interest in retrofitting older data centers to support the high-performance computing it requires.
From an Electrolux factory in Tennessee to a Chicago office tower, a growing number of underutilized buildings are being converted into artificial intelligence data centers. That activity has accelerated as AI adoption stokes demand for data center capacity close to major markets.
But amid this trend of retrofitting industrial and commercial properties into what Nvidia CEO Jensen Huang calls “AI factories,” few firms are undertaking conversions of one asset class that might seem best suited for the job: older data centers.
“The buildings to handle that have to be completely designed differently than data halls of the past,” said Malcolm Ferguson, distinguished technologist at Hewlett Packard Enterprise.
Legacy data centers are rarely being retrofitted to accommodate the high-performance computing needed for AI, even when they sit in locations where tech giants and other large enterprises want to place their AI infrastructure.
This is for good reason, industry leaders said at Bisnow’s National Data Center Construction, Design And Development event in Dallas this month. AI equipment draws far more power per square foot than traditional servers and weighs far more, requiring new mechanical, electrical and cooling systems and stronger structural elements to carry the load.
“Retrofitting a legacy air-cooled data hall that was built even just five or six years ago is super expensive. It's a lot easier to have a fresh data hall to start new or even better, a fresh piece of land next door to start new,” Ferguson said.
The equipment now needed for AI looks very different from the infrastructure data centers have traditionally hosted.
Graphics processing units and other equipment for AI use far more power per square foot and produce far more heat than the servers that have traditionally lived in data centers. These higher rack densities, as they are called, require fundamentally different cooling systems, power infrastructure and other supporting equipment.
Tech giants like Microsoft, Google and Meta have raced to build out massive data centers capable of hosting clusters of GPUs to train large language models, driving a wave of megacampuses with hundreds or thousands of megawatts of planned capacity.
These projects are far larger than any data centers that came before them, both in terms of their physical size and their power consumption.
But a fundamental shift in the AI landscape is pushing a rising share of demand back toward data centers that look a lot more like legacy facilities.
Data center tenants increasingly need computing not just for training AI models, but to allow customers to interact with those models in what is known as inference. Although inference accounts for only around 20% of AI workloads, that share is rising, and it is expected to make up the bulk of AI demand by the end of the decade.
This shift is occurring as more corporations embrace AI solutions and consumers increasingly use AI products like ChatGPT for everyday tasks like identifying a plant they see on the golf course, according to Smarak Bhuyan, a product manager on Google’s digital infrastructure team.
“ChatGPT has been trained with billions of images of what plants are, but then I'm showing it a specific image and asking what that plant is and it's giving me a response, that's inference,” Bhuyan told DICE: Central. “These are two very separate workloads and two separate infrastructure profiles.”
Instead of the massive centralized GPU clusters developed for AI training, inference often needs to be deployed in smaller facilities near major metro areas. That means inference is pushing demand to the so-called edge, including last-mile data centers near population hubs measured in dozens of megawatts rather than hundreds.
But greenfield sites with access to the necessary power and fiber are rare in dense cities. Sites near major metro areas that already have the infrastructure needed to build data centers are often home to existing industrial or mission-critical assets.
Developers are increasingly finding previously developed sites and razing or retrofitting everything from office parks to industrial facilities. Even office towers in urban cores with unused power capacity are being converted to AI data centers.
Yet many sites appropriate for AI inference deployments also house older data centers built prior to the AI boom, many of them owned and operated by the same colocation firms, tech giants and other large corporations seeking capacity for inference computing.
If an abandoned textile plant makes a good candidate to be retrofitted as an AI facility, it might seem intuitive that so too would an actual purpose-built data center.
But industry leaders told DICE: Central that retrofitting legacy data centers to support AI is almost never a viable option.
“Installing AI compute in existing infrastructure is a real challenge,” said Mike Herwald, North American sales manager at data center equipment manufacturer Munters Corp. “There's a lot of headwinds to doing that efficiently and maybe even technically being able to do it at all.”
These headwinds stem primarily from the vastly higher power densities of AI workloads, which require fundamentally different designs for systems from power management to cooling and even the building’s physical shell.
It’s a change that has happened practically overnight. As recently as two years ago, data centers were designed for server racks using around 10 kilowatts. Today, racks commonly use as much as 200 kilowatts, according to Google's Bhuyan.
Nvidia’s leadership indicates that upcoming generations of GPUs will require 1 megawatt per rack. Most data halls were designed to support a narrow range of voltages, so supporting AI’s skyrocketing densities would typically require a comprehensive redesign of the hall’s entire electrical infrastructure.
Cooling is often an even greater challenge. Traditionally, data centers have effectively been giant air-conditioned boxes, but higher densities produce more heat per square foot and increasingly require liquid cooling systems in which coolant is pumped directly to each processor.
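To see why air cooling breaks down at these densities, it helps to run the numbers on airflow. The back-of-envelope sketch below applies the standard sensible-heat relation under assumed conditions that were not cited at the event: standard air properties and a 15-degree-Celsius rise between supply and return air. The rack powers are the figures quoted above.

    # Back-of-envelope: airflow needed to air-cool a rack at a given power draw.
    # Assumptions (illustrative, not from the event): air density 1.2 kg/m^3,
    # specific heat 1005 J/(kg*K), 15 K supply-to-return temperature rise.

    AIR_DENSITY = 1.2         # kg/m^3
    AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)
    DELTA_T = 15.0            # K

    def airflow_m3_per_s(rack_power_watts: float) -> float:
        """Volumetric airflow required to absorb the rack's heat output."""
        mass_flow = rack_power_watts / (AIR_SPECIFIC_HEAT * DELTA_T)  # kg/s
        return mass_flow / AIR_DENSITY                                # m^3/s

    for kw in (10, 200, 1000):  # legacy rack, current AI rack, projected 1 MW rack
        flow = airflow_m3_per_s(kw * 1000)
        print(f"{kw:>5} kW rack -> {flow:6.2f} m^3/s ({flow * 2118.88:,.0f} CFM)")

Under these assumptions, a 10-kilowatt rack needs roughly 0.55 cubic meters of air per second, a 200-kilowatt rack about 20 times that, and a 1-megawatt rack roughly 100 times, which is why moving the heat in a liquid loop becomes the only practical option.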
“Retrofitting an existing air-cooled data hall, with customers in it, to handle that is not going to happen,” HPE’s Ferguson said. “You've got to either go with pods outside the building that are designed for it or you have a fresh data hall that hasn't been touched.”
Ferguson also pointed to the added weight that comes with high-density racks and integrated liquid cooling systems. He said each rack of IT equipment can now weigh as much as a Ford F-150 pickup, far more than older data centers were built to support. These structural issues have helped push developers toward retrofits of industrial assets designed to accommodate heavy machinery.
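The same kind of rough arithmetic shows the structural problem. The sketch below uses assumed figures rather than anything cited at the event: a 4,500-pound rack, roughly an F-150's curb weight, on a standard 600-by-1,200-millimeter rack footprint, compared against a commonly cited legacy raised-floor rating.

    # Back-of-envelope: floor loading of a truck-weight rack.
    RACK_WEIGHT_LB = 4500  # assumed; roughly a Ford F-150's curb weight
    FOOTPRINT_SQ_FT = (600 / 304.8) * (1200 / 304.8)  # 600 mm x 1,200 mm footprint

    load_psf = RACK_WEIGHT_LB / FOOTPRINT_SQ_FT
    print(f"Floor load under the rack: {load_psf:.0f} lb/sq ft")  # ~580 lb/sq ft

    # Many legacy raised floors were rated on the order of 250 lb/sq ft
    # (assumed typical value), so a rack like this can exceed the design
    # load by a factor of two before cooling plumbing is even counted.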
Some AI computing infrastructure is being shoehorned into older data centers through partial retrofits.
But such efforts typically amount to expensive, short-lived solutions that rarely make sense for anyone except, occasionally, the largest hyperscalers, which may value speed over cost when racing to bring a specific AI product to market, according to Ares Technology Consultants President Jay Brummett.
“My personal experience with retrofitting for AI is that trying to stuff 10 pounds of [manure] in an eight-pound bag doesn't work,” Brummett told DICE: Central. “Knock the building down and try to rebuild everything.”