Instead of going home, hyperscale data centers are going to go, well, even bigger in scale by the end of the decade—though the extra computing power will require radical new system designs to handle the power and cooling requirements.
A recent Synergy Research Group report projected worldwide hyperscale data center capacity will almost triple within just six years. Synergy researchers also wrote that the average capacity of newly constructed hyperscale facilities over that time period will be “more than double that of current operational hyperscale data centers.”
Synergy based the study on “an analysis of the data center footprint and operations” of 19 major hyperscale operators, taking into account 926 data centers run by those companies and plans for 427 more.
According to John Dinsdale, Synergy’s chief analyst and research director, those expansion plans will be the main driver of tripled capacity.
“The great bulk of new capacity added will be via new data centers,” Dinsdale wrote via email. “For many facilities, retrofitting would be either impractical or too costly.”
Dinsdale wrote that “aggressive growth plans” by hyperscalers preceded the recent surge of interest in AI: “We were already going to see a lot of new data centers and capacity added every year.”
Denser, hungrier. Direct data center employment in the US increased 17% from 2017 to 2021, according to a PwC/Data Center Coalition report, CIO Dive reported. With cutting-edge semiconductor design hitting a wall, capacity improvements—particularly with respect to enterprise GPUs used in machine learning and other computational tasks requiring intensive parallel processing—will come at the cost of a lot of juice.
New data centers will require “radically higher power densities” and enhanced cooling technology to match, according to Dinsdale: “Somewhat ironically, this requirement for added capacity and power will sometimes lead to smaller data centers being deployed—much higher power per square foot but less footprint.”
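To make that trade-off concrete, here is a minimal back-of-the-envelope sketch comparing the same total IT load at two rack power densities. The 10 MW load, the per-rack wattages, and the floor space allotted per rack are assumptions chosen for illustration, not figures from Synergy or Dinsdale.

```python
# Illustrative comparison only: the same total IT load hosted at two rack densities.
# The 10 MW load, per-rack wattages, and floor space per rack are assumed round numbers.
total_it_load_kw = 10_000      # assume a 10 MW facility
sq_ft_per_rack = 30            # assumed white-space footprint per rack, incl. aisles

for kw_per_rack in (10, 40):   # assumed "conventional" vs. higher-density racks
    racks = total_it_load_kw / kw_per_rack
    floor_sq_ft = racks * sq_ft_per_rack
    watts_per_sq_ft = total_it_load_kw * 1000 / floor_sq_ft
    print(f"{kw_per_rack} kW/rack: {racks:.0f} racks, "
          f"{floor_sq_ft:,.0f} sq ft, {watts_per_sq_ft:.0f} W/sq ft")
```

Quadrupling the density cuts the required floor space by the same factor while pushing watts per square foot up fourfold, which is the dynamic Dinsdale describes.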
The impact of AI on data-center design and infrastructure was a central topic at the Open Compute Project (OCP) Global Summit in San Jose, California in October. As air cooling technologies are beginning to hit practical limits, hyperscale data centers will likely transition to liquid cooling methods that are more efficient but challenging to implement.
Rolf Brink, founder and CEO of liquid cooling consultancy Promersion and architect of OCP’s liquid cooling projects, told IT Brew: “Right now we’re dealing with cloud-optimized platforms or general purpose workloads that are somewhere between half a kilowatt and a kilowatt, and these AI systems that are doing 10 to 20 kilowatts…That is a huge gap.”
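Why that gap strains air cooling comes down to airflow: the heat a rack sheds to air is roughly air density × specific heat × airflow × temperature rise. The sketch below estimates per-rack airflow at the power levels Brink cites, using assumed values for air properties and a 10°C allowable temperature rise.

```python
# Rough per-rack airflow estimate from the heat balance P = rho * cp * V_dot * dT.
# Air properties and the 10 degree C temperature rise are assumed, illustrative values.
RHO_AIR = 1.2    # kg/m^3, approximate air density at room conditions
CP_AIR = 1005    # J/(kg*K), specific heat of air
DELTA_T = 10     # K, assumed allowable air temperature rise across the rack

def airflow_m3_per_s(power_kw: float) -> float:
    """Volumetric airflow needed to carry away power_kw of heat at the assumed dT."""
    return power_kw * 1000 / (RHO_AIR * CP_AIR * DELTA_T)

for kw in (1, 10, 20):          # general-purpose rack vs. the AI systems Brink cites
    m3s = airflow_m3_per_s(kw)
    cfm = m3s * 2118.88         # convert cubic meters/second to cubic feet/minute
    print(f"{kw:>2} kW rack: {m3s:.2f} m^3/s (~{cfm:,.0f} CFM)")
```

Under these assumptions a 20 kW rack needs on the order of 3,500 CFM of cool air, roughly twenty times what a 1 kW general-purpose rack needs, which is about where fans, raised floors, and containment aisles stop scaling gracefully.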
Choose your liquid. Brink said there’s no one-size-fits-all method, as indirect air cooling, cold plates, and immersion or spray cooling are all suited for different purposes or design needs.
“They’re all solving some very fundamental problems for specific applications,” including environmental variables and workloads, Brink told IT Brew. For example, hyperscalers usually handle specific types of tasks, meaning they are often more efficient per capita than smaller facilities.
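As a rough illustration of how that matching might look, the toy rule below maps rack power density to a cooling approach. The kilowatt thresholds are rules of thumb assumed for illustration, not guidance from Brink or OCP, and real selections also weigh climate, facility constraints, and workload mix.

```python
# Illustrative only: a toy rule mapping rack power density to a cooling approach.
# The kilowatt thresholds are assumed rules of thumb, not figures from Brink or OCP.
def suggest_cooling(kw_per_rack: float) -> str:
    if kw_per_rack <= 20:
        return "air cooling with hot/cold aisle containment"
    if kw_per_rack <= 80:
        return "direct-to-chip cold plates"
    return "immersion or spray cooling"

for density in (8, 35, 120):    # example densities, chosen arbitrarily
    print(f"{density:>3} kW/rack -> {suggest_cooling(density)}")
```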
An Omdia analysis from December 2022 projected a 50.4% compound annual growth rate for the liquid cooling market through 2026. Brink said that because the technology is already vetted and IT providers will have little choice but to adopt it, the main challenge facing liquid cooling manufacturers will be scaling up production.
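For a sense of scale, compounding 50.4% annually from a late-2022 baseline through 2026 implies roughly a fivefold larger market; a quick sketch, assuming a four-year projection window:

```python
# Compounding a 50.4% CAGR from a late-2022 baseline through 2026.
# Treating that as a 4-year window is an assumption about how the projection is framed.
cagr = 0.504
years = 4
multiplier = (1 + cagr) ** years
print(f"Projected market multiplier over {years} years: {multiplier:.1f}x")  # ~5.1x
```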
“This year marks a huge milestone for the adoption, for the normalization, for the commoditization of cold plate technology,” Brink told IT Brew. “In the next couple of years, we’ll be looking at the maturing cycle of these various immersion technologies as well.”