The Future of Data Centers: Energy Efficiency and Waste Heat Potential
An Energy-Efficient Cooling System for Data Centers
The rise of AI tools like ChatGPT has accelerated demand for powerful, efficient data centers, which are essential for running and training AI models. As these facilities expand to meet growing computational needs, they must also adopt AI and emerging technologies to enhance performance, security, and efficiency. However, with data centers already consuming 1.5% of global electricity—and cooling systems using up to 40% of that—energy efficiency and sustainability are urgent challenges. A hybrid cooling approach combining air and liquid systems is key to managing heat and reducing environmental impact.
Air Cooling
Air cooling, the most traditional method for data centers, circulates air through Fan Wall or Coil Wall Units to dissipate heat. Fan Wall Units feature horizontal fan and coil layouts for high capacity, while Coil Wall Units are vertically arranged to save space in smaller facilities. Known for reliability and versatility, air cooling systems improve energy efficiency through turbo chillers, free cooling, and high-performance Fan Wall Units. The main power draw comes from EC fans, and using multiple IE5-rated EC fans not only boosts efficiency but also increases system reliability through redundancy.
The drive unit features an active harmonic filter that keeps total harmonic distortion (THDi) below 5%, preventing motor overheating, failures, and lifespan reduction. Lower THDi also improves power factor correction and power quality, reducing losses and saving energy. In low- to medium-density data halls, using such high-efficiency components is key to optimizing energy performance.
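The THDi figure cited above has a standard definition: the RMS of the harmonic current components relative to the fundamental. A minimal sketch of that calculation (the harmonic currents below are illustrative values, not measured drive data):

```python
import math

def thdi(fundamental_rms: float, harmonic_rms: list[float]) -> float:
    """Total harmonic distortion of current:
    sqrt(sum of squared harmonic RMS currents) / fundamental RMS."""
    return math.sqrt(sum(h * h for h in harmonic_rms)) / fundamental_rms

# Hypothetical drive current: 100 A fundamental with small residual
# 3rd-, 5th-, and 7th-order harmonics after active filtering.
distortion = thdi(100.0, [3.0, 2.5, 1.5])
print(f"THDi = {distortion:.1%}")  # ≈ 4.2%, within the <5% target
```

A passive filter or no filter at all would leave much larger low-order harmonics, which is why the active harmonic filter matters for keeping this ratio below 5%.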
The Rise of Liquid Cooling Technology
Traditional air cooling remains common in most data centers and is suitable for low-power-density environments. However, it's insufficient for AI-driven data centers with rack power densities over 50kW. In such cases, liquid cooling methods like Rear Door Heat Exchangers (RDHX), Cold Plate (direct-to-chip) cooling, and immersion cooling are essential. RDHX improves air cooling by placing a heat exchanger behind the rack; Cold Plate cooling attaches coolant-filled metal plates to chips (high-power GPU or CPU), handling up to ~100kW per rack; and immersion cooling, which submerges servers in non-conductive liquid, offers the highest cooling capacity and efficiency.
Liquid cooling offers higher rack cooling capacity and significantly improves energy efficiency, as shown by lower Power Usage Effectiveness (PUE) values. While air-cooled data centers average a PUE of 1.5, RDHX systems lower it to 1.1–1.3, direct-to-chip cooling to 1.05–1.2, and immersion cooling—especially two-phase—can achieve PUEs as low as 1.02. Compared with conventional air cooling, this can mean up to 50% energy savings and about 33% lower maintenance costs. Liquid cooling also operates with warmer coolant, whereas air cooling is typically limited to supply-air temperatures of 18–27°C under ASHRAE guidelines.
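The savings implied by these PUE values follow directly from the definition of PUE: total facility energy divided by IT equipment energy. A small sketch, assuming a fixed hypothetical IT load, shows how the quoted PUE range translates into energy numbers:

```python
def facility_energy_kwh(it_load_kwh: float, pue: float) -> float:
    """Total facility energy = IT load × PUE (PUE's definition rearranged)."""
    return it_load_kwh * pue

def overhead_savings(pue_old: float, pue_new: float) -> float:
    """Fraction of non-IT overhead energy (cooling, power distribution)
    saved when moving from pue_old to pue_new at the same IT load."""
    return (pue_old - pue_new) / (pue_old - 1.0)

# Hypothetical 1,000 kWh IT load: air cooling (PUE 1.5)
# vs. two-phase immersion cooling (PUE 1.02).
it_load = 1_000.0
print(facility_energy_kwh(it_load, 1.5))            # 1500.0 kWh total
print(facility_energy_kwh(it_load, 1.02))           # 1020.0 kWh total
print(f"{overhead_savings(1.5, 1.02):.0%} less overhead energy")
```

Note the distinction: moving from PUE 1.5 to 1.02 cuts the *overhead* energy by about 96%, while *total* facility energy falls by about 32%, since the IT load itself is unchanged.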
Our White Paper, "The Future of Data Centers: Energy Efficiency and Waste Heat Potential", takes a deeper look into the importance of thermal management and cooling technology for high-density hyperscale data centers.
📖 Want to gain deeper knowledge and insights into Cooling Data Centers? Download the complete White Paper HERE to unlock new possibilities in data center management!
For more HVAC insights, please visit our LG HVAC Blog.