
While compute devices such as CPUs, GPUs, and XPUs are stealing the limelight in the artificial intelligence (AI) era, there is an increasing realization that powering AI at scale demands new power systems and architectures. In other words, data center operators are investing heavily in high-performance computing for AI, but there is no AI without power.
The exponential growth of AI is rapidly outstripping the capacity of today's 54-V data center power infrastructure, driving a transformation toward high-density, reliable, and safe 800-V powered data centers. At this technology crossroads, the new power delivery architecture requires new power conversion solutions and safety mechanisms to prevent potential hazards and costly server downtime.

Figure 1 AI data center power was a prominent theme at Infineon OctoberTech Silicon Valley 2025. Source: Infineon
At Infineon’s OctoberTech Silicon Valley event held on 16 October 2025 in Mountain View, California, this tectonic shift in data center power infrastructure was a major highlight. The company demonstrated 800-V AI data center power architectures built around silicon, silicon carbide (SiC), and gallium nitride (GaN) technologies.
Infineon has also joined hands with Nvidia to maximize the value of every watt in AI server racks through modular and scalable power architectures. The two companies will work together on data center power aspects such as hot-swap controller functionality, which enables future server boards to operate in 800-V power architectures. This will allow server boards to be exchanged on an 800-VDC bus while the entire rack continues operating, through controlled pre-charging and discharging of the boards.
At Infineon OctoberTech Silicon Valley, Peter Wawer, division president of green industrial power at Infineon Technologies, spoke with EDN to explain the transition of AI data centers to 800-VDC architectures. He also walked through the demo to show how 800-V power is delivered to AI server racks.
The advent of solid-state circuit breakers
“We are seeing a switch to an 800-VDC architecture in AI data centers, which is a major step forward to establishing powerful AI gigafactories of the future,” Wawer said. “The power consumption of an AI server rack is estimated to increase from around 120 kilowatts to 500 kilowatts, and to 1 megawatt by the end of the decade.”
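Those power levels are what drive the voltage change. A quick back-of-the-envelope comparison (a minimal sketch assuming a lossless bus and using the rack power levels Wawer cited) shows how much current a 54-V bus would have to carry versus an 800-V bus:

```python
# Compare the bus current needed to deliver the same rack power at 54 V and 800 V.
# Rack power levels are those cited above; a lossless bus is assumed for illustration.

RACK_POWERS_KW = [120, 500, 1000]   # today, near term, end of the decade
BUS_VOLTAGES_V = [54, 800]          # legacy vs. proposed distribution voltage

for p_kw in RACK_POWERS_KW:
    for v_bus in BUS_VOLTAGES_V:
        i_bus = p_kw * 1e3 / v_bus  # I = P / V
        print(f"{p_kw:>5} kW rack at {v_bus:>3} V -> {i_bus:8.0f} A on the bus")
```

At 1 MW, a 54-V bus would have to carry on the order of 18 kA per rack, while an 800-V bus needs roughly 1.25 kA; since conduction loss in the busbars scales with the square of that current, the higher distribution voltage pays off quickly.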
Inevitably, this calls for higher efficiency and lower losses as computing power continues to scale at an unprecedented rate. “This evolution brings new challenges,” Wawer acknowledged. “When you want to exchange server boards on an 800-V bus while the entire rack continues operating, you are dealing with substantial power levels.”
For instance, engineers need controlled pre-charging and discharging to avoid dangerous inrush currents and ensure safe maintenance without downtime. While traditional protective devices like fuses and mechanical breakers have served reliably for decades, they were not designed for the ultra-fast fault response required in today’s high-voltage, high-speed environments, where microseconds matter.
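As a rough illustration of why pre-charging matters, the sketch below estimates the inrush into a server board's input capacitance when it is inserted onto an 800-V bus, with and without a series pre-charge resistor. The capacitance and resistance values are illustrative assumptions, not figures from Infineon or Nvidia.

```python
# Estimate inrush current when a board's input capacitance meets an 800-V bus.
# All component values below are illustrative assumptions.

V_BUS = 800.0        # V, rack distribution voltage
C_IN = 500e-6        # F, assumed server-board input capacitance
R_STRAY = 0.05       # ohm, assumed stray resistance of connector and busbar
R_PRECHARGE = 100.0  # ohm, assumed series pre-charge resistor

# Without pre-charge, the initial current is limited only by stray resistance.
i_peak_uncontrolled = V_BUS / R_STRAY

# With pre-charge, the resistor caps the peak current...
i_peak_precharged = V_BUS / (R_PRECHARGE + R_STRAY)

# ...at the cost of a settling time of roughly 5 RC before the main path closes.
t_settle = 5 * R_PRECHARGE * C_IN

print(f"Uncontrolled inrush peak: {i_peak_uncontrolled:,.0f} A")
print(f"Pre-charged inrush peak:  {i_peak_precharged:.1f} A")
print(f"Pre-charge settling time: {t_settle * 1e3:.0f} ms")
```

A hot-swap controller automates exactly this sequence; protecting against actual faults on the bus, however, still requires something faster than a fuse or a mechanical breaker.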
That’s where the next generation of solid-state circuit breakers (SSCBs) comes in. The new data center architectural shift is driving the adoption of SSCBs, which will modernize AI data centers while replacing fuses and electromechanical circuit breakers. SSCBs respond to faults in microseconds with very high precision, which makes power distribution in AI data centers safer, faster, and more efficient.
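The benefit of a microsecond-scale trip shows up directly in let-through energy, the I²t a fault deposits before it is cleared. The fault current and clearing times below are assumptions chosen only to illustrate the scaling; they are not Infineon specifications.

```python
# Compare let-through energy (I^2 * t) for a millisecond-class electromechanical
# breaker and a microsecond-class SSCB. All numbers are illustrative assumptions.

I_FAULT = 5_000.0      # A, assumed prospective fault current on an 800-V bus
T_MECHANICAL = 5e-3    # s, assumed electromechanical clearing time (milliseconds)
T_SSCB = 5e-6          # s, assumed solid-state clearing time (microseconds)

def let_through_energy(i_fault: float, t_clear: float) -> float:
    """I^2 * t for an idealized rectangular fault-current pulse."""
    return i_fault ** 2 * t_clear

e_mech = let_through_energy(I_FAULT, T_MECHANICAL)
e_sscb = let_through_energy(I_FAULT, T_SSCB)
print(f"Electromechanical breaker: {e_mech:,.0f} A^2*s")
print(f"SSCB:                      {e_sscb:,.0f} A^2*s ({e_mech / e_sscb:,.0f}x lower)")
```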

Figure 2 SSCBs will replace the electromechanical circuit breakers that currently connect the grid to power infrastructure in data centers. Source: Infineon
“To enable these next-generation SSCBs, Infineon introduced the CoolSiC JFET family earlier this year,” Wawer told EDN. “These JFETs offer the ability to combine ultra-low on-resistance—1.5 mΩ at 750 V and 2.3 mΩ at 1200 V—to ensure robust performance even under tough conditions.”
Reliability is another key advantage, he added. “These JFETs are designed to handle sudden voltage spikes and current surges, responding quickly to faults and helping prevent equipment damage or downtime.” Their packaging—aided by top-side cooling and Infineon’s .XT interconnect technology—helps AI data center power systems stay cool and reliable even in the most demanding environments.
These JFETs also reduce the need for external clamping circuits, simplifying system design and enabling more compact and cost-effective solutions. Besides AI data centers, this SSCB technology can help protect electric vehicles (EVs), industrial automation and smart grids, making power distribution safer, more efficient, and ready for the future.
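Taking the quoted on-resistance figures at face value, the continuous conduction loss in an SSCB's main path is easy to estimate with P = I² · R_DS(on). The load currents below are assumptions for illustration, and the estimate ignores the rise of on-resistance with junction temperature.

```python
# Conduction loss P = I^2 * R_ds(on) using the on-resistance values quoted above.
# Load currents are assumed for illustration; temperature effects are ignored.

R_DS_ON = {"750-V CoolSiC JFET": 1.5e-3, "1200-V CoolSiC JFET": 2.3e-3}  # ohms
LOAD_CURRENTS_A = [50, 100, 200]  # A, assumed continuous currents through the breaker

for part, r_on in R_DS_ON.items():
    for i_load in LOAD_CURRENTS_A:
        p_loss = i_load ** 2 * r_on
        print(f"{part}: {i_load:>3} A -> {p_loss:5.1f} W conduction loss")
```

Unlike a closed mechanical contact, a solid-state breaker dissipates this loss for as long as it conducts, which is why milliohm-level on-resistance and good thermal packaging matter so much here.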
Solid-state transformers, hot-swap controllers, and power modules
At OctoberTech Silicon Valley, Infineon also demonstrated a power system built around high-voltage CoolSiC components, in which a solid-state transformer (SST) feeds high-voltage DC distribution to the IT racks. “The SSTs will be crucial in gigawatt-scale AI data centers,” Wawer said.
An SST is a power-electronics stack that connects the grid to data center power distribution. It replaces conventional systems based on a low-frequency copper-and-steel transformer plus an AC-DC converter, enabling a dramatic reduction in size and weight, higher end-to-end efficiency, and a smaller CO2 footprint.
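A first-order way to see where the size reduction comes from is the classic transformer area-product relation, in which the required core size for a given power scales roughly with 1/(f · B_max). The frequencies and flux densities below are generic textbook assumptions, not the parameters of any Infineon SST.

```python
# First-order transformer sizing: for a given power, the core area product
# scales roughly as A_p ∝ 1 / (f * B_max). The power term cancels in the ratio.
# Frequencies and flux densities are generic textbook assumptions.

f_lf, b_lf = 50.0, 1.5     # Hz, tesla: 50-Hz copper-and-steel transformer
f_hf, b_hf = 20e3, 0.2     # Hz, tesla: assumed high-frequency SST isolation stage

reduction = (f_hf * b_hf) / (f_lf * b_lf)
print(f"Approximate reduction in core area product: ~{reduction:.0f}x")
```

The semiconductor stages an SST adds claw some of that back, but the high-frequency magnetics are the main reason for the reduction in size and weight noted above.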
Next, Infineon unveiled a reference board for hot-swap controllers in 400-V and 800-V power architectures for AI data centers. Hot-swap controller functionality is vital for providing the highest levels of protection, maximizing server uptime, and ensuring optimal performance. The REF_XDP701_4800 reference design is optimized for future 400-V/800-V rack architectures.

Figure 3 Hot-swap controller designs demonstrated at OctoberTech Silicon Valley are optimized for 400-V/800-V data center rack architectures. Source: Infineon
Then there were trans-inductance voltage regulator (TLVR) modules specifically designed for high-performance AI data centers. Infineon’s TDM22545T modules combine OptiMOS technology power stages with TLVR inductors to bolster power density, improve electrical and thermal efficiency, and enhance signal quality with reduced transients.
The proprietary inductor design delivers ultra-fast transient response to dynamic load changes from AI workloads without compromising electrical or thermal efficiency. Moreover, the inductance architecture minimizes the number of output capacitors, reducing the overall size of the voltage regulator (VR) and lowering bill-of-materials (BOM) costs.
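The link between transient response and capacitor count can be made concrete with a common first-order estimate: for a load step ΔI and an allowed deviation ΔV, the output capacitance needed is roughly C ≈ ΔI / (2π · f_c · ΔV), where f_c is the regulator's control bandwidth. The numbers below are illustrative assumptions, not TDM22545T specifications.

```python
import math

# Rough output-capacitance requirement for a load step, C ≈ ΔI / (2π * f_c * ΔV).
# All values are illustrative assumptions, not module specifications.

DELTA_I = 500.0   # A, assumed load step from an AI accelerator
DELTA_V = 0.03    # V, assumed allowed deviation on the core rail

def required_capacitance(delta_i: float, f_c_hz: float, delta_v: float) -> float:
    """Output capacitance needed for a regulator with control bandwidth f_c_hz."""
    return delta_i / (2 * math.pi * f_c_hz * delta_v)

for f_c_khz in (50, 150, 400):  # slower vs. faster regulator bandwidths
    c_out = required_capacitance(DELTA_I, f_c_khz * 1e3, DELTA_V)
    print(f"{f_c_khz:>3} kHz bandwidth -> ~{c_out * 1e3:.1f} mF of output capacitance")
```

The faster the regulator responds, the less bulk capacitance has to ride through the load step, which is how a faster transient response translates into fewer output capacitors and a lower BOM cost.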

Figure 4 The TLVR modules deliver benchmark power density and transient response crucial in AI data centers. Source: Infineon
Transition to new power architectures
Jim McGregor, principal analyst at Tirias Research, acknowledges that it’s becoming increasingly challenging to power AI data centers from the grid to the chip level. “It’s critical that power design engineers continuously improve efficiency, power density, and signal integrity of power conversion from the grid to the core.”
That's especially true when an AI server costs 30 times as much as a traditional server. There is also an increasing need to simplify system design, enabling more compact, cost-effective solutions for powering AI data centers.
The imminent shift from the current 54-V data center power infrastructure to a centralized 800-V architecture is part of this design journey in the rapidly evolving world of AI data centers. That inevitably calls for new building blocks—hot-swap controllers, SSCBs, and SSTs—to successfully migrate to new power architectures.
These power-electronics building blocks are now available, which means the transition to 400-V/800-V AI data centers isn’t far off.
Related Content
- Solving power challenges in AI data centers
- AI Data Centers Need Huge Power-Backup Systems
- EDN Talks to Infineon About the AI Data Center Evolution
- Data center power meets rising energy demands amid AI boom
- As Data Center Growth Soars, Startup Uses AI to Cut Power Binge