
An Overview of Server Power Supply Design Trends
With the rapid growth of data centers and cloud computing technologies, server power supply unit (PSU) designs are evolving at an accelerated pace. Increasing power output requirements, higher power density, and improved conversion efficiency are driving greater demands for performance, energy efficiency, and stability—factors that directly impact data center operations and energy consumption.
PSU efficiency plays a crucial role in overall energy consumption and thermal management. A high-efficiency PSU reduces energy loss and minimizes heat dissipation needs, ensuring stable server operation even under high loads. Moreover, improving PSU efficiency extends its lifespan and mitigates the risk of system downtime caused by power instability.
To enhance system reliability, redundant power architectures have become a standard design, significantly improving the maintainability of server PSUs. This article will discuss the efficiency requirements set by the 80 PLUS certification and other international energy standards, as well as the critical role of redundant power configurations in ensuring system reliability and operational stability.
1. Higher Power Output Requirements
In the early 21st century, server power supply units (PSUs) typically operated within a power range of 200W to 300W. However, as computing demands have surged, modern servers now consume significantly more power, ranging from 800W to 2,000W, with some high-performance servers exceeding 3,000W.
Despite this increase in power consumption, server PSUs have largely continued to utilize IEC 60320 C19 (socket) / C20 (plug) AC connectors, which are rated for 16A. Under a 240V AC input, factoring in power conversion efficiency, the maximum power output is constrained to 3,600W. This limitation presents a power bottleneck for PSU design in the near term. To address the growing power demands, data centers have gradually transitioned to IEC 60320 C21 (socket) / C22 (plug) AC connectors, which are rated for 20A, increasing the maximum power output of a single PSU to 4,800W. Additionally, some high-performance computing facilities have begun deploying 277V or even 400V AC inputs to reduce current requirements and enhance overall power conversion efficiency.
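The connector-rating arithmetic above reduces to voltage times current. A minimal sketch (the helper name and the 6 kW figure used in the voltage comparison are illustrative, not from a specific product):

```python
def max_psu_input_power(voltage_v: float, current_a: float) -> float:
    """Theoretical continuous AC input power ceiling for a connector rating."""
    return voltage_v * current_a

# IEC 60320 C19/C20 pair (16 A) on a 240 V feed
print(max_psu_input_power(240, 16))  # 3840.0 VA ceiling (~3,600 W usable after margins)
# IEC 60320 C21/C22 pair (20 A) on a 240 V feed
print(max_psu_input_power(240, 20))  # 4800.0
# Raising the line voltage to 277 V cuts the current needed for a 6 kW server
print(round(6000 / 277, 1), "A at 277 V vs", 6000 / 240, "A at 240 V")
```

This is also why higher-voltage distribution (277V/400V) appears in high-performance facilities: for the same power, less current means smaller conductors and lower conduction losses.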
With the continued growth of AI-driven servers, data centers are adopting higher-power supply architectures. For example, AI servers based on the Hopper architecture commonly use 3kW PSU modules, while AI compute servers utilizing the Blackwell architecture have already moved to 5.5kW PSUs to support the massive computational power needed for AI training and inference. Some high-end AI servers even surpass 6,000W in power consumption.
As this trend progresses, future PSU designs will evolve toward higher power density, improved efficiency (such as 80 PLUS Titanium or Platinum certification), and more advanced thermal management solutions to meet the stringent power and stability requirements of AI and high-performance computing applications.

2. Higher Power Density
High-power-density power supplies play a crucial role in various systems. As server computing capabilities and functionalities continue to expand, power demands are steadily increasing. This trend is particularly critical in space-constrained applications such as data centers, high-performance computing (HPC), industrial automation, medical equipment, and military systems, where high-power-density solutions are essential. While server chassis sizes remain unchanged, rising power requirements are driving more stringent power density demands. Modern server PSUs have evolved from single-digit power densities in the early 2000s to nearly 100W/in³ today. Achieving higher power density requires balancing efficiency and miniaturization, which involves reducing energy conversion losses, enhancing thermal management, and integrating advanced component technologies.
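The ~100W/in³ figure above is simply output power divided by enclosure volume. A quick sketch, where the module dimensions are illustrative values loosely based on a 1U-height form factor rather than any specific product:

```python
def power_density_w_per_in3(output_w: float, length_in: float,
                            width_in: float, height_in: float) -> float:
    """Volumetric power density in W/in^3 for a rectangular PSU enclosure."""
    return output_w / (length_in * width_in * height_in)

# Hypothetical 3 kW module, roughly 1U-compatible dimensions (illustrative only)
density = power_density_w_per_in3(3000, 7.3, 2.9, 1.57)
print(round(density, 1))  # ~90 W/in^3, in line with the "nearly 100 W/in^3" trend
```

Holding the chassis volume fixed while raising output power is exactly what pushes the density metric, and the cooling burden, upward.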
From a power topology perspective, advanced high-efficiency architectures like Totem-Pole PFC significantly reduce conduction and switching losses, improving overall power conversion efficiency. The adoption of advanced Gallium Nitride (GaN) power switches has reduced component counts by over 40%, contributing to enhanced reliability while minimizing solution size and cost. GaN switches not only enable faster switching speeds but also reduce power component size and thermal dissipation requirements. High-power-density designs also necessitate optimizing key components such as Ideal Diodes or ORing controllers, which must feature high current-handling capabilities, compact packaging, and built-in fault monitoring to minimize PCB footprint and component count, ultimately improving system reliability and efficiency.
Thermal management is another major challenge. As power density increases, thermal management technologies must advance accordingly. Solutions such as heat pipes, vapor chambers, and advanced air or liquid cooling techniques effectively mitigate internal thermal stress within power modules, extending the lifespan of power supplies.

3. Improved Conversion Efficiency
In the early 2000s, server power supply units (PSUs) typically had conversion efficiencies just above 65%. At the time, PSU designs prioritized power stability over energy efficiency, and traditional converter topologies could comfortably reach that level. Because servers operate continuously for extended periods, however, improving PSU efficiency significantly reduces overall operating costs and energy consumption. Power conversion efficiency directly determines how much energy the system draws. For example, at 90% efficiency, a server requiring 800W of output power draws approximately 889W from the grid (800W / 0.90). At only 80% efficiency, the PSU would need to draw 1,000W (800W / 0.80). This difference of roughly 111W represents substantial energy waste and additional cooling load, especially in large data centers.
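The efficiency arithmetic in the example can be wrapped in two small helpers (names are illustrative):

```python
def input_power_w(output_w: float, efficiency: float) -> float:
    """AC input power drawn for a given DC output at a given conversion efficiency."""
    return output_w / efficiency

def wasted_power_w(output_w: float, efficiency: float) -> float:
    """Power dissipated as heat inside the PSU (input minus output)."""
    return input_power_w(output_w, efficiency) - output_w

# The 800 W example from the text
print(round(input_power_w(800, 0.90)))  # ~889 W drawn from the grid
print(round(input_power_w(800, 0.80)))  # 1000 W
# Difference in grid draw between the two efficiency levels
print(round(input_power_w(800, 0.80) - input_power_w(800, 0.90)))  # ~111 W
```

Every watt in that difference is paid for twice: once at the meter and again as heat the cooling system must remove.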
The 80 PLUS energy efficiency certification program was launched in 2004 for PC power supplies (and was later referenced by the U.S. EPA's ENERGY STAR specifications), becoming a key efficiency benchmark for both PC and server PSUs. The certification requires PSUs to achieve at least 80% efficiency under various load conditions, defining minimum efficiency thresholds at the 20%, 50%, and 100% load levels. Today, most mainstream server PSUs meet the 80 PLUS Gold standard (roughly 87–92% efficiency across those load points), while some high-end models achieve 80 PLUS Platinum (roughly 90–94%). By increasing conversion efficiency, PSUs not only reduce energy losses and heat generation but also enhance system stability and alleviate the burden on cooling systems.
The 80 PLUS certification categorizes PSU efficiency into the following levels:
(1) Standard 80 PLUS: Efficiency between 80% and 85%.
(2) 80 PLUS Bronze: Efficiency between 81% and 88%.
(3) 80 PLUS Silver: Efficiency between 85% and 90%.
(4) 80 PLUS Gold: Efficiency between 87% and 92%.
(5) 80 PLUS Platinum: Efficiency between 90% and 94%.
(6) 80 PLUS Titanium: The highest tier, with efficiency between 90% and 96%.
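To put the tier differences in money terms, here is a back-of-the-envelope sketch. The $0.12/kWh electricity price and the full-load efficiency figures are illustrative assumptions drawn from the upper ends of the ranges above, not certified values:

```python
def annual_energy_cost_usd(output_w: float, efficiency: float,
                           hours: float = 8760, usd_per_kwh: float = 0.12) -> float:
    """Yearly electricity cost of delivering output_w through a PSU of given
    efficiency, running 24/7 at an assumed flat tariff."""
    return (output_w / efficiency) * hours / 1000 * usd_per_kwh

# Hypothetical 800 W server load, Gold-class vs Titanium-class efficiency
gold = annual_energy_cost_usd(800, 0.92)
titanium = annual_energy_cost_usd(800, 0.96)
print(round(gold - titanium, 2))  # annual savings per server, in USD
```

A few tens of dollars per PSU per year looks small, but multiplied across thousands of servers (plus the avoided cooling energy) it explains why data centers treat certification tier as a procurement criterion.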

(Figure: 80 PLUS certification categories)
4. What Is a Redundant Power Supply (RPS)?
In server power supply design, a redundant power supply (RPS) is a critical mechanism that ensures continuous system operation even in the event of a primary power module failure. This design prevents operational disruptions and data loss due to power interruptions. Typically, an RPS consists of two or more power supply units (PSUs) and can be configured in various architectures such as 1+1, N+1, or N+N, depending on application requirements. When one power module fails, the backup module immediately takes over, and the faulty unit can be hot-swapped to maintain system stability and enhance overall reliability.
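Sizing an N+R configuration as described above is a small calculation; the 10 kW rack load and 3 kW module rating below are hypothetical figures for illustration:

```python
import math

def modules_required(load_w: float, module_w: float, redundancy: int = 1) -> int:
    """Module count for an N+R configuration: N modules carry the full load,
    R extra modules provide redundancy against failures."""
    n = math.ceil(load_w / module_w)
    return n + redundancy

# Hypothetical 10 kW rack load served by 3 kW PSU modules
print(modules_required(10_000, 3_000, redundancy=0))  # 4 modules, no redundancy
print(modules_required(10_000, 3_000, redundancy=1))  # 5 modules in N+1
```

A 1+1 setup is simply the special case where a single module can carry the whole load; N+N doubles the entire bank for fault tolerance at the feed level.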
(1) Functions of Power Redundancy
High availability is crucial in IT and data center applications, making power redundancy an essential strategy for ensuring uninterrupted operation. With a multi-module power architecture, the system can continue running even if a single power module fails. This capability safeguards operational continuity, enhances data integrity, enables real-time fault switching, and reduces the risk of unexpected downtime. In applications such as cloud computing equipment, data centers, and telecommunications base stations, redundant power supplies significantly mitigate the impact of single points of failure, improving overall operational efficiency.
(2) How Power Redundancy Works
The core advantages of power redundancy lie in its high reliability and stability, ensuring continuous operation even in the event of power anomalies. The main working principles include:
A. Multi-Module Architecture: Even if one power module fails, the remaining modules can sustain system operation, preventing total shutdown due to a single point of failure.
B. Automatic Fault Switching: When the primary power module fails, the system automatically switches to the backup module within milliseconds, ensuring uninterrupted power delivery to critical loads.
C. Load Sharing: Multiple power modules operate simultaneously to distribute the load, improving energy efficiency, reducing stress on individual modules, and extending the lifespan of the equipment.
D. Ease of Maintenance: Modular designs support hot-swapping, allowing technicians to replace faulty power modules without shutting down the system, significantly enhancing maintenance efficiency.
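The load-sharing and fault-switching behavior in points B and C can be illustrated numerically. The 4 kW load and 2+1 module configuration are assumed for the example:

```python
def per_module_load(load_w: float, active_modules: int) -> float:
    """Load carried by each module when current sharing is balanced."""
    return load_w / active_modules

# 2+1 redundant setup: three active 3 kW modules feeding a 4 kW load
print(per_module_load(4000, 3))  # ~1333 W each, well below the 3 kW rating
# One module fails; the survivors absorb its share within milliseconds
print(per_module_load(4000, 2))  # 2000 W each, still inside each module's rating
```

The design point is that even after a failure, each surviving module stays within its rating, which is what makes hot-swapping the faulty unit safe while the system keeps running.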
(3) Applicable Scenarios
Redundant power supply technology is widely used in industries that require 24/7 uninterrupted operation to ensure system stability and minimize losses caused by power failures.
A. Server Systems: Enterprise servers, cloud computing equipment, and rack-mounted servers in data centers.
B. Telecommunication Base Stations: Ensuring stable network operations and preventing signal disruptions.
C. Medical Equipment: Critical systems such as life support and surgical equipment that demand the highest level of power stability.
