
An Overview of Server Power Supply Design Trends

With the rapid growth of data centers and cloud computing technologies, server power supply unit (PSU) designs are evolving at an accelerated pace. Increasing power output requirements, higher power density, and improved conversion efficiency are driving greater demands for performance, energy efficiency, and stability—factors that directly impact data center operations and energy consumption.

PSU efficiency plays a crucial role in overall energy consumption and thermal management. A high-efficiency PSU reduces energy loss and minimizes heat dissipation needs, ensuring stable server operation even under high loads. Moreover, improving PSU efficiency extends its lifespan and mitigates the risk of system downtime caused by power instability.

To enhance system reliability, redundant power architectures have become a standard design, significantly improving the maintainability of server PSUs. This article will discuss the environmental requirements set by the 80 PLUS certification and other international energy standards, as well as explore the critical role of redundant power configurations in ensuring system reliability and operational stability.


1. Higher Power Output Requirements


In the early 21st century, server power supply units (PSUs) typically operated within a power range of 200W to 300W. However, as computing demands have surged, modern servers now consume significantly more power, ranging from 800W to 2,000W, with some high-performance servers exceeding 3,000W.

Despite this increase in power consumption, server PSUs have largely continued to utilize IEC 60320 C19 (socket) / C20 (plug) AC connectors, which are rated for 16A. Under a 240V AC input, factoring in power conversion efficiency, the maximum power output is constrained to 3,600W. This limitation presents a power bottleneck for PSU design in the near term. To address the growing power demands, data centers have gradually transitioned to IEC 60320 C21 (socket) / C22 (plug) AC connectors, which are rated for 20A, increasing the maximum power output of a single PSU to 4,800W. Additionally, some high-performance computing facilities have begun deploying 277V or even 400V AC inputs to reduce current requirements and enhance overall power conversion efficiency.
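The connector-imposed ceilings above follow directly from voltage, current rating, and conversion efficiency. A minimal sketch (the 94% efficiency figure below is an assumed value for illustration, not one stated in this article):

```python
def connector_power_limit(voltage_v, current_a, efficiency=1.0):
    # AC input power is capped by the connector's current rating;
    # deliverable DC output is that cap scaled by conversion efficiency.
    return voltage_v * current_a * efficiency

# IEC 60320 C19/C20 pair, rated 16 A at 240 V AC:
c19_input = connector_power_limit(240, 16)           # 3,840 VA at the input
c19_output = connector_power_limit(240, 16, 0.94)    # ~3,610 W out, near the 3,600 W ceiling

# IEC 60320 C21/C22 pair, rated 20 A at 240 V AC:
c21_input = connector_power_limit(240, 20)           # 4,800 VA at the input
```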

With the continued growth of AI-driven servers, data centers are adopting higher-power supply architectures. For example, AI servers based on the Hopper architecture commonly use 3kW PSU modules, while AI compute servers utilizing the Blackwell architecture have already moved to 5.5kW PSUs to support the massive computational power needed for AI training and inference. Some high-end AI servers even surpass 6,000W in power consumption.

As this trend progresses, future PSU designs will evolve toward higher power density, improved efficiency (such as 80 PLUS Titanium or Platinum certification), and more advanced thermal management solutions to meet the stringent power and stability requirements of AI and high-performance computing applications.

2. Higher Power Density


High-power-density power supplies play a crucial role in various systems. As server computing capabilities and functionalities continue to expand, power demands are steadily increasing. This trend is particularly critical in space-constrained applications such as data centers, high-performance computing (HPC), industrial automation, medical equipment, and military systems, where high-power-density solutions are essential.

While server chassis sizes remain unchanged, rising power requirements are driving more stringent power density demands. Modern server PSUs have evolved from single-digit power densities in the early 2000s to nearly 100W/in³ today. Achieving higher power density requires balancing efficiency and miniaturization, which involves reducing energy conversion losses, enhancing thermal management, and integrating advanced component technologies.
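As a quick illustration of the W/in³ metric, power density is simply rated output divided by enclosure volume. The PSU dimensions below are hypothetical, chosen only to land near the modern ballpark:

```python
def power_density_w_per_in3(output_w, length_in, width_in, height_in):
    # Power density = rated output divided by enclosure volume.
    return output_w / (length_in * width_in * height_in)

# Hypothetical modern server PSU: 3,000 W in a 7.3" x 2.9" x 1.57" enclosure.
density = power_density_w_per_in3(3000, 7.3, 2.9, 1.57)   # ~90 W/in³
```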

From a power topology perspective, advanced high-efficiency architectures like Totem-Pole PFC significantly reduce conduction and switching losses, improving overall power conversion efficiency. The adoption of advanced Gallium Nitride (GaN) power switches has reduced component counts by over 40%, contributing to enhanced reliability while minimizing solution size and cost. GaN switches not only enable faster switching speeds but also reduce power component size and thermal dissipation requirements. High-power-density designs also necessitate optimizing key components such as Ideal Diodes or ORing controllers, which must feature high current-handling capabilities, compact packaging, and built-in fault monitoring to minimize PCB footprint and component count, ultimately improving system reliability and efficiency.

Thermal management is another major challenge. As power density increases, thermal management technologies must advance accordingly. Solutions such as heat pipes, vapor chambers, and advanced air or liquid cooling techniques effectively mitigate internal thermal stress within power modules, extending the lifespan of power supplies.

3. Improved Conversion Efficiency


In the early 2000s, server power supply units (PSUs) typically had conversion efficiencies just above 65%. At the time, PSU designs prioritized power stability over energy efficiency optimization. Traditional converter topologies could easily achieve 65% efficiency, but since servers operate continuously for extended periods, improving PSU efficiency significantly reduces overall operating costs and energy consumption. Power conversion efficiency directly impacts system energy utilization. For example, with 90% efficiency, a server requiring 800W of output power would draw approximately 888W from the grid (800W / 90%). However, if the efficiency were only 80%, the PSU would need to draw 1,000W (800W / 80%). This 112W difference represents substantial energy waste and additional cooling requirements, especially in large data centers.
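The worked example above can be reproduced in a few lines:

```python
def grid_draw_w(output_w, efficiency):
    # AC input power needed from the grid for a given DC output.
    return output_w / efficiency

draw_at_90 = grid_draw_w(800, 0.90)   # ~888.9 W, matching the ~888 W cited above
draw_at_80 = grid_draw_w(800, 0.80)   # 1,000 W
wasted_w = draw_at_80 - draw_at_90    # ~111 W per server, lost as extra heat
```

Multiplied across thousands of servers running around the clock, that per-unit difference is what makes efficiency a first-order design criterion.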

In 2004, the 80 PLUS energy efficiency certification program was introduced for PC power supplies; its criteria were later referenced by the U.S. Environmental Protection Agency's ENERGY STAR program, and it became a key efficiency benchmark for both PC and server PSUs. The 80 PLUS certification requires PSUs to achieve at least 80% efficiency across specified load conditions. Today, most mainstream server PSUs meet the 80 PLUS Gold standard (roughly 87–92% efficiency, depending on load), while some high-end models achieve 80 PLUS Platinum (roughly 90–94%). The 80 PLUS certification has become a crucial design criterion, defining minimum efficiency thresholds at 20%, 50%, and 100% load levels. By increasing conversion efficiency, PSUs not only reduce energy losses and heat generation but also enhance system stability and alleviate cooling system burdens.

The 80 PLUS program categorizes PSU efficiency into the following tiers (each range spans the minimum thresholds across the measured load points):
(1) 80 PLUS (Standard): 80–85% efficiency.
(2) 80 PLUS Bronze: 81–88% efficiency.
(3) 80 PLUS Silver: 85–90% efficiency.
(4) 80 PLUS Gold: 87–92% efficiency.
(5) 80 PLUS Platinum: 90–94% efficiency.
(6) 80 PLUS Titanium: the highest tier, with 90–96% efficiency.
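A minimal tier-classification sketch, using the lower bound of each range above as a single uniform floor. This is a simplification: the actual program sets distinct minima at 20%, 50%, and 100% load (and an additional 10% load point for Titanium), so treat this only as an illustration of the idea:

```python
# Lower-bound efficiency per tier, taken from the ranges listed above.
TIER_FLOOR = {
    "80 PLUS": 0.80,
    "Bronze": 0.81,
    "Silver": 0.85,
    "Gold": 0.87,
    "Platinum": 0.90,
    "Titanium": 0.90,  # in reality distinguished by stricter per-load minima
}

def highest_tier(eff_20, eff_50, eff_100):
    """Return the highest tier whose floor all three load points clear.

    Simplified: one uniform floor per tier instead of per-load thresholds.
    """
    worst = min(eff_20, eff_50, eff_100)
    best = None
    for tier, floor in TIER_FLOOR.items():
        if worst >= floor:
            best = tier
    return best

print(highest_tier(0.89, 0.92, 0.88))  # a PSU clearing 87% at every load point rates Gold
```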

(Figure: 80 PLUS certification categories)

4. What Is a Redundant Power Supply (RPS)?


In server power supply design, a redundant power supply (RPS) is a critical mechanism that ensures continuous system operation even in the event of a primary power module failure. This design prevents operational disruptions and data loss due to power interruptions. Typically, an RPS consists of two or more power supply units (PSUs) and can be configured in various architectures such as 1+1, N+1, or N+N, depending on application requirements. When one power module fails, the backup module immediately takes over, and the faulty unit can be hot-swapped to maintain system stability and enhance overall reliability.

(1) Functions of Power Redundancy

High availability is crucial in IT and data center applications, making power redundancy an essential strategy for ensuring uninterrupted operation. With a multi-module power architecture, the system can continue running even if a single power module fails. This capability safeguards operational continuity, enhances data integrity, enables real-time fault switching, and reduces the risk of unexpected downtime. In applications such as cloud computing equipment, data centers, and telecommunications base stations, redundant power supplies significantly mitigate the impact of single points of failure, improving overall operational efficiency.

(2) How Power Redundancy Works

The core advantages of power redundancy lie in its high reliability and stability, ensuring continuous operation even in the event of power anomalies. The main working principles include:

A. Multi-Module Architecture: Even if one power module fails, the remaining modules can sustain system operation, preventing total shutdown due to a single point of failure.
B. Automatic Fault Switching: When the primary power module fails, the system automatically switches to the backup module within milliseconds, ensuring uninterrupted power delivery to critical loads.
C. Load Sharing: Multiple power modules operate simultaneously to distribute the load, improving energy efficiency, reducing stress on individual modules, and extending the lifespan of the equipment.
D. Ease of Maintenance: Modular designs support hot-swapping, allowing technicians to replace faulty power modules without shutting down the system, significantly enhancing maintenance efficiency.
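The failover and load-sharing principles above can be sketched as a toy model. Class and method names here are illustrative, not a real PSU controller API:

```python
class PSU:
    def __init__(self, rated_w):
        self.rated_w = rated_w
        self.healthy = True

class RedundantSupply:
    """Toy model of load sharing with automatic failover across modules."""

    def __init__(self, psus):
        self.psus = psus

    def deliver(self, load_w):
        active = [p for p in self.psus if p.healthy]
        if not active:
            raise RuntimeError("total power failure")
        if load_w > sum(p.rated_w for p in active):
            raise RuntimeError("remaining modules cannot carry the load")
        # Load sharing: each healthy module carries an equal slice,
        # lowering per-module stress and heat.
        return {id(p): load_w / len(active) for p in active}

supply = RedundantSupply([PSU(1600), PSU(1600)])
shares = supply.deliver(1200)      # each module carries 600 W
supply.psus[0].healthy = False     # simulate a module fault
shares = supply.deliver(1200)      # the survivor carries the full 1,200 W
```

A real redundant supply performs the switchover in hardware within milliseconds; the model only captures the bookkeeping.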

(3) Applicable Scenarios

Redundant power supply technology is widely used in industries that require 24/7 uninterrupted operation to ensure system stability and minimize losses caused by power failures.

A. Server Systems: Enterprise servers, cloud computing equipment, and rack-mounted servers in data centers.
B. Telecommunication Base Stations: Ensuring stable network operations and preventing signal disruptions.
C. Medical Equipment: Critical systems such as life support and surgical equipment that demand the highest level of power stability.

5. Redundant Power Supply Configurations


Server redundant power supply designs aim to enhance system reliability and prevent operational disruptions due to single-point failures. For basic server systems, the commonly used 1+1 redundancy architecture consists of an active power supply unit (PSU) and a backup PSU. More complex server systems may adopt N+1 or N+N (N>2) configurations to meet higher reliability demands. To ensure continuous operation during PSU replacement, these architectures typically incorporate hot-swappable technology and ORing control to prevent backflow. Additionally, as multiple PSUs operate simultaneously in N+1 or N+N systems, load-sharing technology plays a critical role in maintaining balanced power distribution and stable power delivery. Below is a detailed breakdown of the primary redundant power supply configurations:

(1) 1+1 Configuration

A. The first “1” represents the number of PSUs required for normal server operation.
B. The second “1” represents the number of backup PSUs.
C. In a 1+1 configuration, a single PSU is sufficient to power the server, while an additional PSU of the same specification is included as a backup. If the primary PSU fails, the backup PSU immediately takes over to ensure uninterrupted system operation. This setup is suitable for small to medium-sized servers or individual devices that require high power reliability.

(2) N+1 Configuration

A. The first “N” represents the number of PSUs required for normal server operation (N≥2).
B. The second “1” represents the number of backup PSUs.
C. In an N+1 configuration, the server is powered by N PSUs to meet normal operational requirements, with an additional PSU serving as a backup. If one of the active PSUs fails, the backup PSU automatically takes over the load, ensuring continuous power delivery. For example, if a server requires three PSUs for operation, an N+1 configuration would consist of 3+1 PSUs, totaling four. This configuration is widely used in medium to large-scale servers and is particularly suited for data centers and enterprise applications where high stability is required.

(3) N+N Configuration

A. The first “N” represents the number of PSUs required for normal server operation (N≥2).
B. The second “N” represents the number of backup PSUs (N≥2).
C. In an N+N configuration, the server operates with N PSUs while an additional N PSUs serve as backups, mirroring the number of active PSUs. For example, if two PSUs are required for normal operation, the configuration would be 2+2, totaling four PSUs. This architecture provides dual redundancy, safeguarding against not only a single PSU failure but also simultaneous failures of multiple PSUs. The N+N configuration is typically deployed in mission-critical servers, such as financial systems, large-scale data centers, or high-availability cloud computing platforms.
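The N+M arithmetic behind the three configurations above can be captured in two small helpers (function names are illustrative):

```python
def total_psus(active_n, backup_m):
    # An "N+M" redundant configuration ships N active plus M backup modules.
    return active_n + backup_m

def survives(active_n, backup_m, failed):
    # The system stays up as long as at least N healthy modules remain.
    return total_psus(active_n, backup_m) - failed >= active_n

# 1+1: two PSUs in total, tolerates one failure.
# 3+1: four PSUs in total, still tolerates only one failure.
# 2+2: four PSUs in total, tolerates up to two simultaneous failures.
```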

(4) Considerations for Configuration Selection

The 1+1, N+1, and N+N configurations each have their applicable scenarios and cost considerations. When designing a server power architecture, it is essential to select the most suitable configuration based on the specific application requirements, server criticality, and budget constraints.

Tiger Power specializes in high-reliability power solutions, offering a wide range of redundant power architectures to meet customer demands for system stability and operational continuity. In redundant power supply designs, even when a PSU is in standby mode (not supplying power to the main power rail), it must remain in a minimal operational state to ensure immediate full-power delivery during a failover or hot-swap event. However, to reduce standby energy consumption, “cold standby” technology is emerging as a key trend.

Cold Standby Technology
Cold standby operates by keeping the backup PSU in a low-power inactive state rather than maintaining high-energy standby operation. When the primary PSU fails or there is a sudden surge in power demand, the backup PSU quickly activates to supply the necessary power. This design significantly reduces standby power consumption, improves energy efficiency, and aligns with the energy-saving requirements of green data centers. With increasingly stringent energy regulations and growing environmental awareness, cold standby architectures are becoming an essential trend in future server power supply designs.

6. Conclusion


The future development of server power supply design will continue to focus on meeting higher performance demands while balancing energy efficiency and reliability. As computational power increases, high power density design will become a key focus, leveraging advanced power components and innovative thermal management technologies to enhance power output capabilities. Meanwhile, redundancy design and hot-swappable technology will ensure seamless operation in the event of power failures, maintaining continuous system uptime. Moving forward, high-efficiency designs that comply with 80 PLUS certification and other international energy standards will become mainstream, helping businesses reduce their carbon footprint and optimize operational costs. Tiger Power remains committed to delivering high-quality power solutions, ensuring stable operation in high-load computing environments, and providing comprehensive power management strategies to help businesses achieve optimal performance.

LEAD YEAR ENTERPRISE CO., LTD.

2F No.618 RuiGuang Road, NeiHu District, Taipei, Taiwan 11492


Copyright © 2025 Lead Year Enterprise Co., Ltd. All rights reserved.