As data centers and volumes of servers have grown, so has the overall amount of electricity consumed. Electricity used by servers doubled between 2000 and 2005, from 12 billion to 23 billion kilowatt hours. This was due to the increase in the number of servers installed in data centers and to the required cooling equipment and infrastructure (Koomey 2008).
Individual servers are consuming increasing amounts of electricity over time. Before the year 2000, servers on average drew about 50 watts of electricity. By 2008, they were averaging up to 250 watts. As more data centers switch to higher density server form factors, the power consumption will increase at a faster rate. Analysts have forecasted that if the current trend is not abated, then the power to run servers will be equal to or greater than the server cost, as Figure 1 shows.
Given these trends, it is important to understand how a server uses and consumes power. When replacing or upgrading a server, it is then possible to specify energy-efficient improvements.
Power and Server Form Factor
Power use varies with the server's form factor. In the x86 server market, there are four basic server form factors:
- Pedestal servers,
- 2U rack servers,
- 1U rack servers, and
- Blade servers.
Where floor space is restricted, and increasing computing capacity is a goal, many data centers utilize rack servers or blade servers rather than pedestal servers.
Servers, routers, and many other data center infrastructure devices are designed to mount in steel racks that are 19 inches wide. For rack servers, the height of the server is stated in multiples of U, where 1U equals 1.75 inches. The U value identifies the form factor; most common are 1U and 2U servers. Power use varies by server form factor due to the individual configuration, the heat and thermal environment related to that configuration, and the workload being processed.
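The rack-unit arithmetic above is simple enough to sketch directly. The snippet below (an illustrative sketch, not from the source; the function names are our own) converts a U value to physical height and counts how many servers of a given size fit in a standard 42U rack.

```python
# Relating rack-unit (U) form factors to physical height: 1U = 1.75 inches.
RACK_UNIT_INCHES = 1.75

def server_height_inches(u: int) -> float:
    """Physical height of a server with the given U value."""
    return u * RACK_UNIT_INCHES

def servers_per_rack(server_u: int, rack_u: int = 42) -> int:
    """How many servers of a given U size fit in a rack of rack_u units."""
    return rack_u // server_u

print(server_height_inches(2))   # 3.5 inches for a 2U server
print(servers_per_rack(1))       # 42 1U servers in a 42U rack
print(servers_per_rack(2))       # 21 2U servers
```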
Server Power: The Heat is On
Much of the electrical energy that goes into a computer gets turned into heat. The amount of heat generated by an integrated circuit is a function of the efficiency of the component's design, the technology used in its manufacturing process, and the frequency and voltage at which the circuits operate. Energy is required to remove heat from a server or from a data center packed with servers.
Computer subsystems such as memory and power supplies, and especially large server components such as processors, generate substantial heat during operation. This heat must be dissipated to keep the components within their safe operating temperatures. Overheated parts generally have a shorter maximum lifespan and can produce sporadic problems, system freezes, or even system crashes.
In addition to server component heat generation, extra cooling is required when parts of the server are run at higher voltages or frequencies than specified. This is called over-clocking. Over-clocking a server results in increased performance, but also generates a greater amount of heat.
How Cooling is Achieved
Server manufacturers use several methods to cool components. Two common methods are the use of heat sinks to increase the surface area that dissipates the heat and the use of fans to speed up the exchange of air heated by the components for cooler ambient air. In some cases, soft cooling is the method of choice. Computer components can be throttled down to decrease heat generation.
Heat sinks consist of a metal structure with one or more flat surfaces (i.e., a base) to ensure good thermal contact with the components to be cooled, and an array of comb- or fin-like protrusions. Fins increase the surface area in contact with the air and thus increase the rate of heat dissipation, as Figure 2 illustrates. Heat sinks are frequently used in conjunction with a fan to accelerate airflow over the heat sink. Fans provide a larger temperature gradient by replacing the warmed air with cool air faster than convection alone can accomplish. Fans are used to create forced-air systems, where the amount of air moved to cool components is far greater than the flow due to convection alone.
Heat sink performance is defined by its thermal resistance: the temperature rise between the component and the surrounding air per watt of heat dissipated. The units are °C/W. A heat sink rated at 10 °C/W will get 10 °C hotter than the surrounding air when it dissipates 1 watt of heat. Thus, a heat sink with a low °C/W value is more efficient than a heat sink with a high °C/W value.
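The °C/W rating translates directly into arithmetic: temperature rise equals thermal resistance times dissipated power. A minimal sketch (the function names and the 95 W example processor are our own illustrative assumptions):

```python
def temperature_rise_c(thermal_resistance_c_per_w: float, power_w: float) -> float:
    """Temperature rise above ambient = thermal resistance (degC/W) x power (W)."""
    return thermal_resistance_c_per_w * power_w

def heat_sink_temp_c(ambient_c: float, thermal_resistance_c_per_w: float,
                     power_w: float) -> float:
    """Steady-state heat sink temperature for a given ambient and load."""
    return ambient_c + temperature_rise_c(thermal_resistance_c_per_w, power_w)

# The example from the text: a 10 degC/W heat sink dissipating 1 W
# runs 10 degC above the surrounding air.
print(temperature_rise_c(10.0, 1.0))        # 10.0
# A much lower rating (0.5 degC/W) keeps a hypothetical 95 W processor
# at a safe temperature in 25 degC ambient air.
print(heat_sink_temp_c(25.0, 0.5, 95.0))    # 72.5
```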
A quality heat sink can dissipate thermal energy to an extent that additional cooling components need only be minimal. Heat sink thermal performance is determined by:
- Convection (fin) area. More fins provide more convection area, but care must be taken when a fan is used with a finned heat sink, because in some cases the pressure drop increases in a forced-air system. A shortcoming of conventional fan-top CPU coolers is the reduction of airflow caused by the pressure drop from the airflow obstruction of the chassis cover and the fins of the heat sink itself. Fan performance is rated in cubic feet per minute (CFM) at zero pressure drop, and it is severely compromised by even minimal airflow obstructions on either the intake or exhaust side of the fan.
- Conduction area per fin. A thicker fin conducts heat better than a thinner one, so the most energy-efficient heat sink designs balance many thin fins against fewer thick fins.
- Heat sink base spreading. Heat must be spread as evenly as possible across the base for the fins to work effectively, and a thicker base spreads heat better. However, since server form factors are limited to a specific height to fit in racks, a thicker base reduces fin height, and hence reduces fin area and increases pressure drop.
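The fin-area trade-off above can be sketched with the basic convection relation Q = h · A · ΔT (heat shed = convection coefficient × surface area × temperature difference). The coefficient and area values below are illustrative assumptions, not measured data; they simply show how crowded fins can gain area while losing effective airflow.

```python
def convective_dissipation_w(h_w_per_m2k: float, area_m2: float,
                             delta_t_c: float) -> float:
    """Heat (watts) a surface sheds by convection: Q = h * A * dT."""
    return h_w_per_m2k * area_m2 * delta_t_c

# Hypothetical designs at a 40 degC rise over ambient. Many thin fins
# double the area but, by obstructing airflow, lower the effective
# convection coefficient h in a forced-air system.
few_thick_fins = convective_dissipation_w(h_w_per_m2k=50.0, area_m2=0.02,
                                          delta_t_c=40.0)
many_thin_fins = convective_dissipation_w(h_w_per_m2k=35.0, area_m2=0.04,
                                          delta_t_c=40.0)
print(few_thick_fins, many_thin_fins)  # 40.0 56.0
```

With these (assumed) numbers the added area still wins, but the gap narrows as the pressure drop grows, which is why efficient designs balance fin count against fin thickness.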
Power consumption varies with the server form factor, and it can also vary with the workload being processed. Workloads are increasing across all server types as server processing performance increases, which is another trend driving up server power consumption.
Table 1 shows a sampling of power increase by server form factor over time. The three classes of server are defined by IDC.
- Volume servers cost less than $25,000 and most commonly have one or two processor sockets in a 1U or 2U rack-mount form factor.
- Mid-range servers cost between $25,000 and $499,999 and typically contain two to four or more processor sockets.
- High-end servers cost $500,000 or more and typically contain eight or more processor sockets.
A pedestal server varies in width and is designed to optimize the performance and cooling of the server. Because these systems are not space constrained, they have large heat sinks, multiple fans, and effective air cooling.
Rack and blade servers are designed to fit within a standardized 19-inch mounting rack. The rack server architecture and the limited height for air vents and fans make these servers run hotter, and they thus require more data center power for cooling infrastructure. 2U servers run hotter than pedestal servers, but cooler than 1U servers or blades.
A 2U server, at 3.5 inches high, can use more and larger fans as well as bigger heat sinks, resulting in better cooling capability and thus lower power consumption than a 1U server. Most servers are designed to draw cool, fresh air in from the bottom front of the case and exhaust warm air from the top rear.
Rack server architecture typically locates customer-desirable features, such as disk drives, at the front, forcing hot components such as the server processor and memory to the back. With rack servers, manufacturers try to achieve a balanced, or neutral, airflow. This is the most efficient arrangement; however, many servers end up with a slightly positive airflow, which provides the additional benefit of less dust buildup when dust filters are used.
The 1U form factor, shown in Figure 3, and the blade server are the most difficult to cool because of their component density and the lack of space for cooling airflow. Blade servers offer the benefit of more processing power in less rack space and simplified cabling; as many as 60 blade servers can be placed in a standard-height 42U rack. However, this condensed computing comes at a power price. The typical power demand (power and cooling) for this configuration is more than 4,000 watts, compared with about 2,500 watts for a full rack of 1U servers. Data centers address this demand either by increasing power consumption or with more exotic computer-cooling methods, such as liquid cooling, Peltier-effect heat pumps, heat pipes, or phase-change cooling. These more sophisticated cooling techniques all consume additional power.
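The rack figures above invite a quick per-server comparison. The sketch below derives an illustrative watts-per-server figure from the stated rack totals (roughly 60 blades at ~4,000 W versus 42 1U servers at ~2,500 W); the derived numbers are our own arithmetic, not figures from the source, and the totals include cooling demand.

```python
def watts_per_server(total_w: float, n_servers: int) -> float:
    """Average power demand per server, given a full-rack total."""
    return total_w / n_servers

blade_rack_w, blade_count = 4000.0, 60   # full 42U rack of blades
one_u_rack_w, one_u_count = 2500.0, 42   # full 42U rack of 1U servers

print(round(watts_per_server(blade_rack_w, blade_count), 1))  # 66.7
print(round(watts_per_server(one_u_rack_w, one_u_count), 1))  # 59.5
```

Per server the two are closer than the rack totals suggest; the blade rack's higher total demand comes mostly from packing roughly 40% more servers into the same 42U of space.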