Throttling Memory to Reduce Power Consumption
Intel processor-based servers include automatic memory throttling features that prevent memory from overheating without requiring additional power for the processor or memory. Intel chipsets support two different memory throttling mechanisms: closed loop thermal throttling (CLTT) and open loop throughput throttling (OLTT).
Closed loop thermal throttling is a temperature-based throttling feature. If the temperature of the installed FB-DIMMs approaches their thermal limit, the system BIOS initiates memory throttling, limiting bandwidth to the FB-DIMMs, thereby capping power consumption and preventing the FB-DIMMs from overheating. By default, the BIOS configures the system for CLTT if it detects functional advanced memory buffer (AMB) thermal sensors on all installed FB-DIMMs. In CLTT mode, the system fans run slower to meet the acoustic limits for the given platform, while still ramping up as needed to keep the parts within temperature specifications under heavy load.
Open loop throughput throttling (OLTT) is based on a hardware bandwidth count and works by preventing the bandwidth from exceeding the throttling settings programmed into the MCH registers. The system BIOS automatically selects OLTT as the memory throttling mechanism if it detects that one or more installed DIMMs lacks a functional AMB thermal sensor. Once the system BIOS enables OLTT, it uses a memory reference code (MRC) throttling algorithm to maximize memory bandwidth for a given configuration. The MRC code relies on serial presence detect (SPD) data read from the installed DIMMs as well as system-level data set through the FRUSDR Utility.
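The two policies differ mainly in their feedback signal: CLTT reacts to measured DIMM temperature, while OLTT simply caps transaction counts per window. A minimal sketch in Python; all thresholds, step sizes, and caps below are illustrative assumptions, not actual chipset register values:

```python
# Illustrative sketch of the two throttling policies described above.
# Thresholds and caps are assumptions, not real chipset settings.

def cltt_bandwidth(temp_c, limit_c=85.0, full_bw=1.0):
    """Closed loop: scale the allowed memory bandwidth down as the
    DIMM temperature (read from the AMB sensor) nears its limit."""
    headroom_c = 10.0  # assumed: begin throttling within 10 degC of the limit
    if temp_c < limit_c - headroom_c:
        return full_bw                    # cool enough: no throttling
    if temp_c >= limit_c:
        return 0.1 * full_bw              # clamp hard at the thermal limit
    # linear ramp-down inside the headroom band
    frac = (limit_c - temp_c) / headroom_c
    return max(0.1, frac) * full_bw

def oltt_allowed(requests_this_window, bw_cap=1000):
    """Open loop: no sensor feedback; refuse further transactions once
    the per-window count reaches the programmed cap."""
    return requests_this_window < bw_cap
```

A real implementation lives in BIOS/MRC and chipset registers; the point here is only the open-loop vs. closed-loop distinction.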
While memory throttling prevents thermal memory failures without consuming additional power, it can negatively impact system performance: program execution slows when memory is shut down or when memory bandwidth is limited by CLTT or OLTT.
Power supplies transform AC power into the DC power used by server circuitry, and this transformation loses some energy. The efficiency of a power supply depends on its load: supplies are most efficient in the range of 50-75 percent utilization. Efficiency drops dramatically below a 50 percent load and does not improve significantly above 75 percent.
Power supplies are typically profiled for efficiency at a very high load factor, usually 80 to 90 percent. However, most data centers run at typical loads of only 10 to 15 percent, so power supply efficiency in practice is often poor. Since most servers today run with 20-40 percent efficient power supplies, they waste the majority of the electricity that passes through them. As a result, today's power supplies consume at least 2 percent of all U.S. electricity production. More efficient power supply designs could cut that usage in half, saving nearly $3 billion.
A high efficiency power supply can significantly reduce overall system power consumption. For example, at a 400W system load, a 71 percent efficient supply draws about 560W at the wall, versus about 470W with an 85 percent efficient supply: a potential power saving of roughly 90-100W from the change to a more efficient supply.
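The wall-power arithmetic in the example is just the DC load divided by supply efficiency. A quick sketch (the 400W load and 560W wall-draw figures come from the example above):

```python
def wall_power(load_w, efficiency):
    """AC draw at the wall: DC load divided by supply efficiency."""
    return load_w / efficiency

def implied_efficiency(load_w, wall_w):
    """Efficiency implied by a measured wall draw."""
    return load_w / wall_w

# A 400W DC load drawing 560W at the wall implies ~71% efficiency;
# an 85% efficient supply would draw ~471W for the same load.
print(implied_efficiency(400, 560))   # ~0.71
print(wall_power(400, 0.85))          # ~471 W
```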
In addition to the main power supply, servers utilize secondary power supplies that can also waste power. These smaller supplies are distributed across the motherboard, located close to the circuits they power. Secondary power supplies used in servers include point-of-load (POL) converters, voltage regulator modules (VRM), and voltage regulator down (VRD) circuits.
The output voltage from a VRM or VRD is programmed by the server processor using a voltage identification (VID) code. Other secondary power supplies, such as POL converters, do not have this feature. VRM and VRD voltage and power requirements vary according to the needs of different server systems. In many servers, approximately 85 percent of the motherboard power is delivered through the VRM/VRD, exclusively for the server's processor.
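The VID mechanism is essentially a lookup from a code asserted by the processor to a programmed output voltage. The table below is purely hypothetical, for illustration only; real code-to-voltage mappings are defined by the applicable VRM/VRD specification:

```python
# Hypothetical VID (voltage identification) table for illustration.
# Actual code-to-voltage mappings come from the VRM/VRD specification,
# not from this sketch.
HYPOTHETICAL_VID_TABLE = {
    0b0000: 1.500,
    0b0001: 1.475,
    0b0010: 1.450,
    0b0011: 1.425,
}

def vrm_output_voltage(vid):
    """The VRM/VRD decodes the processor's VID code into its programmed
    output voltage; POL converters lack this programmability."""
    return HYPOTHETICAL_VID_TABLE[vid]
```

The design point is that the processor, not the regulator, chooses the operating voltage, which is what lets power management lower the voltage when full speed is not needed.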
To minimize power consumption in the power supply and the secondary voltage regulators, a server should run workloads at load levels that keep the power supply in its efficient operating range. Intel multi-core processors work with the VRM/VRDs so that each core can operate in its most efficient state.
Storage Systems and Power Consumption
A basic server with two to four hard disk drives (HDDs) will consume between 24W and 48W for storage. By themselves, a few disks do not consume much power, but external storage systems in large enterprises contain thousands of disks that consume significant amounts of power in the data center. Small businesses typically purchase servers with direct-attached storage, where the server itself contains multiple HDDs. Increasingly, small businesses also purchase networked storage systems shared by client and server systems.
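A rough per-server storage estimate follows directly from per-drive wattages and an assumed duty cycle; the defaults below are assumptions chosen to match the idle and active figures quoted later in this section:

```python
def storage_power_w(n_drives, idle_w=10.0, active_w=15.0, active_frac=0.4):
    """Rough per-server storage draw: blend the idle and active state
    power of each drive by an assumed duty cycle, then sum over drives.
    Default wattages and duty cycle are illustrative assumptions."""
    per_drive = (1 - active_frac) * idle_w + active_frac * active_w
    return n_drives * per_drive

# Two drives land near the 24W figure above; four near 48W.
print(storage_power_w(2), storage_power_w(4))
```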
Fewer storage devices consume less energy, so better utilization is the key. Poor storage management practices waste significant amounts of power. The most common wasteful practice is keeping disks that hold low-activity data spinning 24 hours per day. Compared with better-managed storage solutions that use energy only when data is accessed or written, such underutilized storage increases power and cooling expenses.
Storage utilization figures differ by operating system and type of storage device. On typical server systems, the average usage level for a hard disk is about 40 percent. New disk drive capacity is increasing much faster than drive performance. As a result of this imbalance, storage administrators typically use the redundant array of inexpensive disks (RAID) architecture and striping techniques to increase performance and reliability, but at the price of increasing the number of rotating drives. As utilization levels drop, more devices are needed, increasing total disk costs and energy expense.
Rotating drives consume energy and generate heat. Hard disk drives, like other computer components, are sensitive to overheating. Manufacturers specify a narrow range of operating temperatures, typically +5 to +55°C (occasionally 0 to +60°C), which is smaller than the range for processors, video cards, or chipsets. The reliability and durability of HDDs depend on operating temperature: raising HDD temperature by 5°C has the same effect on reliability as moving from a 10 percent to a 100 percent HDD workload, and each 1°C drop in HDD temperature is equivalent to roughly a 10 percent increase in HDD service life.
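Taking the 10-percent-per-degree rule of thumb above literally, its cumulative effect can be sketched as follows (the text does not say whether the rule compounds or adds linearly per degree; compounding is assumed here):

```python
def service_life_multiplier(temp_drop_c, gain_per_deg=0.10):
    """Compound the assumed 10%-per-degC service-life rule of thumb
    over a given temperature drop. The compounding interpretation is
    an assumption, not stated in the source."""
    return (1 + gain_per_deg) ** temp_drop_c

# A 5 degC drop would yield roughly a 1.6x service-life multiplier
# under this compounded reading.
print(service_life_multiplier(5))
```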
The heat dissipated by an HDD is the product of the current and voltage it draws in its various states. The efficiency of the small motors in an HDD can be less than 50 percent. Power consumption of hard drives is usually measured in the Idle, SATA or SCSI Bus Transfer, Read, Write, Seek, Quiet Seek (where supported), and Start states. Average power consumption of an HDD can be calculated by measuring its power draw both during typical user operations and during intensive (constant) operations.
For every usage model, the ratio of idle to active time for an HDD depends on disk capacity, the applications in use, and their workloads. Average power consumption is estimated with the formulas below, though actual power use will vary.
Average hard disk power consumption for typical operations, such as office work (PAverage), can be estimated by the following formula:

    PAverage = PIdle × 90% + PRead/Write × 10%

where the state terms represent the power consumption of the drive from its voltage sources in each state and the percentages represent the typical share of time the HDD spends in each state. This formula is based on the assumption that read/write HDD operations make up 10 percent of the total time for average office usage.
Average power consumption during intensive hard disk operations, such as defragmenting disks, scanning the surface, or copying files (PConstant), can be calculated by the following formula:

    PConstant = PIdle × 50% + PRead/Write × 50%

This formula is based on the assumption that read/write HDD operations account for at least 50 percent of the time during intensive use.
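Both formulas are the same duty-cycle-weighted average with different read/write fractions. A small sketch, with state powers chosen as illustrative assumptions within the wattage ranges quoted below:

```python
def hdd_power(p_idle_w, p_rw_w, rw_frac):
    """Duty-cycle-weighted average of the idle and read/write state
    powers, matching the two formulas above."""
    return (1 - rw_frac) * p_idle_w + rw_frac * p_rw_w

# Illustrative state powers (assumed, within the ranges quoted below):
p_idle, p_rw = 8.0, 12.0
p_average  = hdd_power(p_idle, p_rw, 0.10)  # office use: 10% read/write
p_constant = hdd_power(p_idle, p_rw, 0.50)  # intensive: >= 50% read/write
print(p_average, p_constant)
```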
The most efficient hard drives consume on average 5-6W at idle. Average SATA interface hard drives consume between 7 and 10W at idle, and today's SATA hard drives typically consume between 10 and 15W in active modes. As with other computer components, the latest generations are much more efficient than previous ones.
Heat dissipation requirements have relaxed because the power usage of hard disks has been steadily declining in recent years. Newer serial interfaces (e.g., SATA II and SAS) do increase power usage and heat dissipation slightly, but the overall heat dissipation trend is downward. In addition, Quiet Seek mode (slowing head movements so that seek noise is no louder than the rotating platters) can sometimes reduce a disk's heat dissipation by more than a newer serial interface adds.
Electricity used by servers doubled between 2000 and 2005, due to growth in the number of servers installed and in the required cooling equipment and infrastructure. Power use varies by server type, the configuration within each server, and the workload being run. All server components generate heat as they function, and an increase in the local ambient temperature inside a server can cause reliability problems with the circuitry. Additional power is needed to keep systems and their components within a safe operating temperature range. As data centers move toward increased server density, the power and heat generated by servers will increase.
On a basic server system today, the processors consume the most power, followed by memory, then disks or PCI slots, the motherboard, and lastly the fans and network interconnects. The recent shift to multi-core processors helps address CPU energy consumption and adds power management features that throttle processor power.
But today's applications are far more processor-intensive, which has triggered a trend toward high-density packaging and increased memory. This trend will make memory the largest power consumer in servers in the years to come, and memory cooling will emerge as the primary thermal challenge.
Power supplies waste energy when converting AC power into DC. Most servers today run with 20-40 percent efficient power supplies that waste over half the electrical power passed through them. At the typical server level, the potential power savings from the change to a 15 percent more efficient power supply can be 100W or more.
Storage power is minimal for an individual hard disk drive, but when servers utilize several disks and interoperate with RAID arrays and networked storage systems, storage power consumption is significant. Greater and cheaper disk drive capacity without performance improvements is leading to the trend of redundant disks and striping for performance and reliability. Good storage management techniques can offset this trend.
Power demands are trending upward, but newer server components are being manufactured to run more efficiently. The latest Intel Xeon processors, power supplies, memory, and even hard disk drives all use less power, include power management features, and create less heat. With newer servers, data centers can evaluate purchases by performance per watt and focus on business optimization rather than total cost of ownership alone.