Data center design can challenge even the most experienced professional. Data centers are among the most complex and energy-demanding building environments of the modern age. As data center capacity grows and information technology (IT) equipment (such as servers, which make up a significant share of any data center’s footprint) draws more power, there is an increasing need to reduce cooling loads, upgrade and improve HVAC equipment, and lower related energy and operational costs.
Since data centers are considered a crucial part of the country’s operational and defense infrastructure (“mission critical facilities”), standards have been put in place to protect and enhance data center operability and efficiency. Planning appropriate airflow, circulation, and heating or cooling capacity, and implementing those plans during construction or retrofitting, is vital.
Federal Guidelines for Data Center Energy Efficiency
The federal Office of Management and Budget (OMB) has carefully outlined energy efficiency strategies and requirements for federal data centers in the Data Center Optimization Initiative (Memorandum M-16-19). The Federal Energy Management Program (FEMP) works with federal agencies to help them meet these requirements. FEMP provides a range of tools to help analyze and improve Power Usage Effectiveness (PUE) in data centers.
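PUE, the metric FEMP's tools analyze, is defined as the ratio of total facility energy to the energy delivered to IT equipment, so a value of 1.0 would mean every kilowatt-hour reaches the IT load. The sketch below is a minimal illustration of that ratio; the facility figures in it are made up for demonstration.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative (hypothetical) annual figures for a small data center:
total_kwh = 5_000_000   # everything: IT, cooling, lighting, electrical losses
it_kwh = 3_200_000      # IT equipment only
print(f"PUE = {pue(total_kwh, it_kwh):.2f}")  # 5.0e6 / 3.2e6 = 1.5625 -> 1.56
```

The closer the result is to 1.0, the smaller the share of energy spent on cooling and distribution overhead rather than computation.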
ANSI/ASHRAE Data Center Energy Standards
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) is a U.S. professional association founded to help advance heating, ventilation, air conditioning and refrigeration systems design and construction. ASHRAE publishes Energy Standard for Data Centers in conjunction with the American National Standards Institute (ANSI) to establish the minimum energy efficiency requirements for the design and operation of data centers.
The current 2019 edition, ASHRAE Standard 90.4, was written in code-intended language (mandatory and enforceable), much like Standard 90.1. Standard 90.4 also references 90.1 for service water heating, building envelope, lighting, and other equipment criteria.
The current edition includes the following notable changes:
A reduction of the maximum mechanical load component (MLC) values necessary for compliance, and removal of the design MLC compliance path in favor of the more accurate maximum annualized MLC calculation (Section 6)
A reduction of the maximum electrical loss component (ELC) values for the uninterruptible power supply (UPS) segment, made in recognition of efficiency improvements in core electrical distribution equipment (Section 8)
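At a high level, the annualized MLC relates a year of mechanical (cooling, fan, pump) energy to a year of IT energy. The sketch below illustrates only that ratio; it is an assumption-laden simplification, not the normative Standard 90.4 calculation procedure, and the monthly figures are invented.

```python
# Simplified sketch of an annualized mechanical load component (MLC):
# annual mechanical energy divided by annual IT energy. This is NOT the
# normative ASHRAE 90.4 procedure, just an illustration of the ratio.

def annualized_mlc(mechanical_kwh_by_month: list[float],
                   it_kwh_by_month: list[float]) -> float:
    """Ratio of annual mechanical energy to annual IT energy."""
    annual_it = sum(it_kwh_by_month)
    if annual_it <= 0:
        raise ValueError("annual IT energy must be positive")
    return sum(mechanical_kwh_by_month) / annual_it

# Hypothetical monthly data: mechanical load varies with season, IT is flat.
mech = [95, 90, 100, 110, 130, 150, 160, 155, 135, 115, 100, 95]  # MWh
it = [400] * 12  # MWh per month
print(f"annualized MLC = {annualized_mlc(mech, it):.3f}")
```

Annualizing over twelve months captures seasonal swings in cooling energy (for example, economizer hours in winter) that a single design-day figure would miss, which is the accuracy argument behind dropping the design MLC path.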
Data Center Energy Consumption
The two main energy end-users in a data center are the IT and HVAC equipment. IT equipment consumes electrical energy and simultaneously generates very high internal heat loads. The HVAC systems then consume additional energy to remove that excess heat and maintain appropriate indoor operating conditions. As a result, data centers can consume 100 to 200 times as much electricity as a standard office space. In 2014, American data centers consumed an estimated 70 billion kWh, about 1.8% of total U.S. electricity consumption.
Power Saving Data Center Design and Trends
How IT equipment is arranged and what kind of HVAC equipment is installed both affect final energy usage. By designing data centers with “hot aisles” and “cold aisles” separated by blanking panels or curtains, cool air is conserved for the air inlet points, keeping servers and other IT equipment cool. Hot air from the outlets never reaches the IT equipment and can be vented straight out instead of mixing with the cool supply air, which would decrease efficiency.
Another key area for energy efficiency improvement in data center cooling includes the integration of redundant liquid cooling and thermal storage systems. Mechanical systems designed for data centers must be built to manage machine efficiency, thermal transfer, controls and air/water flow losses, with a significant focus on airflow management and economizer operations for fine-tuning temperature and humidity conditions in cold aisles.
In existing buildings, the following soft guidelines can highlight opportunities for retrofitting to enable greater energy efficiency in data center cooling.
Data Center Efficiency Checklist
Ultimately, greater energy efficiency in data center cooling can be accomplished by planning or retrofitting buildings to accommodate the following:
Reduced IT loads via consolidation and virtualization
Hot aisle and cold aisle implementation via blank panels, curtaining, equipment configuration and cable entrance/exit ports
Installation of an economizer (air or water) and evaporative cooling (direct or indirect)
Indirect liquid cooling systems, centralizing the chiller plant, moving chilled water close to the server, and raising chilled-water set-points, which cuts chiller energy use by roughly 1.4% for each degree the set-point is raised
Networking of process systems controls to raise discharge air temperature and coordinate fans if a complete switch to liquid cooling is not achieved
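The roughly 1.4%-per-degree chiller figure in the checklist compounds quickly over a multi-degree set-point change. The back-of-the-envelope sketch below assumes the per-degree savings compound multiplicatively (an assumption; a simple linear estimate is also common) and is only a rough planning aid, not an engineering calculation.

```python
# Rough sketch: cumulative chiller energy savings from raising the
# chilled-water set-point, assuming ~1.4% savings per degree that
# compound multiplicatively (an assumption for illustration).

def chiller_savings_fraction(degrees_raised: float,
                             per_degree: float = 0.014) -> float:
    """Fraction of chiller energy saved after raising the set-point."""
    return 1.0 - (1.0 - per_degree) ** degrees_raised

for degrees in (2, 5, 10):
    saved = chiller_savings_fraction(degrees)
    print(f"raise set-point {degrees} degrees -> ~{saved:.1%} chiller energy saved")
```

Because each degree saves a percentage of an already-reduced load, a ten-degree increase saves somewhat less than ten times the one-degree figure; either way, set-point changes are among the cheapest retrofits on the list above.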