Article

A Comparative CFD Study of Two Air Distribution Systems with Hot Aisle Containment in High-Density Data Centers

1 Department of Building and Plant Engineering, Hanbat National University, Daejeon 34158, Korea
2 Infra Facility Engineering Team, SK Telecom Co., Ltd., Seoul 04539, Korea
3 Building Energy Center, Energy Division, KCL (Korea Conformity Laboratories), Jincheon 27872, Korea
4 Department of Architectural Engineering, Seoil University, Seoul 02192, Korea
* Authors to whom correspondence should be addressed.
Energies 2020, 13(22), 6147; https://doi.org/10.3390/en13226147
Submission received: 28 August 2020 / Revised: 17 November 2020 / Accepted: 17 November 2020 / Published: 23 November 2020
(This article belongs to the Section G: Energy and Buildings)

Abstract

Removing heat from high-density information technology (IT) equipment is essential for data centers, and maintaining the proper operating environment for IT equipment can be expensive. Rising energy costs and energy consumption have prompted data centers to consider hot aisle and cold aisle containment strategies, which can improve energy efficiency and maintain the recommended inlet air temperature to IT equipment. Containment can also resolve, to some degree, the hot spots found in traditional uncontained data centers. This study analyzes the IT environment of the hot aisle containment (HAC) system, which has been considered an essential solution for high-density data centers. The thermal performance was analyzed for an IT server room with HAC in a reference data center. Computational fluid dynamics analysis was conducted to compare the operating performance of the cooling air distribution systems applied to the raised and hard floors and to examine the difference in the IT environment between the server rooms. Regarding operating conditions, the thermal performance in a state in which the cooling system operated normally was compared with that in which one unit had failed. The thermal performance of each alternative was evaluated by comparing the temperature distribution, airflow distribution, inlet air temperatures of the server racks, and the recirculation ratio from the outlet to the inlet. In conclusion, the HAC system with a raised floor has higher cooling efficiency than that with a hard floor. Selecting the HAC with a raised floor over a hard floor can improve the air distribution efficiency by 28%, which corresponds to a 40% reduction in recirculation ratios above 20% under normal cooling conditions. The main contribution of this paper is that it places the existing, largely theoretical comparison of HAC systems on a realistic footing by developing an accurate numerical model of a data center with a high-density fifth-generation (5G) environment and applying actual operating conditions.

1. Introduction

With the advent of the fourth industrial revolution, data are the key to future industrial development, and their importance and activation plans have been actively discussed [1]. Since data are generated in large quantities by new businesses, such as cloud computing, big data, artificial intelligence (AI), and the Internet of Things (IoT), the importance of the data centers that process them has been increasing [2,3]. Hyper-scale data centers are increasing at a global level. Hyper-scale data centers are significantly larger than existing legacy data centers and have organic structures. A hyper-scale data center generally operates approximately 100,000 information technology (IT) servers and has a size of more than 20,000 m² with flexible scalability [4]. The number of hyper-scale data centers worldwide in 2017 increased by 14.2% in comparison with the previous year. The number of hyper-scale data centers increased to 448 in 2018, and it is expected to increase by 10.17% in 2021 in comparison with 2020 [5]. As data centers move towards the hyper-scale concept, it is necessary to prepare for different modes of operation. In other words, changes in the non-IT infrastructure that maintains the operating IT environment of a data center, such as cooling of the IT equipment as well as power distribution and storage (UPS), are inevitable. Among them, changes in cooling systems are expected to be the largest due to the operation of high-density IT equipment [6,7].
In the hyper-scale data center environment, cooling system standards that can respond to changes in IT industries are required due to the increase in the density of IT equipment (more than 15 kW/rack). The conversion of cooling methods for high-density data centers is in progress. Cooling for removing heat from high-density IT equipment is a key consideration for data centers. To maintain the proper operating environment of IT equipment, the recommended temperature range specified by the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) [8] must be maintained throughout the year, which leads to a significant energy cost.
The increase in energy cost and energy consumption has prompted data centers to consider hot aisle containment (HAC) and cold aisle containment (CAC) strategies [9,10,11]. Separating hot aisles from cold aisles is one of the most essential energy-saving methods that can be applied to new and existing data centers. The use of containment strategies can improve the energy efficiency and maintain the inlet air temperature to IT equipment at a constant level. It can also solve, to some degree, the local overheating (hot spots) observed in existing non-containment data centers [12]. Both methods minimize the mixing of hot and cold air; however, they differ in actual implementation and operation, which significantly affects the IT environment conditions, the power usage effectiveness (PUE), and the economizer hours. The HAC system can save 40% of the annual cooling energy consumption in comparison with the CAC system, and in one reported case this reduced the annual PUE by 13% [13,14].
In this study, the IT environment of the HAC system, which has recently been considered an essential solution for high-density data centers, was analyzed by comparing two air distribution systems equipped with HAC and observing the differences in the IT cooling environment for the same geometry of server racks in the IT room. To this end, the thermal performance of each system was analyzed in a reference data center. Computational fluid dynamics (CFD) analysis was conducted to compare the operating performance of the cooling systems applied to the raised floor and the hard floor and to examine the difference in the IT environment between the server rooms. The results of this study were therefore analyzed with a focus on the following three points.
First, the sustainability of the IT environment under the application of the HAC was examined. Second, the thermal performances of the raised floor and the hard floor were analyzed. Finally, an analysis was conducted on the difference in thermal performance between a state in which the cooling system was operating normally and another in which one cooling unit was not operating due to failure. The potential contribution of this paper is that it places the existing, theoretically simple comparison of HAC systems on a realistic footing. This is achieved by accurately modeling the actual situation in the data center for a high-density IT environment and applying the operating conditions. Because this study targets a reference data center in operation, it will be essential to verify its results through field measurements in the future.

2. Methodology

2.1. Analytic Validity of Computational Fluid Dynamics (CFD) Simulation for Data Centers

A data center represents a very complex environment, where each server can have a different load, and the flow of air around it can shift drastically based solely on the load distribution. The thermal performance of the cooling system in a data center depends on the air path. CFD is an efficient tool to provide detailed information about the airflow, whereas experimental procedures are more expensive and time-consuming. It is especially hard to conduct field measurements for data centers because of the high security level. Another advantage of CFD modeling is the possibility to analyze proposed data center configurations that have not been built yet, in addition to existing data centers. However, it is very important that the simulations be performed with quality and reliability [15]. The uncertainties arising from numerical simulation of such complex problems may cause the predicted values to differ from the true or exact values. These uncertainties and errors apply not only to the CFD code but also to other computer programs used in the analysis process, such as grid generators. Both accuracy and computational efficiency can be improved by using high-order schemes [16,17]. Important parameters such as the quality of the computational grid, the boundary conditions, and the choice of turbulence model must be carefully considered; otherwise, the results of the CFD modeling can be misleading [18]. Without field measurements on the real objects, the most important aspect of CFD simulation is the process by which a result secures its validity. To ensure reliable CFD modeling, the characteristics of the IT equipment must be reflected accurately [19].
First, agreement is needed on performing the CFD analysis with a verified tool. Various types of commercial simulation software are available for data center analysis. The software used for the CFD analysis was 6SigmaRoom from Future Facilities. It is a comprehensive analysis software program equipped with the functions necessary for the design and operation of data centers. It can analyze complex data center cooling systems while considering a variety of variables such as temperature, air volume, humidity, and pressure difference [20]. As a major feature, it has a database containing information on almost all IT equipment and cooling systems, collected and verified by the manufacturers. This makes it possible to accurately model the situation of an actual data center. It covers the latest cooling systems and offers wide flexibility to address a variety of problems in comparison with other commercial CFD software programs [21,22]. Second, it is important in all CFD simulations to perform a grid independence study. The components can be retrieved from an accompanying library or by providing the information in a neutral data format. This means that CFD modeling of an IT room with existing racks and coolers can be more effective because grid independence testing has already been verified for these components. Among the various dedicated data center solutions surveyed, 6SigmaRoom appears the most comprehensive and feature-packed. It has an MPI-enabled solver that scales well across many machines, delivering fast and in-situ results for the simulation [23]. It uses the finite volume method, integrating the governing equations over the control volumes obtained from the discretization of the solution domain. The k-epsilon turbulence model, also known as the two-equation model, is used. This model is preferable for free air streams, such as those in data centers and telecommunication shelters, as well as for problems with thin shear layers. The two variables solved in the transport equations of this model are the kinetic energy of turbulence (k) and the dissipation rate of the kinetic energy of turbulence (ε) [24]. Third, the capability of certain simulation tools is widely accepted in the data center industry. 6SigmaRoom is used in representative data center projects and demonstrates reliable CFD results; there are several other global data center references, such as Samsung, Facebook, and IBM [19,25,26,27,28,29]. Lastly, an accurate and fair comparison of HAC with a raised floor versus that with a hard floor requires similar setups. Many researchers are beginning to ask whether the supply air path is better served by a raised floor or a hard floor, but no conclusion has been reached. The thermal performance of an air distribution system depends on its cooling conditions and IT environment. Although both approaches minimize the loss of cooled air, there are practical differences in their implementation and operation. The test cases in the CFD simulation should therefore have identical room and IT setups. It is all the more important for engineers and researchers to provide an objective basis for deciding the optimal system configuration, which currently depends on the discretion of individual data centers. This comparative CFD study contributes significantly to the deployment of a proper data center cooling system and can reduce the time and money needed to build data centers.
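For reference, the standard k–ε model solves the following transport equations in the form given by Launder and Spalding [24]; the equations and model constants quoted here are the textbook values, stated for context rather than taken from the 6SigmaRoom documentation:

```latex
% Standard k-epsilon transport equations (Launder & Spalding form)
\frac{\partial(\rho k)}{\partial t}
  + \frac{\partial(\rho k u_i)}{\partial x_i}
  = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right]
  + P_k - \rho\varepsilon

\frac{\partial(\rho\varepsilon)}{\partial t}
  + \frac{\partial(\rho\varepsilon u_i)}{\partial x_i}
  = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]
  + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k}
```

Here P_k is the production of turbulent kinetic energy, the turbulent viscosity is μ_t = ρC_μk²/ε, and the standard constants are C_μ = 0.09, C_1ε = 1.44, C_2ε = 1.92, σ_k = 1.0, and σ_ε = 1.3.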

2.2. Overview of Numerical Analysis

A numerical simulation of the air distribution system was carried out for the SK Telecom data center in Korea along with detailed CFD modeling. A three-dimensional (3D) virtual white space was modeled for the IT server room on the fourth floor of the reference data center. The dimensions of the IT server room were 34.160 × 12.810 × 4.800 m. Figure 1a shows the floor plan of the target space for the CFD analysis. The zone of the IT server room that was analyzed had 10 rows (A–J), and 10 server racks (1–10) were placed in each row. The hot aisle spacing was 1220 mm, which corresponded to two floor tiles. The cold aisle spacing was 1830 mm, which corresponded to three floor tiles. The room-based cooling method, which uses six computer room air handling (CRAH) units to serve one zone, was applied. In the analysis, the air distribution method with a raised floor was compared with the hard floor structure without a raised floor, where air is supplied to the cold aisles directly from the CRAH units. As shown in the sectional view of the target space in Figure 1b, HAC was adopted for efficient air distribution, and the air path was connected to the CRAH units by installing return air (RA) ducts and an RA chamber.

2.3. Basic Analysis Model

2.3.1. Information Technology (IT) Server Room (Solution Domain)

Figure 2 shows the solution domain 3D model for one zone of the IT server room that was analyzed. The solution domain was limited to the inside of the IT server room. The heat and mass transfer from the outside of the domain was not considered. The beams and columns that may affect the airflow were modeled as accurately as possible based on the design drawing. Unstructured Cartesian grids were used to conduct the numerical analysis, and the total number of grid elements for the basic model was 2,447,462. It was assumed that the data center was not affected by the external weather and there was no external cooling load, which includes the solar heat gain. Table 1 lists the boundary conditions for the analysis.

2.3.2. Server Rack

The total number of server racks installed in one zone (10 rows) was 100. Table 1 shows the power per rack and the total power of a zone. High- and medium-density server racks were used in the rack analysis model. Figure 3a depicts the powers of these two types of server rack shown in the floor plan. For this investigation, 40 A-type server racks (16 kW/rack) and 60 B-type server racks (8.8 kW/rack) were installed. Because a variety of combinations can be applied to the equipment inside the server racks, a simplified rack model was adopted. Regarding the direction of airflow, cold air was introduced through the front vent and hot air was discharged through the rear vent. The leakage rate inside the server racks was set to 5% under the assumption that the blanking panels were installed in the empty slots of the cabinets. As demonstrated in Figure 3b, the size of the applied cabinet also varied depending on the power density of the rack. It was assumed that the temperature difference (ΔT) between the air inlet and outlet of the IT server was constant at 15 °C. Accordingly, the air volumes required by each server rack were 3200 m3/h (A-type) and 1800 m3/h (B-type).
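As a quick consistency check on these airflow figures, the required volume flow per rack follows from a simple sensible heat balance. The short sketch below is illustrative only; the air density and specific heat are assumed standard values and are not taken from the paper:

```python
# Minimal sketch: required rack airflow from IT power and air-side temperature rise.
# Assumed standard air properties (not stated in the paper).
RHO_AIR = 1.2    # air density, kg/m^3
CP_AIR = 1.005   # specific heat of air, kJ/(kg*K)

def required_airflow_m3h(power_kw: float, delta_t_k: float = 15.0) -> float:
    """Volume flow (m^3/h) needed to remove power_kw at the given air-side delta T."""
    mass_flow_kg_s = power_kw / (CP_AIR * delta_t_k)
    return mass_flow_kg_s / RHO_AIR * 3600.0

print(round(required_airflow_m3h(16.0)))  # A-type rack: ~3200 m^3/h, as in the text
print(round(required_airflow_m3h(8.8)))   # B-type rack: ~1800 m^3/h, as in the text
```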

2.3.3. Hot Aisle Containment

Figure 4 displays the analysis model of HAC, which is the air distribution method for the IT server room. The containment height was the same as the server rack in each row, and the RA ducts were installed at the top so that the returned air could reach the CRAH units. The ducts of the containment were installed at each hot aisle and they were connected again to the RA chamber. It was assumed that there was no leakage in the ducts and the containment had no leakage except for the gap under the entrance door of the HAC. Furthermore, the walls of the ducts and containments were adiabatic.

2.3.4. Computer Room Air Handling (CRAH) Unit

Table 2 describes the detailed specification and geometry of the analysis model of a CRAH unit, which is a cooling system. The down-flow type was applied as the airflow method regardless of the raised floor method and hard floor method. Three electronically commutated (EC) fans were installed in each CRAH unit. The return air was introduced through the RA chamber that was connected to the top of the CRAH unit.

2.4. CFD Analysis Alternatives

Which type of air distribution is the better choice for existing data centers? This question has been extensively discussed by manufacturers, consultants, and end users. In reality, the best containment type largely depends on the constraints of the facility [13]. The logical process for deciding which HAC solution to implement begins with reviewing all potential solutions and selecting the appropriate air supply solution. The behavior of each solution in an emergency, when part of the cooling system fails, should also be considered. A ducted hot aisle containment system (ducted HAC) can be used with either a raised floor or a hard floor-based (room-cooled) air distribution system. The ducted HAC encloses the hot aisle, allowing the rest of the data center to become a large cold-air plenum. As shown in Table 3 and Figure 5, the CFD analysis alternatives were set depending on the operating status of the CRAH units and the installation of the raised floor. The down-flow type was used for the CRAH units for both the raised floor method (ALT-1) and the hard floor method (ALT-2). ALT-3 and ALT-4 represent cooling under the fault condition in which one of the six CRAH units cannot serve.
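For readability, the simulation matrix in Table 3 can be restated as a small configuration structure; the dictionary below simply mirrors Table 3 and is not part of any simulation input format:

```python
# Restatement of Table 3: the four CFD analysis alternatives.
alternatives = {
    "ALT-1": {"containment": "hot aisle", "floor": "raised floor", "cooling": "normal (6 of 6 CRAH units)"},
    "ALT-2": {"containment": "hot aisle", "floor": "hard floor",   "cooling": "normal (6 of 6 CRAH units)"},
    "ALT-3": {"containment": "hot aisle", "floor": "raised floor", "cooling": "fault (5 of 6 CRAH units)"},
    "ALT-4": {"containment": "hot aisle", "floor": "hard floor",   "cooling": "fault (5 of 6 CRAH units)"},
}
```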

3. Numerical Analysis Results

The thermal performance of each alternative was evaluated by comparing the temperature distribution, airflow distribution, inlet air temperatures of the server racks, and the recirculation ratio from the outlet to the inlet.

3.1. Normal Cooling Conditions (ALT-1 and ALT-2)

For normal cooling operation, the thermal performance of the HAC with the raised floor (ALT-1) and with the hard floor (ALT-2) was evaluated under the condition in which all six CRAH units operated. This numerical analysis considered the HAC to be tightly sealed. This assumption allows calculating the maximum efficiency of the CRAH units and ensures a fair comparison. In practice, however, cold air leakage always occurs in an HAC, requiring the CRAH fan airflow to be greater than the IT equipment airflow.

3.1.1. Temperature Distribution in an IT Server Room

Figure 6a depicts the air temperature distribution around the server racks at heights of 0.2 m (bottom), 1.0 m (middle), and 1.8 m (top) from the floor. For ALT-1 and ALT-2, the air temperature for most of the cold aisles was maintained at approximately 16 °C. This was less than 1 °C higher than the supply air (SA) temperature of the CRAH units (15 °C), which indicates that the temperature rise was under excellent control. This demonstrates that the air cooled in the CRAH units was well distributed with little heat loss along the paths to each server. In other words, air recirculation was prevented, as originally intended by the HAC system. The vertical section views of the IT server room in Figure 6b show the air temperature distribution in the vertical sections at the ninth and fourth server rack positions and in the cross-section of the RA chamber. A uniform distribution was observed in the cold aisles, similar to the air temperature distribution in the horizontal sections at each height, whereas air temperatures of 26–30 °C were observed in the hot aisles. In the section where the A-type high-density server racks were installed, the hot aisle air temperature was higher, which affected some of the surrounding areas. The CFD analysis results showed that the air temperature distributions of the hot and cold aisles in the IT server room were similar for the raised floor (ALT-1) and the hard floor (ALT-2).

3.1.2. Airflow Distribution in an IT Server Room

Figure 7a shows the distribution of the airflow that is discharged from the CRAH units at 0.2 m height from the floor. This height is inside the raised floor for ALT-1 and it is just above the hard floor for ALT-2. The vertical section view of the cold aisle in Figure 7b illustrates that high airflow velocity of more than 10 m/s to the third server rack position is maintained in ALT-1. The airflow velocity to the sixth server rack position is approximately 6 m/s, and it is less than 4 m/s for the rest. On the other hand, in ALT-2, high airflow velocity of more than 10 m/s is maintained to the sixth server rack position. The airflow velocity to the last server rack is maintained at approximately 6 m/s, and it exhibits relatively high-speed airflow distribution in comparison with ALT-1. A relatively low airflow velocity of less than 2 m/s was maintained in the hot aisles for ALT-1 and ALT-2 due to the containment structure. It appears that the heat was effectively discharged through the ducts that are connected to the upper part. The simulation analysis results showed that ALT-2 exhibited a relatively higher airflow speed than ALT-1 for the cold aisles in the IT server room. In particular, a high airflow velocity of more than 10 m/s caused a negative relative pressure, making it difficult to supply a proper air volume to the inlet of the server racks.

3.1.3. Inlet Air Temperature of the Server Racks

The most essential aspect in evaluating the thermal performance of a cooling system in a data center is the distribution of the inlet air temperature of the server racks. If it is too low, energy is wasted; if it is too high, the heat of the servers cannot be effectively removed [30]. At the rack level, systems have the ability to shut down when a rise in temperature threatens their functioning. To avoid this, ASHRAE suggests monitoring the inlets at the bottom, middle, and top of the rack, maintaining the recommended (18 °C to 27 °C) as well as allowable (15 °C to 32 °C) thermal ranges (Figure 8b). The ASHRAE thermal guidelines [8] have been widely used as the IT environment classes for data centers, and they propose standards for the proper server inlet air temperature. The adequacy of the results of this study was determined by applying these temperature ranges. Figure 8a shows the ID numbers, which indicate the positions of the 100 server racks (A01–J10), and Figure 8b shows the recommended and allowable ranges of the server inlet air temperature. In the upper allowable temperature range of 27–32 °C suggested by ASHRAE, the IT servers can operate without significant problems. However, if this condition is maintained over a long period of time, it continuously affects the IT equipment, thereby increasing the possibility of downtime. Server rack inlet air temperatures under 27 °C lie within the range recommended by ASHRAE, and temperatures above 27 °C need not be classified further. Therefore, the temperature scale in Figure 8b ends at 30 °C, beyond which the contours need not be clearly segregated by color.
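As a concrete illustration of how the inlet temperatures can be screened against these classes, the sketch below classifies a rack inlet temperature against the ASHRAE recommended (18–27 °C) and allowable (15–32 °C) ranges quoted above; the function name and the "out of range" label are illustrative and not from the paper:

```python
# Minimal sketch: classify a server rack inlet temperature against the
# ASHRAE recommended (18-27 C) and allowable (15-32 C) ranges cited in the text.
def classify_inlet_temperature(t_inlet_c: float) -> str:
    if 18.0 <= t_inlet_c <= 27.0:
        return "recommended"
    if 15.0 <= t_inlet_c <= 32.0:
        return "allowable"
    return "out of range"

print(classify_inlet_temperature(20.6))  # ALT-1 maximum -> 'recommended'
print(classify_inlet_temperature(30.2))  # ALT-2 maximum -> 'allowable'
```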
Figure 9 presents the mean inlet air temperature distribution of 100 server racks for ALT-1 that adopted the raised floor. All the server racks operated within the recommended temperature range of ASHRAE, which resulted in a very safe state. According to the airflow distribution analyzed above, a proper air volume was not supplied to the first server racks with high-speed airflow and the tenth server racks with low-speed airflow in comparison with the other server racks, thus resulting in relatively high inlet air temperatures for the server racks. Nevertheless, the maximum inlet air temperature was 20.6 °C, which is adequate.
In the case of ALT-2, which adopted the hard floor, as shown in Figure 10, each CRAH unit blew 39,000 m³/h of cooled air at 15 °C into the uncontained cold aisles at high speed, and this air spread rapidly throughout the IT server room. This airflow velocity, which is higher than that in ALT-1, does not allow a sufficient air volume to be supplied to many server racks, thereby causing a high inlet air temperature distribution. It was determined that 97% of the server racks satisfied the recommended temperature range of ASHRAE, while approximately 3% of them (the three server racks closest to the CRAH units) exceeded it. Among them, two servers exhibited a maximum inlet air temperature of 30.2 °C, which was 9.6 °C higher than in ALT-1. In conclusion, for ALT-2, 97% of the 100 server racks satisfied the recommended temperature range of ASHRAE and 3% met the allowable temperature range. The numerical analysis results showed that the inlet air temperature distribution of the server racks was much more stable for ALT-1 than for ALT-2. In other words, the stable supply of cooled air through the raised floor improves the overall air distribution efficiency of the room-based cooling system.

3.1.4. Recirculation Ratio

For cooling the IT server room, the cooled air supplied from the CRAH units is introduced into the server racks to remove the heat, and the warmer air then returns to the CRAH units. Two processes frequently disturb this path. During air recirculation, the air warmed by the heat of the IT servers is reintroduced into the server racks instead of returning to the CRAH units. During air by-pass, the cooled air supplied from the CRAH units immediately returns to the CRAH units instead of being introduced into the server racks. Air recirculation and air by-pass are the elements with the largest influence on the thermal performance and efficiency of the cooling and air distribution systems in data centers. Previously, the return temperature index (RTI) was proposed to describe recirculation and by-pass [31,32]. Under the ideal RTI condition, the air-conditioning cooling supply is 100% matched by its utilization. RTI is applicable to traditional uncontained room-based cooling systems. For containment structures such as HAC and CAC, however, RTI does not change significantly when the leakage rate between the hot and cold aisles is maintained at 5% or less. Therefore, RTI was not well suited to ALT-1 and ALT-2 with the HAC system. Instead, the recirculation ratio was analyzed based on the amount of air recirculated inside the server racks to which a proper air volume was not supplied. The recirculation ratio is obtained by subtracting from 100% the ratio of the air volume actually supplied to the server racks (sQ_rack) to the air volume required to remove the heat from the server racks (rQ_rack); it is calculated using Equation (1).
RR (recirculation ratio) = 1 − sQ_rack / rQ_rack  (1)
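As a numerical illustration of Equation (1), the sketch below computes RR from the supplied and required rack airflows; the example supplied airflow is back-calculated from the reported maximum ratio for illustration and is not a simulated result:

```python
# Minimal sketch of Equation (1): RR = 1 - sQ_rack / rQ_rack.
def recirculation_ratio(supplied_m3h: float, required_m3h: float) -> float:
    """Fraction of the required rack airflow made up by recirculated hot air."""
    return max(0.0, 1.0 - supplied_m3h / required_m3h)

# With the paper's A-type rack requirement of ~3200 m^3/h, a rack actually
# receiving about 2378 m^3/h would show RR ~ 0.257 (25.7%), the maximum
# reported for ALT-1 (supplied value back-calculated for illustration).
print(round(recirculation_ratio(2378.0, 3200.0), 3))
```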
Figure 11 illustrates the distribution of the recirculation ratios of the 100 server racks. The overall pattern shows that the server racks with high inlet air temperatures exhibited high recirculation ratios: because a proper air volume to remove the heat was not supplied, recirculation occurred inside the racks and the inlet air temperature increased. In ALT-1 and ALT-2, a high heat load of 16 kW/rack occurs in rows A through D, where the A-type high-density server racks were applied; thus, a high air volume needs to be supplied accordingly. Because the supplied air volume was not sufficient for these racks, their recirculation ratio was relatively high. For rows E through J, where the B-type medium-density server racks were applied, a proper air volume to remove the heat was supplied. In addition, the air volume was not sufficient for most of the first server racks, which were closest to the CRAH units, and they exhibited high recirculation ratios. The maximum recirculation ratio of 25.7% for ALT-1 implies that the supplied air volume was 25.7% less than the required air volume. However, this was not a problem because the corresponding inlet air temperature was 20.6 °C. The maximum recirculation ratio of ALT-2 was 50%, which indicates the presence of server racks in a serious condition. The inlet air temperature of these server racks was as high as approximately 30 °C. In other words, only half of the air volume required to remove the heat was supplied, and the temperature of the cooled air supplied at 15 °C from the CRAH units increased by more than 15 °C. The CFD analysis results showed that the recirculation ratio distribution of the server racks was much more stable for ALT-1 than for ALT-2, which corresponded to a 42% reduction in recirculation ratios above 20%.

3.1.5. Operational Net Cooling Power

Table 4 describes the cooling system capacity during the operation of each CRAH unit. CRAH-01 and CRAH-02, which covered the high-density server racks, operated close to the rated cooling capacity. On the other hand, in the section where medium-density rack servers were installed and there was sufficient free space, approximately 70% of the rated capacities of CRAH-03 through CRAH-06 was used. The simulation analysis results showed that there was almost no difference in the cooling system load between the raised floor (ALT-1) and the hard floor (ALT-2). This indicates that the performance and efficiency of the IT environment, cooling system, and the air distribution system may significantly differ depending on the raised floor and the hard floor configuration, even if cooled air is supplied at the same temperature (15°C) by using the cooling systems with the same capacity.
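For context, the net cooling power values in Table 4 are consistent with a sensible heat balance over each CRAH unit at its rated airflow of 39,000 m³/h. The sketch below reproduces the CRAH-01 figure under assumed standard air properties; it is a plausibility check, not the authors' post-processing code:

```python
# Minimal sketch: net cooling power of a CRAH unit from airflow and air-side temperatures.
# Assumed standard air properties (not stated in the paper).
RHO_AIR = 1.2    # kg/m^3
CP_AIR = 1.005   # kJ/(kg*K)

def net_cooling_power_kw(airflow_m3h: float, t_return_c: float, t_supply_c: float) -> float:
    mass_flow_kg_s = airflow_m3h / 3600.0 * RHO_AIR
    return mass_flow_kg_s * CP_AIR * (t_return_c - t_supply_c)

# CRAH-01 in ALT-1: 39,000 m^3/h, return 31.9 C, supply 15.0 C -> ~221 kW (Table 4: 220.0 kW)
print(round(net_cooling_power_kw(39_000, 31.9, 15.0), 1))
```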

3.2. Cooling Fault Conditions (ALT-3 and ALT-4)

Regarding cooling fault operation, the thermal performances of the HAC with the raised floor (ALT-3) and the hard floor (ALT-4) were evaluated under the condition that five out of the six CRAH units operated and one unit experienced failure. For the cooling system, it was assumed that the operation of CRAH-03, which covered both high- and medium-density server racks, was stopped, as depicted in Figure 5c. As indicated, the cooling system was designed with N+1 redundancy for such an emergency; the capacities of the six CRAH units were sized so that the entire IT load could be covered by five operational CRAH units.

3.2.1. Temperature Distribution in an IT Server Room

Figure 12a presents the air temperature distribution around the server racks at each height from the floor. For ALT-3 and ALT-4, approximately 16 °C was maintained for most of the cold aisles. The rise in air temperature was under excellent control in a similar manner to the normal cooling condition. This is because the cooled air that was produced from the five CRAH units was well distributed without significant heat loss in the paths to each server, even though one CRAH unit did not serve. The vertical section views of the IT server room in Figure 12b show that a uniform distribution occurred in the cold aisles in a manner similar to the air temperature distribution of the horizontal sections for each height. The numerical analysis results under the cooling fault conditions showed that the air temperature distribution of the hot and cold aisles in the IT server room was similar for the HAC with a raised floor (ALT-3) and a hard floor (ALT-4). Almost normal IT environment maintenance was possible with the sufficient capacity of the five CRAH units.

3.2.2. Airflow Distribution in an IT Server Room

Figure 13a shows the distribution of the airflow discharged from the CRAH units at a height of 0.2 m from the floor. Compared with the normal cooling conditions in Figure 7a, a low airflow of 2 m/s or less occurred in rows D through F, which were covered by CRAH-03, because the cooled air had to be supplied from the neighboring CRAH units. In addition, the airflow velocity increased in comparison with the normal cooling conditions because the remaining five CRAH units had to supply a larger airflow rate. The vertical section view of the cold aisle in Figure 13b shows that a high airflow velocity of 10 m/s or more was maintained up to the fourth server rack position in ALT-3; the airflow velocity was approximately 6 m/s up to the eighth server rack position, and 4 m/s or less for the rest. On the other hand, in ALT-4, a high airflow velocity of 10 m/s or more was maintained up to the eighth server rack position. The airflow velocity at the last server rack was maintained at approximately 8 m/s, thus exhibiting a relatively high-speed airflow distribution in comparison with ALT-3. As for the hot aisles, a stable airflow velocity was maintained for ALT-3 and ALT-4 due to the containment structure, and heat was effectively discharged through the ducts connected to the upper part. The CFD analysis results showed that ALT-4 exhibited a relatively higher airflow velocity than ALT-3 in the cold aisles of the IT server room. In particular, the number of sections that exhibited high-speed airflow of 10 m/s or more increased in comparison with the normal cooling conditions of ALT-1 and ALT-2, which made it difficult to supply a proper air volume to the inlets of the server racks.

3.2.3. Inlet Air Temperature of the Server Racks

Figure 14 shows the mean inlet air temperature distribution supplied to the server racks of ALT-3. The IT environment was maintained within the recommended temperature range of ASHRAE for all 100 server racks. The first server racks with high-speed airflow and the tenth server racks with low-speed airflow exhibited relatively high inlet air temperatures in comparison with the other server racks. In addition, the inlet air temperatures were higher in rows D and E than in the other rows due to the downtime of CRAH-03. The maximum inlet air temperature was 20.2 °C, which is appropriate. In the case of ALT-4 in Figure 15, each of the five operating CRAH units supplied 46,000 m³/h of cooled air at 15 °C to the cold aisles throughout the IT server room at a high speed. Due to the downtime of CRAH-03, the inlet air temperature was high in rows D and E, as a sufficient air volume was not supplied to these server racks. Approximately 97% of the server racks satisfied the recommended temperature range of ASHRAE and 3% met the allowable temperature range specified by ASHRAE. Among them, the maximum inlet air temperature was as high as 30.4 °C. The simulation analysis results under the cooling fault conditions showed that the inlet air temperature distribution of the server racks was more stable for ALT-3 than for ALT-4. Even with the failure of one CRAH unit, it was possible to ensure a relatively stable supply of cooled air through the raised floor; in addition, there was no overall imbalance of the room-based cooling system when the required air volume was satisfied. As for the hard floor, the failure of one CRAH unit made it difficult to supply cooled air to the corresponding cold aisles because the server racks act as partitions that divide the space.

3.2.4. Recirculation Ratio

Figure 16 shows the recirculation ratios of the 100 server racks. The overall pattern again shows that the server racks with high inlet air temperatures exhibited high recirculation ratios: because an adequate air volume to remove the heat was not supplied, recirculation occurred inside the racks and the inlet air temperature increased. For ALT-3 and ALT-4, the air volume was not sufficient for the first server racks, which were closest to the CRAH units. The maximum recirculation ratio of ALT-3 was 24.2%. This was slightly lower than that under the normal cooling conditions, but the number of server racks with a recirculation ratio of more than 20% increased from two to four, meaning that their supplied air volume was more than 20% lower than the required air volume. This was not a problem because their inlet air temperatures ranged from 19.6 to 20.2 °C. In the case of ALT-4, the number of server racks with a recirculation ratio of more than 40% increased to two, and the number of server racks with a serious recirculation ratio of 50% increased to three. The inlet air temperatures of these server racks were very high (approximately 26.1–30.4 °C). The CFD analysis results under the cooling fault conditions showed that air recirculation occurred in a greater number of server racks in ALT-4 than in ALT-3, corresponding to a 40% increase in recirculation ratios above 20%. The IT environment is expected to be degraded if this phenomenon continues.

3.2.5. Operational Net Cooling Power

Table 5 shows the cooling system capacity during the operation of each CRAH unit under the cooling fault conditions. CRAH-01 and CRAH-02, which covered the high-density server racks, operated at the maximum cooling capacity. In the case of CRAH-04 through CRAH-06 for the section where medium-density server racks were installed, approximately 85% of the maximum capacity was used. The simulation analysis results under the cooling fault conditions showed that there was almost no difference in the cooling system load between the raised floor (ALT-3) and the hard floor (ALT-4). It was determined, however, that the IT environment of ALT-4 is unfavorable, even if cool air is supplied at the same temperature (15 °C) by using cooling systems with the same capacity.

4. Discussion: Cooling Efficiency Evaluation

Based on the CFD simulation results, the IT environments of ALT-1 through ALT-4 were compared in terms of the inlet air temperature supplied to each server rack, as shown in Figure 17a. The boxplot shows the distribution of the data for each alternative, including the maximum, minimum, and median values as well as the quartiles, and allows the four alternatives to be compared. When the cooling system operated normally with a supply air temperature of 15 °C, the mean inlet air temperatures of the raised floor (ALT-1) and the hard floor (ALT-2) were 16.5 °C and 16.9 °C, respectively, showing no significant difference. The maximum inlet air temperatures, however, were 20.6 °C and 30.2 °C, respectively, a difference of 10 °C or more, and some server racks of ALT-2 exceeded the recommended temperature range of ASHRAE. For the raised floor, regardless of whether the cooling system operated normally or under the fault condition, ALT-1 and ALT-3 exhibited very similar mean and maximum inlet air temperatures, standard errors, and standard deviations. Likewise, for the hard floor, the IT environment of ALT-2 was very similar to that of ALT-4. In the case of the cooling system failure, the mean inlet air temperatures of the raised floor (ALT-3) and the hard floor (ALT-4) were 16.6 °C and 17.1 °C, respectively, a difference of approximately 0.5 °C. The corresponding maximum inlet air temperatures were 20.2 °C and 30.4 °C, respectively; ALT-4 exhibited a temperature 10 °C or more higher, and some of its server racks exceeded the recommended temperature range of ASHRAE. The interval plot of the 95% confidence intervals in Figure 17b shows that the raised floor HAC systems of ALT-1 and ALT-3 maintained more uniform inlet air temperature ranges than the hard floor HAC systems of ALT-2 and ALT-4 for both normal and fault operation of the cooling system. Selecting the HAC with the raised floor over the hard floor can improve the air distribution efficiency by 28% under normal cooling conditions. Nonetheless, the hard floor HAC systems also maintained a proper IT environment.
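As an indication of how the boxplot and 95% confidence interval comparison in Figure 17 can be reproduced, the sketch below computes the summary statistics and a normal-approximation confidence interval from an array of per-rack inlet temperatures; the sample values are placeholders, since the full per-rack results are published only graphically:

```python
# Minimal sketch: summary statistics and 95% confidence interval of rack inlet
# temperatures, as used for the boxplot/interval-plot comparison in Figure 17.
# `inlet_temps` holds placeholder values, not data from the paper.
import statistics

def summarize(inlet_temps):
    n = len(inlet_temps)
    mean = statistics.mean(inlet_temps)
    sd = statistics.stdev(inlet_temps)
    half_width = 1.96 * sd / n ** 0.5  # normal-approximation 95% CI half-width
    return {
        "mean": round(mean, 2),
        "min": min(inlet_temps),
        "max": max(inlet_temps),
        "ci95": (round(mean - half_width, 2), round(mean + half_width, 2)),
    }

inlet_temps = [16.2, 16.4, 16.5, 16.8, 17.0, 18.1, 20.6]  # placeholder values only
print(summarize(inlet_temps))
```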

5. Conclusions

In this study, CFD analysis was conducted to compare two air distribution methods with HAC for the same geometry of server racks in the IT room. The thermal performances of the raised floor and hard floor air distribution systems were compared, and the IT environment in a reference data center was examined. Regarding the operating conditions, the difference in thermal performance between the state in which the cooling system operated normally and that in which one cooling unit had failed was analyzed. In conclusion, the HAC system with a raised floor has higher cooling efficiency than that with a hard floor. The major results of this study can be summarized as follows.
  • The most important function of the HAC system is to effectively reduce the air re-circulation and by-pass by physically dividing the cold aisles and hot aisles. For all of ALT-1 through ALT-4, the temperature increase was well controlled in the entire cold aisle area because the temperature increased by approximately 1.0 °C or less from the supply air temperature of the CRAH units (15 °C). This indicates that the HAC system is effective in terms of temperature control and it maintains an appropriate IT environment.
  • Considering the IT environment, which is the server rack operating condition, applying the raised floor (ALT-1 and ALT-3) was determined to ensure more stable operation than applying the hard floor (ALT-2 and ALT-4). Selecting the HAC with a raised floor over a hard floor can improve the air distribution efficiency by 28%, which corresponds to a 40% reduction in recirculation ratios above 20% under normal cooling conditions.
  • Considering the inlet air temperatures and recirculation ratios of the server racks, there was no decisive difference, of a kind that would severely degrade the IT environment, between the normal cooling condition in which all CRAH units operated and the cooling fault condition in which one unit did not operate. For the stable operation of all server racks while one CRAH unit is not operating, the capacity of the total cooling system must be N + 1 or higher.
  • Even under normal cooling conditions, the velocity of the airflow discharged from the CRAH units is significantly high since the required air volume is large. In addition, a smaller temperature difference between the server rack inlet and outlet requires greater air volume. For an ordinary air volume, the velocity of the airflow discharged from the CRAH units is 10 m/s or higher. This high-speed airflow is most likely to cause a recirculation problem in the server racks that are close to the CRAH units.
  • To address this problem, it is necessary to consider methods for reducing the airflow velocity inside the raised floor or on the hard floor. This includes installing artificial resistive films that can delay an extremely fast airflow.
In the future, further investigation of cooling energy savings through sensitivity analysis and field measurements of the HAC system applied as a result of this study is required. We believe that it is necessary to quantify the direct cooling energy reduction and the coefficient of performance (COP) improvement obtained by changing the supply air temperature.

Author Contributions

Conceptualization, J.C., J.W.; methodology, J.C., B.P.; formal analysis, J.C., J.W.; data curation, J.C., J.W., B.P.; writing-original draft preparation, J.C., B.P., T.L.; writing-review and editing, J.C., T.L.; supervision, J.C., T.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by a grant from the research fund of the MOTIE (Ministry of Trade, Industry and Energy) of the Republic of Korea in 2020 (project number 20182010600010).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cho, J.; Woo, J. Development and experimental study of an independent row-based cooling system for improving thermal performance of a data center. Appl. Therm. Eng. 2020, 169, 114857. [Google Scholar] [CrossRef]
  2. Beaty, D.L. Internal IT load profile variability. ASHRAE J. 2013, 55, 72–74. [Google Scholar]
  3. Luo, Y.; Andresen, J.; Clarke, H.; Rajendra, M.; Maroto-Valer, M. A decision support system for waste heat recovery and energy efficiency improvement in data centres. Appl. Energy 2019, 250, 1217–1224. [Google Scholar] [CrossRef]
  4. Barnett, T.; Jain, S.; Sumits, A.; Andra, U.; Khurana, T. Cisco Global Cloud Index 2015–2020; Cisco Knowledge Network (CKN) Session; Cisco Public: San Jose, CA, USA, 2016. [Google Scholar]
  5. Gartner Research. Forecast: Data Centers, Worldwide, 2015–2022; Gartner, Inc.: Stamford, CT, USA, 2018. [Google Scholar]
  6. Kheirabadi, A.C.; Groulx, D. Cooling of server electronics: A design review of existing technology. Appl. Therm. Eng. 2016, 105, 622–638. [Google Scholar] [CrossRef]
  7. Zhang, K.; Zhang, Y.; Liu, J.; Niu, X. Recent advancements on thermal management and evaluation for data centers. Appl. Therm. Eng. 2018, 142, 215–231. [Google Scholar] [CrossRef]
  8. ASHRAE TC 9.9. Thermal Guideline for Data Processing Environments; American Society of Heating Refrigerating and Air-Conditioning Engineers, Inc.: Atlanta, GA, USA, 2015. [Google Scholar]
  9. Wang, C.-H.; Tsui, Y.-Y.; Wang, C.-C. On cold-aisle containment of a container datacenter. Appl. Therm. Eng. 2017, 112, 133–142. [Google Scholar] [CrossRef]
  10. Phan, L.; Lin, C.-X. A multi-zone building energy simulation of a data center model with hot and cold aisles. Energy Build. 2014, 77, 364–376. [Google Scholar] [CrossRef]
  11. Tatchell-Evans, M.; Kapur, N.; Summers, J.; Thompson, H.; Oldham, D. An experimental and theoretical investigation of the extent of bypass air within data centres employing aisle containment and its impact on power consumption. Appl. Energy 2017, 186, 457–469. [Google Scholar] [CrossRef]
  12. Cho, J.; Kim, B.S. Evaluation of air management system’s thermal performance for superior cooling efficiency in high-density data centers. Energy Build. 2011, 43, 2145–2155. [Google Scholar] [CrossRef]
  13. Lin, P.; Avelar, V.; Niemann, J. Implementing Hot and Cold Air Containment in Existing Data Centers; APC White Paper 153; Schneider Electric-Data Center Science Center: Foxboro, MA, USA, 2013. [Google Scholar]
  14. Niemann, J.; Brown, K.; Avelar, V. Hot-Aisle vs. Cold-Aisle Containment for Data Centers; APC White Paper 135 rev02; Schneider Electric-Data Center Science Center: Foxboro, MA, USA, 2011. [Google Scholar]
  15. Wibron, E.; Ljung, A.-L.; Lundström, T.S. Computational Fluid Dynamics Modeling and Validating Experiments of Airflow in a Data Center. Energies 2018, 11, 644. [Google Scholar] [CrossRef] [Green Version]
  16. Tsoutsanis, P.; Antoniadis, A.F.; Drikakis, D. WENO schemes on arbitrary unstructured meshes for laminar, transitional and turbulent flows. J. Comput. Phys. 2014, 256, 254–276. [Google Scholar] [CrossRef]
  17. Tsoutsanis, P.; Kokkinakis, I.W.; Könözsy, L.; Drikakis, D.; Williams, R.; Youngs, D.L. Comparison of structured- and unstructured-grid, compressible and incompressible methods using the vortex pairing problem. Comput. Methods Appl. Mech. Engrg. 2015, 293, 207–231. [Google Scholar] [CrossRef] [Green Version]
  18. Misiulia, D.; Andersson, A.G.; Lundström, T.S. Computational Investigation of an Industrial Cyclone Separator with Helical-Roof Inlet. Chem. Eng. Technol. 2015, 38, 1425–1434. [Google Scholar] [CrossRef]
  19. Cho, J.; Park, B.; Jeong, Y. Thermal Performance Evaluation of a Data Center Cooling System under Fault Conditions. Energies 2019, 12, 2996. [Google Scholar] [CrossRef] [Green Version]
  20. Data Center CFD with 6SigmaRoom. Available online: https://www.futurefacilities.com/products/6sigmaroom/ (accessed on 28 August 2020).
  21. Cho, J.; Yang, J.; Park, W. Evaluation of air distribution system’s airflow performance for cooling energy savings in high-density data centers. Energy Build. 2014, 68, 270–279. [Google Scholar] [CrossRef]
  22. Schlichting, A.D. Data Center Energy Efficiency Technologies and Methodologies: A Review of Commercial Technologies and Recommendations for Application to Department of Defense Systems; The MITRE Corporation: Bedford, MA, USA, 2016. [Google Scholar]
  23. Kopecki, A. Platform for Optimising the Design and Operation of Modular Configurable IT Infrastructures and Facilities with Resource-Efficient Cooling; The High Performance Computing Centre (HLRS); The University of Stuttgart (USTUTT): Stuttgart, Germany, 2011. [Google Scholar]
  24. Launder, B.E.; Spalding, D.B. The numerical computation of turbulent flows. Comput. Methods Appl. Mech. Eng. 1974, 3, 269–289. [Google Scholar] [CrossRef]
  25. Frachtenberg, E.; Lee, D.; Magarelli, M.; Mulay, V.; Park, J. Thermal Design in the Open Compute Datacenter (Facebook). In Proceedings of the 13th InterSociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems, San Diego, CA, USA, 30 May–1 June 2012; pp. 530–538. [Google Scholar]
  26. Alissa, H.A.; Nemati, K.; Sammakia, B.; Ghose, K.; Seymour, M.; Schmidt, R. Innovative Approaches of Experimentally Guided CFD Modeling for Data Centers. In Proceedings of the 31st Semiconductor Thermal Measurement, Modeling & Management Symposium (SEMI-THERM), San Diego, CA, USA, 15–19 March 2015; pp. 176–184. [Google Scholar]
  27. Dai, J.; Ohadi, M.M.; Das, D.; Pecht, M.G. Optimum Cooling of Data Centers: Application of Risk Assessment and Mitigation Techniques; Springer Science+Business Media: New York, NY, USA, 2014. [Google Scholar]
  28. Cheong, K.H.; Tang, J.W.; Koh, J.M.; Yu, C.M.; Acharya, U.R.; Xie, N.-G. A Novel Methodology to Improve Cooling Efficiency at Data Centers. IEEE Access. 2019, 7, 153799–153809. [Google Scholar] [CrossRef]
  29. Saini, S.; Shahi, P.; Bansode, P.; Siddarth, A.; Agonafer, D. CFD Investigation of Dispersion of Airborne Particulate Contaminants in a Raised Floor Data Center. In Proceedings of the 36th Semiconductor Thermal Measurement, Modeling & Management Symposium (SEMI-THERM), San Diego, CA, USA, 16–20 March 2020; pp. 39–47. [Google Scholar]
  30. Herrlin, M.K. Rack cooling effectiveness in data centers and telecom central offices: The rack cooling index (RCI). ASHRAE Trans. 2005, 111, 725–731. [Google Scholar]
  31. Herrlin, M.K. Improved data center energy efficiency and thermal performance by advanced airflow analysis. In Proceedings of the Digital Power Forum, San Francisco, CA, USA, 10 September 2007; pp. 10–12. [Google Scholar]
  32. Xie, M.; Wang, J.; Liu, J. Evaluation metrics of thermal management in data centers based on exergy analysis. Appl. Therm. Eng. 2019, 147, 1083–1095. [Google Scholar] [CrossRef]
Figure 1. The test white space of information technology (IT) server room for computational fluid dynamics (CFD) analysis; (a) a floor plan, (b) a sectional view and (c) cooling infrastructure.
Figure 2. CFD modeling; (a) 3D solution domain and (b) unstructured Cartesian grid.
Figure 3. Basic analysis model; (a) server rack arrangement in IT server room, (b) server rack modeling and (c) CRAH unit modeling.
Figure 4. Hot aisle containment (HAC) model; (a) unit modeling of a hot aisle containment and (b) hot aisle containment system with RA chamber and CRAH units.
Figure 5. CFD analysis alternatives; (a) HAC with raised floor (ALT-1, ALT-3), (b) HAC with hard floor (ALT-2, ALT-4) and (c) Under cooling fault condition (ALT-3, ALT-4).
Figure 6. Contours of temperature in normal cooling condition; (a) a horizontal section view and (b) a vertical section view.
Figure 7. Contours of airflow velocity in normal cooling condition; (a) a horizontal section view and (b) a vertical section view.
Figure 8. (a) Identification number of server racks and CRAH units and (b) American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) environmental classes for data centers (2015).
Figure 9. ALT-1 HAC with raised floor—the mean inlet air temperature of 100 server racks in normal cooling condition.
Figure 10. ALT-2 HAC with hard floor—the mean inlet air temperature of 100 server racks in normal cooling condition.
Figure 11. The recirculation ratio of 100 server racks in normal cooling condition; (a) ALT-1: HAC raised floor and (b) ALT-2: HAC hard floor.
Figure 12. Contours of temperature under cooling fault condition; (a) a horizontal section view and (b) a vertical section view.
Figure 13. Contours of airflow velocity under cooling fault condition; (a) a horizontal section view and (b) a vertical section view.
Figure 14. ALT-3 HAC with raised floor—the mean inlet air temperature of 100 server racks in cooling fault condition.
Figure 15. ALT-4 HAC with hard floor—the mean inlet air temperature of 100 server racks in cooling fault condition.
Figure 16. The recirculation ratio of 100 server racks in cooling fault condition; (a) ALT-3: HAC raised floor and (b) ALT-4: HAC hard floor.
Figure 17. Inlet air temperature distribution of server racks; (a) boxplot and (b) interval plot.
Table 1. Boundary conditions of CFD simulation.
Room size (m²): 410 | Room height (mm): 4800
Raised floor height (mm): 500 | False ceiling height (m): N/A
Number of racks (EA): 100 | Rack IT limit (kW/rack): 8.8–16
Number of CRAH units (EA): 6 | Total power of a zone (kW): 1168
Rack porosity (%): 35 | Tile porosity (%): 25
Computational mesh size: 2,447,462 | CFD solver model: k-epsilon turbulence
Table 2. Detailed specification of CRAH unit for CFD simulation.
Type of CRAH unit: Down-flow with EC fan
Type of cooling: Chilled water system with chiller
Cooling capacity (kW): 230 (65 usRT), 280 (max)
Fan size (CMM): 650 (39,000 m³/h)
Type of fan: EC fan × 3 EA
Supply air temperature (°C): 15
Supply chilled water temp. (°C): 7 (ΔT = 5 °C)
Return chilled water temp. (°C): 12
Total cooling capacity (kW): 1380 (390 usRT) for the IT room
Total air volume (CMM): 3900 (234,000 m³/h) for the IT room
Table 3. CFD simulations adopted in four alternatives.
Alternatives | ALT-1 | ALT-2 | ALT-3 | ALT-4
Containment | Hot aisle containment | Hot aisle containment | Hot aisle containment | Hot aisle containment
Air distribution | Raised floor | Hard floor | Raised floor | Hard floor
Cooling | Normal cooling condition | Normal cooling condition | Cooling fault condition | Cooling fault condition
Table 4. Operational net cooling power in normal cooling condition.
ALT-1: HAC with a Raised Floor | CRAH-01 | CRAH-02 | CRAH-03 | CRAH-04 | CRAH-05 | CRAH-06
Net cooling power (kW) | 220.0 | 223.0 | 209.5 | 188.5 | 166.5 | 160.1
Mean temperature in (°C) | 31.9 | 32.1 | 31.1 | 29.5 | 27.8 | 27.3
Mean temperature out (°C) | 15.0 | 15.0 | 15.0 | 15.0 | 15.0 | 15.0
ALT-2: HAC with a Hard Floor | CRAH-01 | CRAH-02 | CRAH-03 | CRAH-04 | CRAH-05 | CRAH-06
Net cooling power (kW) | 226.2 | 225.0 | 217.1 | 181.1 | 162.1 | 157.7
Mean temperature in (°C) | 32.4 | 32.3 | 31.7 | 28.9 | 27.5 | 27.1
Mean temperature out (°C) | 15.0 | 15.0 | 15.0 | 15.0 | 15.0 | 15.0
Table 5. Operational net cooling power in cooling fault condition.
ALT-3: HAC with a Raised Floor | CRAH-01 | CRAH-02 | CRAH-03 | CRAH-04 | CRAH-05 | CRAH-06
Net cooling power (kW) | 266.8 | 275.7 | – | 235.3 | 201.3 | 188.8
Mean temperature in (°C) | 32.1 | 32.7 | – | 30.1 | 27.9 | 27.1
Mean temperature out (°C) | 15.0 | 15.0 | – | 15.0 | 15.0 | 15.0
ALT-4: HAC with a Hard Floor | CRAH-01 | CRAH-02 | CRAH-03 | CRAH-04 | CRAH-05 | CRAH-06
Net cooling power (kW) | 272.9 | 277.0 | – | 235.2 | 197.3 | 186.3
Mean temperature in (°C) | 32.5 | 32.7 | – | 30.1 | 27.6 | 26.9
Mean temperature out (°C) | 15.0 | 15.0 | – | 15.0 | 15.0 | 15.0
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
