Article

Numerical Investigation of Thermal Performance with Adaptive Terminal Devices for Cold Aisle Containment in Data Centers

1 State Key Laboratory of Power Grid Safety, Beijing 100192, China
2 China Electric Power Research Institute, Beijing 100192, China
3 China Electric Power Research Institute Tianjin, Tianjin 300143, China
* Author to whom correspondence should be addressed.
Buildings 2023, 13(2), 268; https://doi.org/10.3390/buildings13020268
Submission received: 9 December 2022 / Revised: 10 January 2023 / Accepted: 12 January 2023 / Published: 17 January 2023
(This article belongs to the Special Issue Current Trends for Reducing Building Energy Consumption)

Abstract

The energy consumption of data center cooling systems accounts for a large proportion of total data center energy consumption, and the optimization of airflow organization is one of the most important ways to improve the energy efficiency of cooling systems. However, the adjustment scale of many current airflow organization methods is too coarse to support the refined operation of data centers. In this paper, a new type of air supply terminal device is proposed that adaptively redistributes cold air according to the power of the servers in the rack, and a corresponding regulation strategy is presented. A CFD model, established from a field investigation of a real data center in Shanghai, is used to investigate the adjustment range and energy-saving potential of the device. The simulation results indicate that the device can suppress local hot spots caused by excessive server power to some extent and greatly improve the uniformity of server exhaust temperatures. The case study shows that, in mitigating local hot spots, the device reduces energy consumption by 20.1% and 4.2% compared with lowering the supply air temperature and increasing the supply air flowrate, respectively.

1. Introduction

In recent years, with the rapid development of new technologies such as the Internet of Things, 5G communications, and big data analytics, the construction of data centers, which provide the computing resources for these technologies, has also accelerated rapidly. The data processing capacity provided by data centers is an important national strategic resource. In 2020, data centers, as an important piece of basic infrastructure for the development of the national digital economy, were listed as one of China's seven new infrastructure categories. According to the Ministry of Industry and Information Technology of China, the total number of data center racks in use in China reached 3,145,000 by the end of 2019, with racks in large and super-large data centers accounting for 75% of the total. The Chinese data center market is currently growing at a compound annual growth rate of 38.6% and will continue to grow exponentially in the coming years. This high-speed development requires enormous amounts of energy. The annual power consumption of China's data centers has exceeded 200 billion kWh, accounting for approximately 2.7% of the country's total electricity consumption, and this high electricity consumption results in high carbon emissions. Using the emission factor of 0.620 tCO2/MWh reported in the literature, the carbon emissions of China's data centers amount to approximately 1.24 × 10^8 tons per year. Under the goal of carbon peaking and carbon neutrality, improving the energy efficiency of data centers is an urgent issue [1,2].
Currently, power utilization efficiency (PUE) is the most important energy efficiency indicator for a data center. It is defined as the ratio of the total energy consumption of the data center to the energy consumption of its information technology (IT) equipment; the closer its value is to 1, the higher the energy efficiency of the data center. The PUE of China's data centers is currently approximately 1.5, leaving considerable room for improvement. Reducing the energy consumption of cooling systems is an effective way to reduce PUE. At present, the dominant cooling method in data centers is still air cooling; although liquid cooling offers a better cooling effect and lower energy consumption, it is still at the development stage.
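For reference, this definition can be written compactly as follows (a restatement of the standard PUE metric, not a formula introduced by this paper):

$$ \mathrm{PUE} = \frac{E_{total}}{E_{IT}} = \frac{E_{IT} + E_{cooling} + E_{other}}{E_{IT}} \geq 1 $$

where $E_{cooling}$ is the energy used by the cooling system and $E_{other}$ covers power distribution losses, lighting, and other auxiliary loads.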
Air cooling is the current mainstream cooling method, and many studies have focused on the energy savings of air-cooled data centers. In general, the energy-saving optimization of cooling systems in air-cooled data centers falls into two main areas: water-side and air-side energy savings. Water-side research covers several directions, such as the application of natural cooling sources [3,4,5], the control optimization of refrigeration units [6,7], and new combined cooling and power systems [8,9,10]; these methods can effectively improve the energy efficiency of data centers. On the air side, the optimization of airflow organization is an important method for reducing the energy consumption of cooling systems. In previous studies, the optimization of airflow organization has mostly focused on improving the thermal environment at the room level. Early research studied cold/hot aisle containment technology [11,12], which uses partitions to isolate the hot and cold aisles from each other, preventing the mixing of hot and cold airflows and avoiding the waste of cooling capacity [13,14,15]. Other work improved the thermal environment by changing the layout of the computer room air handlers (CRAHs) in data center rooms to prevent cold airflow bypass [16,17]. In addition, some studies focused on improving airflow uniformity: researchers suppressed the uneven pressure distribution in the plenum by changing its height [18,19] and shape [20] and the opening ratio of the air supply tiles [21,22], thereby improving the uniformity of the air supply and the temperature distribution.
All the aforementioned methods aim to improve the thermal environment of data centers at a relatively large scale. In recent years, researchers have increasingly focused on improving the local thermal environment of data centers, particularly where local hot spots appear. Some scholars have proposed air supply tiles with an inclination angle, which deliver the supply air to the lower-positioned servers in the form of air jets; this keeps the temperature at the lower part of the rack close to the supply air temperature and, to a certain extent, solves the problem of insufficient cooling of the lower servers [23]. Other researchers have proposed fan-assisted perforated tiles, which can significantly inhibit the return of hot air, reduce the cold aisle temperature, and improve the uniformity of the temperature distribution [24]. In 2021, deflector plates were used to adjust the server exhaust angle, which eliminated the local hot spots in the rack at an exhaust angle of 60° and significantly improved temperature consistency [25]. In 2022, removable jet fans were proposed to target hot-spot-prone areas in the cold aisle, reducing the hot spot temperature while suppressing hot air return at the top [26].
In summary, local regulation methods can purposefully eliminate rack hot spots and improve the thermal environment with good results, and the local regulation devices themselves consume little or no energy, so studies in this area have gradually increased in recent years. However, there are relatively few studies on local active regulation, and the existing devices cannot handle situations where multiple hot spots appear. This study aims to provide an adaptive air supply terminal device for data centers that is easy to install, simple to control, and low in energy consumption, and that supports multipoint regulation to suppress rack hot spots and improve the uniformity of the temperature distribution. Accordingly, the application effect and energy-saving potential of the proposed device are calculated and analyzed.

2. Methodology

Localized overheating often occurs in air-cooled data centers. If not addressed in a timely manner, it can cause server temperatures to rise excessively, affecting server operation and even causing downtime. Reducing the supply air temperature or increasing the supply air flowrate to eliminate hot spots easily causes overcooling in other areas and increases energy consumption. The following adaptive air supply terminal device is proposed to mitigate these disadvantages.

2.1. Adaptive Terminal Device for Air Supply

The terminal device consists of three sets of movable deflectors that can be controlled to correct the mismatch between cooling supply and heat dissipation along the vertical direction of the rack. The three sets of deflectors divide the air outlet into three areas, and each set can be adjusted individually to change the airflow direction and achieve a redistribution of the cold air flow. As shown in Figure 1a,b, the device is installed under each rack at the air outlets of the plenum. Its size is the same as that of the perforated tiles, i.e., 0.6 m × 0.6 m. Each set of deflectors measures 0.2 m × 0.6 m and contains four deflectors; the deflectors are 0.05 m wide and spaced 0.05 m apart, as shown in Figure 1b,c. The height of the terminal device is 0.05 m. The angle between a deflector and the horizontal is called the deflection angle, as indicated in Figure 1d.

2.2. Adjustment Strategy of Terminal Device

As shown in Figure 2, the basic idea of the regulation is to allow the terminal device to adapt to changes in server power consumption. The implementation therefore has two main parts: obtaining the regulation capability of the terminal device, and obtaining the power consumption of the servers.

The terminal device changes the direction of the airflow, and thus the airflow into each server, by changing the angle of each set of deflectors. Each set of deflectors can be set to one of four deflection angles (0°, 60°, 75°, and 90°); excluding the case in which all three sets are at 0°, this gives 63 adjustment modes for the device. Each adjustment mode corresponds to a flow distribution curve, i.e., the flow distribution ratio at the different positions of the rack, calculated using Equation (1).

$$ \alpha_{Y,X} = \frac{Q_{Y,X}}{\bar{Q}_a} \qquad (1) $$

where $\alpha_{Y,X}$ is the flow distribution ratio at position $Y$ under deflection-angle mode $X$, $Q_{Y,X}$ is the flow rate at position $Y$ under mode $X$, and $\bar{Q}_a$ is the average flow rate over the rack. This ratio indicates the distribution effect of the deflectors: $\alpha_{Y,X} > 1$ means that more flow is distributed to position $Y$, and $\alpha_{Y,X} < 1$ means that less flow is distributed to position $Y$.
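As an illustration of how the 63 adjustment modes and their flow distribution curves can be organized in software, the following Python sketch enumerates the modes and evaluates Equation (1) from a set of per-server flow rates taken from the CFD results. The function and variable names (e.g., flow_distribution_ratio, extract_server_flows) are illustrative and are not part of the actual control implementation.

```python
from itertools import product

# Allowed deflection angles for each of the three deflector sets (Section 2.2).
ANGLES = (0, 60, 75, 90)

# All 4^3 combinations of the three sets, excluding (0, 0, 0),
# give the 63 adjustment modes A1-A63 of the terminal device.
ADJUSTMENT_MODES = [m for m in product(ANGLES, repeat=3) if m != (0, 0, 0)]
assert len(ADJUSTMENT_MODES) == 63

def flow_distribution_ratio(flow_rates):
    """Equation (1): ratio of each position's flow rate to the rack average.

    flow_rates -- per-server flow rates Q_{Y,X} (m^3/s) for one adjustment
    mode X, extracted from the CFD model (one value per position Y).
    """
    q_avg = sum(flow_rates) / len(flow_rates)
    return [q / q_avg for q in flow_rates]

# Illustrative usage: alpha > 1 means more cold air reaches that position.
# flows = extract_server_flows(cfd_results, mode=(0, 60, 90))  # hypothetical helper
# alpha = flow_distribution_ratio(flows)
```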
On the server side, the regulation module simultaneously calculates the power consumption curve of the servers at the different locations in the rack. The required data are obtained from each server's baseboard management controller (BMC) via the Intelligent Platform Management Interface (IPMI). The BMC is a chip integrated into the server motherboard that runs a small operating system independent of the server's main operating system; by logging in through IPMI, users can obtain hardware information such as CPU utilization, motherboard temperature, and CPU temperature. This study collects the CPU utilization of each server in the rack and calculates the current power consumption of each server using the linear model in Equation (2). The linear power model is the most commonly used model for estimating server power consumption, offers good accuracy, and has been applied successfully in airflow organization optimization, workload scheduling, and related problems. The ratio of the power consumption of the server at each location to the average server power consumption is then calculated according to Equation (3) to obtain the power distribution curve.

$$ P_s = P_{idle} + \left( P_{max} - P_{idle} \right) \cdot u \qquad (2) $$

$$ \beta_Y = \frac{P_Y}{\bar{P}_a} \qquad (3) $$

where $P_s$ is the calculated power consumption of the server, $P_{idle}$ is its idle power consumption, $P_{max}$ is its maximum power consumption, $u$ is the CPU utilization collected from the BMC, $P_Y$ is the power consumption of the server at position $Y$, $\beta_Y$ is the corresponding power distribution ratio, and $\bar{P}_a$ is the average power consumption of all the servers in the rack.
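A minimal sketch of the server-side calculation in Equations (2) and (3) is shown below, assuming the CPU utilization of each server has already been read from its BMC through an IPMI query; the idle and maximum power values are illustrative placeholders rather than values from the studied servers.

```python
def server_power(u, p_idle=100.0, p_max=300.0):
    """Equation (2): linear server power model.

    u      -- CPU utilization in [0, 1], collected from the BMC via IPMI
    p_idle -- idle power consumption of the server (W), illustrative value
    p_max  -- full-load power consumption of the server (W), illustrative value
    """
    return p_idle + (p_max - p_idle) * u

def power_distribution_ratio(utilizations, p_idle=100.0, p_max=300.0):
    """Equation (3): ratio of each server's power to the rack-average power."""
    powers = [server_power(u, p_idle, p_max) for u in utilizations]
    p_avg = sum(powers) / len(powers)
    return [p / p_avg for p in powers]
```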
After the above calculations, the flow distribution curves of the terminal device and the power distribution curve of the rack are available, and the similarity between the curves is evaluated using the Euclidean distance in Equation (4). The mode $X$ whose flow distribution curve yields the minimum value of $D$ is taken as the terminal device regulation mode adapted to the current power distribution of the rack's servers.

$$ D = \sqrt{\frac{1}{n} \sum_{Y=1}^{n} \left( \alpha_{Y,X} - \beta_Y \right)^2} \qquad (4) $$

where $D$ is the Euclidean distance between the flow distribution curve of a given adjustment mode and the power distribution curve, and $n$ is the number of servers in the rack. This approach is used to adjust the deflector angles and achieve a secondary distribution of the cold air, so that the flow entering each server meets its cooling demand to the extent possible.
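Putting the two curves together, the mode selection described above reduces to a nearest-neighbour search over the 63 pre-computed flow distribution curves. The sketch below, which reuses the illustrative helpers from the previous snippets, shows one possible implementation of Equation (4) and the selection step; it is not the authors' control code.

```python
import math

def curve_distance(alpha, beta):
    """Equation (4): distance between a flow distribution curve (alpha)
    and the measured power distribution curve (beta) of the rack."""
    n = len(beta)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(alpha, beta)) / n)

def select_mode(flow_curves, beta):
    """Return the adjustment mode whose flow distribution curve is closest
    to the current power distribution curve of the rack.

    flow_curves -- dict mapping a mode (a1, a2, a3) to its alpha curve,
                   pre-computed from CFD as in Section 4.1
    beta        -- power distribution ratios from Equation (3)
    """
    return min(flow_curves, key=lambda mode: curve_distance(flow_curves[mode], beta))

# Illustrative usage: the selected angles would then be sent to the three
# deflector actuators of the terminal device under the rack.
# beta = power_distribution_ratio(cpu_utilizations)  # from the previous sketch
# best_mode = select_mode(flow_curves, beta)         # e.g., (0, 60, 90)
```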

3. Numerical Simulation

3.1. Simulation Model

3.1.1. Geometric Model

In this study, a data center located in Shanghai was selected as the research object. It is an air-cooled data center with cold aisle containment, and its floor plan is shown in Figure 3. The geometric dimensions are 21.5 m × 20.3 m × 4.9 m. Air is supplied through an underfloor plenum into enclosed cold aisles. The data center consists of seven rows of racks, 10 CRAHs, raised floors, and perforated supply tiles. Except for the southernmost row, which stands alone owing to its proximity to the wall, the rack rows are paired to form enclosed cold aisles, shown in blue in the figure. The hot aisles are open, and no exhaust ducts are installed. Except for the middle two rows, which are partially occupied by load-bearing columns, each row contains 24 racks, shown in gray in the figure. Each rack is a 48U (1U = 44.45 mm) rack with overall dimensions of 0.6 m (W) × 1.2 m (L) × 2.2 m (H). The room is served by 10 CRAHs distributed symmetrically on both sides, with the supply air outlets on the side face of each CRAH and the return air inlets on top. The height of the plenum is 0.75 m. The plenum outlets are fitted with perforated tiles measuring 0.6 m × 0.6 m with an opening ratio of 35%. The cold air is discharged horizontally from the CRAHs into the plenum, passes through the perforated tiles, and enters the server racks. After removing the heat generated by the servers, the exhaust air enters the hot aisles and finally returns to the top of the CRAHs, completing the air circulation.
As shown in Figure 4, a 3D model was built on a one-to-one basis from the actual data center to ensure the accuracy of the simulation. The CRAHs and servers are placed according to their actual locations. The supply air outlets of the model are set in the plenum: the air exits horizontally into the plenum, passes through the perforated tiles in the cold aisles, enters the servers for heat exchange, flows into the open hot aisles, and finally returns to the CRAH return inlets.

3.1.2. Governing Equations and Boundary Conditions

The airflow in the data center takes the form of turbulent mixed convection. In simulation studies, the commonly used turbulence models are the indoor zero-equation model, the standard k–ε model, and the re-normalization group (RNG) k–ε model. The standard k–ε and RNG models are more accurate and are also the most widely used. The RNG model has an additional term in its ε equation compared with the standard k–ε model, which gives it better accuracy but a longer convergence time. For the validated data center model used in this paper, the calculation time of the RNG model is 27.4% longer than that of the standard k–ε model, while the accuracy difference between the two is no more than 6.7%. Considering the trade-off between accuracy and calculation time, the standard k–ε model is used in this study. The continuity, momentum, and energy equations of the model are given in Equations (5)–(7).

$$ \nabla \cdot \mathbf{V} = 0 \qquad (5) $$

$$ \rho \left( \frac{\partial \mathbf{V}}{\partial t} + \mathbf{V} \cdot \nabla \mathbf{V} \right) = -\nabla p + \nabla \cdot \left( \mu \nabla \mathbf{V} \right) + \rho \mathbf{g} \qquad (6) $$

$$ \rho c_p \left( \frac{\partial T}{\partial t} + \mathbf{V} \cdot \nabla T \right) = \nabla \cdot \left( \lambda \nabla T \right) + Q \qquad (7) $$

where $\mathbf{V}$ is the air velocity vector, $p$ is pressure, $\rho$ is air density, $c_p$ is the specific heat capacity, $\mu$ is the dynamic viscosity, $\lambda$ is the effective thermal conductivity, and $Q$ is the internal heat source.
Several assumptions are made in the model: (1) the airflow is treated as an incompressible, low-velocity fluid, and heat dissipation caused by viscous forces is ignored; (2) the room air follows the Boussinesq approximation, which treats the density as constant in all equations except the buoyancy term of the momentum equation; (3) the room airflow is steady turbulent flow; and (4) air leakage from the server room is ignored, and the server and rack walls are considered adiabatic, with thermal radiation from interior surfaces neglected.
According to the experimental tests, the supply air temperature was set to 298 K for the left-side CRAHs and 287 K for the right-side CRAHs. The supply air speed was set to 2.7 m/s with a horizontal outlet direction, and the return air inlet was set as a pressure outlet. In accordance with the assumptions, the rack and server walls were set as adiabatic, as were the room walls. The perforated tiles were modeled with a porous-jump model.
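For reference, the boundary conditions reported in this subsection can be summarized in a small configuration structure. The dictionary below is only a bookkeeping sketch of the stated settings, not an input file for any particular CFD solver.

```python
# Summary of the room-level CFD boundary conditions stated in Section 3.1.2.
# This is a bookkeeping sketch, not solver input.
BOUNDARY_CONDITIONS = {
    "turbulence_model": "standard k-epsilon",
    "crah_supply": {
        "left_side_temperature_K": 298.0,   # left-side CRAHs circulate air without cooling
        "right_side_temperature_K": 287.0,  # right-side CRAHs provide cooling (14 deg C)
        "outlet_velocity_m_per_s": 2.7,
        "outlet_direction": "horizontal",
    },
    "crah_return": "pressure outlet",
    "walls": "adiabatic (room, rack, and server walls)",
    "perforated_tiles": {"model": "porous jump", "opening_ratio": 0.35},
}
```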

3.1.3. Grid Independent Study and Validation Experiments

Based on previous related studies, a grid-independence study was conducted. Two surfaces, face 1 and face 2, located at 5U–6U and 22U–23U of rack F1, were selected to verify the convergence performance of the mesh. The total numbers of cells in the five meshes were approximately 260,000, 360,000, 560,000, 830,000, and 1,340,000. The results of the mesh verification are shown in Figure 5. As the number of cells increases, the results gradually stabilize from the 560,000-cell mesh onward, and the temperature differences among the 560,000-, 830,000-, and 1,340,000-cell meshes are extremely small. Because the finer meshes have no significant influence on the calculated results, the 560,000-cell mesh was adopted as a trade-off between computational time and accuracy.
A field test was conducted to verify the accuracy of the model. The measured parameters were the air temperatures at the inlet and outlet of the first rack in row F. Figure 6a is a photograph of the thermocouple arrangement at the outlet of the rack, and Figure 6b is a schematic of the thermocouple arrangement. Twenty-one thermocouples were arranged at the inlet and at the outlet, respectively, divided into seven layers with three thermocouples evenly spaced in each layer, and their average temperature was calculated. The test lasted about 5 h; because the supply air parameters of the CRAHs varied during the test, the measured data fluctuated in response, and a relatively stable period of temperature data was therefore selected as the validation data. The thermocouples were OMEGA K-type thermocouples with an accuracy of ±0.1 K after calibration, and the acquisition equipment was an Agilent 34972A data logger. The temperature at the CRAH outlet was also measured with the same equipment, and the outlet airflow speed was measured with an anemometer with an accuracy of ±0.3 m/s. The measured outlet temperature was 14 °C, and the speed was 2.7 m/s.
In the simulated results, the average temperature at the positions corresponding to Figure 6b was taken as the validation temperature. The comparison between the simulated and experimental results is shown in Figure 7: the maximum difference between the simulated and measured temperatures is about 1 K, and the maximum deviation is 5.5%. Notably, owing to the low occupancy rate of this data center, only the CRAHs on the right side of the room provided cooling, while those on the left side only delivered air without cooling; because the fan settings were the same on both sides, an apparent temperature dividing line appears in the middle of the server room, as shown in Figure 8a. The computational fluid dynamics (CFD) calculation shows a similar result, with a clear demarcation line in the middle of the room (Figure 8b). The developed CFD model is therefore well validated by the field tests and can be used for the subsequent studies.
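For completeness, the comparison metric reported above (a maximum difference of about 1 K and a maximum deviation of 5.5%) can be reproduced from paired measured and simulated temperatures with a few lines of Python. The arrays are placeholders rather than the actual experimental data, and the relative deviation is assumed to be evaluated on the Celsius scale, which is consistent with a difference of roughly 1 K being reported as a deviation of a few percent.

```python
def validation_metrics(measured_K, simulated_K):
    """Return the maximum absolute difference (K) and the maximum relative
    deviation (%) between measured and simulated temperatures.

    The relative deviation is computed on the Celsius scale (assumption),
    i.e., |T_sim - T_meas| / T_meas[degC] * 100.
    """
    max_diff = max(abs(m - s) for m, s in zip(measured_K, simulated_K))
    max_dev = max(abs(m - s) / (m - 273.15) * 100 for m, s in zip(measured_K, simulated_K))
    return max_diff, max_dev

# measured = [...]   # layer-averaged thermocouple temperatures (placeholder)
# simulated = [...]  # CFD temperatures at the same positions (placeholder)
# max_diff_K, max_dev_pct = validation_metrics(measured, simulated)
```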

3.2. Numerical Investigation Setup

Because a CFD model of the entire room takes too long to calculate and is not convenient for computational analysis, when studying the flow distribution ratio of the terminal device at different deflection angles, servers were placed only in the first rack of row F. Twenty-four 2U servers were modeled, and the other racks were left vacant to simplify the calculation. Different deflector angles were then set for this rack: each outlet has three sets of deflectors, each set has four adjustable angles, and the terminal device therefore has 63 adjustment modes, denoted A1, A2, A3, …, A63. After each calculation, the flow rate at the server outlets at different heights was extracted and the flow distribution ratio curve was obtained.
Owing to the experimental constraints of the actual data center, the terminal device could not be deployed for practical tests in this study; its regulation effect was therefore analyzed by applying it in the CFD model. Considering the symmetric distribution of the CRAHs and their identical fan settings, which produce the apparent temperature dividing line in the server room, the CFD model was simplified to reduce the computational cost: the room was divided equally down the middle, only racks 1–12 on the right side were considered, and the CRAHs on the right side were retained. Each rack was populated with 24 2U servers, with power settings between 40 W and 200 W. The power distribution curves were calculated and matched with the corresponding flow distribution curves, the terminal device parameters of each rack were set in turn, and the computational model was run to analyze the effect of the proposed terminal device.

4. Results and Discussion

The numerical investigation consists of three parts. First, the adjustment range of the terminal device on a single rack is analyzed: the flow distribution achieved by the terminal device under different angle settings is calculated through simulation. Second, the application effect of the terminal device is analyzed: the angles of the terminal device are set according to the power of the servers in each rack, and the resulting thermal environment is examined. Third, the energy-saving effect of the terminal devices is analyzed by comparing the proposed device with traditional methods of regulating the thermal environment of data centers.

4.1. Flow Distribution Curve for the Terminal Device

In this simulation, the supply air temperature of the CRAH was set to 293 K and the air speed to 1 m/s. Following Section 2.2, the flow distribution curves under the different angle settings were calculated, and the results are shown in Figure 9. The solid blue lines are the flow distribution ratio curves for the 63 regulation modes (excluding the all-zero setting). The deflector angles corresponding to each flow distribution curve are denoted (a, b, c), where a, b, and c are the angles of the three sets of deflectors. The value at each point is the ratio of the local air flow to the average flow in the rack, and the blue filled area is the adjustable range of the terminal device. Overall, the flow adjustment range is roughly between 0.5 and 1.5, indicating that the terminal device can ideally accommodate server power that deviates by approximately 50% above or below the rack average.
Most of the flow distribution curves deliver relatively high flow rates to the lower servers because the deflectors are deflected downward, changing the direction of the cold air to some extent and causing it to collect near the bottom. The three settings (60, 0, 0), (0, 60, 0), and (0, 0, 60) produce unusually high flow rates at the bottom, with flow ratios of 5.5, 3.5, and 4.2 at the 4th, 6th, and 7th server layers, respectively. The streamline plots for these three cases are shown in Figure 10. In all three cases, the terminal device is well suited to relieving hot spots on the lower part of the rack: because the other two deflector sets are at 0° and the deflection angle of the open set is small, a large amount of cold air pours into the lower part of the rack, providing greater cooling capacity there. A comparison of the three cases shows that the farther the open deflector set is from the rack, the higher the position of the flow peak.

4.2. Analysis of the Effect of Terminal Devices Application

The purpose of the proposed terminal device is to achieve "on-demand cooling" as far as possible: to distribute the airflow better, avoid excessive temperatures in local areas of the rack, and improve the uniformity of server temperatures. The effect of the terminal device is analyzed in detail below.
Following Section 3.2, the power consumption of the servers was set in the range of 40 W to 200 W. Because of the large number of racks, not all of them are analyzed here; the racks were divided into four groups, and the first rack in each group (F1, F4, F7, and F10) was selected for illustration. Table 1 shows the power distribution of these racks. F1 has lower power in the middle and higher power above and below; F4 has higher power for the servers above the 9th layer; F7 has higher power for the lower servers; and F10 has higher power for the servers below the 12th layer, without large differences in power across the rest of the rack. The value of D was calculated between each power distribution curve and the 63 flow distribution curves. For rack F1, the minimum D was 0.107, obtained for the flow distribution curve corresponding to the angles (0, 60, 90); similarly, the angles for racks F4, F7, and F10 were (75, 60, 60), (0, 60, 60), and (75, 60, 60), respectively. The power distributions of the four racks are shown as bar charts in Figure 11, and the matched flow distribution curves as scatter points in the same figure. In general, the flow distribution curves capture the difference between high- and low-power regions: the flow curve for rack F1 is low in the middle and high at both ends, corresponding to its power distribution; F7 is high on the left and low on the right; and the power distributions of F4 and F10 do not vary much, with both matched to the same flow distribution curve. Figure 12 shows the streamlines of the four racks. In general, the first, second, and third sets of deflectors have a targeted regulation effect on the lower, middle, and upper layers, respectively. Comparing F1 and F7, the angles of the third set of deflectors are 90° and 60°, respectively; there is more airflow to the upper 17th–24th servers of F1 and to the lower 3rd–8th servers of F7, showing that the deflection of the third set of deflectors changes the airflow to the upper servers. Comparing F7 and F10, the angles of the first set of deflectors differ, and opening the first set of deflectors makes the airflow into the servers more uniform.
It should be noted that, in all cases, the server airflow at the bottom 1st–2nd layers of each rack is small. This is because there is a thick frame at the bottom of the rack to support its weight, creating a step between the air outlet and the lowermost servers that causes the cold air to form a vortex there, as shown in Figure 13. This vortex hinders cold air from entering the lowest servers; hence, the flowrate of the lowermost servers is always very low.
Figure 14a,b show the exhaust velocity distribution before and after the installation of the terminal device, respectively. After the terminal device angles were set according to the server power distribution, the flow distribution in the rack was effectively adjusted, and more airflow was delivered to the locations of the servers with higher power. The corresponding temperature distributions are shown in Figure 15. Before the terminal device was installed, the exhaust temperatures of the server outlets differed substantially and showed low uniformity, owing to the different power distributions of the servers in each rack, and each rack exhibited localized overheating. The changes in the maximum temperature of each rack are listed in Table 2; the local hot spots are all mitigated to a large extent.
In Figure 16, the square dots represent the racks without the terminal device and the round dots represent the racks with the terminal device. The maximum temperature of each of the four racks decreases and the minimum temperature increases. The standard deviations of the exhaust air temperature of each rack are listed in Table 2; the standard deviation of the rack outlet temperature is significantly lower after using the terminal device, indicating better uniformity of the exhaust air temperature distribution.
In conclusion, the analysis of the terminal devices under different server power distributions shows that they can suppress local hot spots in the racks to some extent, reduce heat accumulation, and significantly improve the uniformity of the exhaust temperature, provided that sufficient cooling capacity is available.

4.3. Energy Efficiency Analysis

When hot spots appear on data center racks, the usual approach is to reduce the supply air temperature of the CRAH or to increase the fan airflow to eliminate them. These two methods are relatively crude: they usually overcool areas that are not particularly hot, which puts the servers at risk of condensation and is not energy efficient.
In the case calculated above, the mean exhaust air temperature of the racks was highest in rack F10, at 315.3 K; after adjustment by the terminal device it was lowered to 313.4 K, a drop of approximately 2 K. To provide a uniform basis for comparison, it was assumed that the maximum rack exhaust air temperature could not exceed 313.4 K.
When the supply air temperature was 293 K, the COP of the CRAH was approximately 3.2 according to the literature [27]. With the settings in Section 3.2, the total server power consumption was $P_s$ = 36,331.2 W, so $P_c(293)$ was 11,353.5 W according to Equation (8). The supply air flowrate $Q_{supply}$ was approximately 6.17 m³/s, so $P_f(6.17)$ was 1666.3 W according to Equation (9), and $P_{total}$ was 13,019.8 W.

The energy consumption of the terminal device itself is extremely low: the deflectors consume 2.1 W while moving and 0 W when stationary. For simplicity of calculation, $P_{terminal}$ was uniformly taken as 2.1 W.

When hot spots are eliminated by reducing the supply air temperature, the temperature must be set below the current 293 K. The calculations show that when the supply air temperature of the CRAH is reduced from 293 K to 291 K, the outlet air temperatures of all the racks fall below the set maximum. The energy consumption of the CRAH increases owing to the lower setpoint: the COP decreases to 2.6 at 291 K, so $P_c(291)$ is 13,973.5 W and $P_{extra}$ is 2620 W. The power saved by the terminal device is therefore $\Delta P_{total}$ = 2617.9 W, and $\eta$ = 2617.9 W / 13,019.8 W = 20.1%.

Similarly, when hot spots are eliminated by increasing the fan flowrate, a supply flowrate of 1.1 times the current value brings the outlet air temperatures of all the racks below the set maximum. $P_f(6.79)$ increases to 2217 W according to Equation (9), so $P_{extra}$ is 550.7 W. The power saved by the terminal device is $\Delta P_{total}$ = 548.6 W, and $\eta$ = 548.6 W / 13,019.8 W = 4.2% according to Equation (12).

$$ P_c(T_{supply}) = \frac{P_s}{COP(T_{supply})} \qquad (8) $$

$$ P_f(Q_{supply}) = 7.082 \times Q_{supply}^{3} \qquad (9) $$

$$ P_{total} = P_c + P_f \qquad (10) $$

$$ \Delta P_{total} = P_{extra} - P_{terminal} \qquad (11) $$

$$ \eta = \frac{\Delta P_{total}}{P_{total}} \times 100\% \qquad (12) $$
The energy-saving analysis shows that the terminal devices not only reduce the heat accumulation caused by excessive server power and suppress hot spots but also yield considerable energy savings, reaching 20.1% and 4.2% compared with lowering the supply air temperature and increasing the supply air flowrate, respectively.
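The arithmetic of this subsection can be reproduced directly from Equations (8)–(12) and the values stated above. The short script below is a sketch of that calculation, with the COP values and the fan power coefficient taken from the text; it is provided for transparency rather than as part of the proposed method.

```python
def chiller_power(p_servers, cop):
    """Equation (8): CRAH cooling power for a given server heat load and COP."""
    return p_servers / cop

def fan_power(q_supply):
    """Equation (9): fan power as a cubic function of the supply flowrate (m^3/s)."""
    return 7.082 * q_supply ** 3

P_SERVERS = 36331.2   # total server power in the simplified half-room model (W)
P_TERMINAL = 2.1      # power drawn by the terminal device while the deflectors move (W)
Q_SUPPLY = 6.17       # baseline supply air flowrate (m^3/s)

# Baseline: supply air at 293 K, COP ~= 3.2
p_c_base = chiller_power(P_SERVERS, 3.2)          # ~11,353.5 W
p_f_base = fan_power(Q_SUPPLY)                    # ~1,664 W (1,666.3 W in the text)
p_total = p_c_base + p_f_base                     # ~13,020 W

# Alternative 1: lower the supply temperature to 291 K (COP drops to ~2.6)
p_extra_temp = chiller_power(P_SERVERS, 2.6) - p_c_base   # ~2,620 W
eta_temp = (p_extra_temp - P_TERMINAL) / p_total          # ~0.201

# Alternative 2: increase the supply flowrate by 10%
p_extra_flow = fan_power(1.1 * Q_SUPPLY) - p_f_base       # ~551 W
eta_flow = (p_extra_flow - P_TERMINAL) / p_total          # ~0.042

print(f"Savings vs. lowering the supply temperature: {eta_temp:.1%}")
print(f"Savings vs. increasing the supply flowrate:  {eta_flow:.1%}")
```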

5. Conclusions

In this paper, an adaptive air supply terminal device was proposed for air-cooled data centers. It matches the cooling capacity entering the servers to the distribution of server power in the rack by adjusting the angles of the deflectors in the terminal device. A CFD model based on a real data center was established, and field experiments were conducted to validate the model. The effect of the deflector angle adjustment on the server airflow was discussed and analyzed, and the application effect and energy-saving potential of the terminal device were analyzed through simulations. The main results of this study are as follows:
(1)
The adaptive terminal device proposed in this paper can adjust the airflow ratio between approximately 0.5 and 1.5 times the average flow rate, which allows it to accommodate changes in the power distribution within the rack over a certain range. The server flow rate at the lower part of the rack can also be made unusually large, reaching approximately 5.0 times the mean value for the deflector settings (60, 0, 0), (0, 60, 0), and (0, 0, 60); these settings can be used when the power of the servers at the bottom of the rack is abnormally high.
(2)
The application of terminal devices significantly improved the airflow distribution and thermal environment at the airflow outlets, alleviating localized overheating conditions. The standard deviation of the rack exhaust temperature significantly decreased, and the uniformity of the rack temperature increased after the use of the terminal device. In addition, the maximum temperature of the rack decreased by up to 4.2 K after the utilization of the terminal device.
(3)
The terminal device has good energy-saving potential. It improves the thermal environment by adjusting the local airflow without changing the set parameters of the CRAH. Compared with reducing the supply air temperature and increasing the supply air flowrate, the use of the terminal device reduces energy consumption by 20.1% and 4.2%, respectively.

Author Contributions

Conceptualization, Hongyin Chen; Formal analysis, Ye Li; Investigation, Songcen Wang; Data curation, Yi Ding and Xianxu Huo; Writing—original draft, Hongyin Chen; Writing—review & editing, Ming Zhong; Supervision, Dezhi Li and Tianheng Chen; Project administration, Hongyin Chen. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by State Grid Corporation of China Science and Technology, grant number 5400-202112159A-0-0-00.

Data Availability Statement

The data are not publicly available.

Acknowledgments

This work is supported by State Grid Corporation of China Science and Technology Project “Research on Precise Cooling and Waste Heat Utilization Technology of Data Center Based on Data-Load Interaction” (5400-202112159A-0-0-00).

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

Abbreviations
BMC      Baseboard Management Controller
CFD      Computational Fluid Dynamics
COP      Coefficient of Performance
CPU      Central Processing Unit
CRAH     Computer Room Air Handler
IPMI     Intelligent Platform Management Interface
IT       Information Technology
PUE      Power Utilization Efficiency
RNG      Re-Normalization Group

Symbols
D        Euclidean distance (-)
P        Power (W)
Q        Flow rate (m³/s)
T        Temperature (K)
U        Unit of measure for server height (mm)
α        Flow distribution ratio (-)
β        Power distribution ratio (-)
u        CPU utilization (-)
η        Energy saving rate (-)

Superscripts and subscripts
s        Server
X        Mode number of deflection angle
Y        Location number of servers in the rack
c        Chiller
f        Fan
a        Average
extra    Extra power caused by reducing temperature or increasing flowrate
idle     Idle state of server
max      Full-load state of server
n        Number of servers in the rack
supply   Supply air
terminal Terminal device
total    Total power of chiller and fan

References

1. Ministry of Industry and Information Technology of the People's Republic of China. National Data Center Application Development Guidelines (2020); Post and Telecom Press: Beijing, China, 2020.
2. Liu, D.; Cao, J.; Liu, M. Collaborative optimization strategy of information and energy for distributed data centers. J. Tsinghua Univ. (Sci. Technol.) 2022, 62, 1864–1874.
3. Ali, H.K.; Khalid, A.; Saman, K.H. Towards the stand-alone operation of data centers with free cooling and optimally sized hybrid renewable power generation and energy storage. Renew. Sustain. Energy Rev. 2018, 93, 451–472.
4. Ding, J.; Zhang, H.; Leng, D.; Xu, H.; Tian, C.; Zhai, Z. Experimental investigation and application analysis on an integrated system of free cooling and heat recovery for data centers. Int. J. Refrig. 2022, 136, 142–151.
5. Zou, S.; Zhang, Q.; Yue, C.; Wang, J.; Du, S. Study on the performance and free cooling potential of a R32 loop thermosyphon system used in data center. Energy Build. 2022, 256, 111682.
6. Choi, Y.J.; Park, B.R.; Hyun, J.Y.; Moon, J.W. Development of an adaptive artificial neural network model and optimal control algorithm for a data center cyber–physical system. Build. Environ. 2022, 210, 108704.
7. Park, B.R.; Choi, Y.J.; Choi, E.J.; Moon, J.W. Adaptive control algorithm with a retraining technique to predict the optimal amount of chilled water in a data center cooling system. J. Build. Eng. 2022, 50, 104167.
8. Cai, S.; Zou, Y.; Luo, X.; Tu, Z. Investigations of a novel proton exchange membrane fuel cell-driven combined cooling and power system in data center applications. Energy Convers. Manag. 2021, 250, 114906.
9. Pan, Q.; Peng, J.; Wang, R. Application analysis of adsorption refrigeration system for solar and data center waste heat utilization. Energy Convers. Manag. 2021, 228, 113564.
10. Zhao, J.; Cai, S.; Luo, X.; Tu, Z. Multi-stack coupled energy management strategy of a PEMFC based-CCHP system applied to data centers. Int. J. Hydrog. Energy 2022, 47, 16597–16609.
11. Kennedy, D. Improving Data Centers with Aisle Containment; BNP Media: Troy, MI, USA, 2012; Volume 29, p. 48.
12. Martin, M.; Khattar, M.; Germagian, M. High-density heat containment. ASHRAE J. 2007, 49, 38–43.
13. Wibron, E.; Ljung, A.; Lundström, T. Computational Fluid Dynamics Modeling and Validating Experiments of Airflow in a Data Center. Energies 2018, 11, 644.
14. Alkharabsheh, S.A.; Sammakia, B.G.; Shrivastava, S.K. Experimentally validated computational fluid dynamics model for a data center with cold aisle containment. J. Electron. Packag. Trans. ASME 2015, 137, 021010.
15. Gao, C.; Yu, Z.; Wu, J. Investigation of Airflow Pattern of a Typical Data Center by CFD Simulation. Energy Procedia 2015, 78, 2687–2693.
16. Nada, S.A.; Said, M.A. Effect of CRAC units layout on thermal management of data center. Appl. Therm. Eng. 2017, 118, 339–344.
17. Nada, S.A.; Said, M.A.; Rady, M.A. Numerical investigation and parametric study for thermal and energy management enhancements in data centers' buildings. Appl. Therm. Eng. 2016, 98, 110–128.
18. Bhopte, S.; Agonafer, D.; Schmidt, R.; Sammakia, B. Optimization of data center room layout to minimize rack inlet air temperature. J. Electron. Packag. 2006, 128, 380–387.
19. Babak, F.; Masud, B.; Srinarayana, N.; Steve, A. A Comparison of Parametric and Multivariable Optimization Techniques in a Raised-Floor Data Center. J. Electron. Packag. 2013, 135, 030905.
20. Lu, H.; Zhang, Z. Numerical and experimental investigations on the thermal performance of a data center. Appl. Therm. Eng. 2020, 180, 115759.
21. Nada, S.A.; Elfeky, K.E.; Attia, A.M.A.; Alshaer, W.G. Experimental parametric study of servers cooling management in data centers buildings. Heat Mass Transf. 2017, 53, 2083–2097.
22. Zhang, K.; Zhang, X.; Li, S.; Jin, X. Experimental study on the characteristics of supply air for UFAD system with perforated tiles. Energy Build. 2014, 80, 1–6.
23. Bahgat, S.; Kourosh, N.; Mark, S.; Mohammad, I.T.; Sadegh, K. Impact of Tile Design on the Thermal Performance of Open and Enclosed Aisles. J. Electron. Packag. 2017, 140.
24. Song, Z. Numerical investigation for performance indices and categorical designs of a fan-assisted data center cooling system. Appl. Therm. Eng. 2017, 118, 714–723.
25. Lim, S.; Chang, H. Airflow management analysis to suppress data center hot spots. Build. Environ. 2021, 197, 107843.
26. Niu, B.; Shi, M.; Zhang, Z.; Li, Y.; Cao, Y.; Pan, S. Multi-objective optimization of supply air jet enhancing airflow uniformity in data center with Taguchi-based grey relational analysis. Build. Environ. 2022, 208, 108606.
27. Samadiani, E.; Joshi, Y.; Allen, J.K.; Mistree, F. Adaptable Robust Design of Multi-Scale Convective Systems Applied to Energy Efficient Data Centers. Numer. Heat Transf. Part A Appl. 2010, 57, 69–100.
Figure 1. Schematic diagram of terminal device.
Figure 2. The regulation principle of the terminal device.
Figure 3. Data center layout.
Figure 4. 3D model of data center.
Figure 5. Grid independence verification results.
Figure 6. Field experiments figures: (a) experiment setup photo of rear door; (b) thermocouple positions on front and rear doors; (c) servers on the tested rack.
Figure 7. Comparison of measured and simulated results.
Figure 8. Temperature in the cold aisle: (a) infrared camera photos; (b) simulation results.
Figure 9. Flow distribution curve for the terminal device.
Figure 10. Streamlines of settings (a) (60, 0, 0), (b) (0, 60, 0), (c) (0, 0, 60).
Figure 11. Power ratio and flow ratio of (a) F1, (b) F4, (c) F7, and (d) F10.
Figure 12. Streamlines of (a) F1, (b) F4, (c) F7, and (d) F10.
Figure 13. Vortex at the bottom of the rack.
Figure 14. The exhaust air velocity distribution: (a) without terminal device; (b) with terminal device.
Figure 15. The exhaust air temperature distribution: (a) without terminal device; (b) with terminal device.
Figure 16. Mean exhaust temperature distribution of racks with/without the terminal device.
Table 1. Power distribution of Racks F1, F4, F7, and F10.

Server Number   Rack F1 (W)   Rack F4 (W)   Rack F7 (W)   Rack F10 (W)
24              144           101           58            157
23              140           100           68            146
22              137           107           54            149
21              156           100           63            145
20              149           104           45            147
19              142           103           60            149
18              157           99            54            147
17              146           112           52            155
16              58            153           62            140
15              53            145           55            154
14              59            142           52            151
13              47            149           54            145
12              58            146           49            192
11              68            145           61            196
10              57            153           53            196
9               45            148           59            189
8               140           141           136           201
7               140           156           149           188
6               146           151           153           196
5               136           150           156           204
4               155           143           141           198
3               136           144           142           200
2               139           154           156           192
1               137           151           149           186
Table 2. Maximum temperature and standard deviation of the rack exhaust temperature with and without the terminal device.

Rack   Max. Temperature (K)               Standard Deviation (K)
       Without Device   With Device       Without Device   With Device
F1     314.8            314.5             5.06             4.12
F4     311.2            309.4             1.64             0.89
F7     310.9            306.7             4.54             2.00
F10    315.3            313.4             1.67             1.10
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
