dc.contributor.author | Li, J | |
dc.contributor.author | Deng, Y | |
dc.contributor.author | Zhou, Y | |
dc.contributor.author | Zhang, Z | |
dc.contributor.author | Min, G | |
dc.contributor.author | Qin, X | |
dc.date.accessioned | 2022-05-31T08:04:17Z | |
dc.date.issued | 2022-03-16 | |
dc.date.updated | 2022-05-30T16:05:12Z | |
dc.description.abstract | Increasing workload conditions lead to a significant surge in power consumption and computing node failures in data centers. Existing workload distribution strategies focus merely on either thermal awareness or failure mitigation, overlooking the impact of node failures on the energy efficiency of cloud data centers. To address this issue, a holistic model is built to characterize the impacts of workloads, computing and cooling costs, heat recirculation, and node failures on the energy efficiency of cloud data centers. Leveraging this holistic model, we propose a novel thermal-aware workload distribution strategy called HGSA that takes node failures into account, aiming to improve the energy efficiency of cloud data centers. Our empirical findings confirm that (i) faulty nodes lead to a large rise in power consumption, and (ii) failure locations play a vital role in the power consumption of data centers. Experimental results reveal that HGSA is adept at making near-optimal workload distribution decisions. In particular, compared to existing solutions, HGSA reduces the minimum inlet temperature by 5.2%-15%, raises the maximum supply air temperature of the CRAC by 4.2%-26.5%, and lowers the cooling cost by 15.4%-50%. Furthermore, HGSA cuts the total power consumption by 0.65%-78%. | en_GB
dc.description.sponsorship | National Natural Science Foundation of China | en_GB |
dc.description.sponsorship | Guangdong Basic and Applied Basic Research Foundation | en_GB |
dc.description.sponsorship | International Cooperation Project of Guangdong Province | en_GB |
dc.description.sponsorship | Open Project Program of Wuhan National Laboratory for Optoelectronics | en_GB |
dc.format.extent | 1-1 | |
dc.identifier.citation | Published online 16 March 2022 | en_GB |
dc.identifier.doi | https://doi.org/10.1109/tc.2022.3158476 | |
dc.identifier.grantnumber | 62072214 | en_GB |
dc.identifier.grantnumber | 2021B1515120048 | en_GB |
dc.identifier.grantnumber | 2020A0505100040 | en_GB |
dc.identifier.grantnumber | 2020WNLOKF006 | en_GB |
dc.identifier.uri | http://hdl.handle.net/10871/129783 | |
dc.identifier | ORCID: 0000-0003-1395-7314 (Min, Geyong) | |
dc.language.iso | en | en_GB |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_GB |
dc.rights | © 2022 IEEE | en_GB |
dc.subject | Data Centers | en_GB |
dc.subject | Power Consumption | en_GB |
dc.subject | Node Failure | en_GB |
dc.subject | Energy Efficiency | en_GB |
dc.subject | Workload Distribution | en_GB |
dc.subject | Thermal-Aware | en_GB |
dc.title | Towards Thermal-Aware Workload Distribution in Cloud Data Centers Based on Failure Models | en_GB |
dc.type | Article | en_GB |
dc.date.available | 2022-05-31T08:04:17Z | |
dc.identifier.issn | 0018-9340 | |
dc.description | This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record | en_GB |
dc.identifier.eissn | 1557-9956 | |
dc.identifier.journal | IEEE Transactions on Computers | en_GB |
dc.relation.ispartof | IEEE Transactions on Computers, PP(99) | |
dc.rights.uri | http://www.rioxx.net/licenses/all-rights-reserved | en_GB |
rioxxterms.version | AM | en_GB |
rioxxterms.licenseref.startdate | 2022-03-16 | |
rioxxterms.type | Journal Article/Review | en_GB |
refterms.dateFCD | 2022-05-31T08:01:16Z | |
refterms.versionFCD | AM | |
refterms.dateFOA | 2022-05-31T08:04:22Z | |
refterms.panel | B | en_GB |