dc.contributor.author | Chen, Z | |
dc.contributor.author | Hu, J | |
dc.contributor.author | Min, G | |
dc.contributor.author | Luo, C | |
dc.contributor.author | El-Ghazawi, T | |
dc.date.accessioned | 2021-11-30T15:44:10Z | |
dc.date.issued | 2021-12-03 | |
dc.date.updated | 2021-11-30T15:36:10Z | |
dc.description.abstract | The ever-expanding scale of cloud datacenters necessitates automated resource provisioning to best meet the requirements of low latency and high energy efficiency. However, due to dynamic system states and varying user demands, efficient resource allocation in the cloud faces significant challenges. Most existing solutions for cloud resource allocation cannot effectively handle dynamic cloud environments because they depend on prior knowledge of the cloud system, which may lead to excessive energy consumption and degraded Quality-of-Service (QoS). To address this problem, we propose an adaptive and efficient cloud resource allocation scheme based on Actor-Critic Deep Reinforcement Learning (DRL). First, the actor parameterizes the policy (allocating resources) and chooses actions (scheduling jobs) based on the scores assessed by the critic (evaluating actions). Next, the resource allocation policy is updated by gradient ascent, while the variance of the policy gradient is reduced with an advantage function, which improves the training efficiency of the proposed method. We conduct extensive simulation experiments using real-world data from Google cloud datacenters. The results show that our method achieves superior QoS in terms of latency and job dismissing rate with enhanced energy efficiency, compared to two advanced DRL-based and five classic cloud resource allocation methods. | en_GB
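The policy-update step described in the abstract (gradient ascent with an advantage function to reduce the variance of the policy gradient) corresponds to the standard advantage actor-critic update. The following is a minimal sketch of that update, not the paper's exact estimator; the discount factor $\gamma$ and the critic parameterization $V_\phi$ are assumptions here:
$$
\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\!\left[\nabla_\theta \log \pi_\theta(a \mid s)\, A(s, a)\right],
\qquad
A(s, a) \approx r + \gamma V_\phi(s') - V_\phi(s),
$$
where $\pi_\theta$ is the actor's job-scheduling policy and $V_\phi$ is the critic's value estimate. Subtracting the baseline $V_\phi(s)$ from the return lowers the variance of the gradient estimate without biasing it, which is the training-efficiency gain the abstract refers to.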
dc.description.sponsorship | European Union Horizon 2020 | en_GB |
dc.identifier.citation | Published online 3 December 2021 | en_GB |
dc.identifier.doi | 10.1109/TPDS.2021.3132422 | |
dc.identifier.grantnumber | 101008297 | en_GB |
dc.identifier.uri | http://hdl.handle.net/10871/127986 | |
dc.identifier | ORCID: 0000-0001-5406-8420 (Hu, Jia) | |
dc.language.iso | en | en_GB |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_GB |
dc.rights | © 2021 IEEE | |
dc.subject | Cloud computing | en_GB |
dc.subject | datacenters | en_GB |
dc.subject | data centres | en_GB |
dc.subject | resource allocation | en_GB |
dc.subject | energy efficiency | en_GB |
dc.subject | deep reinforcement learning | en_GB |
dc.title | Adaptive and Efficient Resource Allocation in Cloud Datacenters Using Actor-Critic Deep Reinforcement Learning | en_GB |
dc.type | Article | en_GB |
dc.date.available | 2021-11-30T15:44:10Z | |
dc.identifier.issn | 1558-2183 | |
dc.description | This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record. | en_GB
dc.identifier.journal | IEEE Transactions on Parallel and Distributed Systems | en_GB |
dc.relation.ispartof | IEEE Transactions on Parallel and Distributed Systems | |
dc.rights.uri | http://www.rioxx.net/licenses/all-rights-reserved | en_GB |
dcterms.dateAccepted | 2021-11-28 | |
rioxxterms.version | AM | en_GB |
rioxxterms.licenseref.startdate | 2021-11-28 | |
rioxxterms.type | Journal Article/Review | en_GB |
refterms.dateFCD | 2021-11-30T15:36:15Z | |
refterms.versionFCD | AM | |
refterms.dateFOA | 2021-12-14T15:16:28Z | |
refterms.panel | B | en_GB |