
dc.contributor.author: Chen, Z
dc.contributor.author: Hu, J
dc.contributor.author: Min, G
dc.contributor.author: Luo, C
dc.contributor.author: El-Ghazawi, T
dc.date.accessioned: 2021-11-30T15:44:10Z
dc.date.issued: 2021-12-03
dc.date.updated: 2021-11-30T15:36:10Z
dc.description.abstract: The ever-expanding scale of cloud datacenters necessitates automated resource provisioning to meet the requirements of low latency and high energy efficiency. However, due to dynamic system states and varying user demands, efficient resource allocation in the cloud faces huge challenges. Most existing solutions for cloud resource allocation cannot effectively handle dynamic cloud environments because they depend on prior knowledge of the cloud system, which may lead to excessive energy consumption and degraded Quality-of-Service (QoS). To address this problem, we propose an adaptive and efficient cloud resource allocation scheme based on Actor-Critic Deep Reinforcement Learning (DRL). First, the actor parameterizes the policy (allocating resources) and chooses actions (scheduling jobs) based on the scores assessed by the critic (evaluating actions). Next, the resource allocation policy is updated by gradient ascent while the variance of the policy gradient is reduced with an advantage function, which improves the training efficiency of the proposed method. We conduct extensive simulation experiments using real-world data from Google cloud datacenters. The results show that our method achieves superior QoS in terms of latency and job dismissing rate with enhanced energy efficiency, compared to two advanced DRL-based and five classic cloud resource allocation methods. [en_GB]
dc.description.sponsorship: European Union Horizon 2020 [en_GB]
dc.identifier.citation: Published online 3 December 2021 [en_GB]
dc.identifier.doi: 10.1109/TPDS.2021.3132422
dc.identifier.grantnumber: 101008297 [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/127986
dc.identifier: ORCID: 0000-0001-5406-8420 (Hu, Jia)
dc.language.iso: en [en_GB]
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_GB]
dc.rights: © 2021 IEEE
dc.subject: Cloud computing [en_GB]
dc.subject: datacenters [en_GB]
dc.subject: data centres [en_GB]
dc.subject: resource allocation [en_GB]
dc.subject: energy efficiency [en_GB]
dc.subject: deep reinforcement learning [en_GB]
dc.title: Adaptive and Efficient Resource Allocation in Cloud Datacenters Using Actor-Critic Deep Reinforcement Learning [en_GB]
dc.type: Article [en_GB]
dc.date.available: 2021-11-30T15:44:10Z
dc.identifier.issn: 1558-2183
dc.description: This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record. [en_GB]
dc.identifier.journal: IEEE Transactions on Parallel and Distributed Systems [en_GB]
dc.relation.ispartof: IEEE Transactions on Parallel and Distributed Systems
dc.rights.uri: http://www.rioxx.net/licenses/all-rights-reserved [en_GB]
dcterms.dateAccepted: 2021-11-28
rioxxterms.version: AM [en_GB]
rioxxterms.licenseref.startdate: 2021-11-28
rioxxterms.type: Journal Article/Review [en_GB]
refterms.dateFCD: 2021-11-30T15:36:15Z
refterms.versionFCD: AM
refterms.dateFOA: 2021-12-14T15:16:28Z
refterms.panel: B [en_GB]
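
The abstract above describes an advantage actor-critic update: the actor parameterizes the job-scheduling policy, the critic scores the chosen actions, and the policy gradient is weighted by an advantage term to reduce variance. The following is a minimal illustrative sketch of that general update pattern in PyTorch; it is not the authors' implementation, and the state/action dimensions, network sizes, and hyperparameters are placeholder assumptions for demonstration only.

# Minimal advantage actor-critic sketch (illustrative only, not the code
# from the paper). State encoding, action space, network sizes and
# hyperparameters below are assumptions.
import torch
import torch.nn as nn

STATE_DIM = 8      # assumed: e.g. server-load and job-request features
NUM_ACTIONS = 4    # assumed: e.g. candidate servers for the incoming job
GAMMA = 0.99       # discount factor (assumed)

# The actor parameterizes the scheduling policy; the critic evaluates states.
actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_ACTIONS))
critic = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

def select_action(state):
    """Sample a scheduling action from the actor's softmax policy."""
    logits = actor(torch.as_tensor(state, dtype=torch.float32))
    return torch.distributions.Categorical(logits=logits).sample().item()

def update(state, action, reward, next_state, done):
    """One actor-critic step: advantage-weighted policy gradient + TD critic."""
    s = torch.as_tensor(state, dtype=torch.float32)
    s_next = torch.as_tensor(next_state, dtype=torch.float32)

    value = critic(s).squeeze()
    with torch.no_grad():
        bootstrap = 0.0 if done else critic(s_next).squeeze().item()
    # Advantage A = r + gamma * V(s') - V(s) reduces policy-gradient variance.
    advantage = reward + GAMMA * bootstrap - value

    # Gradient ascent on log pi(a|s) * A, written as descent on its negation.
    log_prob = torch.log_softmax(actor(s), dim=-1)[action]
    actor_loss = -log_prob * advantage.detach()
    critic_loss = advantage.pow(2)

    optimizer.zero_grad()
    (actor_loss + critic_loss).backward()
    optimizer.step()

In the paper's setting the state would encode datacenter and job information and the reward would reflect latency, job dismissal, and energy consumption, but those details are not reproduced in this sketch.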

