
dc.contributor.author: Wang, Z
dc.contributor.author: Hu, J
dc.contributor.author: Min, G
dc.contributor.author: Zhao, Z
dc.date.accessioned: 2023-10-11T08:34:47Z
dc.date.issued: 2023-09-09
dc.date.updated: 2023-10-11T08:02:20Z
dc.description.abstract: Cooperative edge caching enables edge servers to jointly utilize their cache to store popular contents, thus drastically reducing the latency of content acquisition. One fundamental problem of cooperative caching is how to coordinate the cache replacement decisions at edge servers to meet users’ dynamic requirements and avoid caching redundant contents. Online deep reinforcement learning (DRL) is a promising way to solve this problem by learning a cooperative cache replacement policy using continuous interactions (trial and error) with the environment. However, the sampling process of the interactions is usually expensive and time-consuming, thus hindering the practical deployment of online DRL-based methods. To bridge this gap, we propose a novel Delay-awarE Cooperative cache replacement method based on Offline deep Reinforcement learning (DECOR), which can exploit the existing data at the mobile edge to train an effective policy while avoiding expensive data sampling in the environment. A specific convolutional neural network is also developed to improve the training efficiency and cache performance. Experimental results show that DECOR can learn a superior offline policy from a static dataset compared to an advanced online DRL-based method. Moreover, the learned offline policy outperforms the behavior policy used to collect the dataset by up to 35.9%. [en_GB]
dc.description.sponsorship: UK Research and Innovation [en_GB]
dc.description.sponsorship: European Union Horizon 2020 [en_GB]
dc.identifier.citation: Published online 9 September 2023 [en_GB]
dc.identifier.doi: https://doi.org/10.1145/3623398
dc.identifier.grantnumber: EP/X038866/1 [en_GB]
dc.identifier.grantnumber: 101086159 [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/134199
dc.identifier: ORCID: 0000-0001-5406-8420 (Hu, Jia)
dc.language.iso: en [en_GB]
dc.publisher: Association for Computing Machinery [en_GB]
dc.rights: © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. [en_GB]
dc.subject: Offline deep reinforcement learning [en_GB]
dc.subject: cache replacement [en_GB]
dc.subject: convolutional neural network [en_GB]
dc.subject: edge computing [en_GB]
dc.subject: smart city [en_GB]
dc.title: Intelligent cooperative caching at mobile edge based on offline deep reinforcement learning [en_GB]
dc.type: Article [en_GB]
dc.date.available: 2023-10-11T08:34:47Z
dc.identifier.issn: 1550-4859
dc.description: This is the author accepted manuscript. The final version is available from the Association for Computing Machinery via the DOI in this record. [en_GB]
dc.identifier.eissn: 1550-4867
dc.identifier.journal: ACM Transactions on Sensor Networks [en_GB]
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/ [en_GB]
dcterms.dateAccepted: 2023-08-27
rioxxterms.version: AM [en_GB]
rioxxterms.licenseref.startdate: 2023-09-09
rioxxterms.type: Journal Article/Review [en_GB]
refterms.dateFCD: 2023-10-11T08:30:50Z
refterms.versionFCD: AM
refterms.dateFOA: 2023-10-11T08:34:49Z
refterms.panel: B [en_GB]
refterms.dateFirstOnline: 2023-09-09
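
The abstract above describes training a cache replacement policy offline, from a static dataset of logged interactions, rather than by online trial and error. As a rough illustration of that general idea only (not the paper's DECOR method, whose state representation, convolutional network, and reward design are not reproduced here), the sketch below runs plain batch Q-learning over a synthetic dataset of cache-replacement transitions; all sizes, names, and the reward signal are assumptions.

```python
import numpy as np

# Minimal batch (offline) Q-learning sketch on a static dataset of
# cache-replacement transitions. Everything here is an illustrative
# assumption, not the paper's actual DECOR implementation.

rng = np.random.default_rng(0)

N_STATES = 16    # hypothetical discretised cache states
N_ACTIONS = 4    # hypothetical replacement choices (which cached item to evict)

# A static dataset collected by some behaviour policy:
# (state, action, reward, next_state) tuples. The reward could, for
# example, encode negative content-fetch latency.
dataset = [
    (rng.integers(N_STATES), rng.integers(N_ACTIONS),
     -rng.random(), rng.integers(N_STATES))
    for _ in range(5000)
]

GAMMA = 0.9      # discount factor
ALPHA = 0.1      # learning rate
EPOCHS = 20

Q = np.zeros((N_STATES, N_ACTIONS))

# Repeatedly sweep the fixed dataset; no new interaction with the
# environment is ever sampled, which is the defining trait of offline RL.
for _ in range(EPOCHS):
    for s, a, r, s_next in dataset:
        td_target = r + GAMMA * Q[s_next].max()
        Q[s, a] += ALPHA * (td_target - Q[s, a])

# Greedy eviction policy derived purely from the logged data.
policy = Q.argmax(axis=1)
print("Example eviction decision for state 3:", policy[3])
```

In the paper's setting, a deep network would stand in for the tabular Q-function and the learned policy would be compared against the behaviour policy that collected the dataset, as reported in the abstract.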

