
dc.contributor.author: Wang, Z
dc.contributor.author: Hu, J
dc.contributor.author: Min, G
dc.contributor.author: Zhao, Z
dc.contributor.author: Wang, Z
dc.date.accessioned: 2024-02-20T15:51:24Z
dc.date.issued: 2024-02-22
dc.date.updated: 2024-02-20T15:04:32Z
dc.description.abstract: One fundamental problem of content caching in edge computing is how to replace content in edge servers with limited capacity to meet the dynamic requirements of users without knowing their preferences in advance. Recently, online deep reinforcement learning (DRL)-based caching methods have been developed to address this problem by learning an edge cache replacement policy using samples collected from continuous interactions (trial and error) with the environment. However, in practice, the online data collection phase is often expensive and time-consuming, hindering the practical deployment of online DRL-based methods. To bridge this gap, we propose a novel Agile edge Cache replacement method based on Offline-online deep Reinforcement learNing (ACORN), which can efficiently learn an edge cache replacement policy offline from a training dataset collected by a behavior policy (e.g., Least Recently Used) and then improve it with fast online fine-tuning. We also design a specific convolutional neural network structure with multiple branches to effectively extract content popularity knowledge from the dataset. Experimental results show that the offline policy generated by ACORN outperforms the behavior policy by up to 38%. Through online fine-tuning, ACORN also achieves a number of cache hits on par with several advanced DRL-based methods while reducing the number of training epochs by up to 40%. [en_GB]
dc.description.sponsorship: UKRI [en_GB]
dc.description.sponsorship: Horizon Europe [en_GB]
dc.identifier.citation: Published online 22 February 2024 [en_GB]
dc.identifier.doi: 10.1109/TPDS.2024.3368763
dc.identifier.grantnumber: EP/X038866/1 [en_GB]
dc.identifier.grantnumber: 101086159 [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/135363
dc.identifier: ORCID: 0000-0001-5406-8420 (Hu, Jia)
dc.language.iso: en [en_GB]
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_GB]
dc.rights: For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. [en_GB]
dc.rights: © 2024 IEEE
dc.subject: Deep reinforcement learning [en_GB]
dc.subject: cache replacement [en_GB]
dc.subject: offline training [en_GB]
dc.subject: convolutional neural network [en_GB]
dc.subject: edge computing [en_GB]
dc.title: Agile Cache Replacement in Edge Computing via Offline-Online Deep Reinforcement Learning [en_GB]
dc.type: Article [en_GB]
dc.date.available: 2024-02-20T15:51:24Z
dc.identifier.issn: 1558-2183
dc.description: This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record. [en_GB]
dc.identifier.journal: IEEE Transactions on Parallel and Distributed Systems [en_GB]
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ [en_GB]
dcterms.dateAccepted: 2024-02-18
dcterms.dateSubmitted: 2023-03-12
rioxxterms.version: AM [en_GB]
rioxxterms.licenseref.startdate: 2024-02-18
rioxxterms.type: Journal Article/Review [en_GB]
refterms.dateFCD: 2024-02-20T15:04:35Z
refterms.versionFCD: AM
refterms.dateFOA: 2024-03-06T13:57:44Z
refterms.panel: B [en_GB]
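
The abstract above describes a two-phase training pipeline: learn a cache replacement policy offline from transitions logged under a behavior policy such as Least Recently Used, then improve it with brief online fine-tuning. The following is a minimal sketch of that offline-then-online pattern only; the toy environment, tabular Q-learning, and all names in it (zipf_request, lru_policy, run, and so on) are illustrative assumptions, not the paper's ACORN implementation, which instead trains a multi-branch convolutional neural network.

import random
from collections import defaultdict

CACHE_SIZE, CATALOG = 3, list(range(8))   # tiny cache over 8 content IDs
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def zipf_request():
    # Skewed popularity: lower content IDs are requested more often.
    return random.choices(CATALOG, weights=[1 / (i + 1) for i in CATALOG])[0]

def lru_policy(cache, last_use, _q, _state):
    # Behavior policy: evict the slot holding the least recently used content.
    return min(range(CACHE_SIZE), key=lambda s: last_use[cache[s]])

def eps_greedy_policy(cache, last_use, q, state):
    # Learned policy: mostly greedy on Q, with a little exploration.
    if random.random() < EPS:
        return random.randrange(CACHE_SIZE)
    return max(range(CACHE_SIZE), key=lambda a: q[(state, a)])

def q_update(q, s, a, r, s2):
    # One-step Q-learning backup.
    target = r + GAMMA * max(q[(s2, b)] for b in range(CACHE_SIZE))
    q[(s, a)] += ALPHA * (target - q[(s, a)])

def run(policy, q, steps, dataset=None, learn=False):
    # Simulate `steps` requests; return the number of cache hits.
    cache = CATALOG[:CACHE_SIZE]
    last_use = defaultdict(int)
    hits, state = 0, tuple(cache)
    for t in range(steps):
        req = zipf_request()
        action = policy(cache, last_use, q, state)
        if req in cache:
            hits, reward = hits + 1, 1.0   # hit: no replacement needed
        else:
            cache[action] = req            # miss: evict the chosen slot
            reward = 0.0
        last_use[req] = t
        next_state = tuple(cache)
        if dataset is not None:
            dataset.append((state, action, reward, next_state))
        if learn:
            q_update(q, state, action, reward, next_state)
        state = next_state
    return hits

random.seed(0)
Q = defaultdict(float)

# Phase 1 (offline): log transitions under LRU, then learn from the fixed log
# without touching the environment again.
log = []
run(lru_policy, Q, steps=5000, dataset=log)
for s, a, r, s2 in log:
    q_update(Q, s, a, r, s2)

# Phase 2 (online): brief epsilon-greedy fine-tuning, then evaluate.
run(eps_greedy_policy, Q, steps=1000, learn=True)
print("hit rate:", run(eps_greedy_policy, Q, steps=2000) / 2000)

Even in this toy, the point the abstract emphasises carries over: phase 1 needs no live interaction because it learns entirely from the logged LRU trajectory, and phase 2 touches the environment only briefly to refine the pretrained policy.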

