Agile Cache Replacement in Edge Computing via Offline-Online Deep Reinforcement Learning
dc.contributor.author | Wang, Z | |
dc.contributor.author | Hu, J | |
dc.contributor.author | Min, G | |
dc.contributor.author | Zhao, Z | |
dc.contributor.author | Wang, Z | |
dc.date.accessioned | 2024-02-20T15:51:24Z | |
dc.date.issued | 2024-02-22 | |
dc.date.updated | 2024-02-20T15:04:32Z | |
dc.description.abstract | One fundamental problem of content caching in edge computing is how to replace contents in edge servers with limited capacities to meet the dynamic requirements of users without knowing their preferences in advance. Recently, online deep reinforcement learning (DRL)-based caching methods have been developed to address this problem by learning an edge cache replacement policy using samples collected from continuous interactions (trial and error) with the environment. However, in practice, the online data collection phase is often expensive and time-consuming, thus hindering the practical deployment of online DRL-based methods. To bridge this gap, we propose a novel Agile edge Cache replacement method based on Offline-online deep Reinforcement learNing (ACORN), which can efficiently learn an edge cache replacement policy offline from a training dataset collected by a behavior policy (e.g., Least Recently Used) and then improve it with fast online fine-tuning. We also design a specific convolutional neural network structure with multiple branches to effectively extract content popularity knowledge from the dataset. Experimental results show that the offline policy generated by ACORN outperforms the behavior policy by up to 38%. Through online fine-tuning, ACORN also achieves a number of cache hits on par with several advanced DRL-based methods while reducing the number of training epochs by up to 40%. | en_GB |
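To make the offline-then-online training pattern described in the abstract concrete, the sketch below shows a minimal tabular version of that two-phase workflow. It is not the authors' ACORN implementation: the toy environment, the "evict slot 0" stand-in for the LRU behavior policy, the tabular Q-learning updates, and all hyperparameters are illustrative assumptions (ACORN itself uses a multi-branch convolutional network rather than a Q-table).

```python
"""Minimal sketch (with assumed details) of offline learning from logged
behavior-policy transitions followed by short online fine-tuning."""
import random
from collections import defaultdict

N_CONTENTS, CACHE_SIZE = 8, 3   # illustrative sizes, not from the paper
GAMMA, ALPHA = 0.9, 0.1         # assumed discount and learning rate

class ToyCacheEnv:
    """Zipf-like request stream; the agent picks which cache slot to evict."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.cache = list(range(CACHE_SIZE))        # warm-started cache
        self.req = self._next_request()

    def _next_request(self):
        weights = [1.0 / (i + 1) for i in range(N_CONTENTS)]
        return self.rng.choices(range(N_CONTENTS), weights=weights)[0]

    def state(self):
        return (tuple(sorted(self.cache)), self.req)

    def step(self, action):
        """action = slot index to evict on a miss; reward 1.0 on a hit."""
        reward = 1.0 if self.req in self.cache else 0.0
        if reward == 0.0:
            self.cache[action] = self.req           # replace the chosen slot
        self.req = self._next_request()
        return reward, self.state()

def q_update(Q, s, a, r, s2):
    best_next = max(Q[(s2, a2)] for a2 in range(CACHE_SIZE))
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

Q = defaultdict(float)

# Phase 1: offline learning from transitions logged under a fixed behavior
# policy (here crudely approximated by always evicting slot 0).
env = ToyCacheEnv(seed=1)
dataset, s = [], env.state()
for _ in range(5000):
    a = 0                                           # behavior policy's choice
    r, s2 = env.step(a)
    dataset.append((s, a, r, s2))
    s = s2
for s, a, r, s2 in dataset:                         # offline Q-learning pass
    q_update(Q, s, a, r, s2)

# Phase 2: short online fine-tuning with an epsilon-greedy policy.
env, eps = ToyCacheEnv(seed=2), 0.1
s, hits = env.state(), 0.0
for _ in range(2000):
    if random.random() < eps:
        a = random.randrange(CACHE_SIZE)
    else:
        a = max(range(CACHE_SIZE), key=lambda x: Q[(s, x)])
    r, s2 = env.step(a)
    q_update(Q, s, a, r, s2)
    hits += r
    s = s2
print(f"online hit ratio after fine-tuning: {hits / 2000:.2f}")
```

The design point the sketch illustrates is the one the abstract argues for: most of the policy is learned from an already-available log, so the expensive online interaction phase is reduced to a brief fine-tuning step.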
dc.description.sponsorship | UKRI | en_GB |
dc.description.sponsorship | Horizon Europe | en_GB |
dc.identifier.citation | Published online 22 February 2024 | en_GB |
dc.identifier.doi | 10.1109/TPDS.2024.3368763 | |
dc.identifier.grantnumber | EP/X038866/1 | en_GB |
dc.identifier.grantnumber | 101086159 | en_GB |
dc.identifier.uri | http://hdl.handle.net/10871/135363 | |
dc.identifier | ORCID: 0000-0001-5406-8420 (Hu, Jia) | |
dc.language.iso | en | en_GB |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_GB |
dc.rights | For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. | en_GB |
dc.rights | © 2024 IEEE | |
dc.subject | Deep reinforcement learning | en_GB |
dc.subject | cache replacement | en_GB |
dc.subject | offline training | en_GB |
dc.subject | convolutional neural network | en_GB |
dc.subject | edge computing | en_GB |
dc.title | Agile Cache Replacement in Edge Computing via Offline-Online Deep Reinforcement Learning | en_GB |
dc.type | Article | en_GB |
dc.date.available | 2024-02-20T15:51:24Z | |
dc.identifier.issn | 1558-2183 | |
dc.description | This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record | en_GB |
dc.identifier.journal | IEEE Transactions on Parallel and Distributed Systems | en_GB |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en_GB |
dcterms.dateAccepted | 2024-02-18 | |
dcterms.dateSubmitted | 2023-03-12 | |
rioxxterms.version | AM | en_GB |
rioxxterms.licenseref.startdate | 2024-02-18 | |
rioxxterms.type | Journal Article/Review | en_GB |
refterms.dateFCD | 2024-02-20T15:04:35Z | |
refterms.versionFCD | AM | |
refterms.dateFOA | 2024-03-06T13:57:44Z | |
refterms.panel | B | en_GB |