
dc.contributor.author: Wang, J
dc.contributor.author: Hu, J
dc.contributor.author: Min, G
dc.contributor.author: Zhan, W
dc.contributor.author: Ni, Q
dc.contributor.author: Georgalas, N
dc.date.accessioned: 2019-04-25T13:51:02Z
dc.date.issued: 2019-05-13
dc.description.abstract: Multi-access Edge Computing (MEC) is an emerging paradigm which utilizes computing resources at the network edge to deploy heterogeneous applications and services. In the MEC system, mobile users and enterprises can offload computation-intensive tasks to nearby computing resources to reduce latency and save energy. When users make offloading decisions, the task dependency needs to be considered. Due to the NP-hardness of the offloading problem, the existing solutions are mainly heuristic, and therefore have difficulties in adapting to the increasingly complex and dynamic applications. To address the challenges of task dependency and adapting to dynamic scenarios, we propose a new Deep Reinforcement Learning (DRL) based offloading framework, which can efficiently learn the offloading policy uniquely represented by a specially designed Sequence-to-Sequence (S2S) neural network. The proposed DRL solution can automatically discover the common patterns behind various applications so as to infer an optimal offloading policy in different scenarios. Simulation experiments were conducted to evaluate the performance of the proposed DRL-based method with different data transmission rates and task numbers. The results show that our method outperforms two heuristic baselines and achieves nearly optimal performance. [en_GB]
dc.description.sponsorship: Engineering and Physical Sciences Research Council (EPSRC) [en_GB]
dc.identifier.citation: Vol. 57 (5), pp. 64-69. [en_GB]
dc.identifier.doi: 10.1109/MCOM.2019.1800971
dc.identifier.grantnumber: EP/M013936/2 [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/36902
dc.language.iso: en [en_GB]
dc.publisher: Institute of Electrical and Electronics Engineers [en_GB]
dc.rights: © 2019 IEEE.
dc.title: Computation Offloading in Multi-access Edge Computing using Deep Sequential Model based on Reinforcement Learning [en_GB]
dc.type: Article [en_GB]
dc.date.available: 2019-04-25T13:51:02Z
dc.identifier.issn: 0163-6804
dc.description: This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record. [en_GB]
dc.identifier.journal: IEEE Communications Magazine [en_GB]
dc.rights.uri: http://www.rioxx.net/licenses/all-rights-reserved [en_GB]
dcterms.dateAccepted: 2019-04-22
exeter.funder: Engineering and Physical Sciences Research Council (EPSRC) [en_GB]
rioxxterms.version: AM [en_GB]
rioxxterms.licenseref.startdate: 2019-04-22
rioxxterms.type: Journal Article/Review [en_GB]
refterms.dateFCD: 2019-04-25T12:37:45Z
refterms.versionFCD: AM
refterms.dateFOA: 2019-05-14T14:25:31Z
refterms.panel: B [en_GB]

