dc.contributor.author | Wang, J | |
dc.contributor.author | Hu, J | |
dc.contributor.author | Min, G | |
dc.contributor.author | Zhan, W | |
dc.contributor.author | Ni, Q | |
dc.contributor.author | Georgalas, N | |
dc.date.accessioned | 2019-04-25T13:51:02Z | |
dc.date.issued | 2019-05-13 | |
dc.description.abstract | Multi-access Edge Computing (MEC) is an emerging paradigm which utilizes computing resources at the network edge to deploy heterogeneous applications and services. In the MEC system, mobile users and enterprises can offload computation-intensive tasks to nearby computing resources to reduce latency and save energy. When users make offloading decisions, the task dependency needs to be considered. Due to the NP-hardness of the offloading problem, the existing solutions are mainly heuristic and therefore have difficulties in adapting to increasingly complex and dynamic applications. To address the challenges of task dependency and adapting to dynamic scenarios, we propose a new Deep Reinforcement Learning (DRL) based offloading framework, which can efficiently learn the offloading policy uniquely represented by a specially designed Sequence-to-Sequence (S2S) neural network. The proposed DRL solution can automatically discover the common patterns behind various applications so as to infer an optimal offloading policy in different scenarios. Simulation experiments were conducted to evaluate the performance of the proposed DRL-based method with different data transmission rates and task numbers. The results show that our method outperforms two heuristic baselines and achieves nearly optimal performance. | en_GB |
dc.description.sponsorship | Engineering and Physical Sciences Research Council (EPSRC) | en_GB |
dc.identifier.citation | Vol. 57 (5), pp. 64-69. | en_GB |
dc.identifier.doi | 10.1109/MCOM.2019.1800971 | |
dc.identifier.grantnumber | EP/M013936/2 | en_GB |
dc.identifier.uri | http://hdl.handle.net/10871/36902 | |
dc.language.iso | en | en_GB |
dc.publisher | Institute of Electrical and Electronics Engineers | en_GB |
dc.rights | © 2019 IEEE. | |
dc.title | Computation Offloading in Multi-access Edge Computing using Deep Sequential Model based on Reinforcement Learning | en_GB |
dc.type | Article | en_GB |
dc.date.available | 2019-04-25T13:51:02Z | |
dc.identifier.issn | 0163-6804 | |
dc.description | This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record. | en_GB |
dc.identifier.journal | IEEE Communications Magazine | en_GB |
dc.rights.uri | http://www.rioxx.net/licenses/all-rights-reserved | en_GB |
dcterms.dateAccepted | 2019-04-22 | |
exeter.funder | ::Engineering and Physical Sciences Research Council (EPSRC) | en_GB |
rioxxterms.version | AM | en_GB |
rioxxterms.licenseref.startdate | 2019-04-22 | |
rioxxterms.type | Journal Article/Review | en_GB |
refterms.dateFCD | 2019-04-25T12:37:45Z | |
refterms.versionFCD | AM | |
refterms.dateFOA | 2019-05-14T14:25:31Z | |
refterms.panel | B | en_GB |