Show simple item record

dc.contributor.author: Wang, J
dc.date.accessioned: 2021-11-11T09:37:48Z
dc.date.issued: 2021-11-15
dc.date.updated: 2021-11-11T04:14:24Z
dc.description.abstract: Multi-access edge computing (MEC) is an emerging and important distributed computing paradigm that aims to extend cloud services to the network edge to reduce network traffic and service latency. Proper system optimisation and maintenance are crucial to maintaining high Quality-of-Service (QoS) for end-users. However, with the increasing complexity of MEC architectures and mobile applications, effectively optimising MEC systems is non-trivial. Traditional optimisation methods are generally based on simplified mathematical models and fixed heuristics, which rely heavily on expert knowledge. As a consequence, when facing dynamic MEC scenarios, considerable human effort and expertise are required to redesign the model and tune the heuristics, which is time-consuming. This thesis aims to develop deep reinforcement learning (DRL) methods to handle system optimisation problems in MEC. Instead of developing fixed heuristic algorithms for these problems, this thesis designs DRL-based methods that enable systems to learn optimal solutions on their own. This research demonstrates the effectiveness of DRL-based methods on two crucial system optimisation problems: task offloading and service migration. Specifically, this thesis first investigates the dependent task offloading problem, which considers the inner dependencies of tasks. This research builds a DRL-based method combining a sequence-to-sequence (seq2seq) neural network to address the problem. Experimental results demonstrate that our method outperforms existing heuristic algorithms and achieves near-optimal performance. To further enhance the learning efficiency of the DRL-based task offloading method on unseen learning tasks, this thesis then integrates meta reinforcement learning to handle the task offloading problem. Our method can adapt quickly to new environments with a small number of gradient updates and samples. Finally, this thesis exploits a DRL-based solution for the service migration problem in MEC, considering user mobility. This research models the service migration problem as a Partially Observable Markov Decision Process (POMDP) and proposes a tailored actor-critic algorithm combining Long Short-Term Memory (LSTM) to solve the POMDP. Results from extensive experiments based on real-world mobility traces demonstrate that our method consistently outperforms both heuristic and state-of-the-art learning-driven algorithms in various MEC scenarios. [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/127771
dc.publisher: University of Exeter [en_GB]
dc.subject: Edge Computing [en_GB]
dc.subject: Reinforcement Learning [en_GB]
dc.subject: Optimisation [en_GB]
dc.title: System Optimisation for Multi-access Edge Computing Based on Deep Reinforcement Learning [en_GB]
dc.type: Thesis or dissertation [en_GB]
dc.date.available: 2021-11-11T09:37:48Z
dc.contributor.advisor: Min, Geyong
dc.contributor.advisor: Hu, Jia
dc.publisher.department: Computer Science
dc.rights.uri: http://www.rioxx.net/licenses/all-rights-reserved [en_GB]
dc.type.degreetitle: PhD in Computer Science
dc.type.qualificationlevel: Doctoral
dc.type.qualificationname: Doctoral Thesis
rioxxterms.version: NA [en_GB]
rioxxterms.licenseref.startdate: 2021-11-15
rioxxterms.type: Thesis [en_GB]
refterms.dateFOA: 2021-11-11T09:39:21Z

