dc.contributor.author: Zhang, Y
dc.date.accessioned: 2024-11-26T12:25:41Z
dc.date.issued: 2024-12-02
dc.date.updated: 2024-11-26T04:32:36Z
dc.description.abstract: To address the challenges of massive data processing and low-latency requirements, edge computing (EC) has emerged as a compelling solution that deploys computing power and data storage at the network edge, enabling faster data processing and improved user experiences. Energy-aware system management and optimization for EC is crucial for extending the operational lifetime of battery-powered edge devices and decreasing the operational costs of edge infrastructures, ultimately facilitating sustainable and eco-friendly EC systems. However, dynamic task arrivals, heterogeneous computing tasks (varying data sizes, resource demands, and service performance requirements), fluctuating wireless network conditions, and uneven workload distribution under resource constraints at edge nodes pose considerable difficulties for the design of effective system management and optimization strategies. Traditional methods often rely on expert knowledge and cannot effectively adapt to highly dynamic EC systems. Driven by recent advances in deep reinforcement learning (DRL), which excels at learning optimal decision-making policies directly from complex, high-dimensional environments, this research leverages advanced DRL techniques to autonomously learn efficient management and optimization strategies for energy-aware intelligent EC.

Firstly, this research develops a model-free DRL-based task offloading approach for collaborative EC that optimizes peer offloading among edge servers and computing resource scheduling, subject to the limited energy resources of edge devices and the restricted computing capabilities of edge servers. Experimental results show that the developed approach adapts effectively to EC system changes, achieving 16% higher system income than the Double Deep Q-Network (DDQN) baseline.

Then, this research proposes a safe DRL-based joint charging scheduling and computation offloading scheme for electric vehicle (EV)-assisted EC that minimizes system energy consumption and its variance for enhanced power grid stability, while satisfying the performance requirements of computation tasks and the charging demands of EVs. The feasibility of learning a charging scheduling strategy that satisfies the charging demands of EVs is proven theoretically. Simulation results demonstrate that the proposed scheme achieves near-optimal performance and up to 24% improvement over three state-of-the-art algorithms.

Next, a new multi-agent demand response (DR) management approach is designed to optimize workload migration among edge nodes and computing resource scheduling, aiming to maximize the system utility of EC from both providing computing services to users and reducing energy consumption in response to varying DR signals. A reward sharing mechanism is proposed to distribute local rewards among neighboring edge nodes, thereby facilitating collaborative policies that collectively maximize the overall system utility. Evaluation results on real-world datasets show that the proposed approach outperforms two state-of-the-art algorithms by 14% and the No Service Degradation (NSD) baseline by 68% in system utility.

Finally, this research explores intelligent charging station recommendation for EVs, in which charging stations equipped with computing power serve as edge nodes that process data and make charging recommendation decisions. A novel real-time distributed charging station recommendation algorithm based on federated meta-reinforcement learning (MRL) is introduced to minimize the charging duration experienced by EVs while accounting for dynamic EV charging requests and time-varying charging station availability. Simulation experiments using realistic datasets show that the proposed algorithm reduces EV charging duration by 25% compared with a state-of-the-art multi-agent reinforcement learning method, while effectively balancing charging requests across stations.
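The reward sharing mechanism summarized in the abstract's third contribution lends itself to a short illustration. The Python sketch below is a minimal, assumption-laden rendering of the general idea: each edge node blends its own local reward with the mean reward of its neighbours, steering otherwise selfish agents toward collaborative policies. The blending weight alpha, the line topology, and all reward values are illustrative assumptions, not details taken from the thesis.

```python
from typing import Dict, List

def shared_rewards(
    local_rewards: Dict[str, float],
    neighbours: Dict[str, List[str]],
    alpha: float = 0.5,  # assumed sharing weight in [0, 1]; not from the thesis
) -> Dict[str, float]:
    """Blend each node's local reward with the mean reward of its neighbours."""
    shaped = {}
    for node, reward in local_rewards.items():
        nbrs = neighbours.get(node, [])
        # Isolated nodes simply keep their own reward.
        nbr_mean = sum(local_rewards[n] for n in nbrs) / len(nbrs) if nbrs else reward
        shaped[node] = (1 - alpha) * reward + alpha * nbr_mean
    return shaped

if __name__ == "__main__":
    # Three edge nodes in a line topology a - b - c (illustrative only).
    rewards = {"a": 1.0, "b": -0.5, "c": 2.0}
    topology = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
    print(shared_rewards(rewards, topology))
```

Under this shaping, node b, whose own reward is negative, still receives credit when its neighbours do well, which is the qualitative effect a reward sharing mechanism of this kind aims for.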
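Similarly, the federated MRL recommender of the final contribution rests on a federated aggregation step. Below is a minimal FedAvg-style sketch, assuming each charging station trains a local policy on its own request stream and a coordinator combines the parameters weighted by how many charging requests each station observed. The parameter shapes, station count, and sample counts are assumptions for illustration; the thesis's actual meta-learning update is not reproduced here.

```python
import numpy as np

def federated_average(local_params: list, num_samples: list) -> np.ndarray:
    """FedAvg-style weighted average of per-station policy parameter vectors."""
    total = sum(num_samples)
    return sum((n / total) * p for p, n in zip(local_params, num_samples))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three charging stations, each with a toy 4-parameter policy (assumed).
    params = [rng.normal(size=4) for _ in range(3)]
    requests_seen = [120, 80, 200]  # charging requests per station (assumed)
    print(federated_average(params, requests_seen))
```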
dc.identifier.uri: http://hdl.handle.net/10871/139073
dc.language.iso: en
dc.publisher: University of Exeter
dc.title: Deep Reinforcement Learning for Energy-Aware Intelligent Edge Computing
dc.type: Thesis or dissertation
dc.date.available: 2024-11-26T12:25:41Z
dc.contributor.advisor: Min, Geyong
dc.contributor.advisor: Hu, Jia
dc.contributor.advisor: Luo, Chunbo
dc.publisher.department: Computer Science
dc.rights.uri: http://www.rioxx.net/licenses/all-rights-reserved
dc.type.degreetitle: PhD in Computer Science
dc.type.qualificationlevel: Doctoral
dc.type.qualificationname: Doctoral Thesis
rioxxterms.version: NA
rioxxterms.licenseref.startdate: 2024-12-02
rioxxterms.type: Thesis
refterms.dateFOA: 2025-03-07T01:04:59Z

