dc.contributor.author: Chen, Z
dc.date.accessioned: 2021-10-12T14:11:11Z
dc.date.issued: 2021-10-11
dc.description.abstract: Cloud computing has become one of the most prevalent computing paradigms, promising on-demand provisioning of computing, storage, and networking resources under service-level agreements (SLAs) between cloud service providers (CSPs) and users. By moving computational resources to the network edge, multi-access edge computing (MEC) reduces service latency and relieves the network traffic of centralized cloud computing: computation-intensive tasks are offloaded from mobile devices (MDs) to nearby edge servers, where they are processed with more ample computational resources.

In cloud computing, resource provisioning requires adaptive and accurate prediction of cloud workloads. However, existing methods cannot effectively predict high-dimensional and highly variable workloads, which leads to wasted resources and violated SLAs. Recurrent neural networks (RNNs) are naturally suited to sequential data and have recently been applied to workload prediction, but they often struggle to learn long-term dependencies and therefore fail to predict workloads accurately. Moreover, the ever-expanding scale of cloud datacenters calls for automated resource allocation that meets the requirements of low latency and high energy efficiency. Owing to dynamic system states and diverse user demands, efficient resource allocation in the cloud is highly challenging; most existing solutions cannot cope with dynamic cloud environments because they rely on prior knowledge of the cloud system, which may lead to excessive energy consumption and degraded Quality of Service (QoS). As for the offloading problem in edge computing, most classic offloading approaches assume a specific application scenario in their optimization objectives; they may work well in a static environment with a simple setup, but they struggle to adapt to complex and dynamic environments with changing system states and user demands. To this end, this thesis addresses these challenges in cloud and edge computing. The major contributions are as follows.

1) A deep-learning-based Prediction Algorithm for cloud Workloads (L-PAW) is proposed. First, a top-sparse auto-encoder (TSA) is designed to extract the essential representations of workloads from the original high-dimensional workload data. Next, the TSA and a gated recurrent unit (GRU) block are integrated into an RNN to achieve adaptive and accurate prediction of highly variable workloads.

2) An adaptive and efficient cloud resource allocation scheme based on actor-critic deep reinforcement learning (DRL) is proposed. The actor parameterizes the policy (allocating resources) and chooses actions (scheduling jobs) according to the scores assessed by the critic (evaluating actions). The resource allocation policy is then updated by gradient ascent, while the variance of the policy gradient is reduced with an advantage function (measuring each action's advantage relative to the mean), which improves the training efficiency of the proposed method.
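For concreteness, here is a minimal, hypothetical sketch of the kind of pipeline described in contribution 1: a top-k sparse auto-encoder feeding a GRU for next-step workload prediction. It assumes PyTorch and synthetic data; the class names, layer sizes, and the top-k value are illustrative assumptions, not the thesis implementation.

    # Illustrative sketch only: top-sparse encoding + GRU for workload prediction.
    import torch
    import torch.nn as nn

    class TopSparseAutoEncoder(nn.Module):
        """Encode workload vectors, keeping only the k largest activations."""
        def __init__(self, in_dim, hidden_dim, k):
            super().__init__()
            self.encoder = nn.Linear(in_dim, hidden_dim)
            self.decoder = nn.Linear(hidden_dim, in_dim)
            self.k = k

        def forward(self, x):
            h = torch.relu(self.encoder(x))
            # Top-sparse constraint: zero out all but the k largest activations.
            idx = torch.topk(h, self.k, dim=-1).indices
            h_sparse = h * torch.zeros_like(h).scatter_(-1, idx, 1.0)
            return h_sparse, self.decoder(h_sparse)

    class WorkloadPredictor(nn.Module):
        """Sparse workload features -> GRU -> next-step workload vector."""
        def __init__(self, in_dim, hidden_dim, k, gru_dim):
            super().__init__()
            self.tsa = TopSparseAutoEncoder(in_dim, hidden_dim, k)
            self.gru = nn.GRU(hidden_dim, gru_dim, batch_first=True)
            self.head = nn.Linear(gru_dim, in_dim)

        def forward(self, seq):                     # seq: (batch, time, in_dim)
            feats, recon = self.tsa(seq)
            out, _ = self.gru(feats)
            return self.head(out[:, -1, :]), recon  # prediction + reconstruction

    # Toy usage: 32 traces, 20 time steps, 100 workload metrics per step.
    model = WorkloadPredictor(in_dim=100, hidden_dim=64, k=8, gru_dim=32)
    pred, recon = model(torch.randn(32, 20, 100))
    print(pred.shape)  # torch.Size([32, 100])

One reasonable training objective for such a sketch is a prediction loss plus a reconstruction loss, so that the sparse codes remain faithful to the original workload traces.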
3) To enable blockchain-based secure payment transactions between CSPs and users in Mobile Crowdsensing (MCS), a new blockchain-based framework for MCS systems is first designed to ensure high reliability in complex network scenarios with many MDs, and a novel Credit-based Proof-of-Work (C-PoW) algorithm is developed to reduce the complexity of PoW while maintaining the reliability of the blockchain. Next, a Deep Reinforcement learning based Computation Offloading (DRCO) method is proposed to offload the computation-intensive C-PoW tasks to edge servers. By integrating deep neural networks (DNNs) with Proximal Policy Optimization (PPO), DRCO attains an optimal or near-optimal offloading policy for C-PoW tasks in the dynamic and complex MCS environment.
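As a companion illustration for contributions 2 and 3, the sketch below shows a clipped-surrogate PPO update for a binary offload-locally-or-to-edge decision, again assuming PyTorch; the state features, network sizes, and the way returns are obtained are assumptions for illustration, not the thesis implementation.

    # Illustrative sketch only: PPO clipped-surrogate update for an offloading policy.
    import torch
    import torch.nn as nn

    policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 2))  # state -> logits over {local, edge}
    value  = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 1))  # critic (baseline)
    opt = torch.optim.Adam(list(policy.parameters()) + list(value.parameters()), lr=3e-4)

    def ppo_loss(states, actions, old_log_probs, returns, clip_eps=0.2):
        dist = torch.distributions.Categorical(logits=policy(states))
        log_probs = dist.log_prob(actions)
        advantages = returns - value(states).squeeze(-1)        # advantage = return - critic baseline
        ratio = torch.exp(log_probs - old_log_probs)            # importance ratio vs. the old policy
        clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
        policy_loss = -torch.min(ratio * advantages.detach(),
                                 clipped * advantages.detach()).mean()
        value_loss = advantages.pow(2).mean()                   # fit the critic to the returns
        return policy_loss + 0.5 * value_loss

    # Toy batch: 64 transitions with 8 state features (e.g. task size, channel state, server load).
    s = torch.randn(64, 8)
    a = torch.randint(0, 2, (64,))
    old_lp = torch.distributions.Categorical(logits=policy(s)).log_prob(a).detach()
    ret = torch.randn(64)
    loss = ppo_loss(s, a, old_lp, ret)
    opt.zero_grad(); loss.backward(); opt.step()

The clipping term keeps each update close to the policy that collected the data, which is what makes PPO stable in the dynamic environments the abstract describes; the advantage term plays the same variance-reduction role as in the actor-critic scheme of contribution 2.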
dc.identifier.uri: http://hdl.handle.net/10871/127428
dc.publisher: University of Exeter
dc.subject: Cloud-Edge Computing
dc.subject: Resource Management and Optimization
dc.subject: Artificial Intelligence
dc.title: Intelligent Resource Management and Optimization in Cloud-Edge Computing
dc.type: Thesis or dissertation
dc.date.available: 2021-10-12T14:11:11Z
dc.contributor.advisor: Hu, J
dc.contributor.advisor: Min, G
dc.contributor.advisor: Luo, C
dc.publisher.department: Department of Computer Science, College of Engineering, Mathematics and Physical Sciences
dc.rights.uri: http://www.rioxx.net/licenses/all-rights-reserved
dc.type.degreetitle: Doctor of Philosophy in Computer Science
dc.type.qualificationlevel: Doctoral
dc.type.qualificationname: Doctoral Thesis
rioxxterms.version: NA
rioxxterms.licenseref.startdate: 2021-10-12
rioxxterms.type: Thesis
refterms.dateFOA: 2021-10-12T14:11:25Z

