Show simple item record

dc.contributor.author             Zhan, W
dc.contributor.author             Luo, C
dc.contributor.author             Wang, J
dc.contributor.author             Wang, C
dc.contributor.author             Min, G
dc.contributor.author             Duan, H
dc.contributor.author             Zhu, Q
dc.date.accessioned               2020-05-26T10:10:56Z
dc.date.issued                    2020-03-06
dc.description.abstract           Vehicular edge computing (VEC) is a new computing paradigm that has great potential to enhance the capability of vehicle terminals (VTs) to support resource-hungry in-vehicle applications with low latency and high energy efficiency. In this paper, we investigate an important computation offloading scheduling problem in a typical VEC scenario, where a VT traveling along an expressway intends to schedule the tasks waiting in its queue to minimize the long-term cost in terms of a trade-off between task latency and energy consumption. Due to diverse task characteristics, a dynamic wireless environment, and frequent handover events caused by vehicle movement, an optimal solution must take into account both where to schedule each task (i.e., local computation or offloading) and when to schedule it (i.e., the order and time of execution). To solve such a complicated stochastic optimization problem, we model it as a carefully designed Markov decision process (MDP) and resort to deep reinforcement learning (DRL) to deal with the enormous state space. Our DRL implementation is based on the state-of-the-art proximal policy optimization (PPO) algorithm. A parameter-shared network architecture combined with a convolutional neural network (CNN) is utilized to approximate both the policy and the value function, which can effectively extract representative features. A series of adjustments to the state and reward representations is made to further improve the training efficiency. Extensive simulation experiments and comprehensive comparisons with six known baseline algorithms and their heuristic combinations clearly demonstrate the advantages of the proposed DRL-based offloading scheduling method. (An illustrative sketch of the parameter-shared network follows this record.)  en_GB
dc.description.sponsorship        European Commission  en_GB
dc.identifier.citation            Published online 6 March 2020  en_GB
dc.identifier.doi                 10.1109/JIOT.2020.2978830
dc.identifier.uri                 http://hdl.handle.net/10871/121159
dc.language.iso                   en  en_GB
dc.publisher                      Institute of Electrical and Electronics Engineers (IEEE)  en_GB
dc.rights                         © 2020 IEEE  en_GB
dc.subject                        Computation offloading  en_GB
dc.subject                        deep reinforcement learning  en_GB
dc.subject                        mobile edge computing  en_GB
dc.subject                        task scheduling  en_GB
dc.subject                        vehicular edge computing  en_GB
dc.title                          Deep Reinforcement Learning-Based Offloading Scheduling for Vehicular Edge Computing  en_GB
dc.type                           Article  en_GB
dc.date.available                 2020-05-26T10:10:56Z
dc.identifier.issn                2327-4662
dc.description                    This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record.  en_GB
dc.identifier.journal             IEEE Internet of Things Journal  en_GB
dc.rights.uri                     http://www.rioxx.net/licenses/all-rights-reserved  en_GB
dcterms.dateAccepted              2020-03-03
exeter.funder                     European Commission  en_GB
rioxxterms.version                AM
rioxxterms.licenseref.startdate   2020-03-03
rioxxterms.type                   Journal Article/Review  en_GB
refterms.dateFCD                  2020-05-26T10:09:16Z
refterms.versionFCD               AM
refterms.dateFOA                  2020-05-26T10:10:59Z
refterms.panel                    B  en_GB
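
The abstract describes a parameter-shared actor-critic network in which a CNN extracts features that feed both the PPO policy and the value function. The PyTorch sketch below illustrates that idea only: the class name SharedActorCritic, all layer sizes and kernel widths, and the (batch, features, queue-length) observation layout are this sketch's assumptions, not details published in the paper.

import torch
import torch.nn as nn

class SharedActorCritic(nn.Module):
    # Parameter-shared network: one CNN trunk feeds both the policy
    # (actor) head and the value (critic) head, as used with PPO.
    # Layer widths and kernel sizes are illustrative assumptions.
    def __init__(self, in_channels: int, n_actions: int):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),                    # -> (batch, 32 * queue_len)
        )
        # LazyLinear infers its input size on the first forward pass,
        # so the sketch works for any task-queue length.
        self.policy_head = nn.Sequential(
            nn.LazyLinear(128), nn.ReLU(), nn.Linear(128, n_actions))
        self.value_head = nn.Sequential(
            nn.LazyLinear(128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, obs: torch.Tensor):
        # obs: (batch, in_channels, queue_len), e.g. per-task features
        # such as data size, deadline, and channel state as channels.
        z = self.trunk(obs)
        dist = torch.distributions.Categorical(logits=self.policy_head(z))
        value = self.value_head(z).squeeze(-1)
        return dist, value

# Minimal usage: sample a scheduling action and get the value estimate.
if __name__ == "__main__":
    net = SharedActorCritic(in_channels=4, n_actions=10)
    obs = torch.randn(2, 4, 5)           # batch of 2, 4 features, queue of 5
    dist, value = net(obs)
    action = dist.sample()
    print(action.shape, value.shape)     # torch.Size([2]) torch.Size([2])

Sharing one trunk between the two heads reduces the parameter count and lets the policy and value estimates reuse the same learned task-queue features, consistent with the feature-extraction motivation stated in the abstract.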

