
dc.contributor.author: Wang, J
dc.contributor.author: Hu, J
dc.contributor.author: Min, G
dc.contributor.author: Zhan, W
dc.contributor.author: Zomaya, AY
dc.contributor.author: Georgalas, N
dc.date.accessioned: 2021-11-24T09:26:47Z
dc.date.issued: 2021-11-26
dc.date.updated: 2021-11-24T00:07:12Z
dc.description.abstract: Edge computing is an emerging and promising computing paradigm that brings computation and storage resources to the network edge, hence significantly reducing service latency and network traffic. In edge computing, many applications are composed of dependent tasks, where the outputs of some tasks are the inputs of others. How to offload these tasks to the network edge is a vital and challenging problem, which aims to determine the placement of each task so as to maximize the Quality-of-Service (QoS). Most existing studies either design heuristic algorithms that lack strong adaptivity or adopt learning-based methods that do not consider the intrinsic task dependency. Different from existing work, we propose an intelligent task offloading scheme leveraging off-policy reinforcement learning empowered by a Sequence-to-Sequence (S2S) neural network, where the dependent tasks are represented by a Directed Acyclic Graph (DAG). To improve training efficiency, we combine a specific off-policy policy gradient algorithm with a clipped surrogate objective. We then conduct extensive simulation experiments using heterogeneous applications modelled by synthetic DAGs. The results demonstrate that: 1) our method converges quickly and steadily during training; 2) it outperforms the existing methods and approximates the optimal solution in latency and energy consumption under various scenarios. [en_GB]
dc.description.sponsorship: European Union Horizon 2020 [en_GB]
dc.identifier.citation: Published online 26 November 2021 [en_GB]
dc.identifier.doi: 10.1109/TC.2021.3131040
dc.identifier.grantnumber: 101008297 [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/127931
dc.identifier: ORCID: 0000-0001-5406-8420 (Hu, Jia)
dc.language.iso: en [en_GB]
dc.publisher: Institute of Electrical and Electronics Engineers [en_GB]
dc.rights: © 2021 IEEE
dc.subject: Multi-access edge computing [en_GB]
dc.subject: task offloading [en_GB]
dc.subject: deep reinforcement learning [en_GB]
dc.subject: sequence to sequence neural networks [en_GB]
dc.title: Dependent task offloading for edge computing based on deep reinforcement learning [en_GB]
dc.type: Article [en_GB]
dc.date.available: 2021-11-24T09:26:47Z
dc.identifier.issn: 0018-9340
dc.description: This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record. [en_GB]
dc.identifier.eissn: 1557-9956
dc.identifier.journal: IEEE Transactions on Computers [en_GB]
dc.relation.ispartof: IEEE Transactions on Computers
dc.rights.uri: http://www.rioxx.net/licenses/all-rights-reserved [en_GB]
dcterms.dateAccepted: 2021-11-21
rioxxterms.version: AM [en_GB]
rioxxterms.licenseref.startdate: 2021-11-21
rioxxterms.type: Journal Article/Review [en_GB]
refterms.dateFCD: 2021-11-24T00:07:15Z
refterms.versionFCD: AM
refterms.dateFOA: 2021-12-14T14:47:18Z
refterms.panel: B [en_GB]
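
Note: the abstract above refers to combining an off-policy policy gradient algorithm with a clipped surrogate objective. The following is a minimal, illustrative Python/PyTorch sketch of a generic clipped surrogate loss applied to per-task offloading decisions; it is not the authors' implementation, and the names (clipped_surrogate_loss, clip_eps, advantages) and the toy numbers are assumptions made here for illustration only.

    import torch

    def clipped_surrogate_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
        # Importance ratio r = pi_theta(a|s) / pi_old(a|s) for the sampled
        # per-task offloading decisions (off-policy correction).
        ratio = torch.exp(new_log_probs - old_log_probs)
        unclipped = ratio * advantages
        # Clip the ratio to [1 - eps, 1 + eps] so one update cannot move the
        # policy too far from the behaviour policy that collected the data.
        clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
        # Maximising the clipped surrogate = minimising its negative mean.
        return -torch.min(unclipped, clipped).mean()

    # Toy usage: five tasks of a DAG, each with one sampled offloading decision
    # whose log-probabilities come from a (hypothetical) S2S policy network.
    old_lp = torch.log(torch.tensor([0.60, 0.50, 0.70, 0.40, 0.55]))
    new_lp = torch.log(torch.tensor([0.65, 0.45, 0.75, 0.50, 0.60]))
    adv = torch.tensor([0.30, -0.10, 0.20, 0.05, -0.20])  # e.g. latency/energy advantage estimates
    print(clipped_surrogate_loss(new_lp, old_lp, adv))

Clipping the importance ratio is what allows trajectories gathered under an older policy to be reused safely, which is consistent with the fast and steady training convergence reported in the abstract; the paper's exact off-policy formulation may differ from this sketch.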

