dc.contributor.author | Luo, M | |
dc.contributor.author | Du, B | |
dc.contributor.author | Zhang, W | |
dc.contributor.author | Song, T | |
dc.contributor.author | Li, K | |
dc.contributor.author | Zhu, H | |
dc.contributor.author | Birkin, M | |
dc.contributor.author | Wen, H | |
dc.date.accessioned | 2022-11-11T13:05:03Z | |
dc.date.issued | 2023-01-10 | |
dc.date.updated | 2022-11-11T11:43:21Z | |
dc.description.abstract | The electrification of shared mobility has become popular across the globe. Many cities have deployed new shared e-mobility systems,
with coverage continuously expanding from central areas to the city edges. A key challenge in operating these
systems is fleet rebalancing, i.e., how EVs should be repositioned to better satisfy future demand. This is particularly challenging in
the context of expanding systems, because i) the range of the EVs is limited while charging time is typically long, which together constrains
the viable rebalancing operations; and ii) the EV stations in the system change dynamically, i.e., the legitimate targets for
rebalancing operations can vary over time. We tackle these challenges by first investigating a rich set of data collected from a
real-world shared e-mobility system over one year, analyzing the operation model, usage patterns and expansion dynamics of this
new mobility mode. With the learned knowledge we design a high-fidelity simulator, which is able to abstract the key operation details
of EV sharing at fine granularity. We then model the rebalancing task for shared e-mobility systems under continuous expansion as
a Multi-Agent Reinforcement Learning (MARL) problem, which directly takes the range and charging properties of the EVs into
account. We further propose a novel policy optimization approach with action cascading, which is able to cope with the expansion
dynamics and solve the formulated MARL problem. We evaluate the proposed approach extensively; experimental results show that it
outperforms the state of the art, offering significant performance gains in both satisfied demand and net revenue. | en_GB |
dc.description.sponsorship | Engineering and Physical Sciences Research Council (EPSRC) | en_GB |
dc.description.sponsorship | Alan Turing Institute | en_GB |
dc.identifier.citation | Published online 10 January 2023 | en_GB |
dc.identifier.doi | 10.1109/TITS.2022.3233422 | |
dc.identifier.grantnumber | EP/N510129/1 | en_GB |
dc.identifier.uri | http://hdl.handle.net/10871/131740 | |
dc.identifier | ORCID: 0000-0002-7346-9024 (Luo, Man) | |
dc.language.iso | en | en_GB |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_GB |
dc.rights | © 2023 IEEE | |
dc.subject | Electric Vehicles | en_GB |
dc.subject | Shared Mobility Systems | en_GB |
dc.subject | Fleet Rebalancing | en_GB |
dc.subject | Deep Reinforcement Learning | en_GB |
dc.title | Fleet rebalancing for expanding shared e-mobility systems: A multi-agent deep reinforcement learning approach | en_GB |
dc.type | Article | en_GB |
dc.date.available | 2022-11-11T13:05:03Z | |
dc.identifier.issn | 1524-9050 | |
dc.description | This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record | en_GB |
dc.identifier.eissn | 1558-0016 | |
dc.identifier.journal | IEEE Transactions on Intelligent Transportation Systems | en_GB |
dc.rights.uri | http://www.rioxx.net/licenses/all-rights-reserved | en_GB |
dcterms.dateAccepted | 2022-11-07 | |
dcterms.dateSubmitted | 2022-10-28 | |
rioxxterms.version | AM | en_GB |
rioxxterms.licenseref.startdate | 2022-11-07 | |
rioxxterms.type | Journal Article/Review | en_GB |
refterms.dateFCD | 2022-11-11T11:43:25Z | |
refterms.versionFCD | AM | |
refterms.dateFOA | 2023-02-23T14:16:05Z | |
refterms.panel | B | en_GB |