
dc.contributor.author: Ali, KH
dc.contributor.author: Abusara, M
dc.contributor.author: Tahir, AA
dc.contributor.author: Das, S
dc.date.accessioned: 2023-02-20T15:28:52Z
dc.date.issued: 2023-01-27
dc.date.updated: 2023-02-20T14:48:11Z
dc.description.abstract: Real-time energy management of battery storage in grid-connected microgrids can be very challenging due to the intermittent nature of renewable energy sources (RES), load variations, and variable grid tariffs. Two reinforcement learning (RL)–based energy management systems have been previously used, namely, offline and online methods. In offline RL, the agent learns the optimum policy using forecasted generation and load data. Once convergence is achieved, battery commands are dispatched in real time. The performance of this strategy depends heavily on the accuracy of the forecasted data. An agent in online RL learns the best policy by interacting with the system in real time using real data. Online RL deals better with forecast error but can take longer to converge. This paper proposes a novel dual-layer Q-learning strategy to address this challenge. The first (upper) layer is conducted offline to produce directive commands for the battery system for a 24 h horizon, using forecasted generation and load data. The second (lower) Q-learning-based layer refines these battery commands every 15 min by accounting for the changes in RES output and load demand occurring in real time. This decreases the overall operating cost of the microgrid compared with online RL by reducing the convergence time. The superiority of the proposed strategy (dual-layer RL) has been verified by simulation results after comparing it with individual offline and online RL algorithms.
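The two-layer scheme summarised in the abstract can be sketched with a toy tabular Q-learning routine. Everything below (the action set, the import-cost reward, the absence of a state-of-charge constraint, and all function and variable names) is an illustrative assumption, not the paper's actual formulation: the same routine stands in for the offline upper layer when run on 24 h forecast data, and for the 15 min online layer when rerun over the remaining horizon with measured data.

```python
import random


def q_learning_schedule(net_load, prices, episodes=500,
                        alpha=0.1, gamma=0.95, eps=0.2):
    """Toy tabular Q-learning over a fixed horizon (illustrative only).

    States are time steps; actions are battery power setpoints
    (-1 = charge, 0 = idle, +1 = discharge). The reward is the negative
    grid-import cost at each step. No state-of-charge constraint is
    modelled, unlike a real battery dispatch problem.
    """
    actions = [-1, 0, 1]
    T = len(net_load)
    Q = [[0.0] * len(actions) for _ in range(T)]  # Q-table: T x |actions|
    rng = random.Random(0)                        # seeded for repeatability
    for _ in range(episodes):
        for t in range(T):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(len(actions))
            else:
                a = max(range(len(actions)), key=lambda i: Q[t][i])
            grid = net_load[t] - actions[a]        # battery offsets net load
            reward = -prices[t] * max(grid, 0.0)   # pay only for grid imports
            future = max(Q[t + 1]) if t + 1 < T else 0.0
            # standard Q-learning update
            Q[t][a] += alpha * (reward + gamma * future - Q[t][a])
    # greedy schedule: best action per time step
    return [actions[max(range(len(actions)), key=lambda i: Q[t][i])]
            for t in range(T)]


# Upper layer: learn a day-ahead schedule from forecast data (hypothetical
# numbers). A lower layer would rerun this every 15 min on measured data.
forecast_net_load = [2.0, 2.0, 2.0, 2.0]
forecast_prices = [1.0, 1.0, 1.0, 1.0]
schedule = q_learning_schedule(forecast_net_load, forecast_prices,
                               episodes=2000)
```

With a steadily positive net load, the learned schedule settles on discharging at every step, since that minimises the import cost under this toy reward.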
dc.description.sponsorship: Engineering and Physical Sciences Research Council (EPSRC)
dc.format.extent: 1334-
dc.identifier.citation: Vol. 16(3), article 1334
dc.identifier.doi: https://doi.org/10.3390/en16031334
dc.identifier.grantnumber: EP/T025875/1
dc.identifier.uri: http://hdl.handle.net/10871/132509
dc.identifier: ORCID: 0000-0003-0450-8728 (Ali, Khawaja Haider)
dc.identifier: ORCID: 0000-0002-4195-5079 (Abusara, Mohammad)
dc.identifier: ORCID: 0000-0003-1985-6127 (Tahir, Asif Ali)
dc.identifier: ScopusID: 10439744200 | 57201834379 (Tahir, Asif Ali)
dc.identifier: ResearcherID: A-2515-2014 | C-3609-2014 (Tahir, Asif Ali)
dc.identifier: ORCID: 0000-0002-8394-5303 (Das, Saptarshi)
dc.identifier: ScopusID: 57193720393 (Das, Saptarshi)
dc.identifier: ResearcherID: D-5518-2012 (Das, Saptarshi)
dc.language.iso: en
dc.publisher: MDPI
dc.rights: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
dc.subject: reinforcement learning (RL)
dc.subject: microgrid
dc.subject: energy management
dc.subject: offline and online RL
dc.subject: dual-layer Q-learning
dc.title: Dual-Layer Q-Learning Strategy for Energy Management of Battery Storage in Grid-Connected Microgrids
dc.type: Article
dc.date.available: 2023-02-20T15:28:52Z
dc.identifier.issn: 1996-1073
dc.description: This is the final version. Available on open access from MDPI via the DOI in this record.
dc.description: Data Availability Statement: The data are available from the lead or the corresponding author upon reasonable request.
dc.identifier.eissn: 1996-1073
dc.identifier.journal: Energies
dc.relation.ispartof: Energies, 16(3)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dcterms.dateAccepted: 2023-01-19
rioxxterms.version: VoR
rioxxterms.licenseref.startdate: 2023-01-27
rioxxterms.type: Journal Article/Review
refterms.dateFCD: 2023-02-20T15:26:20Z
refterms.versionFCD: VoR
refterms.dateFOA: 2023-02-20T15:28:56Z
refterms.panel: B
refterms.dateFirstOnline: 2023-01-27


