Show simple item record

dc.contributor.author: Xu, W
dc.contributor.author: Meng, F
dc.contributor.author: Guo, W
dc.contributor.author: Li, X
dc.contributor.author: Fu, G
dc.date.accessioned: 2021-02-22T11:10:43Z
dc.date.issued: 2021-05-21
dc.description.abstract: Optimal operation of hydropower reservoir systems is a classical optimization problem of high dimensionality and stochastic nature. A key challenge lies in improving the interpretability of operation strategies, i.e., the cause-effect relationship between system outputs (or actions) and contributing variables such as states and inputs. Here we report for the first time a new Deep Reinforcement Learning (DRL) framework for optimal operation of reservoir systems based on Deep Q-Networks (DQN), which provides a significant advance in understanding the performance of optimal operations. DQN combines Q-learning with two deep artificial neural networks and acts as the agent that interacts with the reservoir system by learning its states and providing actions. Three knowledge forms of learning, considering the states, actions, and rewards, are constructed to improve the interpretability of operation strategies. The impacts of these knowledge forms and DRL learning parameters on operation performance are analysed. The DRL framework is tested on the Huanren hydropower system in China, using 400-year synthetic flow data for training and 30-year observed flow data for verification. The discretization levels of reservoir water level and energy output yield contrasting effects: finer discretization of water level improves performance in terms of annual hydropower generated and hydropower production reliability; however, finer discretization of hydropower production can reduce search efficiency and thus degrade the resulting DRL performance. Compared with benchmark algorithms including dynamic programming, stochastic dynamic programming, and decision trees, the proposed DRL approach can effectively factor in future inflow uncertainties when deciding optimal operations and generates markedly higher hydropower. This study provides new knowledge on the performance of DRL in the context of hydropower system characteristics and data input features, and shows promise for practical implementation to derive operation policies that can be updated automatically by learning from new data.
dc.description.sponsorship: National Natural Science Foundation of China (NSFC)
dc.description.sponsorship: Royal Society
dc.description.sponsorship: Engineering and Physical Sciences Research Council (EPSRC)
dc.identifier.citation: Vol. 147 (8), article 04021045
dc.identifier.doi: 10.1061/(ASCE)WR.1943-5452.0001409
dc.identifier.grantnumber: 51609025
dc.identifier.grantnumber: IF160108
dc.identifier.grantnumber: EP/N510129/1
dc.identifier.uri: http://hdl.handle.net/10871/124834
dc.language.iso: en
dc.publisher: American Society of Civil Engineers (ASCE)
dc.rights: © 2021 American Society of Civil Engineers
dc.subject: Artificial Intelligence
dc.subject: Deep Q-Network
dc.subject: Deep Reinforcement Learning
dc.subject: Hydropower System
dc.subject: Reservoir Operation
dc.title: Deep Reinforcement Learning for Optimal Hydropower Reservoir Operation
dc.type: Article
dc.date.available: 2021-02-22T11:10:43Z
dc.identifier.issn: 0733-9496
dc.description: This is the author accepted manuscript. The final version is available from ASCE via the DOI in this record.
dc.description: Data Availability Statement: Some or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request. Data include the synthetic and observed flow time series. The code that has been used for the deep reinforcement learning is also available.
dc.identifier.journal: Journal of Water Resources Planning and Management
dc.rights.uri: http://www.rioxx.net/licenses/all-rights-reserved
dcterms.dateAccepted: 2021-02-21
exeter.funder: Royal Society (Government)
exeter.funder: Alan Turing Institute
rioxxterms.version: AM
rioxxterms.licenseref.startdate: 2021-02-21
rioxxterms.type: Journal Article/Review
refterms.dateFCD: 2021-02-22T09:12:56Z
refterms.versionFCD: AM
refterms.dateFOA: 2021-07-05T14:56:40Z
refterms.panel: B

