Simple item record

dc.contributor.author: Deák, S
dc.contributor.author: Levine, P
dc.contributor.author: Pearlman, J
dc.contributor.author: Yang, B
dc.date.accessioned: 2023-05-31T13:44:13Z
dc.date.issued: 2023-05-31
dc.date.updated: 2023-05-31T13:23:45Z
dc.description.abstract: We construct a New Keynesian (NK) behavioural macroeconomic model with bounded rationality (BR) and heterogeneous agents. We solve and simulate the model using a third-order approximation for a given policy and evaluate its properties using this solution. The model is inhabited by fully rational (RE) and BR agents. The latter are anticipated-utility learners who, given their beliefs about aggregate states, use simple heuristic rules to forecast aggregate variables exogenous to their micro-environment. In the most general form of the model, RE and BR agents learn from their forecasting errors by observing and comparing them with each other, making the composition of the two types endogenous. This reinforcement learning is at the core of the heterogeneous-expectations model and leads to the striking result that increasing the volatility of exogenous shocks, by assisting the learning process, increases the proportion of RE agents and is welfare-increasing.
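
The abstract describes a reinforcement-learning mechanism in which the shares of rational (RE) and boundedly rational (BR) forecasters evolve with their relative forecast errors. Purely as a rough illustration (this is not the authors' model; the AR(1) state, the naive heuristic, and the parameters rho, sigma, beta and memory are all assumptions), the Python sketch below shows a discrete-choice style updating rule of that general kind:

```python
import numpy as np

# Illustrative sketch only (not the paper's implementation): the share of
# rational (RE) forecasters is updated from relative squared forecast errors
# via a logit (discrete-choice) map, in the spirit of heterogeneous-expectations
# switching models.

rng = np.random.default_rng(0)

rho, sigma = 0.9, 0.5      # assumed persistence and volatility of a stylised AR(1) aggregate state
beta = 2.0                 # intensity of choice: how strongly agents respond to performance differences
memory = 0.7               # geometric memory over past forecast performance
T = 500

x = 0.0                    # current aggregate state
perf_re = perf_br = 0.0    # running (negative) performance of each forecasting rule
shares = []                # track the evolving share of RE agents

for t in range(T):
    # Forecasts are made before the shock is realised.
    forecast_re = rho * x          # RE agents use the true law of motion
    forecast_br = x                # BR heuristic: naive "tomorrow equals today" rule

    # Realised state next period.
    x_next = rho * x + sigma * rng.normal()

    # Reinforcement: performance measures are updated from observed squared errors.
    perf_re = memory * perf_re - (x_next - forecast_re) ** 2
    perf_br = memory * perf_br - (x_next - forecast_br) ** 2

    # Endogenous composition: logit map of the performance gap.
    n_re = 1.0 / (1.0 + np.exp(-beta * (perf_re - perf_br)))

    shares.append(n_re)
    x = x_next

print(f"mean RE share: {np.mean(shares):.3f}")
```

In this toy setting, raising sigma widens the average performance gap between the rational forecast and the naive heuristic, so the logit map pushes the RE share up, loosely echoing the abstract's result that higher shock volatility assists learning and raises the proportion of RE agents.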
dc.description.sponsorship: Economic and Social Research Council (ESRC)
dc.identifier.citation: Vol. 16 (6), article 280
dc.identifier.doi: https://doi.org/10.3390/a16060280
dc.identifier.grantnumber: ES/K005154/1
dc.identifier.uri: http://hdl.handle.net/10871/133261
dc.identifier: ORCID: 0000-0003-2467-3202 (Deak, Szabolcs)
dc.language.iso: en
dc.publisher: MDPI
dc.rights: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
dc.subject: new Keynesian behavioural model
dc.subject: heterogeneous expectations
dc.subject: bounded rationality
dc.subject: reinforcement learning
dc.title: Reinforcement Learning in a New Keynesian Model
dc.type: Article
dc.date.available: 2023-05-31T13:44:13Z
dc.identifier.issn: 1999-4893
dc.description: This is the final version. Available on open access from MDPI via the DOI in this record.
dc.description: Data Availability Statement: No data were created or analysed in this study.
dc.identifier.journal: Algorithms
dc.relation.ispartof: Algorithms, 16
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dcterms.dateAccepted: 2023-05-22
dcterms.dateSubmitted: 2023-03-24
rioxxterms.version: VoR
rioxxterms.licenseref.startdate: 2023-05-31
rioxxterms.type: Journal Article/Review
refterms.dateFCD: 2023-05-31T13:23:47Z
refterms.versionFCD: AM
refterms.dateFOA: 2023-05-31T13:44:21Z
refterms.panel: C
refterms.dateFirstOnline: 2023-05-31

