Task-evoked pupillary responses track precision-weighted prediction errors and learning rate during interceptive visuomotor actions.
dc.contributor.author | Harris, DJ | |
dc.contributor.author | Arthur, T | |
dc.contributor.author | Vine, SJ | |
dc.contributor.author | Liu, J | |
dc.contributor.author | Abd Rahman, HR | |
dc.contributor.author | Han, F | |
dc.contributor.author | Wilson, MR | |
dc.date.accessioned | 2023-01-06T09:49:18Z | |
dc.date.issued | 2022-12-21 | |
dc.date.updated | 2023-01-06T09:30:34Z | |
dc.description.abstract | In this study, we examined the relationship between physiological encoding of surprise and the learning of anticipatory eye movements. Active inference portrays perception and action as interconnected inference processes, driven by the imperative to minimise the surprise of sensory observations. To examine this characterisation of oculomotor learning during a hand-eye coordination task, we tested whether anticipatory eye movements were updated in accordance with Bayesian principles and whether trial-by-trial learning rates tracked pupil dilation as a marker of 'surprise'. Forty-four participants completed an interception task in immersive virtual reality that required them to hit bouncing balls that had either expected or unexpected bounce profiles. We recorded anticipatory eye movements known to index participants' beliefs about likely ball bounce trajectories. By fitting a hierarchical Bayesian inference model to the trial-wise trajectories of these predictive eye movements, we were able to estimate each individual's expectations about bounce trajectories, rates of belief updating, and precision-weighted prediction errors. We found that the task-evoked pupil response tracked prediction errors and learning rates but not beliefs about ball bounciness or environmental volatility. These findings are partially consistent with active inference accounts and shed light on how encoding of surprise may shape the control of action. | en_GB |
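dc.description.note | As a rough illustration of the belief-updating scheme described in the abstract, the Python sketch below shows a single-level precision-weighted Gaussian update, in which the learning rate is the relative precision of the new observation and the precision-weighted prediction error is the learning rate times the raw prediction error. This is a minimal sketch only: the study fitted a hierarchical Bayesian inference model (available with the data at https://osf.io/z96q8/), and the function name, parameter values, and simulated "bounciness" series here are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def precision_weighted_update(mu, pi_belief, y, pi_obs):
        """One Bayesian update of a Gaussian belief (e.g. about ball bounciness).

        mu, pi_belief : prior mean and precision (inverse variance) of the belief
        y, pi_obs     : observed outcome and the precision of that observation
        """
        delta = y - mu                          # prediction error
        alpha = pi_obs / (pi_belief + pi_obs)   # learning rate = relative precision
        mu_new = mu + alpha * delta             # precision-weighted belief update
        pi_new = pi_belief + pi_obs             # posterior precision of the belief
        return mu_new, pi_new, alpha, alpha * delta  # last term: precision-weighted PE

    # Simulate trials in which bounciness changes unexpectedly mid-block, and
    # track the trial-wise learning rate and precision-weighted prediction error
    # (the quantities the task-evoked pupil response is reported to track).
    rng = np.random.default_rng(0)
    true_bounciness = np.r_[np.full(30, 0.8), np.full(30, 1.2)]
    mu, pi_belief = 0.8, 10.0
    for t, b in enumerate(true_bounciness):
        y = b + rng.normal(0.0, 0.1)            # noisy observed bounce outcome
        mu, pi_belief, alpha, pwpe = precision_weighted_update(mu, pi_belief, y, pi_obs=100.0)
        if t % 10 == 0:
            print(f"trial {t:2d}: belief={mu:.2f} learning_rate={alpha:.2f} pwPE={pwpe:+.3f}")

Under this sketch, the learning rate falls as the belief grows more precise, and the precision-weighted prediction error spikes at the unexpected change in bounciness, mirroring the surprise signal the pupil data are interpreted as encoding. | en_GB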
dc.description.sponsorship | Leverhulme Trust | en_GB |
dc.identifier.citation | Vol. 12, article 22098 | en_GB |
dc.identifier.doi | https://doi.org/10.1038/s41598-022-26544-w | |
dc.identifier.uri | http://hdl.handle.net/10871/132155 | |
dc.identifier | ORCID: 0000-0003-3880-3856 (Harris, DJ) | |
dc.identifier | ORCID: 0000-0001-9329-1262 (Vine, SJ) | |
dc.identifier | ORCID: 0000-0001-8145-6971 (Wilson, MR) | |
dc.language.iso | en | en_GB |
dc.publisher | Nature Research | en_GB |
dc.relation.url | https://www.ncbi.nlm.nih.gov/pubmed/36543845 | en_GB |
dc.relation.url | https://osf.io/z96q8/ | en_GB |
dc.rights | © The Author(s) 2022. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | en_GB
dc.subject | Humans | en_GB |
dc.subject | Pupil | en_GB |
dc.subject | Bayes Theorem | en_GB |
dc.subject | Learning | en_GB |
dc.subject | Eye Movements | en_GB |
dc.subject | Virtual Reality | en_GB |
dc.subject | Psychomotor Performance | en_GB |
dc.title | Task-evoked pupillary responses track precision-weighted prediction errors and learning rate during interceptive visuomotor actions. | en_GB |
dc.type | Article | en_GB |
dc.date.available | 2023-01-06T09:49:18Z | |
dc.identifier.issn | 2045-2322 | |
exeter.article-number | 22098 | |
exeter.place-of-publication | England | |
dc.description | This is the final version. Available on open access from Nature Research via the DOI in this record. | en_GB |
dc.description | Data availability: All relevant data and code are available online from: https://osf.io/z96q8/ | en_GB |
dc.identifier.journal | Scientific Reports | en_GB |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en_GB |
dcterms.dateAccepted | 2022-12-15 | |
rioxxterms.version | VoR | en_GB |
rioxxterms.licenseref.startdate | 2022-12-21 | |
rioxxterms.type | Journal Article/Review | en_GB |
refterms.dateFCD | 2023-01-06T09:43:39Z | |
refterms.versionFCD | VoR | |
refterms.dateFOA | 2023-01-06T09:50:16Z | |
refterms.panel | A | en_GB |
refterms.dateFirstOnline | 2022-12-21 |