
dc.contributor.author: Miller, ID
dc.contributor.author: Dumay, N
dc.contributor.author: Pitt, M
dc.contributor.author: Lam, B
dc.contributor.author: Armstrong, BC
dc.date.accessioned: 2021-01-12T09:32:54Z
dc.date.issued: 2020-08-01
dc.description.abstract: How do neural network models of quasiregular domains learn to represent knowledge that varies in its consistency with the domain, and generalize this knowledge appropriately? Recent work focusing on spelling-to-sound correspondences in English proposes that a graded “warping” mechanism determines the extent to which the pronunciation of a newly learned word should generalize to its orthographic neighbors. We explored the micro-structure of this proposal by training a network to pronounce new made-up words that were consistent with the dominant pronunciation (regulars), were comprised of a completely unfamiliar pronunciation (exceptions), or were consistent with a subordinate pronunciation in English (ambiguous). Crucially, by training the same spelling-to-sound mapping with either one or multiple items, we tested whether variation in adjacent, within-item context made a given pronunciation more able to generalize. This is exactly what we found. Context variability, therefore, appears to act as a modulator of the warping in quasiregular domains. (en_GB)
dc.description.sponsorship: Economic and Social Research Council (ESRC) (en_GB)
dc.description.sponsorship: NSERC (en_GB)
dc.description.sponsorship: CFI (en_GB)
dc.identifier.citation: COGSCI 2020: 42nd Annual Meeting of the Cognitive Science Society. Virtual, 29 July - 1 August 2020, pp. 363-369 (en_GB)
dc.identifier.grantnumber: ES/R006288/1 (en_GB)
dc.identifier.grantnumber: DG 502584 (en_GB)
dc.identifier.grantnumber: JELF/ORF 36578 (en_GB)
dc.identifier.uri: http://hdl.handle.net/10871/124367
dc.language.iso: en (en_GB)
dc.publisher: Cognitive Science Society (en_GB)
dc.relation.url: https://cognitivesciencesociety.org/cogsci-2020/ (en_GB)
dc.relation.url: https://cogsci.mindmodeling.org/2020/ (en_GB)
dc.rights: ©2020 The Author(s). Open access. This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY). (en_GB)
dc.subject: quasiregularity (en_GB)
dc.subject: neural network models (en_GB)
dc.subject: context variability (en_GB)
dc.subject: read aloud (en_GB)
dc.subject: spelling-to-sound correspondences (en_GB)
dc.subject: reading acquisition (en_GB)
dc.title: Context variability promotes generalization in reading aloud: Insight from a neural network simulation (en_GB)
dc.type: Conference paper (en_GB)
dc.date.available: 2021-01-12T09:32:54Z
dc.description: This is the final version. Available from the Cognitive Science Society via the link in this record. (en_GB)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ (en_GB)
exeter.funder: Economic and Social Research Council (ESRC) (en_GB)
rioxxterms.version: VoR (en_GB)
rioxxterms.licenseref.startdate: 2020-08-01
rioxxterms.type: Conference Paper/Proceeding/Abstract (en_GB)
refterms.dateFCD: 2021-01-12T09:28:48Z
refterms.versionFCD: VoR
refterms.dateFOA: 2021-01-12T09:32:58Z
refterms.panel: A (en_GB)
refterms.depositException: publishedGoldOA

