Show simple item record

dc.contributor.author: Byrne, Á
dc.contributor.author: Rinzel, J
dc.contributor.author: Rankin, J
dc.date.accessioned: 2019-10-29T15:19:56Z
dc.date.issued: 2019-10-05
dc.description.abstract: We explore stream segregation with temporally modulated acoustic features using behavioural experiments and modelling. The auditory streaming paradigm, in which alternating high-frequency (A) and low-frequency (B) tones appear in a repeating ABA pattern, has been shown to be perceptually bistable for extended presentations (on the order of minutes). For a fixed, repeating stimulus, perception spontaneously changes (switches) at random times, every 2–15 s, between an integrated interpretation with a galloping rhythm and segregated streams. Streaming in a natural auditory environment requires segregation of auditory objects with features that evolve over time. With the relatively idealized ABA-triplet paradigm, we explore perceptual switching in a non-static environment by considering slowly and periodically varying stimulus features. Our previously published model captures the dynamics of auditory bistability and predicts here how perceptual switches are entrained, tightly locked to the rising and falling phase of modulation. In psychoacoustic experiments we find that entrainment depends on both the period of modulation and the intrinsic switch characteristics of individual listeners. The extended auditory streaming paradigm with slowly modulated stimulus features presented here will be of significant interest for future imaging and neurophysiology experiments by reducing the need for subjective perceptual reports of ongoing perception. [en_GB]
dc.description.sponsorship: Swartz Foundation [en_GB]
dc.description.sponsorship: Engineering and Physical Sciences Research Council (EPSRC) [en_GB]
dc.identifier.citation: Vol. 383, article 107807 [en_GB]
dc.identifier.doi: 10.1016/j.heares.2019.107807
dc.identifier.grantnumber: EP/R03124X/1 [en_GB]
dc.identifier.grantnumber: EP/N014391/1 [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/39383
dc.language.iso: en [en_GB]
dc.publisher: Elsevier [en_GB]
dc.relation.url: https://github.com/james-rankin/auditory-streaming [en_GB]
dc.rights: ©2019 Published by Elsevier B.V. Open access under a Creative Commons license: https://creativecommons.org/licenses/by/4.0/ [en_GB]
dc.subject: Auditory streaming [en_GB]
dc.subject: Bistability [en_GB]
dc.subject: Entrainment [en_GB]
dc.title: Auditory streaming and bistability paradigm extended to a dynamic environment [en_GB]
dc.type: Article [en_GB]
dc.date.available: 2019-10-29T15:19:56Z
dc.identifier.issn: 0378-5955
dc.description: This is the final version. Available on open access from Elsevier via the DOI in this record. [en_GB]
dc.description: Data availability: All experimental data and model code are available in the GitHub repository james-rankin/auditory-streaming: https://github.com/james-rankin/auditory-streaming [en_GB]
dc.identifier.journal: Hearing Research [en_GB]
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ [en_GB]
dcterms.dateAccepted: 2019-10-01
rioxxterms.version: VoR [en_GB]
rioxxterms.licenseref.startdate: 2019-10-01
rioxxterms.type: Journal Article/Review [en_GB]
refterms.dateFCD: 2019-10-29T15:17:08Z
refterms.versionFCD: VoR
refterms.dateFOA: 2019-10-29T15:20:49Z
refterms.panel: B [en_GB]

