Show simple item record

dc.contributor.author: Medathati, NVK
dc.contributor.author: Rankin, J
dc.contributor.author: Meso, AI
dc.contributor.author: Kornprobst, P
dc.contributor.author: Masson, GS
dc.date.accessioned: 2018-03-06T15:07:23Z
dc.date.issued: 2017-09-12
dc.description.abstract: In sensory systems, a range of computational rules is presumed to be implemented by neuronal subpopulations with different tuning functions. For instance, in primate cortical area MT, different classes of direction-selective cells have been identified and related to motion integration, segmentation, or transparency. Still, how such different tuning properties are constructed is unclear. The dominant theoretical viewpoint, based on a linear-nonlinear feed-forward cascade, does not account for their complex temporal dynamics or their versatility when facing different input statistics. Here, we demonstrate that a recurrent network model of visual motion processing can reconcile these different properties. Using a ring network, we show how excitatory and inhibitory interactions can implement different computational rules, such as vector averaging, winner-take-all, or superposition. The model also captures ordered temporal transitions between these behaviors. In particular, depending on the inhibition regime, the network can switch from motion integration to segmentation, and is thus able either to compute a single pattern motion or to superpose multiple inputs, as in motion transparency. We thus demonstrate that recurrent architectures can adaptively give rise to different cortical computational regimes depending upon the input statistics, from sensory flow integration to segmentation. [en_GB]
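The abstract describes a ring network in which recurrent excitation and inhibition shape direction-selective responses. A minimal rate-model sketch of that general architecture is below; it is an illustrative assumption only (unit count, weight profile, and parameter values are invented here), not the authors' published implementation.

```python
import numpy as np

# Illustrative ring rate model (all parameters are assumptions, not the
# paper's values). N units have preferred directions theta on the ring;
# recurrent weights mix broad inhibition (J0 < 0) with local excitation
# (J1 > 0), the classic cosine ("Mexican-hat"-like) profile.
N = 64
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
J0, J1 = -0.5, 1.0
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

def simulate(stimulus, steps=2000, dt=0.01, tau=0.1):
    """Euler-integrate tau * dr/dt = -r + [W @ r + stimulus]_+ ."""
    r = np.zeros(N)
    for _ in range(steps):
        drive = W @ r + stimulus                      # recurrent + feedforward input
        r += (dt / tau) * (-r + np.maximum(drive, 0.0))  # rectified rate dynamics
    return r

# Two superposed direction inputs, loosely analogous to transparent motion.
stim = np.exp(np.cos(theta - 0.0)) + np.exp(np.cos(theta - np.pi / 2))
rates = simulate(stim)
```

Varying the inhibition strength `J0` in a sketch like this is one way to move the steady-state activity between a single merged bump (integration-like) and separate bumps over the two inputs (segmentation-like), which is the qualitative regime switch the abstract refers to.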
dc.description.sponsorship: GSM and AIM are supported by a grant from the Agence Nationale de la Recherche (SPEED, ANR-13-SHS2-0006) and by the Centre National de la Recherche Scientifique. NVKM and PK were supported by the European Union Seventh Framework Programme (FP7/2007-2013, grant agreement n° 318723, MATHEMACS). JR was partially funded by a Swartz Foundation postdoctoral grant. Some results from this computational study have been presented at international conferences (AREADNE and ICMNS 2016). [en_GB]
dc.identifier.citation: Vol. 7, article 1270 [en_GB]
dc.identifier.doi: 10.1038/s41598-017-11373-z
dc.identifier.uri: http://hdl.handle.net/10871/31881
dc.language.iso: en [en_GB]
dc.publisher: Springer Nature [en_GB]
dc.rights: © The Author(s) 2017. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. [en_GB]
dc.title: Recurrent network dynamics reconciles visual motion segmentation and integration [en_GB]
dc.type: Article [en_GB]
dc.date.available: 2018-03-06T15:07:23Z
dc.identifier.issn: 2045-2322
exeter.article-number: ARTN 11270 [en_GB]
dc.description: This is the final version of the article. Available from Springer Nature via the DOI in this record. [en_GB]
dc.identifier.journal: Scientific Reports [en_GB]

