dc.contributor.author | Medathati, NVK | |
dc.contributor.author | Rankin, J | |
dc.contributor.author | Meso, AI | |
dc.contributor.author | Kornprobst, P | |
dc.contributor.author | Masson, GS | |
dc.date.accessioned | 2018-03-06T15:07:23Z | |
dc.date.issued | 2017-09-12 | |
dc.description.abstract | In sensory systems, a range of computational rules are presumed to be implemented by neuronal subpopulations with different tuning functions. For instance, in primate cortical area MT, different classes of direction-selective cells have been identified and related to motion integration, segmentation, or transparency. Still, how such different tuning properties are constructed is unclear. The dominant theoretical viewpoint, based on a linear-nonlinear feed-forward cascade, does not account for their complex temporal dynamics and their versatility when facing different input statistics. Here, we demonstrate that a recurrent network model of visual motion processing can reconcile these different properties. Using a ring network, we show how excitatory and inhibitory interactions can implement different computational rules such as vector averaging, winner-take-all, or superposition. The model also captures ordered temporal transitions between these behaviors. In particular, depending on the inhibition regime, the network can switch from motion integration to segmentation, thus either computing a single pattern motion or superposing multiple inputs, as in motion transparency. We thus demonstrate that recurrent architectures can adaptively give rise to different cortical computational regimes depending upon the input statistics, from sensory flow integration to segmentation. | en_GB |
dc.description.sponsorship | GSM and AIM are supported by a grant from the Agence Nationale de la Recherche (SPEED, ANR-13-SHS2-0006) and by the Centre National de la Recherche Scientifique. NVKM and PK were supported by the European Union Seventh Framework Programme (FP7/2007-2013, grant agreement n° 318723, MATHEMACS). JR was partially funded by a Swartz Foundation postdoctoral grant. Some results from this computational study have been presented at international conferences (AREADNE and ICMNS 2016). | en_GB |
dc.identifier.citation | Vol. 7, article 11270 | en_GB |
dc.identifier.doi | 10.1038/s41598-017-11373-z | |
dc.identifier.uri | http://hdl.handle.net/10871/31881 | |
dc.language.iso | en | en_GB |
dc.publisher | Springer Nature | en_GB |
dc.rights | © The Author(s) 2017. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. | en_GB |
dc.title | Recurrent network dynamics reconciles visual motion segmentation and integration | en_GB |
dc.type | Article | en_GB |
dc.date.available | 2018-03-06T15:07:23Z | |
dc.identifier.issn | 2045-2322 | |
exeter.article-number | ARTN 11270 | en_GB |
dc.description | This is the final version of the article. Available from Springer Nature via the DOI in this record | en_GB |
dc.identifier.journal | Scientific Reports | en_GB |