Attentional modulation of non-tuned tuning curves

Tuning curves are the functions that relate the responses of sensory neurons to values along one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a bell-shaped curve to the measured responses. In experimental data, however, single-cell tuning curves are extremely noisy, and it is often difficult even to distinguish clear peaks. In our PLoS ONE paper we introduce a model-free approach to parameterize even these "weird" tuning curves and to track the effects that attentional modulation has on them.
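As a toy sketch of the conventional, model-based workflow just described, one could fit a circular bell-shaped (von Mises) curve to noisy direction responses. All data and parameter values below are invented for illustration, and a real analysis would use a proper nonlinear least-squares routine rather than this crude grid search; the sketch only shows the idea of committing to one parametric shape:

```python
import numpy as np

def von_mises(theta, amp, pref, kappa, base):
    """A bell-shaped tuning curve on a circular stimulus dimension."""
    return base + amp * np.exp(kappa * (np.cos(theta - pref) - 1.0))

rng = np.random.default_rng(0)
dirs = np.deg2rad(np.arange(0, 360, 30))               # 12 tested directions
true = von_mises(dirs, amp=20.0, pref=np.deg2rad(90), kappa=2.0, base=5.0)
responses = true + rng.normal(0.0, 3.0, size=dirs.size)  # noisy trial averages

# Crude grid search over preferred direction and width (amplitude and
# baseline are held at their true values just to keep the sketch short)
prefs = np.deg2rad(np.arange(0, 360, 5))
kappas = np.linspace(0.5, 4.0, 15)
best = min(
    ((np.sum((responses - von_mises(dirs, 20.0, p, k, 5.0)) ** 2), p, k)
     for p in prefs for k in kappas),
    key=lambda t: t[0],
)
print("fitted preferred direction (deg):", round(np.rad2deg(best[1])))
```

With clean, strongly tuned data this works fine; the problem the paper raises is that for "weird" curves the choice of parametric shape itself becomes arbitrary.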

In our study, we illustrate the general problem of tuning "weirdness" by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli (in collaboration with Stefan Treue, Göttingen). We find that all models can be fitted well, that the best model generally varies between neurons, and that statistical comparisons between neuronal responses across different experimental conditions are affected both quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach in which features of interest are extracted directly from the measured response data, without the need to fit any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features, such as preferred stimulus direction or attentional gain modulation, that agree with fit-based approaches when a good fit exists.
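To give a flavor of what "extracting a feature directly from the data" can mean, here is a minimal sketch in which the preferred direction is taken as the response-weighted circular mean of the tested directions, a population-vector style estimate that assumes no parametric curve shape. The response values are made up, and the paper's actual estimators may differ:

```python
import numpy as np

# Hypothetical trial-averaged responses (spikes/s) to 12 motion directions;
# note the irregular, non-bell-shaped profile
dirs = np.deg2rad(np.arange(0, 360, 30))
responses = np.array([6., 8., 14., 22., 19., 12., 7., 5., 4., 5., 5., 6.])

# Response-weighted circular mean: sum unit vectors for each direction,
# weighted by the response, and read off the angle of the resultant
z = np.sum(responses * np.exp(1j * dirs))
pref_deg = np.rad2deg(np.angle(z)) % 360
print("model-free preferred direction (deg):", round(pref_deg, 1))
```

One practical caveat (also a reason such estimators need care): a large untuned baseline pulls the resultant vector toward zero, so baseline subtraction is often applied before the weighting.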

Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus-specific.
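A cross-condition comparison of this kind can likewise be sketched without fitting anything: if attention acted as a single multiplicative gain, the pointwise ratio of attended to unattended responses would be flat, whereas stimulus-specific effects show up as structure in that ratio. The numbers below are invented purely for illustration:

```python
import numpy as np

# Hypothetical trial-averaged responses of one cell to the same 12 directions
# under two attention conditions (made-up values)
attended   = np.array([7., 10., 18., 30., 25., 15., 8., 6., 5., 6., 6., 7.])
unattended = np.array([6.,  8., 14., 22., 19., 12., 7., 5., 4., 5., 5., 6.])

# Pointwise gain: a flat profile means pure multiplicative scaling;
# variation across stimuli indicates stimulus-specific modulation
gain = attended / unattended
print("mean gain:", round(gain.mean(), 2), "spread:", round(gain.std(), 3))
```

In real data one would of course add error bars (e.g. via bootstrapping over trials) before calling any such structure an effect.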

Based on these proofs-of-concept, we conclude that our data-driven methods can reliably extract relevant tuning information from neuronal recordings, including cells whose seemingly haphazard response curves defy conventional fitting approaches.

To learn more:

  • M. Helmer, V. Kozyrev, V. Stephan, S. Treue, T. Geisel & D. Battaglia. Model-Free Estimation of Tuning Curves and Their Attentional Modulation, Based on Sparse and Noisy Data. PLoS ONE 11, e0146500 (2016).