Adaptive Brain Lab
Perceiving the three-dimensional (3D) structure of the world poses a fundamental challenge: depth must be inferred from ambiguous sensory inputs. The inverse problem of estimating 3D structure from 2D images is ill-posed: sensations are typically compatible with a vast range of plausible physical causes, so how does the brain select the appropriate interpretation? The computational difficulty of the problem is illustrated by the limited success of artificial systems, yet the brain achieves rapid and robust perception.

A principal strategy the brain uses to compute depth is to combine cues within a modality (e.g., the 3D cues of binocular disparity, texture, perspective, and blur) and between sensory modalities (e.g., vision with touch or audition). Individual cues are ambiguous and subject to random variability (i.e., noise), but by integrating them the brain gains two benefits: (i) ambiguity is reduced, and (ii) the precision of depth estimates improves because noise is averaged out. These processes are fundamental to successful everyday behaviour, yet we have a poor understanding of the neural mechanisms that support integration. I plan to use convergent approaches from cognitive neuroscience, machine learning and state-of-the-art brain imaging to tackle the problem of depth perception.
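The noise-reduction benefit of integration is often formalized as reliability-weighted (inverse-variance) averaging, a standard model of cue combination. A minimal sketch, with illustrative numbers rather than real data, shows that the combined estimate is always at least as precise as the best single cue:

```python
import numpy as np

def combine_cues(estimates, sigmas):
    """Combine independent cue estimates by inverse-variance weighting.

    estimates: per-cue depth estimates
    sigmas:    per-cue noise standard deviations
    Returns the combined estimate and its (reduced) standard deviation.
    """
    estimates = np.asarray(estimates, dtype=float)
    precisions = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = precisions / precisions.sum()       # more reliable cues weigh more
    combined = float(weights @ estimates)
    combined_sigma = float(np.sqrt(1.0 / precisions.sum()))
    return combined, combined_sigma

# Hypothetical example: disparity reads 2.0 (sigma 0.5), texture reads 2.4 (sigma 1.0).
depth, sigma = combine_cues([2.0, 2.4], [0.5, 1.0])
# The combined sigma is smaller than either single-cue sigma.
```

The combined standard deviation here is sqrt(1/5) ≈ 0.447, below the best single cue's 0.5, illustrating benefit (ii) above.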

Recent work has identified a key locus of depth cue integration within the human brain (cortical area V3B). This provides an important step forward by identifying where information is combined. However, the principal challenge is to describe how information is processed within this region. Specifically, what are the computational processes that support integration, and how are these instantiated by neural populations? I aim to test the hypothesis that cue integration is achieved by modulating the balance between excitation and inhibition within local neural circuits. This idea has its roots in abstract theoretical models. Here, I aim to develop a biologically plausible model using cutting-edge neural network algorithms and then test this model against the human brain using state-of-the-art cognitive neuroscience techniques.
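One standard way to formalize an excitation/inhibition balance is divisive normalization, in which excitatory drive is divided by a pooled inhibitory signal. The sketch below is purely illustrative (the gain parameter and numbers are assumptions, not the model described here); it shows how raising inhibitory gain shifts the balance toward suppression:

```python
import numpy as np

def normalized_response(drive, pool, inhibitory_gain=1.0, sigma=0.1):
    """Divisive normalization: excitatory drive divided by pooled inhibition.

    drive:           excitatory input to the unit
    pool:            activity of the surrounding inhibitory pool
    inhibitory_gain: scales the pooled suppression (the E/I balance knob)
    sigma:           semi-saturation constant preventing division by zero
    """
    return drive / (sigma + inhibitory_gain * np.sum(pool))

drive = 1.0
pool = np.array([0.5, 0.3])
balanced = normalized_response(drive, pool, inhibitory_gain=1.0)
suppressed = normalized_response(drive, pool, inhibitory_gain=2.0)
# Stronger inhibitory gain yields a smaller response.
```

Modulating `inhibitory_gain` is one simple stand-in for the kind of E/I perturbation that MRS measurements and HD-tDCS could probe experimentally.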

I will use a combination of computational modelling (convolutional neural networks [CNNs]), behavioural testing, brain imaging (ultra high-field fMRI and MR spectroscopy [MRS]), and brain stimulation (high-definition transcranial direct current stimulation [HD-tDCS]). Increasingly, neuroscientists are collaborating with engineers and computer scientists to develop CNNs that simulate complex cognitive processes. These models have provided valuable insight into neural mechanisms: we can link them to known physiological properties and make predictions for new experiments, for instance by perturbing isolated parameters or ‘lesioning’ the model in instructive ways. We can also abstract the computational principles exhibited by a complex, multi-parameter neural network model to derive a low-parameter, analytically derived implementation. The host laboratory has done this for an individual depth cue (binocular disparity); here, I aim to extend it to the case of combined cues.

I plan to test predictions about the role of excitation and suppression during cue integration by (i) using MRS to measure neural concentrations of excitatory/inhibitory neurotransmitters (glutamate/GABA) and correlating these with behavioural performance, and (ii) comparing the effects of HD-tDCS on behavioural performance with model simulations of perturbed excitability. Finally, to guide and motivate further development of the model to incorporate additional depth cues, I aim to map the functional organization of depth cue integration using ultra high-field imaging techniques. My overall approach is to marry synthetic neural network model development with cognitive neuroscience testing of healthy human participants, so that we gain convergent insight into the mechanisms of perceiving depth.
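The idea of ‘lesioning’ a model can be illustrated with a minimal NumPy sketch. The toy two-layer network below uses random weights and is not the host laboratory's model; it simply shows the technique of silencing selected hidden units and comparing the perturbed output with the intact one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with random weights (purely illustrative).
W1 = rng.standard_normal((8, 4))  # input (8) -> hidden (4)
W2 = rng.standard_normal((4, 1))  # hidden (4) -> output (1)

def forward(x, lesion_mask=None):
    """Forward pass; lesion_mask (0/1 per hidden unit) silences units."""
    h = np.maximum(x @ W1, 0.0)   # ReLU hidden layer
    if lesion_mask is not None:
        h = h * lesion_mask       # 'lesion': zero out the masked units
    return h @ W2

x = rng.standard_normal(8)
intact = forward(x)                                          # unperturbed output
lesioned = forward(x, lesion_mask=np.array([1.0, 0.0, 1.0, 1.0]))  # silence unit 1
```

Comparing `intact` and `lesioned` responses across many stimuli indicates what the silenced unit contributes, which is the logic behind lesioning studies of trained networks.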

Leverhulme Trust Fellowship to Reuben Rideaux: ECF-2017-573

Match-funded by Isaac Newton Trust Fellowship to Reuben Rideaux: 17.08(o)