
Adaptive Brain Lab


Our ability to extract abstract information from our experiences and group it into meaningful units (categories) is a fundamental cognitive skill for interpreting the complex environments we inhabit. How does the human brain learn the regularities and context of novel perceptual experiences that have not been honed by evolution and development, and decide on their interpretation and classification?

We propose a novel interdisciplinary approach that integrates advanced multimodal imaging methods (fMRI, MEG, EEG) with state-of-the-art machine learning algorithms to examine the neural architecture that underlies classification learning and decisions in the human brain. We aim to a) create an electrical-haemodynamic signal space in which neuronal assemblies and their interactions can be characterised, and b) develop a unified algorithmic method for efficiently analysing neuroimaging and behavioural data. In particular, we will use machine pattern classifiers to define perceptual decision images, which reveal the critical stimulus features on which observers base their perceptual classifications, and neural decision images, which reveal the neural selectivity, plasticity and dynamics with which these features are encoded and learnt by the human brain.

Our methodological and theoretical developments will provide a) novel, sensitive tools for assessing the behavioural and neural signatures of perceptual decisions in neuroscience, and b) new challenges and insights for machine learning in the optimisation of biologically constrained algorithms, with direct applications to expert recognition systems. Further, our findings will advance our understanding of the link between sensory input, neural code and human behaviour, with potential applications for studying the development of perceptual decision processes across the life span, and their impairment and potential for recovery of function in ageing and in disorders of visual and social cognition.
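To make the decision-image idea concrete, the sketch below simulates a classification-image style analysis: a hypothetical observer classifies noisy stimuli using a hidden linear template, and the template is then recovered by contrasting the average noise fields associated with each response. The grid size, template shape and noise levels are illustrative assumptions for this toy example, not the project's actual stimuli or analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden "template" the simulated observer uses to classify stimuli
# (assumption: a simple left/right luminance-edge feature on an 8x8 grid).
template = np.zeros((8, 8))
template[:, :4] = 1.0
template[:, 4:] = -1.0
template = template.ravel()

# Generate noise-only stimuli; the observer responds "yes" when the
# stimulus-template match (plus internal noise) exceeds zero.
n_trials = 20000
noise = rng.normal(size=(n_trials, template.size))
internal_noise = rng.normal(scale=2.0, size=n_trials)
responses = (noise @ template + internal_noise) > 0

# Decision (classification) image: mean noise field on "yes" trials
# minus mean noise field on "no" trials. Pixels that drive the decision
# stand out; uninformative pixels average toward zero.
decision_image = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)

# The recovered image should correlate strongly with the hidden template.
r = np.corrcoef(decision_image, template)[0, 1]
print(f"correlation with hidden template: {r:.2f}")
```

The same logic extends from behavioural responses to neural data: replacing the binary response with a trial-by-trial neural measure (or the output of a pattern classifier trained on imaging data) yields a neural decision image over the same stimulus space.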