PRISM: Perceptual Representation of Illumination, Shape & Material.

Whenever we open our eyes, we immediately gain access to a rich world of meaningful visual sensations. For a long time, research on visual perception has focussed primarily on our ability to recognize objects. However, our visual experience also provides us with a richly detailed representation of the physical properties of the scene. For example, imagine looking at some fabric draped across a chair, illuminated by the light from a window. Based on the way light falls across the surface of the drapery, patterns of chiaroscuro (or ‘shading’) occur that allow us to make detailed judgments about the shape of its folds and undulations. The distribution of light throughout the scene—the shadows it casts; the way it gathers in concavities—tells us about the illumination. We can judge, just by looking at the drapery, on which side of the room the window is located and whether it is a sunny or overcast day. Furthermore, based on the shape of the material, and the way the light plays across its surface, we can discern its material properties in great detail: what it would feel like if we were to touch it, whether it is heavy or light, rough or smooth, warm or cold. We can predict how it would move and change shape in response to external forces: would it slip through our fingers like satin, or spring back into shape like coarse-knitted wool?

Somehow the human brain is able to infer all these properties of the scene from the pattern of light that lands on the retinae. In general, this is a very challenging problem for the brain to solve, because the amount of light at each point in the retinal image is a complex combination of multiple physical causes. The intensity projected by a given surface patch onto the retina depends on three factors: the light arriving at the surface (illumination), the local geometrical properties of the surface (shape), and the microscopic properties of the surface (material), which determine how it interacts with light. To estimate the shape, illumination, or material properties in the scene, the brain must somehow separate the light intensities in the image into these distinct physical causes. Understanding how the brain achieves this is one of the most significant challenges in neuroscience, and is the major goal of the proposed research project. Unlike previous research, which has focussed on one or two of the three causal factors independently, in this project we will try to build a complete understanding of how the visual system separates the image into its distinct physical causes by examining how all three factors interact simultaneously.
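The ambiguity described above can be made concrete with a minimal sketch. Under the simple Lambertian reflectance assumption (not the project's actual model; the function name and parameters here are illustrative), the intensity of a surface patch is a product of the three causal factors, so very different combinations of material, shape, and illumination can project exactly the same intensity to the eye:

```python
import numpy as np

def patch_intensity(albedo, surface_normal, light_direction, light_intensity):
    """Lambertian sketch: intensity = material (albedo) x shape (normal)
    x illumination (light direction and strength)."""
    n = surface_normal / np.linalg.norm(surface_normal)
    l = light_direction / np.linalg.norm(light_direction)
    shading = max(np.dot(n, l), 0.0)  # light from behind contributes nothing
    return albedo * light_intensity * shading

up = np.array([0.0, 0.0, 1.0])  # patch facing the light head-on

# A bright material under dim light...
bright_dim = patch_intensity(albedo=0.9, surface_normal=up,
                             light_direction=up, light_intensity=0.5)

# ...and a darker material under bright light yield the same image intensity.
dark_bright = patch_intensity(albedo=0.45, surface_normal=up,
                              light_direction=up, light_intensity=1.0)

# bright_dim == dark_bright == 0.45: a single intensity measurement cannot
# disambiguate the three causes without additional constraints.
```

This is the inverse problem in miniature: the forward mapping from scene to intensity is easy to compute, but inverting a single number back into three factors is underdetermined, which is why the visual system must exploit structure across the whole image.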
