Brain models embedded within virtual reality
OFTNAI currently supports a computer modelling centre within the Oxford University Department of Experimental Psychology. Here, 3D virtual reality software is used to embed brain models within simulated virtual environments. The use of 3D virtual reality allows careful control of the positions and velocities of visual stimuli, as well as of the brain model's point of view within the environment. We have found that this kind of realistic sensory input is critical to how, for example, models of the visual system develop their synaptic connections. We are modelling various areas of brain function, including vision, spatial representation, motor behaviour and navigation. However, these brain systems interact with each other, and so should the models of them. The use of 3D virtual reality will allow us to explore how models of different brain areas can work together when given realistic sensory input. This approach will permit the integration of diverse sources of experimental data into a unified theoretical framework, in which complete brain models are embedded in simulated 3D environments. We believe that this approach will eventually become invaluable for guiding further empirical research in brain science.
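To illustrate the kind of controlled visual input this makes possible, the following minimal Python sketch (a toy stand-in for the centre's actual software; the function names and parameters are our own illustrative choices) rotates a cube about the vertical axis by an exactly specified angle and projects it onto a 2D image plane, the "retinal" input that would be presented to a brain model:

import numpy as np

def rotate_y(vertices, angle_deg):
    # Rotate 3D vertices about the vertical (y) axis by a chosen angle,
    # giving exact control over the object's pose in the virtual scene.
    a = np.radians(angle_deg)
    R = np.array([[ np.cos(a), 0.0, np.sin(a)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
    return vertices @ R.T

def project(vertices, focal=2.0, depth=5.0):
    # Simple perspective projection of the rotated vertices onto a 2D
    # image plane, standing in for the model's retinal input.
    z = vertices[:, 2] + depth
    return focal * vertices[:, :2] / z[:, None]

# Unit cube vertices -- one of the two stimuli used in the case study below.
cube = np.array([[x, y, z] for x in (-1, 1)
                           for y in (-1, 1)
                           for z in (-1, 1)], dtype=float)

# Sample a view every 18 degrees of rotation about the vertical axis,
# matching the step size illustrated in the case study figures below.
views = [project(rotate_y(cube, a)) for a in range(0, 180, 18)]

Sampling the rotation at regular small steps in this way yields the discrete, precisely controlled sequence of training views presented to the model, something that is very difficult to achieve with real-world camera input.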
Case study: Learning invariant visual object recognition with Continuous Transformation Learning
Simulations using 3D virtual reality software have allowed us to investigate how the continuous transformation of visual stimuli in the world may help the visual areas of the brain to learn to recognise objects and faces from different viewpoints. In these studies, we exposed a hierarchical neural network model of the visual system to visual input from rotating objects generated using 3D virtual reality software. Continuous Transformation Learning exploits the spatial continuity inherent in how objects transform in the real world, combined with associative learning of the feedforward connection weights, to enable the network to develop view-invariant representations of visual objects (Stringer, S.M., Perry, G., Rolls, E.T. and Proske, J.H. (2006). Biological Cybernetics 94: 128-142).
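The core learning mechanism can be sketched in a few lines. The following toy Python example is an illustrative simplification under our own assumptions, not the published four-layer model: it trains a single competitive layer with a Hebbian update while overlapping "views" are presented in their natural, continuous order. Because successive views overlap, the output neuron that wins for one view tends to keep winning for the next, so a single neuron comes to respond across the whole transform:

import numpy as np

rng = np.random.default_rng(0)

N_INPUT = 100   # input units (a toy one-dimensional 'retina')
N_OUTPUT = 20   # competitive output neurons
RATE = 0.1      # learning rate

def make_views(n_views=20, active=10, shift=1):
    # Hypothetical stimulus: a block of active inputs that shifts by a
    # small step between successive views, so neighbouring views overlap;
    # a 1D stand-in for the small rotations of the rendered 3D object.
    views = np.zeros((n_views, N_INPUT))
    for v in range(n_views):
        views[v, v * shift : v * shift + active] = 1.0
    return views

# Random feedforward weights, normalised per output neuron.
W = rng.random((N_OUTPUT, N_INPUT))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def train_ct(views, epochs=10):
    # Present the views in their natural (continuous) order and apply an
    # associative (Hebbian) update to the winning neuron. The overlap
    # between successive views keeps the same neuron winning, binding the
    # whole transform onto one view-invariant representation.
    for _ in range(epochs):
        for x in views:
            winner = np.argmax(W @ x)               # competition (hard WTA)
            W[winner] += RATE * x                   # Hebbian weight increase
            W[winner] /= np.linalg.norm(W[winner])  # keep weights bounded

views = make_views()
train_ct(views)
print([int(np.argmax(W @ x)) for x in views])  # typically one index repeated

Note that this sketch uses hard winner-take-all competition for brevity, whereas the published model uses softer lateral competition within each layer; the key point it demonstrates is that purely associative learning, with no temporal trace, suffices when the stimulus transforms continuously.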
Views of the two objects used in the simulations: (left) a cube and (right) a tetrahedron. During training, the network was exposed to views of each object rotated around the vertical axis, with variable step sizes between the views (18 degrees in this figure).
Results from a computer simulation after training. (a) The firing rate response profile of a fourth-layer output neuron to the cube and tetrahedron stimuli as they are rotated through 180 degrees. This cell has learned to respond to the cube at all viewing angles, but does not respond to the tetrahedron from any view. (b) The firing rate response profiles of two fourth-layer cells that have learned to respond to the tetrahedron. These cells respond to different, complementary parts of the view-space of the tetrahedron. (c) The view of the tetrahedron at 65 degrees, at which a new face comes into view, triggering a switch in the firing of the two tetrahedron cells.