our computer modelling centre
OFTNAI currently supports a computer modelling centre that investigates how space is represented in the brain. The centre is part of the Oxford University Department of Experimental Psychology.
The ability to navigate successfully through an environment is fundamental to many organisms, making spatial cognition a critical area of research. To give two examples: in medicine, spatial cognitive deficits present in a variety of neurodegenerative conditions, including Alzheimer's disease, Parkinson's disease, and other dementias, so understanding the underlying spatial systems can help to guide differential early diagnosis and to target specific pharmacological interventions. In robotics, an understanding of biological spatial processing can guide the development of autonomous, situated, and embodied artificial intelligences.
the impact of our research on spatial cognition
A variety of cell types are known to underpin spatial processing in the brain. For example, place cells encode position in an environment, grid cells encode distance travelled, whilst head direction (HD) cells encode directional heading.
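To make the idea of such a firing-rate code concrete, the sketch below evaluates an idealised HD cell tuning curve: a Gaussian bump of firing rate centred on the cell's preferred direction. The maximum rate, tuning width, and Gaussian shape are illustrative assumptions for this sketch, not parameters taken from our models.

```python
import numpy as np

def hd_tuning(heading_deg, preferred_deg, max_rate=40.0, sigma_deg=30.0):
    """Firing rate (Hz) of an idealised HD cell at a given heading."""
    # Wrap the angular difference into [-180, 180) degrees.
    delta = (heading_deg - preferred_deg + 180.0) % 360.0 - 180.0
    return max_rate * np.exp(-0.5 * (delta / sigma_deg) ** 2)

# A cell preferring 90 degrees responds strongly near 90 degrees,
# weakly near 0, and is nearly silent at the opposite heading.
for heading in (0, 45, 90, 180):
    print(f"heading {heading:3d} deg -> {hd_tuning(heading, 90.0):5.1f} Hz")
```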
One aspect of our work focuses on generating novel hypotheses about the architecture and firing properties of HD cells and their principal inputs, which develop through Hebbian learning during interaction with the world. We have demonstrated how theorised continuous attractor neural network (CANN) architectures may self-organise, and how this organisation can allow the HD system to accurately track head direction using vestibular signals (path integration).
We relate path integration accuracy directly to known architectural constraints of the HD system, and propose an update to traditional CANN models.
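As a rough illustration of these ideas, the following toy simulation implements a ring-attractor CANN in which a single packet of activity is sustained by symmetric recurrent weights and shifted by velocity-gated asymmetric weights. The weight profiles, the gain, and the squaring-plus-renormalisation dynamics are hand-chosen simplifications for this sketch; in the models described above, the corresponding structure self-organises through Hebbian learning rather than being fixed analytically.

```python
import numpy as np

N, sigma, gain = 120, 0.3, 0.02
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

def ang_diff(a, b):
    """Wrapped angular difference in [-pi, pi)."""
    return (a - b + np.pi) % (2.0 * np.pi) - np.pi

D = ang_diff(theta[:, None], theta[None, :])
W_sym = np.exp(-0.5 * (D / sigma) ** 2)   # local excitation sustains a packet
W_asym = (D / sigma ** 2) * W_sym         # 'rotation' weights shift the packet

# Start with a single packet of activity at heading 0.
r = np.exp(-0.5 * (ang_diff(theta, 0.0) / sigma) ** 2)
r /= r.sum()

def decode(rates):
    """Population-vector estimate of the represented heading."""
    return np.angle(np.sum(rates * np.exp(1j * theta))) % (2.0 * np.pi)

omega = 1.0  # 'vestibular' angular-velocity signal (arbitrary units)
for step in range(1, 601):
    h = W_sym @ r + gain * omega * (W_asym @ r)  # velocity-gated drive
    r = np.maximum(h, 0.0) ** 2                  # squaring plus renormalisation
    r /= r.sum()                                 # keeps one stable packet
    if step % 200 == 0:
        print(f"step {step}: decoded heading = {decode(r):.2f} rad")
```

The decoded heading advances at a rate that scales with the velocity signal, which is the essence of path integration; miscalibration of that gain is one way the architectural constraints mentioned above show up as tracking error.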
In collaboration with UCL’s Jeffery Lab, we have provided a theoretical explanation for experimental results showing how HD cell firing is generated by an integration of internally and externally derived information sources. We have also provided an explanation for how the HD system might learn to be differentially influenced by distal visual cues.
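The sketch below caricatures this integration as a weighted correction: a noisy, internally generated estimate of heading drifts, and an occasional glimpse of a stable distal landmark pulls the estimate part of the way back toward the truth. The correction gain k, the noise levels, and the update schedule are invented for illustration; how such a weighting could be learned, and why distal cues should come to dominate it, are exactly the questions the work above addresses.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, k = 0.1, 0.5          # time step (s); visual correction gain (invented)
true_hd = est_hd = 0.0    # true heading and internal estimate (rad)
omega = 0.5               # true angular velocity (rad/s)

for step in range(1, 201):
    true_hd = (true_hd + omega * dt) % (2.0 * np.pi)
    # Path integration: a noisy vestibular signal accumulates drift...
    est_hd = (est_hd + (omega + rng.normal(0.0, 0.3)) * dt) % (2.0 * np.pi)
    # ...which an occasional glimpse of a stable distal landmark corrects,
    # pulling the estimate a fraction k of the way back toward the truth.
    if step % 20 == 0:
        error = (true_hd - est_hd + np.pi) % (2.0 * np.pi) - np.pi
        est_hd = (est_hd + k * error) % (2.0 * np.pi)
        residual = (true_hd - est_hd + np.pi) % (2.0 * np.pi) - np.pi
        print(f"t={step * dt:4.1f}s  |error| after correction: {abs(residual):.3f} rad")
```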
We have also developed detailed computer models of spatial processing and memory storage in the hippocampus; these models explain how hippocampal damage may lead to amnesia for episodic memories.
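A generic illustration of the underlying principle is an autoassociative (Hopfield-style) memory: episodes stored by Hebbian learning can be recalled from partial cues, and recall degrades as recurrent connections are lesioned. This toy is not our detailed hippocampal model, but it shows the pattern-completion property that such models attribute to recurrent circuitry (often identified with area CA3).

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 5
patterns = rng.choice([-1, 1], size=(P, N))   # P 'episodes' over N neurons

# Hebbian storage: each episode becomes an attractor of the recurrent net.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, weights, steps=20):
    """Iterate the network to complete a partial cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(weights @ s)
        s[s == 0] = 1
    return s

def overlap(a, b):
    return float(a @ b) / N   # 1.0 means perfect recall

cue = patterns[0].copy()
cue[: N // 2] = rng.choice([-1, 1], size=N // 2)   # degrade half the cue

print(f"intact recall overlap: {overlap(recall(cue, W), patterns[0]):.2f}")

# 'Damage': silence a growing fraction of the recurrent synapses.
for frac in (0.5, 0.8, 0.95):
    lesioned = W * (rng.random(W.shape) > frac)
    print(f"{int(frac * 100)}% of synapses lesioned: "
          f"overlap = {overlap(recall(cue, lesioned), patterns[0]):.2f}")
```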
We have recently developed computer models based on Multi-Packet CANNs, which may explain how the brain is able to represent the full 3D spatial structure of an animal's environment. Control systems based on these models may help robots to move more easily within cluttered real-world environments.
Such models may permit more flexible movement of manufacturing manipulators, and provide more robust navigation for autonomous vehicles and mobile robots.
Our research works towards a complete self-organising model of the rodent HD cell system. Current research questions include:
- What is the role of vestibular input in proximal-distal visual landmark distinctions?
- How might CANN-like architectures be self-organised in the absence of visual information?
- Why are HD cells spread across multiple brain areas?
Case study: the representation of 3-dimensional space
Multi-Packet Continuous Attractor Neural Networks can represent the locations of multiple spatial features in an environment simultaneously, and can thus represent the full 3D spatial structure of the environment. In addition, the spatial representations may be updated using idiothetic (velocity) signals as the agent moves through its environment. This important capability in animals, which has been simulated in our models, is known as path integration.
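The following sketch shows the idea schematically, reusing the same toy attractor dynamics as the HD example above: the network's feature cells are partitioned into two feature spaces, each subset sustains its own activity packet, and a shared idiothetic velocity signal moves both packets. Implementing the subsets as separate blocks is equivalent to a block-structured recurrent weight matrix over the combined cell population; all parameters here are illustrative, and the published architecture is described in Stringer, Rolls and Trappenberg (2004).

```python
import numpy as np

N, sigma, gain = 100, 0.3, 0.02
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

def ang_diff(a, b):
    return (a - b + np.pi) % (2.0 * np.pi) - np.pi

D = ang_diff(x[:, None], x[None, :])
W_sym = np.exp(-0.5 * (D / sigma) ** 2)   # within-subset excitation
W_asym = (D / sigma ** 2) * W_sym         # idiothetic shift weights

def init_packet(centre):
    r = np.exp(-0.5 * (ang_diff(x, centre) / sigma) ** 2)
    return r / r.sum()

def decode(rates):
    return np.angle(np.sum(rates * np.exp(1j * x))) % (2.0 * np.pi)

# One activity packet per feature space (two subsets of feature cells);
# the shared idiothetic (velocity) signal updates both packets.
packets = {"feature space 1": init_packet(1.0),
           "feature space 2": init_packet(4.0)}
velocity = 1.0

for step in range(300):
    for name, r in packets.items():
        h = W_sym @ r + gain * velocity * (W_asym @ r)
        r = np.maximum(h, 0.0) ** 2
        packets[name] = r / r.sum()

for name, r in packets.items():
    print(f"{name}: packet moved to {decode(r):.2f} rad")
```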
Figure: Architecture of the multi-packet continuous attractor network model, including idiothetic inputs. The network is composed of a set of feature (F) cells, which encode the position and orientation of the features in the environment with respect to the agent, and a set of idiothetic (ID) cells, which fire when the agent moves. For details, see Stringer, S.M., Rolls, E.T. and Trappenberg, T.P. (2004).
Figure: Results from a computer simulation with two activity packets active in two different feature spaces in the same continuous attractor network. The left plot shows the firing rates of the subset of feature cells belonging to the first feature space, and the right plot shows the firing rates of the subset belonging to the second feature space. Thus, the left and right plots show two activity packets moving within their respective feature spaces.