Learning, neuroplasticity, and sleep: research projects

Ongoing studies

PDP-squared: Meaningful PDP language models using parallel distributed processors

Abstract

Parallel Distributed Processing (PDP) is a form of computation in which a large number of processing units, each performing simple calculations, work together to solve much more complex problems. Perhaps the best example of this is the human brain, which contains approximately one hundred billion neurones. Individually, these neurones simply have to decide whether or not to fire, and they do this based upon how many of the other neurones connected to them have fired recently. When this simple local computation is distributed over billions of neurones, it is capable of supporting all the extremely complex behaviours that humans exhibit (talking, reading, walking, running, etc.), behaviours that are well beyond the abilities of more traditional computers. For this and other reasons, many psychologists believe that PDP models are the best way of describing human cognition. Unfortunately, at the moment these models are invariably simulated on standard PCs, which means that each unit in the model has to be dealt with one after another in a serial process. This serial processing imposes severe limitations on the complexity of the problems that can be tackled.
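The local rule described above can be sketched in a few lines of Python. This is a toy illustration only, not the project's actual simulator: the network size, weights, and threshold are invented for the example, and the update is run synchronously for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units = 8  # toy network; real PDP models use thousands of units
weights = rng.normal(0.0, 0.5, size=(n_units, n_units))
np.fill_diagonal(weights, 0.0)  # no self-connections

# Initial firing pattern: each unit is either firing (1.0) or silent (0.0)
state = (rng.random(n_units) < 0.5).astype(float)

def step(state, weights, threshold=0.0):
    """One synchronous update: each unit fires if and only if the
    weighted input from its recently active neighbours exceeds the
    threshold -- the simple local decision each unit has to make."""
    net_input = weights @ state
    return (net_input > threshold).astype(float)

for _ in range(5):
    state = step(state, weights)
```

On a serial PC, the matrix-vector product inside `step` is computed one unit at a time; a truly parallel machine could update every unit simultaneously, which is the bottleneck this project aims to remove.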

Our goal is to understand how the brain supports language function, how this breaks down after brain damage, and the mechanisms that support recovery and rehabilitation. This will require a model of language that is capable of simulating speech, repetition, comprehension, naming and reading. Training such a model using existing PC-based simulators would take far too long (possibly more than a lifetime). So the first objective of this project is to produce a parallel distributed processing machine that is truly parallel (PDP-squared). We intend to use an array of 10,000 ARM processors, incorporated into a machine that will be able to run our simulations of human behaviour 500-1,000 times faster than is currently possible on a single PC. Once we have successfully produced this machine (Phase 1 of the project), we will use it to build a model of normal human language function that can support reading (both aloud and for meaning), comprehension, speech, naming and repetition for all of the monosyllabic words in English. We will validate this model by showing that damaging it can produce the same patterns of behaviour as are found in brain-damaged individuals (Phase 2). Finally, we will use the model to predict the results of different speech therapy strategies and will test these predictions in a population of stroke patients with linguistic problems.

Duration of the project

September 2008 - August 2013

Funding body

Joint funding from EPSRC, BBSRC and MRC under the Cognitive Systems Foresight Programme

Members of the project

Dr Stephen Welbourne, Principal Investigator
Professor Matthew Lambon Ralph, Co-investigator
Professor Steve Furber, Co-investigator
Professor James L (Jay) McClelland, Collaborator