The Mathematics of the Brain
Ambleside, Lake District, UK
September 1 – 2, 2009
Stefan Rotter, Bernstein Center for Computational Neuroscience, Freiburg, Germany
Roman Borisyuk, Computational Neuroscience, University of Plymouth, UK
Viktor Kazantsev, University of Nizhny Novgorod, Russia
Marcus Kaiser, Newcastle University, UK
Yulia Timofeeva, University of Warwick, UK
Henk Nijmeijer, Eindhoven University of Technology, The Netherlands
Jonathan Dawes, University of Bath, UK
Grigory Litvinov, Independent University of Moscow, Russia
Modern computers are extremely fast devices compared with the biological brain; yet in many tasks, such as recognition, decision making and locomotion control, the brain far outperforms the most advanced computational devices. The difference stems from the different algorithmic approaches used by computers and by the biological brain. Although experimental and theoretical neuroscience have made great progress in understanding how the brain functions, the algorithms of information processing in the brain are still unknown and wait to be discovered.
There are several obvious difficulties on the way to understanding the computational brain. For example, the brain is a parallel information-processing system with a huge number of dimensions. Several approaches have been developed in mathematics, theoretical physics and data mining to tackle this difficulty. These approaches are based mostly on two complementary ideas:
(i) model (dimensionality) reduction and
(ii) averaging (measure concentration).
Model reduction aims to construct a small number of variables that govern the system's dynamics. The measure concentration idea follows the central limit theorem and states that, for a high-dimensional system, the behaviour of statistical ensembles may be surprisingly simple. Indeed, this theorem establishes the equivalence of ensembles in statistical mechanics and provides some important examples. It seems natural to apply these ideas to computational brain models; however, success in this direction has been rather modest. The reason is simple: the most important and interesting features of the computational brain cannot be simplified enough, and finding a proper mathematical description of the brain is a very difficult and challenging problem.
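The concentration effect mentioned above can be illustrated numerically. The following is a minimal sketch using synthetic Gaussian "units"; the population sizes and trial counts are arbitrary choices for illustration, not taken from any brain model:

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 1000

# Measure concentration in action: as the number of units n grows, the
# trial-to-trial fluctuation of the population average shrinks like
# 1/sqrt(n), even though each individual unit stays just as noisy.
spread = {}
for n in (10, 1000, 10000):
    samples = rng.standard_normal((trials, n))  # trials x n noisy units
    means = samples.mean(axis=1)                # one population average per trial
    spread[n] = means.std()
    print(f"n = {n:5d}: std of population average = {spread[n]:.4f}")
```

Each unit has unit variance, yet the standard deviation of the population average drops from roughly 0.3 at n = 10 to roughly 0.01 at n = 10000, in line with the 1/sqrt(n) scaling predicted by the central limit theorem.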
Another difficulty comes from the observation that the system's behaviour is rather irregular and includes stochastic elements, yet at the same time is very reliable and precise. One useful analogy is to approximate the system's behaviour by trajectories wandering around multiple attractors. These attractors are either unstable (but can be stabilised by relatively small perturbations), or even appear in the system only after a small perturbation (these "ghost attractors" are absent from the system under consideration, but can attract trajectories for some finite time). The wandering between attractors is controlled by some specific "algorithm" corresponding to a particular brain function. Although this "algorithm" does not solve the exact problem but instead finds an approximation of the solution, neurobiology provides evidence that these "algorithms" are very effective from a computational point of view and are in fact optimal.
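A "ghost attractor" of this kind can be sketched with the standard one-dimensional saddle-node normal form (a hypothetical toy example for illustration, not a model proposed at the workshop): for small mu > 0 the stable fixed point has disappeared, yet trajectories linger near its former location for a long but finite time before escaping.

```python
def time_near_ghost(mu, x0=-1.0, dt=1e-3, box=0.5, t_max=200.0):
    """Euler-integrate dx/dt = mu + x**2; return time spent within |x| < box."""
    x, t, t_inside = x0, 0.0, 0.0
    while t < t_max and x < 10.0:       # stop once the trajectory has escaped
        if abs(x) < box:
            t_inside += dt              # time spent near the vanished fixed point
        x += dt * (mu + x * x)          # forward Euler step
        t += dt
    return t_inside

# The closer mu is to the bifurcation at mu = 0, the longer the trajectory
# is held by the "ghost" (the trapping time scales like 1/sqrt(mu)).
for mu in (0.1, 0.01, 0.001):
    print(f"mu = {mu}: time near ghost = {time_near_ghost(mu):.1f}")
```

For mu = 0 the system dx/dt = x**2 has a fixed point at x = 0; for mu > 0 no fixed point exists, but the trajectory still slows dramatically near x = 0, exactly the finite-time attraction described above.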
During this two-day workshop we would like to consider different approaches to understanding the computational brain and to discuss appropriate methods for this study from the arsenal of modern mathematics. The topics include, but are not limited to:
1. Model reduction and measure concentration techniques for the analysis and modelling of brain function
2. Mathematical principles of plausible brain computation. We shall review the current challenges and opportunities that high-dimensional dynamical systems present as a universal medium for carrying out computations.
3. Model inference. We will discuss issues of qualitative and quantitative mathematical modelling of the brain: what features a typical model should reproduce, and how to fit the parameters of a typical model to data in a computationally efficient, reliable and meaningful way.