Tuesday, 30 April 2019
My 4-year-old son is currently playing with Plus-plus, a construction toy not unlike Lego. The main difference, however, is that Plus-plus has only one type of brick (in many different colours), shaped as two joined plus signs. The shape of the bricks has implications for how they can be joined up, and consequently for what kinds of patterns one can construct. For example, it is impossible to construct a square, since it will be jagged along the edges. Any shape you can imagine can thus only be approximately constructed with Plus-plus, and this is where my frustration sets in. I have a clear idea of what I want to build, say a dog, but the substrate won't let me. I'm trying to be creative, but Plus-plus puts constraints on what I can express. Of course the same happens when I build with Lego, but in that case I have already internalised the constraints, and they therefore don't bother me to the same extent. I think the same holds true for any other creative process, such as doing research. The methods we use for joining known facts into as-yet-unknown facts constrain our creative process. But most of the time we are unaware of our own limitations and happily build our jagged truths, which make up only a tiny fraction of what could possibly be expressed.
Friday, 24 March 2017
"Bad luck" and cancer
Yesterday I was contacted by a journalist from SVT (Swedish public service television) who asked for comments on a new paper by Tomasetti, Li & Vogelstein. Time was short, so I didn't manage to digest the paper in full detail, but the article contains some brief comments from yours truly.
http://www.svt.se/nyheter/vetenskap/ny-forskning-slumpen-vanligaste-orsaken-till-cancer
Tuesday, 21 March 2017
The slope of solutions to the Fisher equation
This blog post describes what I believe to be a common misconception about solutions to the Fisher equation, namely that the slope of the front is inversely proportional to the wave speed. I realised this while supervising a Bachelor's thesis, when the students couldn't reconcile the analytical result with their numerical solutions.
The Fisher equation is a partial differential equation of the form:
$$\frac{\partial u(x,t)}{\partial t}=D \nabla^2 u + r u(1-u)$$
where u(x,t) represents the density of cancer cells at time t and location x. The parameters of the model are D, the diffusivity of the cancer cells (i.e. how fast they migrate), and r, the rate of cell division.
The Fisher equation exhibits travelling wave solutions, i.e. a fixed front profile that is translated in space as time progresses. These solutions are typically characterised by their speed c and slope s. It has been shown that the wave speed is given by
$$c=2\sqrt{Dr}.$$
A partial proof of this can be found in Mathematical Biology by James Murray. In the same book it is claimed that the slope is
$$s=\frac{1}{4c},$$
which implies that faster waves are less steep. In the book, this statement is accompanied by a figure.
Substituting the expression for c, one is led to believe that
$$s=\frac{1}{8\sqrt{D r}}.$$
This expression is never mentioned in Murray's book, but I would claim that the presentation is quite misleading: if one looks closer at the analysis leading up to the statement s = 1/(4c), one finds that it is carried out on a non-dimensional version of the Fisher equation, for which the wave speed is c = 2. The statement s = 1/(4c) therefore simply means s = 1/8.
If one instead carries out the same analysis on the dimensional Fisher equation, one finds that
$$s=\frac{1}{8}\sqrt{\frac{r}{D}}.$$
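The same result can be obtained directly by undoing the non-dimensionalisation (a short reconstruction, using the standard rescaling). With $t^* = rt$ and $x^* = x\sqrt{r/D}$ the Fisher equation reduces to $u_{t^*} = u_{x^*x^*} + u(1-u)$, which has wave speed $c^* = 2$ and hence slope $s^* = 1/8$ in the rescaled variable. Transforming the slope back to the original spatial variable gives
$$s=\left|\frac{\partial u}{\partial x}\right|=\left|\frac{\partial u}{\partial x^*}\right|\frac{\partial x^*}{\partial x}=\frac{1}{8}\sqrt{\frac{r}{D}}.$$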
I think this fact has been missed by many mathematical biologists, and I hope this blog post can shed some light on the misunderstanding.
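For readers who want to check this for themselves, here is a minimal Python sketch (my own, not taken from Murray or the thesis) that integrates the dimensional Fisher equation with finite differences and compares the measured maximal front slope with the two candidate expressions. Note that one must choose r ≠ 1, since the two formulas happen to coincide when r = 1. All parameter values are illustrative.

```python
import numpy as np

# Solve u_t = D u_xx + r u (1 - u) with an explicit finite-difference
# scheme, starting from a step, and measure the slope of the front.
D, r = 1.0, 4.0
L, N = 200.0, 2000
dx = L / N
dt = 0.2 * dx**2 / D              # satisfies the explicit stability condition

x = np.linspace(0.0, L, N)
u = np.where(x < 10.0, 1.0, 0.0)  # step-like initial condition

for _ in range(int(30.0 / dt)):   # integrate up to t = 30
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u += dt * (D * lap + r * u * (1.0 - u))

s_measured = np.max(np.abs(np.gradient(u, dx)))
# The measured value should be of the same order as (1/8)*sqrt(r/D)
# (the 1/(4c) slope is itself an approximation), and clearly different
# from the naive 1/(8*sqrt(D*r)).
print("measured slope :", s_measured)
print("(1/8)*sqrt(r/D):", np.sqrt(r / D) / 8.0)
print("1/(8*sqrt(D*r)):", 1.0 / (8.0 * np.sqrt(D * r)))
```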
Tuesday, 6 December 2016
8th Swedish Meeting for Mathematical Biology
Next week, on 15-16 December, the Department of Mathematical Sciences at Chalmers/GU is hosting the 8th Swedish Meeting for Mathematical Biology. The first meeting was held in 2009, organised by David Sumpter at Uppsala University, and this is the second time the meeting is held in Gothenburg (the last time was in 2010).
The purpose of the conference is to gather Swedish researchers who use mathematics to understand biological systems, e.g. in evolutionary biology, epidemiology, ecology and cancer research. The meeting spans two days and features two invited talks, by Ivana Gudelj and Luigi Preziosi. The remaining time is allocated to contributed talks with a typical duration of 20 minutes; among these we prioritise PhD students and young researchers. In addition to the talks there is also a poster session.
For more information please have a look at our webpage. It is still possible to register for the meeting; you can do so by sending me an email.
Monday, 14 November 2016
The impact of anticipation in dynamical systems
We have just submitted a manuscript that investigates the role of prediction in models of collective behaviour. The idea is quite simple: take a model where animals attract/repel each other based on a pairwise potential, and adjust it so that the animals act not on current but on future positions (including their own). These anticipated or predicted positions are assumed to be simple linear extrapolations some time T into the future. In other words, instead of using the current positions x to calculate the forces, we use x + T*v, where v is the velocity.
This seemingly simple modification changes the dynamics dramatically. For a typical interaction potential (e.g. the Morse potential), the case of no prediction yields no pattern formation, just particles attracting and colliding, but for an intermediate range of T we observe the rapid formation of a milling structure. In other words, prediction induces pattern formation and stabilises the dynamics.
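To make the modification concrete, here is a minimal Python sketch (my own illustration, not the code behind the manuscript) of particles interacting through a Morse potential, with the forces evaluated at the linearly extrapolated positions x + T*v. All parameter values are placeholders.

```python
import numpy as np

N = 50                     # number of particles
Ca, la = 0.5, 2.0          # Morse attraction: strength and range
Cr, lr = 1.0, 0.5          # Morse repulsion: strength and range
dt, T = 0.01, 0.5          # time step and anticipation horizon

rng = np.random.default_rng(1)
x = rng.uniform(-2.0, 2.0, (N, 2))   # positions in 2D
v = rng.uniform(-0.5, 0.5, (N, 2))   # velocities

def forces(pos):
    """Pairwise forces from U(d) = Cr*exp(-d/lr) - Ca*exp(-d/la)."""
    diff = pos[:, None, :] - pos[None, :, :]   # x_i - x_j
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)             # exclude self-interaction
    dU = -Cr / lr * np.exp(-dist / lr) + Ca / la * np.exp(-dist / la)
    return -np.sum((dU / dist)[:, :, None] * diff, axis=1)

for _ in range(5000):
    F = forces(x + T * v)  # the key change: act on anticipated positions
    v += dt * F
    x += dt * v
```

Setting T = 0 recovers the standard dynamics, which makes it easy to compare the two cases side by side.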
Abstract:
The flocking of animals is often modelled as a dynamical system, in which individuals are represented as particles whose interactions are determined by the current state of the system. Many animals, however, including humans, have predictive capabilities, and presumably base their behavioural decisions - at least partially - upon an anticipated state of their environment. We explore a minimal version of this idea in the context of particles that interact according to a pairwise potential. Anticipation enters the picture by calculating the interparticle forces from linear extrapolation of the positions some time $\tau$ into the future. Our analysis shows that for intermediate values of $\tau$ the particles rapidly form milling structures, induced by velocity alignment that emerges from the prediction. We also show that for $\tau > 0$, any dynamical system governed by an even potential becomes dissipative. These results suggest that anticipation could play an important role in collective behaviour, since it induces pattern formation and stabilises the dynamics of the system.
arXiv: http://arxiv.org/abs/1611.03637
Thursday, 13 October 2016
Copernicus was not right
During my parental leave I took the opportunity to learn more about areas that I normally don't have time to explore. One of these topics was the history of science, and in particular the changing world views that man has held throughout history.
Perhaps the largest shift in our view of the world happened when the geocentric world view was replaced by the heliocentric one. Although some ancient philosophers argued for a heliocentric world view (most notably Aristarchos of Samos), the general belief was that the earth was located at the centre of the universe and that the planets were carried around the earth on spheres, the outermost one holding the fixed stars. This framework was described in mathematical terms by Claudius Ptolemaeus in the 2nd century AD, in his astronomical work the Almagest. Ptolemaeus constructed a mathematical model in which all planets orbited the earth on circles, and in addition each planet travelled on a smaller circle, an epicycle, along its trajectory around the earth. The Ptolemaic system could predict the future positions of the planets with good accuracy, and in addition it harmonised well with the world view of Christianity. These two factors contributed to the Ptolemaic system remaining dominant for over a thousand years.
The first serious attack on it was delivered by Nicolaus Copernicus, who in De revolutionibus orbium coelestium (1543) suggested a heliocentric system. His motivation was two-fold. Firstly, Copernicus did not like the fact that the ordering of the planets in the Ptolemaic system was arbitrary and simply a convention (since both the distance to earth and the speed of a planet could be adjusted to fit the data, there was, in modern terms, one free parameter in the solution). Secondly, he disapproved of Ptolemaeus' use of an equant point: the point from which the centre of each planet's epicycle is perceived to move with a uniform angular speed. In order to account for the retrograde motion of the planets, Ptolemaeus had to place the equant point next to the earth (not at the centre of the universe). This meant that although the Ptolemaic system was constructed from circular motion, there was something asymmetric about it. In conclusion, Copernicus' critique was aesthetic in nature: it was not about having a good fit to the data, but about having an elegant model.
The point I want to make is that Copernicus was not driven by an urge to create a system that was more accurate at predicting planetary motion. In fact, the initial heliocentric model made predictions that were on par with the Ptolemaic system. Moreover, Copernicus insisted that the planetary orbits were circular (and he avoided the equant), and therefore he needed even more epicycles than the Ptolemaic system. Since the system was modified several times an exact number is difficult to come up with, but it is estimated that Copernicus initially used 48 epicycles.
This is in complete contrast with the folk-science story, which claims that the Ptolemaic system had to be amended with more and more epicycles in order to explain data on planetary motion, until along came Copernicus, who fixed the problem and got rid of all the epicycles by proposing a heliocentric model.
No, Copernicus took a step in the right direction, but it was not until Johannes Kepler discovered in 1609 that planetary orbits are elliptical that epicycles could be discarded from the heliocentric model.
I'm not quite sure about the take-home message of this post. But one thing I've learnt is that the scientists we most often associate with the scientific revolution (which by extension reduced the powers of the Church) were deeply devout and held metaphysical beliefs similar to those of Aristotle and Plato. For example, Kepler was convinced that the radii of the planetary orbits could be explained by circumscribing Platonic solids within one another. And, as we have seen above, Copernicus thought that the equant point disturbed the circular symmetry and therefore suggested a model containing only circles.
So I guess my conclusion is this: Copernicus was not right, he was just less wrong. And I guess this applies to all scientists. We can never expect to be right, just less wrong than our predecessors.
Thursday, 1 September 2016
Parameter variation or a take on interdisciplinary science
This text is written from a personal perspective and I'm not sure how well it applies to other scientists. If you agree or disagree please let me know.
A standard tenet of experimental science is that the number of parameters one varies in an experimental set-up should be kept to a minimum. This makes it possible to disentangle the effects of different variables on the outcome of the experiment. It has been claimed that the pace at which physics has moved forward in the last century (and molecular biology in the last half-century) is due to the ability of physicists to isolate phenomena in strict experimental set-ups. In such a setting each variable can be varied individually, while all the others are kept constant. This is in stark contrast to e.g. sociology, where controlled experiments are much harder to perform.
In a sense, the process of doing science is similar to an experiment with a number of parameters. The 'experiment' corresponds to a specific scientific question and the 'parameters' correspond to different approaches to solving the problem. However, we do not know in advance which approach will be successful; if we did, it would not qualify as research.
Most approaches or methods are in fact aggregates of many sub-methods. To give an example, say that I would like to describe some biological system using ordinary differential equations. The equations I write down might be novel, but I rely on established methods for solving them. I describe the system by varying the equations that govern the dynamics until I find ones I'm happy with. In this sense we use both existing and new methods when trying to answer a scientific question. However, in order to actually make progress we often minimise the number of novel methods in our approach: if possible, we vary only one method and keep all the others fixed.
The problem with interdisciplinary research is that it often calls for novelty on the part of all the disciplines involved. In the case of mathematical biology, for example, we are asked to invent new mathematics at the same time as we discover new biology. Maybe this is not always the case, but to a certain extent these expectations are always present: a mathematician is expected to develop new mathematical tools, while a biologist is expected to discover new things about biology.
If both parties enter a project with the ambition of advancing their own discipline, this might introduce too much uncertainty into the scientific work (we are now varying two 'parameters' in the experiment), which could lead to little or no progress. If the mathematician instead stands back, new biology can be discovered using existing mathematical tools; conversely, if the biologist stands back, existing biological knowledge and data can serve as a testing ground for novel mathematics.
So what is the solution to this problem? I'm not sure. But being clear about your intentions in an interdisciplinary project is a good starting point. And maybe, with an established collaborator, taking turns when it comes to novelty.