Monday, 1 December 2014

Is cancer really a game?

In response to a recent debate on Twitter, I have, together with Philipp Altrock, written a longer (i.e. more than 140 characters) piece on the limitations of evolutionary game theory as a tool to model cancer. It appears in the form of a guest post on TheEGG, a blog run by Artem Kaznatcheev.

Here's a link: Is cancer really a game?

Wednesday, 22 October 2014

4th IMO workshop on viruses in cancer

The 4th workshop on Integrative Mathematical Oncology is being held at Moffitt Cancer Center in November. I attended the event last year and must say it was a great experience (in part, perhaps, because our team won). A lot of hard work (forcing mathematicians, experimentalists and clinicians to find a common language) and late nights, but in the end it was definitely worth it. The project that secured our victory last year is still very much active, and a publication is planned for early next year.

Friday, 17 October 2014

Chalmers magasin

The latest issue of Chalmers magasin (aimed mainly at alumni of the university) features an interview with me. The main focus is the use of mathematics to aid cancer research.

Here's a link to the online version:

Wednesday, 24 September 2014

Anything goes

I have never received any formal training in the scientific method. This might sound surprising given that I make a living as a scientist. Instead I have picked up bits and pieces, starting with my undergraduate training in physics and continuing throughout my scientific career. This intensified during the research I did for a textbook on scientific modelling (currently only in Swedish, but an English version is under way), when I read extensively about the scientific revolution, empiricism, logical positivism and Thomas Kuhn's ideas on paradigm shifts.

My latest foray into the philosophy of science is the book/extended essay Against Method by Paul Feyerabend. It was first published in 1975 and has since been both celebrated and strongly disliked by philosophers and scientists alike (though for different reasons, which I will get back to). The main thesis is that there is no coherent scientific method, and never has been. Instead Feyerabend proposes a theoretical anarchism in which anything goes. For example, he shows that introducing hypotheses that contradict established theories is sometimes a sensible way forward, a move that would be strongly disliked by any philosopher of rationalist creed. He delves specifically into the Copernican revolution and suggests that Galileo used a great deal of propaganda and smoke and mirrors to expound the heliocentric world-view. Telescopic observations, for example, which are often cited as supporting evidence, were at the time highly speculative and required the development of auxiliary sciences (e.g. optics and meteorology) before they could be considered solid evidence. Moreover, the Copernican model of the solar system was no better at explaining empirical observations than the prevailing Ptolemaic model. Despite this (and hence in contradiction to reason) the Copernican world-view gained followers and was later supported by numerous independent lines of evidence.

I find Feyerabend's account quite convincing, and also supported by my own experience of 'doing science'. Most of the models, theories and hypotheses investigated in the field of mathematical biology wouldn't stand much of a chance if they were exposed to the rigour of proper science (i.e. as defined by philosophers of science). In many cases the models aren't even falsifiable, since their connection to actual phenomena is at best vague. Not to mention the field of Artificial Life, where models aren't even aimed at resembling any real phenomena; rather, they serve as a means to aid and guide our feeble thinking. Much of my own work (devising and analysing mathematical models) has the goal of connecting with biology, not right now, but at some point in the future. But this doesn't render the research useless. It is speculative (in relation to existing knowledge; the models themselves are logically consistent), but it is heading somewhere.

This resonates with another conclusion drawn by Feyerabend: that the transition from one theory to the next entails a decrease in empirical content. We take a step back, speculate, and slowly approach the empirical facts and observations. My feeling is that mathematical (cancer) biology is in precisely this situation: we are exploring new concepts and ideas (i.e. developing or extending our ontology). To me this makes perfect sense, and is something that should be encouraged.

My feeling is that philosophers of science dislike this book because it essentially makes them unemployed. If anything goes, then there's no point in devising or even describing a scientific method. Let the scientists have a go at it. Let anarchy rule. The cool reception from scientists, I think, has less to do with methodological anarchy (which most of us are quite familiar with) than with the perceived anti-scientism that Feyerabend has been accused of. If science does not rely on a specific method (not even reason or adherence to empirical fact), then it should be viewed like any other human activity, such as religion or the arts. This is where many scientists (including me) start getting a bit uncomfortable. What I think he is trying to say is that there are many facets to human life, and that science cannot and never will provide the full picture. There are always other views/stories/perspectives that inform us about the human condition.

EDIT: I think this quote by physicist Max Born sums up Feyerabend's thesis nicely:

"I believe there is no philosophical high-road in science, with epistemological signposts. No, we are in a jungle and find our way by trial and error, building our road behind us as we proceed."

Tuesday, 16 September 2014

Back home

After almost a year in Tampa at the Integrated Mathematical Oncology department, I've now returned to Sweden and a position as assistant professor at the Department of Mathematical Sciences at Chalmers University of Technology. Besides teaching, I will continue my work on cancer modelling, focusing in particular on brain tumour growth (which was the focus of my post-doc at the Sahlgrenska Cancer Center). This time around my work will be more theoretical, with the aim of connecting microscopic cell-level dynamics with macroscopic outcomes.

Tuesday, 13 May 2014

Pint of Science

Next week, on Wednesday the 21st of May, I'm giving a popular science talk at the Pint of Science festival in Tampa. The title of my talk is "When small things are a big deal", and it will be about the emergence of complex phenomena out of simple microscopic rules. If you're in Tampa, be sure to join me at the Dough for my presentation, and also for the one given by Julio Powsang.

Monday, 24 February 2014

Forecasting tumour growth

The concept of personalised oncology is often compared to weather prediction. The idea is that with increased amounts of data from patients (genetic sequencing, phenotypic characterisation of cancer cells, imaging etc.) and more advanced mathematical models, we will be able to predict tumour progression and responses to therapy in individual patients with increased accuracy.

When making this comparison, the patient data is analogous to the current state of the atmosphere (temperature, air pressure, wind speed and direction etc.) used as input to fluid dynamics models, which can, for example, be used to predict the future course of a hurricane (or, in terms of cancer, the rate of tumour growth under different therapies).

The current state of personalised medicine, however, falls short on both counts: the data acquired from patients are meagre (although microarray data is 'big', it's pretty useless as input to computational models), and our present-day theoretical understanding of tumour growth is limited. Even if we had all the data we could dream of, it would most likely be useless because of our ignorance.

Although the analogy seems to fall short, it actually sheds some light on our inability to predict tumour growth, because meteorology was in a similar situation roughly a century ago. Before the advent of digital computers, weather prediction was a difficult business based on previous experience and certain rules of thumb. Or, in the words of Lewis Fry Richardson in the preface of "Weather Prediction by Numerical Process" (1922):

The process of forecasting, which has been carried on in London for many years, may be typified by one of its latest developments, namely Col. E. Gold's Index of Weather Maps. It would be difficult to imagine anything more immediately practical. The observing stations telegraph the elements of present weather. At the head office these particulars are set in their places upon a large-scale map. The index then enables the forecaster to find a number of previous maps which resemble the present one. The forecast is based on the supposition that what the atmosphere did then, it will do again now. There is no troublesome calculation, with its possibilities of theoretical or arithmetical error. The past history of the atmosphere is used, so to speak, as a full-scale working model of its present self.

This process of manually predicting the weather is similar to the way clinicians in the present day decide upon different cancer therapies. Treatment choices are based on sparse data and previous experience of similar patients.

The seminal work of Vilhelm Bjerknes and the above-quoted book laid the foundation of quantitative weather prediction. What was still lacking, however, was computational power, which held back large-scale weather forecasting for another 30 years. The situation in mathematical oncology today is the reverse: we have all the computational power we need, but still lack the appropriate theoretical understanding. Hopefully one day that will change.

Thursday, 20 February 2014

The role of mathematical oncology

EDIT: I forgot to mention a paper on Chronic Myeloid Leukemia by Nowak et al. which possibly contradicts the point I'm trying to make. Thanks Heiko Enderling for pointing this out.

EDIT 2: Artem Kaznatcheev has written an excellent blog post arguing for the fact that the above mentioned paper does not contradict my argument.

The other day I was discussing the merits of mathematical oncology with some colleagues in the collaboratorium (a shared space here at the IMO where scientific discussions blend with the smell of espresso). We came to the conclusion that our field of research still lacks that defining publication in which the use of mathematical modelling is clearly tied to a clinical change benefiting patients. In other words, an instance where mathematical oncology has been proven to make a difference.

Other fields of mathematical biology have already had this pleasure. For example, Ronald Ross developed a model of malaria dynamics in the 1910s (a forerunner of the SIR model), which allowed for a completely new understanding of malaria and opened up the field of epidemiology. A more recent example is the use of modelling in the discovery of the high turnover rate of HIV particles.

Given that more and more researchers work in mathematical oncology, isn't it just a matter of time before that landmark publication appears? Actually, I think the answer is no.

The reason for my pessimism pertains to the relation between the complexity of the problem and the amount of knowledge that you can fit into a standard publication (journal paper, conference proceeding etc.). Some of you might object that the way we communicate our results ought to be secondary to the subject at hand, but I would argue that the politics and funding structure of science impose a certain mode of communication, which in turn influences how we approach research questions. Which biologist can stick to a specific research agenda, work on it for 15+ years, and then publish a monograph on the topic? (The answer to this question is obviously Darwin, whose meticulous work couldn't have been carried out today.)

This is not to say that mathematical modelling does not contribute to our understanding of cancer, but rather that the insights gained from it arrive in smaller chunks and are absorbed by the experimental and clinical community. These insights and novel concepts then shape their thinking and inspire them to perform new experiments or look at existing data in new ways.

In support of this thesis I would like to cite two ideas that today are ubiquitous in cancer research: networks and intra-tumoral heterogeneity.

The idea that intra-cellular signalling forms a network with feedback loops, crosstalk and robust properties does not emanate from mathematical oncology per se, but rather from complex systems theory and statistical physics. Nevertheless it is now part of the vocabulary of cancer biologists and helps in moving the subject forward.

The concept of tumour evolution was coined by Nowell in 1976, and has been the subject of a large number of mathematical modelling papers. For a long time it was believed that this process was characterised by selective sweeps, whereby a single clone would dominate the tumour cell population. However, a number of theoretical studies suggested that spatial heterogeneity could give rise to a diversity of subclones, and that tumour cells with similar phenotypes could harbour different genotypes, both processes contributing to intra-tumoral heterogeneity. When the technology arrived to measure genetic heterogeneity in cancers, it was indeed shown that the diversity can be large enough to classify different parts of a tumour into different subtypes.

Would these measurements have been made in the absence of theory supporting them? We can never know, but we can say that the theory facilitated the discovery.

In conclusion I think that mathematical oncology has an important role to play in advancing our knowledge of cancer, but I don't think it will happen through landmark discoveries, rather through piecemeal additions.

Wednesday, 5 February 2014

Preprint: Evolutionary dynamics of shared niche construction

I have just uploaded a new preprint to arXiv (and bioRxiv, a new preprint repository for biology) that explores the evolutionary dynamics of shared niche construction.

In the model we assume that the carrying capacity of each species in the population is the sum of two parts: an intrinsic part, and a contribution from all species present in the system. If the constructed niche is highly specific, only the intrinsic part contributes, while for non-specific niche construction the shared contribution dominates.
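In symbols, one way of writing such a model (my notation; the preprint's exact formulation may differ) is as a set of coupled logistic equations:

$$\frac{dn_i}{dt} = r_i\, n_i \left(1 - \frac{N}{K_i^{\mathrm{eff}}}\right), \qquad K_i^{\mathrm{eff}} = \gamma K_i + (1-\gamma) \sum_j \frac{n_j}{N} K_j, \qquad N = \sum_j n_j,$$

where $\gamma = 1$ corresponds to a fully specific (intrinsic) niche and $\gamma = 0$ to a fully shared one.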

Now it turns out that the evolutionary dynamics of the system depend strongly on the specificity: when the carrying capacity is intrinsic, selection is almost exclusively for mutants with higher carrying capacity, while a shared carrying capacity yields selection purely on growth rate.

The figure below illustrates this. In the upper panel, where specificity is low, the invasion of a mutant can lead to a decrease in total population size, while in the lower panel, where the carrying capacity is intrinsic, each successful invasion increases the total population size.

Coming from a background in cancer research, I prefer to interpret this result in the context of tumour growth. If you think of different types (or subclones) of cancer cells as being able to withstand and survive different cell densities (i.e. the niche is specific to each subclone), then the growth rate of a rare mutant is irrelevant for determining whether it spreads in a tumour populated at the maximal cell density of the resident subclone. Only if it can divide and survive at higher densities will it spread and take over the tumour.

The other extreme can be viewed in terms of diffusible factors, such as angiogenic factors that attract new blood vessels to the growing tumour. The release of such a factor benefits all cells (within a reasonable distance) and hence increases the carrying capacity of all subclones. Now a mutant that produces less of the factor than the resident will still receive the benefit, and, if it divides faster, will spread in the population. This situation is analogous to the appearance of cheaters in the classical public goods game.
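To make the two regimes concrete, here is a minimal numerical sketch. The functional form, the constant death rate d (included so that selection remains visible at carrying capacity) and all parameter values are my own assumptions for illustration, not the equations of the preprint:

```python
import numpy as np

def simulate(r, K, gamma, n0, d=0.1, dt=0.01, steps=60_000):
    """Coupled logistic growth with niche specificity gamma.

    Each genotype i grows logistically towards an effective carrying
    capacity mixing its intrinsic K_i (weight gamma) with an
    abundance-weighted shared pool (weight 1 - gamma). A constant
    death rate d keeps selection visible near carrying capacity.
    """
    n = np.array(n0, dtype=float)
    r = np.asarray(r, dtype=float)
    K = np.asarray(K, dtype=float)
    for _ in range(steps):
        N = n.sum()
        pool = (n * K).sum() / N                 # abundance-weighted mean K
        K_eff = gamma * K + (1.0 - gamma) * pool
        n += dt * (r * n * (1.0 - N / K_eff) - d * n)  # forward Euler step
        n = np.clip(n, 0.0, None)
    return n

# Specific niche (gamma = 1): a slow-growing mutant with higher K invades.
specific = simulate(r=[1.0, 0.5], K=[100.0, 150.0], gamma=1.0, n0=[99.0, 1.0])

# Shared niche (gamma = 0): a fast-growing mutant with lower K invades.
shared = simulate(r=[0.5, 1.0], K=[150.0, 100.0], gamma=0.0, n0=[99.0, 1.0])
```

With gamma = 1 the mutant with the higher carrying capacity takes over despite growing more slowly, while with gamma = 0 the faster-growing mutant wins even though its intrinsic carrying capacity is lower, mirroring the two regimes described above.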


The abstract reads: Many species engage in niche construction that ultimately leads to an increase in the carrying capacity of the population. We have investigated how the specificity of this behaviour affects evolutionary dynamics using a set of coupled logistic equations, where the carrying capacity of each genotype consists of two components: an intrinsic part and a contribution from all genotypes present in the population. The relative contribution of the two components is controlled by a specificity parameter $\gamma$, and we show that the ability of a mutant to invade a resident population depends strongly on this parameter. When the carrying capacity is intrinsic, selection is almost exclusively for mutants with higher carrying capacity, while a shared carrying capacity yields selection purely on growth rate. This result has important implications for our understanding of niche construction, in particular the evolutionary dynamics of tumor growth.