Tuesday, 26 November 2013

The 3rd IMO Workshop on Personalized Medicine

Last week I attended an interdisciplinary workshop at Moffitt organised by the Integrated Mathematical Oncology department. It was the third incarnation of the event, and this time the focus was on personalised medicine. The structure of the workshop was as follows: the participants were divided into four teams of roughly 10 people, each containing clinicians, experimentalists and theoreticians. Each team was assigned a specific type of cancer (in line with the expertise of the clinicians and experimentalists), and the aim was to construct, analyse and present a clinically relevant model, all within 4 days.

I ended up in the "lung team" as one of three team leaders (the others being Lori Hazlehurst and Ben Creelan), and we decided to work on the problem of drug resistance in stage IV non-small cell lung cancer. Four days is a very short time to achieve the goals described above, and the workshop was an intense experience: reaching across disciplines, trying to speak the same language, defining a reasonable question, formulating a mathematical model, simulating it, getting nice graphical results and putting together a polished presentation. Our team probably averaged 12 hours of work per day, and on the last evening we didn't get to bed until 3 am. As tough as it might seem it was also rewarding, and I learned a lot.

In contrast to most academic events there was also a competitive element. Three external judges picked a winning team based on a number of criteria (originality, success, data utilisation etc.), and the winning team leaders were awarded a $50K pilot grant, the idea being that the project started at the workshop will develop into a full-scale interdisciplinary research project.

All four teams (blood, lung, urogenital, breast) did a great job, but apparently the judges thought that our team was the most accomplished and awarded us the grant. What up to then had seemed abstract and remote all of a sudden became very real, and I'm now looking forward to spending the grant on refining and validating our model.

Friday, 1 November 2013

Big Data Business

In a talk given at the Royal Society on the origin of life, John Maynard Smith noted that while the 19th century had been the century of energy, in which science and engineering were concerned with transforming energy from one form to another (chemical to mechanical as in a steam engine, or mechanical to electrical as in a dynamo), the 20th century was about information, and in particular the transformation of information. In biology we have learned that our genes are written in a genetic code that is transmitted, transcribed and translated, and high-energy physics is to a large extent focused on interpreting the massive amounts of information produced in particle accelerators. Today information technology is a major industry, and it is possible to make a fortune simply by transforming one form of information into another, more useful shape.

The latter activity is the topic of the book Big Data: a revolution that will transform how we live, work and think by Viktor Mayer-Schönberger and Kenneth Cukier. In it they explore the consequences of our ever increasing ability to gather, store and process data, and focus in particular on the implications it has for business. They show how IT giants such as Google and Amazon have gotten ahead in the game by relying on the power of data, and have developed clever ways of acquiring and utilising it. For example, Google has built a spell-checking algorithm from the billions of misspelled search queries that it has amassed. In a similar way Google has also constructed a translation algorithm based on millions of webpages that happen to exist in multiple languages.

The data is messy, but the sheer volume overcomes the problems. This represents, the authors claim, the opposite of the traditional approach to acquiring knowledge, i.e. careful data acquisition and analysis of only a subset of the totality of data. Big data means taking all possible information into account, or N = all, in the words of the authors.

A more creative and surprising (at least to me) example of big data, which highlights the potential of data transformation, is the ability to predict local economic growth and unemployment figures by analysing geo-location data from GPS devices. The book is full of such examples, which although interesting become slightly tedious after a while. More importantly perhaps, they made me conscious of all the different ways in which we give away our personal information in exchange for "free services" offered by big data companies. The company performing the above-mentioned predictions gathers its data from a "free" GPS app.

For the single user, Twitter represents a way to communicate and connect in a rapid and free manner, but to the company the tweets represent datafied moods and feelings of millions of people, updated every instant. Twitter thus has direct and quantifiable access to millions of people's minds in real time. It is therefore no surprise that data from Twitter can be used to predict everything from box office sales to election results.

The book is mainly aimed at business people, but still touches on the implications for science and society at large. One recurring topic is that in the future we will move away from causation and rely more on correlation in our attempts to understand the world. This might be so, but, as the authors rightly point out, we still need theory to place the data, and the conclusions drawn from it, in a framework of understanding.

Despite its brevity (just under 200 pages excluding references and notes) it is a bit repetitive, and a few factual errors also detract from its appeal (no, Steve Jobs did not survive longer because he had his genome sequenced; experiments are not often complicated and unethical; and yes, it was possible to determine one's position prior to GPS technology, using for example a chronometer and sextant).

In any case I would recommend the book to those who are curious about how information gathering and analysis is changing our society and the way business is done, but not to those hoping for a more scientific or philosophical view of Big Data.

Monday, 21 October 2013

Moffitt Cancer Center

Today is my first official day in my new position as a research scientist in the Integrated Mathematical Oncology group at Moffitt Cancer Center. I'll be working with Alexander 'Sandy' Anderson, my former PhD supervisor, who now heads the IMO.

Apart from new hire orientation and other exciting administrative stuff, I'm currently working on a project related to the evolution of resistance. The plan is to look at how drug specificity and fitness landscape topography influence the evolution of resistance. Hopefully I'll have more to say in the not too distant future.

Wednesday, 16 October 2013

Robust science

As part of my book project on complexity I have been reading William Wimsatt's 'Re-engineering Philosophy for Limited Beings', which in essence is a selection of his papers from the last 30 years merged into a coherent whole.

The main purpose of the book is to introduce a new philosophy of science, which accounts for our limitations as human beings. Traditionally philosophy of science has assumed that scientists are perfect beings, having infinite computational power and never making any mistakes, and Wimsatt's aim is to replace this view with one in which scientists are fallible and error-prone. Now if this is the case, how do we formulate a scientific method that accounts for and embraces these limitations?

The greater part of the book is devoted to answering this and related questions, and I will here only mention one aspect that I found particularly intriguing.

The traditional account of a scientific theory is a set of assumptions or axioms together with some rules of deduction that dictate how novel and true statements can be produced. Assuming that the axioms are true we can generate a possibly infinite set of true statements all connected somehow by the rules of deduction. The picture is that of a network, where true statements are nodes and deductions form the links.

In reality, however, scientific statements are rarely held together by truth-preserving rules of inference, but rather by experimental data, hand-waving analytical arguments and results from models. All of these may contain flaws, which run the risk of undermining the theory. If an experimental result turns out to be wrong for some reason, then the corresponding link in the network breaks, and if the link points to a statement with only one link, then that statement has to go.

How then should we deal with this situation? Wimsatt's answer is that we already are dealing with it, by making robust inferences. In general we don't trust the results of a single model, but instead require some independent verification: if two different models provide the same answer then it's more likely to be accurate, and the more links that point towards a statement, the more likely it is to be true. Even if some of the experimental results or conclusions drawn from models turn out to be flawed, the statement still stands.
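This robustness idea can be rendered as a toy data structure: statements as nodes, independent lines of evidence as links. The statements and evidence names below are invented purely for illustration; this is a sketch of the analogy, not of anything in Wimsatt's book.

```python
# Each statement is supported by a set of independent evidence links.
# A statement survives the failure of some evidence as long as at
# least one supporting link remains intact.

support = {
    "statement A": {"experiment 1", "model X", "model Y"},  # robust
    "statement B": {"experiment 2"},                        # fragile
}

def still_supported(statement: str, failed_evidence: set[str]) -> bool:
    """A statement stands if any of its supporting links survive."""
    return bool(support[statement] - failed_evidence)

failed = {"experiment 1", "experiment 2"}
print(still_supported("statement A", failed))  # the two models still hold
print(still_supported("statement B", failed))  # its only link is gone
```

The fragile statement collapses with its single link, while the robustly supported one survives the same failures.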

I think this network analogy is a useful way of illustrating how scientific knowledge is accumulated, and it has certainly helped me in thinking about my own work.

Thursday, 10 October 2013

Comparative drug pair screening across multiple glioblastoma cell lines reveals novel drug-drug interactions

I'm the co-author of a newly published paper on drug-pair screening on glioblastoma (brain tumour) cell lines. The bulk of the work was carried out by Linnéa Schmidt in the Nelander lab at Gothenburg University. Below is the abstract, and the full paper can be found here.


Background Glioblastoma multiforme (GBM) is the most aggressive brain tumor in adults, and despite state-of-the-art treatment, survival remains poor and novel therapeutics are sorely needed. The aim of the present study was to identify new synergistic drug pairs for GBM. In addition, we aimed to explore differences in drug-drug interactions across multiple GBM-derived cell cultures and predict such differences by use of transcriptional biomarkers.
Methods We performed a screen in which we quantified drug-drug interactions for 465 drug pairs in each of the 5 GBM cell lines U87MG, U343MG, U373MG, A172, and T98G. Selected interactions were further tested using isobole-based analysis and validated in 5 glioma-initiating cell cultures. Furthermore, drug interactions were predicted using microarray-based transcriptional profiling in combination with statistical modeling.
Results Of the 5 × 465 drug pairs, we could define a subset of drug pairs with strong interaction in both standard cell lines and glioma-initiating cell cultures. In particular, a subset of pairs involving the pharmaceutical compounds rimcazole, sertraline, pterostilbene, and gefitinib showed a strong interaction in a majority of the cell cultures tested. Statistical modeling of microarray and interaction data using sparse canonical correlation analysis revealed several predictive biomarkers, which we propose could be of importance in regulating drug pair responses.
Conclusion We identify novel candidate drug pairs for GBM and suggest possibilities to prospectively use transcriptional biomarkers to predict drug interactions in individual cases.

Wednesday, 9 October 2013

New perspective on metastatic spread

Together with collaborators from Moffitt Cancer Center, I have recently uploaded a pre-print to arXiv on the topic of metastatic spread. I've touched on this topic before, and the work is in part inspired by a book by Leonard Weiss that I reviewed here, and also by previous work by the people at Moffitt.

In the last couple of decades the focus of research on metastases has been on genes that, when mutated, provide the cancer cells with the properties necessary to form distant metastases. The answer to the riddle of metastasis is believed to be written in the genome. Indeed this must partly be the case, since we know that cancer cells are full of genetic alterations, but what we currently don't know is how important genetic effects are compared to purely physiological constraints. In order to appreciate this it's worth mentioning that a late-stage solid tumour releases roughly 100 million cancer cells per day(!) into the blood stream, but out of this astronomical number at most a handful of cells form detectable metastases in the lifespan of the patient.

It is well known that primary tumours from different anatomical locations have a propensity to form metastases in certain organs. For example, breast tumours are known to metastasise to the adrenal gland and the bone. This is known as the 'seed-soil hypothesis', and it suggests, in analogy with seeds from plants, that the cancer cells will only flourish if they find the right soil/organ. In opposition to the seed-soil hypothesis stands the 'mechanistic hypothesis', which proposes that metastatic distribution is largely explained by the blood flow to different organs. In our paper we try to reconcile these two views by disentangling the effects of biology and physiology.

In order to do this we have to consider the fate of cancer cells as they reach the blood circulatory system and become circulating tumour cells (CTCs). The blood is not the native environment for these cells and many of them quickly perish, but the main obstacle they face is the capillary beds, where the vessels narrow down to roughly 10 microns, which is the size of the CTCs themselves. Most cells get stuck, are damaged, and die, and we know from animal models that only approximately 1 in 10 000 CTCs pass through a capillary bed unharmed. This means that if 100 million CTCs leave a breast tumour then only 10 000 make it past the capillary bed of the lung, and these remaining CTCs are then distributed to downstream organs in proportion to the blood flow each organ receives. The adrenal gland, for example, receives 0.3% of the total cardiac output, which means that of the 100 million cells leaving the breast on average 30 CTCs reach the adrenal gland.
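The arithmetic above fits in a few lines of Python. The numbers are the rough figures quoted in the text (shedding rate, survival fraction, blood flow share), not precise clinical values:

```python
# Back-of-the-envelope CTC filtration arithmetic, using the figures
# quoted in the text.

CTCS_PER_DAY = 100_000_000        # CTCs shed by a late-stage tumour per day
CAPILLARY_SURVIVAL = 1 / 10_000   # fraction passing a capillary bed unharmed
ADRENAL_FLOW_SHARE = 0.003        # adrenal gland's share of cardiac output

# CTCs from a breast tumour must first pass the lung's capillary bed ...
past_lung = CTCS_PER_DAY * CAPILLARY_SURVIVAL        # about 10 000 cells

# ... and the survivors are then distributed in proportion to blood flow.
reaching_adrenal = past_lung * ADRENAL_FLOW_SHARE    # about 30 cells

print(f"past lung: ~{past_lung:.0f}, reaching adrenal gland: ~{reaching_adrenal:.0f}")
```

Swapping in a different target organ's share of cardiac output gives the corresponding expected number of arriving CTCs.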

This framework can be used in order to disentangle seed-soil effects from physiological constraints by normalising (i.e. dividing) the metastatic involvement of each target organ by the relative blood flow it receives. This is known as the metastatic efficiency index (MEI) and was introduced by Leonard Weiss. A high MEI suggests beneficial seed-soil effects, while a low MEI indicates detrimental effects.
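As a minimal sketch, the basic MEI is just a ratio. The organ labels and numbers below are made-up illustration values, not data from the paper or from Weiss:

```python
# Weiss's metastatic efficiency index (MEI): observed metastatic
# involvement normalised by the organ's share of blood flow.

def mei(metastatic_involvement: float, relative_blood_flow: float) -> float:
    """Metastatic involvement divided by relative blood flow."""
    return metastatic_involvement / relative_blood_flow

# Hypothetical organs: (fraction of metastases observed, blood flow share).
organs = {
    "organ A": (0.30, 0.10),   # more metastases than blood flow predicts
    "organ B": (0.05, 0.25),   # fewer metastases than blood flow predicts
}

for name, (involvement, flow) in organs.items():
    index = mei(involvement, flow)
    verdict = "beneficial" if index > 1 else "detrimental"
    print(f"{name}: MEI = {index:.2f} ({verdict} seed-soil effects)")
```

An MEI above 1 means the organ hosts more metastases than its blood supply alone would explain, pointing to a hospitable 'soil'; below 1 points the other way.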

What we have done is to extend the MEI to take into account the effect of capillary beds. This extension also makes it possible to investigate the effects of micrometastatic deposits, since these in effect increase the number of CTCs downstream, or in other words, reduce the filtration that occurs within that organ. By posing different scenarios of micrometastatic disease one can then show that the MEI is strongly affected by micrometastases, which in turn suggests that knowledge of micromets could have a strong impact on predicting disease progression, and hence that they would be an important biomarker.

Enough said! Please read the paper.

Friday, 20 September 2013

Increasing my mileage

In my younger years I was never particularly keen on running, and during my military service I was lucky enough to develop shin splints, which kept me away from running distances longer than a hundred metres. During my time at university I made a couple of half-hearted attempts at running regularly, but they all failed miserably. But during my PhD something happened. All of a sudden I didn't get fed up with being exhausted and feeling like shit both during and after a run. I kept on going for two weeks, three weeks, a full month!

Without me noticing it, running had become a habit, and, believe it or not, I was starting to like it. After another couple of months running was turning into an addiction and, following advice from a friend who was into running, I forced myself not to increase my weekly mileage excessively. Despite this I picked up my first injury after about 6 months, and for the first time experienced the feeling of desperately wanting to run, but not being able to. Lesson learned. I recovered from the injury and slowly, very slowly, increased my mileage. Of course this didn't stop me from getting injured again, and I count four different injuries that I have struggled with since (stress fracture to the shin, strained calves, non-identifiable pain in the foot, and patellofemoral pain syndrome).

Since 2009 I have been keeping an online diary of my running at a Swedish site called jogg.se, and can therefore view my improvements in retrospect. The figure below shows my yearly mileage (monthly stats are a bit too variable and seasonal), where the blue circle for 2013 is what I've achieved so far this year, and the red circle is my projected mileage. It might seem like a massive increase, but if you instead view 2012 as an anomaly it makes a lot more sense. 3000 km over the course of one year works out to roughly 8 km per day, every day of the year, which I must say I'm pretty pleased with.

Thursday, 12 September 2013


The last couple of months I've been preoccupied with two things: translating and running. This post will cover the first topic, and other posts will follow on the second.

The object of my attention has been the translation of a book on scientific modelling that I wrote in Swedish about two years ago together with my colleague Torbjörn Lundh. From the very start we thought that this book would benefit a community larger than the Swedish-speaking one, so we always had a translation in mind. The last couple of weeks I finally got around to it, and in the process I decided to keep track of my output. I've now finished a first draft and the figure below shows my progress. I was surprised to find how consistent I've been (the bumps in the curve actually correspond to sections that were pre-translated in a previous effort). The average rate turned out to be 790 words/hour, which I'm pretty happy with. As I said, this is only a first draft that will be polished further.

We already have a deal with a publisher, but considering the copy-editing and pre-production work that needs to be done I don't expect the book to hit the shelves any time soon.

Monday, 19 August 2013

Thinking about complexity

I have in previous blog posts alluded to my interest in the philosophy of science. Together with my friend and colleague Henrik Thorén I've previously written a short piece analysing 'weak emergence', the idea that certain systems display properties that cannot be predicted through theory, but only by simulating them. This inevitably led us down the route of complexity and complex systems, and we have for some time struggled to make sense of, and understand, the notions of emergence, complexity and chaos. We decided early on that the best way to achieve this is to get together and write a book about it. A book that tries to disentangle these concepts and also analyses the role of complexity in the sciences today. Or in the words of an abstract for a grant application:
The purpose of this project is to write a book that investigates the concept of complexity from a philosophy of science perspective. This work is motivated by the fact that the concept is used frequently in almost all scientific disciplines, often without a specific meaning or definition. The main question that we hope to answer relates to the basic meaning, or possibly multiple meanings, of the concept: what is complexity, and what consequences does it have? Are there many types, or kinds, of complexity? Is complexity something that can unify disciplines? When is something to be considered complex, and when does the concept aid in understanding a system? These questions will be answered by carrying out a thorough conceptual analysis, in close proximity to the disciplines in which it is used. In particular we will focus our study on physics, biology, economics, sociology and sustainability science. Further, we will explore the consequences of complexity for policy making, which is closely related to our ability to describe, regulate and control complex systems. Answering the above questions is of great importance if we are to discuss and handle changes in complex systems such as the climate and global economy in a factual and objective way.
As a matter of fact our application was successful with the Helge Ax:son Johnson Foundation, and we now have some funds to continue our collaboration and realise our idea of a book about complexity. I'll keep posting as the work progresses.

Wednesday, 3 July 2013

The pre-metastatic niche is only half of the story of metastasis (it's the biological one)

Recently, Cancer Research UK posted an article on their blog in which they explain, in layman's terms, recent trends and ideas in research into metastatic spread. The focus of that article is on the concept of a 'pre-metastatic niche', the idea that the primary tumour emits signalling molecules that prime certain organs for the arrival of metastatic cells. We find this line of thought very interesting, as it could, at least in part, explain patterns of metastatic spread, but we have strong opinions about how the ideas were presented, and about the lack of acknowledgment of the other factors that could be at play.

First, the reader is given a condensed historical background, in which the surgeon Stephen Paget is given credit for having solved the riddle of metastatic patterns 150 years ago. His method of studying metastatic spread in breast cancer is briefly mentioned; however, as is often the case when the seed-soil hypothesis is invoked, these old 'truths' do not seem to be carefully checked. For example, a much more recent study by Dr. J. Pickren (reported in The Principles of Metastasis by L. Weiss, p. 231, recently reviewed here) reports a 4:1 ratio between splenic and hepatic metastases (compared to the 14:1 ratio that Paget observed). Another fact not accounted for in Paget's analysis is that the liver not only receives arterial blood, but also blood from the gut organs via the portal vein, thereby increasing the chance of it receiving circulating tumour cells (CTCs). If micro-metastases are present in the gut, then these secondary CTCs will most likely lodge in the liver, increasing the risk of developing liver metastases. Lastly, Paget only studied a single location of primary tumours, making general conclusions difficult to draw - especially as the connectivity differs greatly between organs. These simple observations should make it clear that Paget's hypothesis is nothing more than an indication of what might be the case in certain circumstances, rather than a settled fact.

From reading the article one also gets the impression that CTCs are drawn to certain organs in the body (e.g. the caption of the 2nd figure reading "Tumour cells are selective about where they end up." or, later in the text, "...which wandering tumour cells find irresistible."). This is not in agreement with what we know today (and have known for the last 30 years) about the dynamics of metastasis formation.

Figure 1: Human vascular system network topology schematic. It is evident by inspection of the network diagram that tumors originating in the gut and lung experience significantly different flow patterns and order in which they experience filtration at capillary beds than tumors originating in other parts of the ‘body’. The alternate pathways (purple) define the fraction of cells which evade arrest (filtration) at a given capillary bed. There are scant measurements of this in the literature, and none for clinical studies.

On the contrary, CTCs have little influence over where they end up. Instead, the correct picture is that of a primary tumour releasing astronomical numbers of CTCs into the blood stream (roughly 100 million cells per day, of which most die in the blood stream), and these cells being distributed according to the physiology of the circulatory system.

This means that each organ (except the lung and liver) receives a fraction of CTCs in direct relation to its relative blood supply, and only at this point, when the cancer cells flow through the capillary bed of the organ, can organ-specific mechanisms influence the fate of the cancer cell. This means that any explanation of why patterns of metastatic spread look the way they do needs to first take into account the characteristics of the circulatory system, and only then the organ-specific mechanisms such as the formation of a pre-metastatic niche.
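The passive picture described above can be sketched in a few lines: CTCs surviving the first capillary bed are allocated to downstream organs purely in proportion to relative blood flow. The organ flow shares below are illustrative round numbers, not measured values:

```python
# Toy allocation of surviving CTCs by relative blood flow.

surviving_ctcs = 10_000  # CTCs past the first capillary bed

# Hypothetical shares of cardiac output for a few downstream organs.
blood_flow_share = {
    "liver": 0.25,
    "kidneys": 0.20,
    "brain": 0.14,
    "adrenal glands": 0.003,
}

# Each organ's expected CTC count is just its share of the survivors.
ctcs_per_organ = {
    organ: surviving_ctcs * share
    for organ, share in blood_flow_share.items()
}

for organ, n in ctcs_per_organ.items():
    print(f"{organ}: ~{n:.0f} CTCs")
```

Only after this flow-driven allocation do organ-specific mechanisms, such as a pre-metastatic niche, get a chance to act on the cells that actually arrive.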

These facts suggest (at least to us) that one should view the formation of the pre-metastatic niche from a more passive point of view. The signals secreted by the primary tumour induce a systemic inflammatory response - which may or may not affect all organs. The evidence suggests that some distant sites respond in a way that makes them more hospitable to the CTCs that happen to pass through them, and hence these cells are more likely to form overt metastases - but to present this as an active process is to stretch the data and to anthropomorphise to a dangerous extent.

When attempting to synthesize and communicate difficult scientific information to the public, it is always tempting to present a small slice of the story - and indeed, this is good practice as only so much can be communicated effectively at one time.  But when doing this, it is essential to point out where the limits of our understanding are, and not oversell current hypotheses as the 'truth'.  Science is, and always has been, a steady progression toward understanding, paved by models that are (we hope) less and less wrong.  The way we think today is not likely to be the same as the way we think in 10 years time.

Monday, 24 June 2013

A dialogue about evolution

I wrote this dialogue a few years ago in the wake of reading Gödel, Escher, Bach by Douglas Hofstadter, but I didn't know quite what to do with it. It was originally written in Swedish, and when I stumbled across it the other day I realised that I could translate it into English and publish it here. So here goes, a dialogue about problem solving, evolution and grass:

In a leafy clearing facing a newly plowed field, the Hare and the Fox meet on a fine spring day. The Hare chews away on the light green grass while the Fox basks in the rays of the newly risen sun. He looks over the field, squints and slowly turns to the Hare, who continues to enjoy his herbaceous snack.

The Fox: Did you know that the grass you're eating might be one of the most intelligent life forms on earth?

The Hare: Well, that's just nonsense. It's only grass, right? Just as simple and stupid as any other plant. Or are you trying to say that all plants are intelligent?

The Fox: I think you are being a bit narrow-minded when it comes to intelligence. Just because the grass is unable to talk, and converse as we do, does not mean that it's unintelligent.

The Hare: Well, then I think you have to explain what you mean by intelligence.

The Fox: Ok. To me it's the ability to solve problems. And by that I mean unexpected and novel situations that present obstacles in one's way to achieving a certain objective.

The Hare: And how on earth do you expect the grass I'm eating to achieve anything close to that? When really all it's capable of is converting carbon dioxide and water into sugar and oxygen.

The Fox: Now you're being too simple-minded again. You need to widen your perspective slightly, let some more light in. Plants have simply chosen another strategy compared to us animals when it comes to staying alive and procreating. Instead of running about looking for food, they simply stay put and let it fall down on them. And in the end it might be a clever move, considering how much energy we spend on moving, you looking for fresh grass and me chasing after you.

The Hare: You might have a point.

The Fox: Of course I have, not to mention all the energy spent on finding the right mate. The plants instead enter the great lottery and find their mate purely through chance. Actually quite convenient when you think about it.

The Hare: This is starting to make sense. So you're saying that plants have chosen a completely different strategy when it comes to surviving?

The Fox: Exactly. And therefore I find it a bit unjust to judge them by our standards.

The Hare: I agree that it's a different strategy, but still a pretty stupid one. It's not like they're fighting back, like I do when you're trying to catch me. But then again, maybe it's you who's stupid for not eating the defenceless plants.

The Fox: I think you're being a bit presumptuous, my dear friend. You forget to take into account the fact that I only need to spend roughly one quarter of my time looking for food and eating compared to herbivores like yourself. But we're losing our focus now. My point was that we need to wait longer for plants to solve their problems compared to animals. The problem solving instead occurs on evolutionary time scales.

The Hare: Now you've lost me again. Or rather, I've lost myself in your complicated reasoning.

The Fox: Ok. Let me explain. What I'm saying is that if a species of grass is faced with a change in its natural environment, be it a higher temperature or an invading species that competes for light and nutrients, then those plants that can stand the heat or somehow outcompete the invader will succeed while those that don't will perish. If these plants produce offspring that resemble their parents then the beneficial trait will become more common and spread in the plant population. Without too much exaggeration one could claim that the species has adapted and learnt how to handle the change, and therefore solved the problem it was faced with. Right?

The Hare: Wow, that's a mouthful, or maybe I should say a mindful. But are you saying that your hypothetical species of grass is making a conscious change in response to the external environment?

The Fox: No no, not at all. The only things required for this to happen are that there is random variation within the grass population, that plants with certain properties are better at producing offspring, and lastly that the property in question is heritable.

The Hare: Now I see.

The Fox: All the life forms on planet earth have solved a whole bunch of problems throughout their history, otherwise they would never be what they are today. Giraffes have long necks because the best food on the savannah was (and still is) high up in the trees, and sharks are streamlined because it makes swimming in water a lot more efficient.

The Hare: And I have such strong legs because of dodgy characters like yourself.

The Fox: Precisely. One could say that each species on earth constitutes the solution to one and the same problem. The problem of staying alive, and to that problem it seems like there are quite a few distinct solutions. Algae and deer are very different life forms, but they both manage to stay alive.

The Hare: But my long and strong legs have hardly solved the problem. Not that I have had any difficulties escaping myself, but I know, or rather used to know, quite a few hares who ended up in a fox's belly. 

The Fox: You're absolutely right, but that's because we foxes are trying to solve the opposite problem. That of trying to catch up with you. So it all turns into an arms race where you have to keep moving just to stay in the same place.

The Hare: That sounds an awful lot like the Red Queen in Through the Looking-Glass. She says that it takes all the running you can do to keep in the same place, and if you want to get somewhere else, you must run at least twice as fast as that!

The Fox: That might be so. I never really cared for that book.

The Hare: Why? I find them really clever and hilarious.

The Fox: Well, you actually get to play a part in the story, but nowhere to be found is a character in the shape of a fox.

The Hare: Now then, I think you are being a bit egotistic. In any case, let's return to the matter of problems and solutions. Maybe it's just me, but you're making it sound like every property of every animal or plant has a purpose. The neck of the giraffe, the shape of the shark. How can you be so sure about the problems they correspond to? Is there a problem for each solution? Take for example the fact that your nose is black: what problem does that solve?

The Fox: An acute observation, my dear friend. Of course you're right, some things just appear by pure chance, or as a by-product of some other solution. Like the lungs that both you and I carry: they developed a long time ago from the esophagus of a poor choking fish that tried to make a living on land. If the lungs had developed from another structure they might have looked completely different today. What I mean is that the solution depends both on the problem and on the solutions that are already in place. Evolution is a tinkerer who works with whatever material is at hand.

The Hare: Finally I think I understand what you're getting at. But we started talking about the grass, let's return to that. What's so special about it?

The Fox: Well, there are several things, but for one it has managed to turn a weakness into an advantage.

The Hare: That sounds exciting. Do continue!

The Fox: Grass has the peculiar property of growing from beneath instead of, as most other plants do, with shoots from the top of the plant. This means that if the grass is chewed by characters such as yourself it doesn't really hurt the grass, since it's the oldest bits that are being eaten. But that's only half the story.

The Hare: Yes…

The Fox: Now, if the grass is eaten by a herbivore, then there's an obvious chance that the neighbouring plant, which might not be a grass, is also eaten. The other plant, in contrast to the grass, might suffer from the grazing and die, which in the end means more space and nutrients for the grass. And here's the punch line: the better the grass tastes to grazing animals, the more it and its neighbouring plants will be eaten. The grass doesn't mind, since it's growing from beneath, while the other plants take a beating from the grazing. So in the end it pays off for the grass to be eaten! Talk about a clever solution.

The Hare: Not bad. But you said that there were even more reasons to admire the grass, right?

The Fox: Yes, you're right. There exist even more sophisticated ways of getting rid of competing plants. The smartest thing you can do is actually to get someone else to do the job for you. Most species of grass make use of grazing animals, but some have gone into partnership with a more efficient player.

The Hare: Now you've lost me again. You have to keep in mind that I'm not as clever as you are.

The Fox: Look at this field. What is usually grown here?

The Hare: I believe it's wheat. To me a pretty useless crop, it doesn't really taste of anything. I prefer old-fashioned grass.

The Fox: That's your view, but someone else is obviously of a different opinion.

The Hare:  Ah, the humans!

The Fox: Exactly. But let's start from the beginning of the story. Not so long ago, roughly ten thousand years ago, when foxes were already foxes, and hares already hares, but humans definitely not what they are today, there was a species of grass growing in the Middle East whose seeds were large, tasty and nutritious. The humans who lived in that region really enjoyed them and realised that they could be used for cooking. At this point someone came up with a brilliant idea: let's collect the seeds, and plant and grow them in one spot, instead of walking about all day looking for them.

The Hare: So the humans started to grow and harvest the grass.

The Fox: Indeed. But what does that really mean? Well, the grass was kept under strict surveillance. Weeds were removed by the humans, water was supplied regularly, and even though the seeds were eaten by the humans, some were saved and planted. The humans were given a steady supply of food, while the grass was properly cared for.

The Hare: That's an interesting view of things.

The Fox: But the story doesn't end there; it's more like the beginning. The fruitful symbiosis that humans and grass entered into on the plains of the Middle East has been refined beyond recognition. The grass has been bred into oats, wheat, rye and barley, and has spread with the help of humans to almost every hospitable corner of the earth. With the aid of airplanes and pesticides the competing plants are held at a safe distance, and the grass thrives as never before.

The Hare:  But who's really benefiting from this whole scheme, the humans or the grass?

The Fox: In one way the humans are using the grass for their own purposes, but at the same time they have helped this particular species of grass become one of the most common plants on earth. But let's return to my original question. It seems as if this species of grass has solved the problem of staying alive in a formidable way, so do you agree that it is intelligent?

The Hare: Well actually I think you've convinced me, and won the debate as so many times before. But at least I can still run faster than you!

The Fox: I'm not so sure about that. It must be two weeks since I last tried to chase you down, and I have definitely improved my running since then.

The Hare: Ok, let's give it a go then. Just make sure you give me a 10 second head start. My life is on the line here; for you it's only a dinner.

The Fox: Alright. Don't worry, you can trust me.

The Hare: (starts running across the field)

The Fox: 10…9…8…7…6…5…4…3…2…1

Tuesday, 4 June 2013

Searching for synergies: matrix algebraic approaches for efficient pair screening

Together with Sven Nelander and Rebecka Jörnsten I have been working on the problem of experimental design for the screening of pairs of perturbations to biological systems, be it gene knock-outs or drugs or a combination thereof. The difficulty is that the number of possible pairs grows quadratically with the number of perturbations, which makes it impractical to test all possible combinations. In response to this we have developed an algorithm that searches the space of combinations in an adaptive fashion, making use of information found during the screen to direct the search.
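To put the quadratic growth in numbers: n perturbations give n(n-1)/2 unordered pairs, so a library on the scale mentioned in the abstract below (1,000 targets) already implies close to half a million pairwise experiments. A quick sketch:

```python
from math import comb

def n_pairs(n_targets):
    """Number of unordered pairs among n_targets perturbations: n(n-1)/2."""
    return comb(n_targets, 2)

for n in (10, 100, 1000):
    print(n, n_pairs(n))   # 10 -> 45, 100 -> 4950, 1000 -> 499500
```

Even at a modest throughput of a hundred pairwise assays per day, exhaustively screening a 1,000-target library would take over a decade, which is why an adaptive search of the pair space is attractive.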

The results of this work were recently accepted for publication in PLoS ONE. Below is the abstract and here's a link to the paper (not yet available on the PLoS-site).

Functionally interacting perturbations, such as synergistic drugs pairs or synthetic lethal gene pairs, are of key interest in both pharmacology and functional genomics. However, to find such pairs by traditional screening methods is both time consuming and costly. We present a novel computational-experimental framework for efficient identification of synergistic target pairs, applicable for screening of systems with sizes on the order of current drug, small RNA or SGA (Synthetic Genetic Array) libraries (>1000 targets). This framework exploits the fact that the response of a drug pair in a given system, or a pair of genes’ propensity to interact functionally, can be partly predicted by computational means from (i) a small set of experimentally determined target pairs, and (ii) pre-existing data (e.g. gene ontology, PPI) on the similarities between targets. Predictions are obtained by a novel matrix algebraic technique, based on cyclical projections onto convex sets. We demonstrate the efficiency of the proposed method using drug-drug interaction data from seven cancer cell lines and gene-gene interaction data from yeast SGA screens. Our protocol increases the rate of synergism discovery significantly over traditional screening, by up to 7-fold. Our method is easy to implement and could be applied to accelerate pair screening for both animal and microbial systems.
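The "cyclical projections onto convex sets" mentioned in the abstract refers to a classical matrix-algebraic idea. As a toy two-dimensional illustration of the generic principle (not the actual algorithm in the paper; the two sets here are chosen arbitrarily), alternating projections converge to a point in the intersection of the sets:

```python
import math

# Toy illustration of projections onto convex sets (POCS): alternately
# projecting onto two convex sets converges to a point in their
# intersection, when it is non-empty. Sets (arbitrary choices): the line
# x + y = 2 and the disc of radius 1.5 centred at the origin.

def proj_line(p):
    """Orthogonal projection onto the line x + y = 2."""
    x, y = p
    t = (x + y - 2.0) / 2.0
    return (x - t, y - t)

def proj_disc(p, rad=1.5):
    """Projection onto the disc of radius rad centred at the origin."""
    n = math.hypot(p[0], p[1])
    return p if n <= rad else (p[0] * rad / n, p[1] * rad / n)

p = (3.0, -1.0)                   # arbitrary starting point
for _ in range(200):
    p = proj_disc(proj_line(p))
print(p)                          # a point lying (nearly) in both sets
```

In the paper's setting the sets encode constraints from the measured pairs and the prior similarity data, and the projections act on interaction matrices rather than points in the plane.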

Friday, 31 May 2013

Warburg's Lens: a new preprint initiative in Mathematical Oncology

Recently there has been a lot of discussion on preprints in biology, and a call for a faster dissemination of unpublished research. In response to this I have joined a couple of colleagues in creating a blog that posts and discusses new preprints in mathematical oncology. Our site is inspired by Haldane's Sieve, which serves a similar purpose in the field of population genetics.

The blog was launched only a couple of days ago, but we already have our second preprint posted, which is a paper of mine that I have already mentioned on this blog.

Without further ado I give you Warburg's Lens.

Tuesday, 28 May 2013

How do we best approach the problem of cancer, bottom-up or top-down?

I find it a useful exercise to sometimes take a step away from what one is working on and look at it from a broader (and sometimes historical) perspective. For me as a scientist this means moving from doing actual research to a stance similar to that of philosophers of science: thinking about how and why we do science in the first place. Actually I have a keen interest in philosophy and have also done some work together with Henrik Thorén, a philosopher, colleague and friend, on the relation between emergence and complexity, concepts that seem to be perennial in the scientific literature. (We are also planning a book that analyses the idea of 'complexity' in depth, but more on that in a future blog post.)

The question I will try to tackle in this post is how the goals of our research are connected to levels of observation and explanation in science, and I will argue for a top-down approach where understanding "higher" levels should be prior to investigating lower-level phenomena. Or in the terms of cancer research: understand why and how cancers grow before you start sequencing their DNA.

Let me start by clarifying what I mean by research goals, and then move on to the slightly more delicate question of levels in science. The goals that scientists set up for themselves are short-term (solving this equation/understanding this subsystem), mid-term (finishing a research program) and long-term (solving the "big" problems). I would say that my (highly unrealistic) goal is to cure cancer, or at least to improve the care of cancer patients. If we ideally assume that this goal is prior to my other goals, then the structure and characteristics of the "big" problem will dictate my lesser goals. Or, in more concrete terms: Cancer kills. Why does it kill? Because it grows without bounds. Why does it grow without bounds, and how does it achieve this? Answering these questions will then define new sub-problems, and after a few rounds we are down to the questions that thousands of people are struggling with in cancer biology labs all over the world. We break down complicated questions into simpler ones, solve them and then move upwards in the hierarchy.

This account is highly idealised, and in many instances it is impossible to dissect a complicated phenomenon with precision. Instead we have to make assumptions and reason about how the system ought to function and behave. If we again look at cancer research, then two major shortcuts have in the past served this role: firstly "cell lines are a reasonable substitute for in vivo experiments", and secondly "cancer is a disease of cell division, therefore blocking cell growth will cure cancer". The latter assumption has served as a motivation for studying signalling pathways that mediate extra-cellular growth signals. If these paths of communication are damaged by genetic or epigenetic events then cells might become malignant (and there's your motivation for sequencing and microarrays).

Let's go back to the other ingredient in this argument: levels of observation and explanation. Levels in nature are so abundant that we in most cases don't even think about them. When you look at a collection of birds you see a flock, and conceptualise it as a whole, and only if you strain your mind can you identify the behaviour of single birds. Intuitively it makes sense to talk about the behaviour of the flock and describe its twists and turns. That this strategy of forming higher-level structures works well in everyday life should not come as a surprise, but how does it fare as a scientific strategy? Actually, in most cases it works well, and the reason seems to be that objects in the world interact strongly enough to form entities that are coherent in space and time. It makes sense to talk about and study genes, since 'the gene' (a specific configuration of base pairs) is roughly constant over time, and its properties are largely independent of its spatial location. Also, genes interact in lawful ways (e.g. dominance, epistasis etc.), and the fact that the field of genetics was born in an age when the actual constitution of genes, the DNA, was unknown, shows that many facts about genes can be understood without referring to their constituent parts, the nucleic acids.

The structure of scientific inquiry reflects these 'natural' levels, since it makes much more sense to study objects that manifest themselves for free, so to speak, than to force an incongruent framework on top of the world. If we again look at cancer, we identify it as a disease of the organism (a natural level if there ever was one in biology); we find it in a specific organ, in a certain tissue of that organ, and in a given cell type (e.g. epithelial cells). If we continue even further we identify certain cellular behaviours or phenotypes that are dysregulated in the malignant cells; these are controlled by regulatory pathways, whose activity is controlled by the expression of genes. When we follow the causal chain of the disease we hence end up at the level of the gene, which is commonly viewed as the causal agent in biology. (This view is again highly idealised, and the status of the gene in biology is a hot topic, most commonly framed as the gene-centric view of Dawkins vs. the holistic view of Gould.) In cancer biology this chain can never be perfectly traced from patient to gene, and instead a common strategy has become to study the differences between normal and malignant cells in terms of gene expression or mutations. The job is then to trace the consequences of these differences in terms of cellular behaviour and tumour growth dynamics, and to tease out the changes that are relevant for the dynamics of the system from the ones that are neutral (c.f. driver vs. passenger mutations).

This is however far from trivial, and often results in a situation where we have massive amounts of data on the genetic level that we cannot interpret or understand. A more sensible approach would be to start at the other end of the causal chain and figure out which tissue and cellular processes are responsible for the malignant growth, and then identify which pathways regulate and control these processes. If we analyse the dynamics of these pathways we can identify the genes that drive their up- or down-regulation, and hence identify the genes we need to target. The reason why we have to land at the genetic or molecular level is that a cure for cancer has to act on this level, in the form of radiation, cytotoxic drugs or small-molecule inhibitors. In other words, while the cure is molecular, identifying the cause of the disease requires a top-down perspective.

To me this is exactly what mathematical modelling of cancer is about. Building models of systems at different levels - be it of tissues, single cells or pathways - and analysing the behaviour of the system with the aim of identifying ways in which the dynamics can be altered in a favourable direction. An example of this is the view that tissues within our bodies are best seen as ecosystems of interacting cell types, and consequently that cancer should be viewed as a failure within this multi-species context (for a recent review of this look at this paper by +David Basanta and +Alexander Anderson).

So to conclude this post: if we want to make progress in cancer research I believe that we should get our higher level facts right before we start looking for the causal agents at the lower level. 

Thursday, 23 May 2013

Travelling wave analysis of a mathematical model of glioblastoma growth

Spurred by recent discussions with +Jacob Scott about preprints in biology, and fed up with the slow review process of some journals, I've decided to upload my most recent paper on brain tumour modelling to arXiv (and to continue doing so with future papers).

This paper is quite technical (at least by my standards) and contains the mathematical analysis of a model of glioblastoma growth that was published last year in PLoS Computational Biology. In this model the cancer cells switch between a proliferative and a migratory phenotype, and it was previously shown that the dynamics of the cell-based model can be captured by two coupled partial differential equations, which (like the Fisher equation) exhibit travelling wave solutions. In this paper I have analysed this PDE system and shown the following:

1. With a couple of assumptions on model parameters one can obtain an analytical estimate of the wave speed.
2. In the limit of large and equal switching rates the wave speed equals that of the Fisher equation (which is what you'd expect).
3. Using perturbation techniques one can obtain an approximate solution to the shape of the expanding tumour.
4. In the Fisher equation the wave speed and the slope of the front are in one-to-one correspondence (a faster wave implies a less steep front). This property does not hold for our system.
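The Fisher-equation benchmark in point 2 is easy to check numerically. Below is a minimal sketch (toy parameters of my own choosing, not values from the paper) that integrates the plain Fisher equation u_t = D u_xx + r u(1 - u) with explicit finite differences and estimates the front speed, which should be close to the theoretical minimal speed c = 2√(Dr) = 2 for D = r = 1:

```python
import numpy as np

# Fisher equation u_t = D*u_xx + r*u*(1-u): travelling fronts move at the
# minimal speed c = 2*sqrt(D*r). Explicit finite differences, toy parameters.
D, r = 1.0, 1.0
L, N = 200.0, 2000
dx = L / N
dt = 0.2 * dx**2 / D              # well inside the explicit stability limit
x = np.linspace(0.0, L, N)
u = (x < 10.0).astype(float)      # step initial condition

def front_position(u):
    """Leftmost point where the wave has dropped below one half."""
    return x[np.argmax(u < 0.5)]

def step(u):
    lap = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
    u = u + dt * (D * lap / dx**2 + r * u * (1.0 - u))
    u[0], u[-1] = 1.0, 0.0        # pin the back of the wave and the far field
    return u

t = 0.0
while t < 20.0:                   # let the front relax to its travelling shape
    u, t = step(u), t + dt
x0 = front_position(u)
while t < 40.0:                   # measure the displacement over 20 time units
    u, t = step(u), t + dt
speed = (front_position(u) - x0) / 20.0
print(round(speed, 2))            # close to the theoretical value of 2
```

The measured speed slightly undershoots 2 at finite times (the front approaches the minimal speed slowly from below for step initial data), but the agreement is already good on this time scale.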

Here's a link to the submission at arXiv.

Wednesday, 22 May 2013

Mathematical biology or Bioinformatics

Conversation overheard in an inter-disciplinary research centre of unknown location.

Molecular Biologist: So, what kind of research do you do?
Mathematical Biologist: I do modelling, mathematical modelling of cancer.
Molecular Biologist: I see, interesting. So you mean bioinformatics?
Mathematical Biologist (trying to be polite): Well, not quite. My work is more about building mechanistic models that help our understanding of different steps of tumour progression.
Molecular Biologist: I never quite understood all those statistical methods and hypothesis testing, but I'm glad someone likes it!
Mathematical Biologist (slowly losing patience): Well actually….I'm not very good with statistics either, my work is more about understanding the mechanisms at work in cancer, using mathematics.
Molecular Biologist: Oh, I think I understand now. By the way, I have some microarray-data that maybe you can have a look at.
Mathematical Biologist (squeezing through the opening elevator doors): Ok…..drop me an email.

This dialogue is fictional but draws inspiration from the many encounters and discussions I've had about my research with biologists. Usually the conversations last a bit longer than the above, and end in some sort of understanding of what my work is really about.

It's not that I'm easily offended when people think that I'm a bioinformatician, but mathematical/theoretical biology and bioinformatics are fundamentally different lines of research, with different methods and goals, and I'll try to explain why I think that is the case.

In order to illustrate my point we need to take a step back from biology and look at science from a broader perspective. The process of doing science and producing new knowledge about the world is usually termed the scientific method, and can roughly be divided into: Hypotheses, Experiments, Results and Conclusions/Findings (I'm sure many philosophers of science will disagree, but this basic subdivision will do for my argument). The process is circular in that we start with some idea about how a certain system or phenomenon is structured (i.e. a hypothesis), we then transform that hypothesis into a statement that is experimentally testable, carry out the experiment, and from the data determine whether the hypothesis was true or false. This fact is added to our knowledge of the world, and from our extended body of knowledge we produce new hypotheses.

In order to structure our knowledge about a phenomenon we construct theories that in a more or less formal manner codify our knowledge within a coherent framework. Mathematics is such a framework that was applied successfully first in physics, and then later in chemistry and most other natural sciences. In the language of mathematics we can transform statements in a rigorous, truth-preserving manner, moving from things that are certainly true (based on observation) to things that are possibly true (to be decided by experiment).

It is in this part of the scientific method that mathematical biology fits in. In a mathematical model we incorporate known facts, and maybe add some hypothetical ones, analyse the model and produce hypotheses that hopefully are testable in experiments (disclaimer: this is highly idealised. A lot of mathematical biology is far removed from experiments and more concerned with mathematical analysis, but where to draw the line between mathematical biology and applied analysis is, at least to me, a pointless exercise). Another equally important task for mathematical biologists is to form new theoretical constructs, and to define new properties that are of relevance. An example of this is R0, the 'basic reproduction number' of a pathogen, which quantifies the number of cases one case generates on average over the course of its infectious period. The concept grew out of Ronald Ross's mathematical studies of malaria; Ross had been awarded the Nobel Prize in Medicine in 1902 for his work on the disease.
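As an illustration of how R0 enters a model, here is a sketch using the standard SIR equations (not Ross's original malaria formulation; the parameter values are made up). With transmission rate beta and recovery rate gamma, R0 = beta/gamma, and an outbreak can only take off when R0 > 1:

```python
# Standard SIR model: dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, with
# R0 = beta/gamma. An outbreak grows only while R0 * S > 1. Illustrative
# parameter values, forward-Euler integration.
def sir_peak(beta, gamma, days=200, dt=0.01):
    """Return the peak fraction of infected individuals."""
    S, I = 0.999, 0.001           # recovered fraction is implicit
    peak = I
    for _ in range(int(days / dt)):
        dS = -beta * S * I
        dI = beta * S * I - gamma * I
        S += dt * dS
        I += dt * dI
        peak = max(peak, I)
    return peak

print(sir_peak(beta=0.5, gamma=0.25))   # R0 = 2: a sizeable outbreak
print(sir_peak(beta=0.2, gamma=0.25))   # R0 = 0.8: the infection fizzles out
```

The threshold behaviour at R0 = 1 is exactly the kind of theoretical construct that modelling contributes: a single derived quantity that tells you whether an epidemic is possible at all.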

If the role of mathematical biology is to define new concepts and generate hypotheses, where does bioinformatics fit into the process of scientific discovery? The role of bioinformatics is to structure and make sense of the other side of the scientific method: to design experiments and aid us in interpreting the outcomes. In molecular biology, the days of simple experiments, when measuring a single quantity was enough to prove or disprove a hypothesis, are almost gone. With today's measurement techniques, such as microarrays, methylation probes or SNP analysis, one is presented with quantities of data that are far beyond the reach of the human intellect. In order to decode the data and draw conclusions we need algorithms developed by bioinformaticians. Apart from this, bioinformaticians are also involved in the step between hypotheses and experiments, designing the most efficient and accurate ways of carrying out experiments (e.g. determining how much coverage we get with a given sequencing technique).

In my view mathematical biology and bioinformatics serve as two independent and non-overlapping disciplines that both aid the actual biologists (i.e. the experimentalists) in making the scientific method spin.

The inspiration for this post (and the figure) came from +Jacob Scott who came up with the idea when writing a recent review on mathematical modelling of metastases. Thanks!

Monday, 13 May 2013

Technology is in the driver seat and we're heading to petaland

There is little doubt that technology has had a large impact on the course of scientific discovery in the past. For example, it was Tycho Brahe's development of more precise measurement techniques for astronomical observation that paved the way for Johannes Kepler's identification of regularity in the motion of the planets (i.e. Kepler's three laws of planetary motion), which eventually led to Newton's formulation of the laws of celestial (and terrestrial) mechanics. While Brahe made his observations with the naked eye, Galileo Galilei's refinement of the telescope, and his subsequent observation of the moons of Jupiter, were important events that, together with other empirical evidence, eventually toppled the Aristotelian world-view. During the same period there was an improvement in observing not only the very distant, but also the very small. The first microscope appeared around the turn of the 17th century, and within 50 years the technology had improved to such an extent that single cells were visible to the human eye. In fact, the word 'cell' was coined by Robert Hooke in his book Micrographia (1665). It would take another 200 years of observation before the dynamics of cell division were observed, and another 50 years until it was understood that the chromosomes, which were being shuffled between the two daughter cells, were the carriers of hereditary information.

Since the days of the scientific revolution in the 17th century technology has advanced enormously, and every aspect of human life is influenced by technological artefacts. Most of us don't have the faintest idea of how they are constructed or even how they operate, but this is not really an issue since someone knows how they are built and how to fix them if they fail. More disconcerting is the fact that we are often told that only the latest gadgets are worth owning, and that each epsilon change to a piece of technology will revolutionise its use. The craving for the latest gadget may be a fundamentally human trait, and mainly of economic and political interest, but what happens when this need for new technology enters into the scientific method and our ways of doing science?

The last 30 years of biological research have been heavily influenced by advances in technology, which have led to a massive increase in knowledge about the living world. DNA sequencing, RNA-expression measurements (i.e. microarrays) and DNA methylation measurements, just to mention a few, have allowed biologists to address questions that had long remained unanswered. But technology doesn't just answer existing questions, it also poses new ones. Microarray measurements made it possible to map out the RNA expression levels of all the genes in the genome at once. Since all processes that occur within a human cell are affected by the RNA expression of some gene or genes, it soon became standard practice within all branches of molecular biology to perform microarray measurements, basically a requirement if you wanted to publish your findings in a respected journal. The data that emerged was high-dimensional and complex, and to this day we don't have a precise understanding of how RNA expression relates to gene regulation and protein expression. Seemingly ignorant of this lag between measurement on the one hand, and theory and concept formation on the other, the biotech industry has continued to develop techniques with even higher coverage and larger depth. The scientific community has been drawn into this technological whirlwind, and today, when we still don't have a full understanding of microarray data, using it is basically frowned upon, and we are asked why we didn't make use of RNA-seq, or 'next-generation sequencing', in our study.

New technology lets us probe further and deeper into living matter, and doubtless this has helped our understanding of these systems. However, considering how little we have learned about human biology from the Human Genome Project (the sequencing of the entire human genome), it's tempting to speculate about where we would be today if all the effort and money that went into sequencing had instead been spent on dedicated experiments and hypothesis-driven inquiry.

Today we are producing terabytes upon terabytes of data from biological systems, but at the same time we seem to know less and less about what that data actually means. I think it's time to focus on the scientific problems at hand, and to try to understand and build theories of the data produced by current technology, before we rush into the next next-generation piece of technology, which in the end will just make us forget what we were initially asking. If not, it won't be long until we count our datasets in petabytes.

Monday, 6 May 2013

The principles of metastasis

I have, since the beginning of my scientific career, been under the somewhat naive impression that science moves relentlessly forward, and that we (as scientists) slowly accumulate more and more knowledge about the world. Things are discovered, communicated and never forgotten within the realms of science. Of course this cannot be true in every single case, but reading this book by Leonard Weiss put me into first-hand contact with the loss of scientific knowledge, which struck me particularly hard since it concerns a topic I myself work on. The book was published for everyone to read in 1985, so what I'm talking about is not a straight denial of scientific fact, but rather a collective amnesia or erosion of knowledge.

Let me be a bit more specific, and mention two things that caught me by surprise when I read this book. Firstly, it was established in the 40's that cancer cells from primary tumours in the breast and prostate can travel to the vertebrae and pelvis without first passing through the capillary bed of the lung. This passage is mediated by a structure known as Batson's venous plexus, which allows for a reversal of flow in the veins, in particular during coughing or sneezing. This, at least in part, explains the predilection of breast and prostate tumours to metastasise to the bone, yet these organ pairs are today often mentioned as prime examples of the seed-soil hypothesis, i.e. the idea that cancer cells thrive in certain organs with favourable 'soil'. This is often talked about in terms of micro-environmental compatibility, which, compared to physical blood flow, is quite a far-fetched and complex explanation.

My second moment of surprise came when Weiss discussed the literature on cancer cell clump size distribution, meaning the size of the cancer cell clusters that enter the circulation and eventually arrest in capillary beds. I'm far from an expert in the field of metastasis, but the view that I have acquired from reading review articles and papers on the topic is that cancer cells travel on their own, never collectively. This mistake on my side (or on the side of the authors I've previously read) is made even more serious by the fact that the success of cancer cells when they arrest in foreign organs depends very much on the clump size: a larger clump of cancer cells means a higher chance of forming a metastasis.

I sincerely hope that the above-mentioned 'surprises' are isolated occurrences, but the impression I get from reading this book is that many fundamental insights into the dynamics of the metastatic process have been lost since the 80's. When did you last hear about the transit times and arrest patterns of circulating cancer cells in different organs, intravascular growth prior to extravasation, the rate at which metastatic cells appear in the primary tumour, or the number of CTCs that pass through a capillary bed?

In my view these questions lost their appeal when we entered the 'gene-centric' era, with its expectation that the answer to every question in biology lies in the genome, and in deeper and yet deeper sequencing. (On a side note, I think the gene-centricity is in turn driven by the love of new technology, which implies that any data acquired with the previous technology is worthless and does not need any explanation.) When focus moved to the genome, a lot of the knowledge about the physical aspects of metastasis was ignored, and once those facts had been forgotten they were re-examined in genetic terms. That said, I'm happy to witness how genetic and proteomic techniques have advanced since the publication of this book, and I can honestly say that the chapters that deal with the biochemistry of cancer have not aged well. Today we know a lot more about the biochemistry of the metastatic cascade, but on the other hand possibly less about its physical aspects.

Apart from providing a new (or rather old) perspective on the process of metastasis, the book also contains something that I find lacking in the current scientific literature: the critical review. Dr. Weiss takes the time to dissect the experimental setup of the studies he reviews and identifies loopholes and mistakes in the reasoning, and hence the conclusions, of many of the referenced papers. The typical review article of today tells a pleasant story (via numerous studies) of an important topic and sets the agenda for future research. I think this happens mainly because the author is often an authority on the topic, and hence has much to lose by criticising the techniques or methods used within the field. It is much more tempting, then, to present one's research field as a successful endeavour where experiments are carried out successfully and theory is steadily improving. What I would like to see is a much more critical stance when writing review articles: most experiments are good, but some are bad, and simply repeating the conclusions drawn by the authors does not, in my opinion, advance science.

These are some of the thoughts that passed through my mind when reading Leonard Weiss' 'Principles of Metastasis', an excellent companion for those who are interested in the dynamics of metastatic spread. I'll leave you with a figure from the book that I think summarises the author's view of metastasis.

Saturday, 20 April 2013

ESTRO Forum 2013

I'm currently attending the ESTRO Forum, organised by the European Society for Radiotherapy & Oncology, in Geneva, Switzerland. I feel slightly out of place amongst all the radiologists and well-dressed oncologists, not to mention the sales reps trying to sell me massive MRI machines and devices for brachytherapy (I admit I had to google that one). In any case, I've been invited to speak about my work on modelling metastatic spread, and I delivered the talk this morning. I focused on my recent work with Jacob Scott et al. from Moffitt on self-seeding, and I believe it was well received (at least I got a bunch of questions and comments).

Wednesday, 27 March 2013

Principles of Metastasis

I have recently become interested in the dynamics of metastatic spread, and together with colleagues at Moffitt Cancer Center I have started to work on mechanistic models that look at the impact of the topology of the vascular network (e.g. here and here). When I came into this field I was surprised by the lack of work along these lines, and found that most people were delving deep into the genome of cancer cells to find answers to why, where and how metastases appear.

I was therefore positively surprised to hear of the work of the late Dr. Leonard Weiss, who worked on metastasis throughout his long career. In his work we find the physical perspective on metastatic spread that is almost completely absent in this gene-centric era. The other day I finally received his book "Principles of Metastasis" from 1985, which I still believe to be highly relevant. A review will be posted when I've finished reading it, which should be soon.

The dynamics of cross-feeding

Many functions carried out by microbes, such as the degradation of man-made toxic compounds, require the joint metabolic effort of many bacterial species. In such bacterial communities the success of each species depends on the presence or absence of other species and chemical compounds, effectively joining the components of the community into a microbial ecosystem. A common mode of interaction in such ecosystems is cross-feeding, or syntrophy, whereby the metabolic output of one species is used as a nutrient or energy source by another species.

Together with my colleague Torbjörn Lundh, I have formulated and analysed a mathematical model of cross-feeding dynamics. We show that under certain assumptions about the system (e.g. a high flow of nutrients and a separation of time scales), the governing equations reduce to a second-order series expansion of the replicator equation. By analysing the cases of two and three species we derive conditions for coexistence and show under which parameter conditions one can expect an increase in mean fitness.
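To give a flavour of the dynamics involved, here is a minimal sketch of replicator dynamics for two cross-feeding species. The payoff matrix and all numbers are illustrative assumptions of my own, not the parameters of our model (which uses a second-order expansion; for simplicity the fitness below is only first-order in the frequencies):

```python
import numpy as np

def replicator_step(x, f, dt=0.01):
    """One Euler step of the replicator equation dx_i/dt = x_i (f_i - phi)."""
    phi = np.dot(x, f(x))            # phi: mean fitness of the population
    return x + dt * x * (f(x) - phi)

# Illustrative cross-feeding payoffs: each species benefits from the other's
# metabolic output, but not from its own (zeros on the diagonal).
A = np.array([[0.0, 1.2],
              [0.8, 0.0]])

def fitness(x):
    return A @ x                     # linear fitness, for simplicity

x = np.array([0.5, 0.5])             # initial species frequencies
for _ in range(5000):
    x = replicator_step(x, fitness)

print(x)  # settles at the coexistence equilibrium (0.6, 0.4)
```

The interior fixed point sits where both species have equal fitness (here 1.2 x2 = 0.8 x1, i.e. frequencies 0.6 and 0.4), which is the kind of coexistence condition the two-species analysis produces.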

The paper was recently published in Bulletin of Mathematical Biology:


Thursday, 21 February 2013

Tumour self-seeding

The topic of tumour self-seeding, i.e. the idea that tumour cells leaving the primary via the blood stream come back and boost its growth, has been widely debated and investigated (here, here and here, to mention a few). Together with collaborators from Moffitt Cancer Center in Tampa, I have recently developed and analysed a model of this phenomenon. Our findings were published the other day in the Journal of the Royal Society Interface, although a preprint has been on the arXiv for a while. To summarise: tumour self-seeding is highly unlikely to contribute to the growth of the primary unless the circulating tumour cells form micrometastases that allow for an expansion in cell number sufficient to balance the severe losses that the hematic round trip incurs.
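A back-of-envelope calculation illustrates why the losses matter. All numbers below are made-up order-of-magnitude assumptions for illustration, not estimates from our paper:

```python
# Hypothetical order-of-magnitude numbers (assumptions for illustration only).
shed_rate = 1e6        # cells shed into the circulation per day
survival = 1e-4        # fraction surviving the hematic round trip
primary_growth = 1e8   # net new cells per day produced by the primary itself

# Direct return, with no expansion along the way:
direct = shed_rate * survival
print(direct / primary_growth)   # 1e-06: a negligible boost to the primary

# If returning cells first expand in a micrometastasis by a factor E:
E = 1e4
boosted = shed_rate * survival * E
print(boosted / primary_growth)  # 0.01: self-seeding starts to matter
```

The point is the structure of the argument, not the particular numbers: without an intermediate expansion step, the returning flux is dwarfed by the primary's own growth.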

Monday, 11 February 2013

The model muddle

I've written a Perspective piece on the basics of cancer modelling that was recently published in Cancer Research. It can be found here, or here (if you don't have access to CR). Enjoy the read!