Cancer Reproducibility Project

At a recent meeting at a medical faculty, researchers were asked to nominate their favourite papers. One person, instead of nominating a paper, nominated a whole project website: The Reproducibility Project in Cancer Biology, see here. This person had left the field of systems biology to re-train as a biostatistician. In case you are wondering, it wasn’t me! In this blog post we will take a look at the project, the motivation behind it and some of the emerging results.

The original paper setting out the aims of the project can be found here. The initiative is a collaboration between the Center for Open Science and Science Exchange. The motivation will be obvious to many readers, but for those who are unfamiliar: there are strong incentives for producing exciting new results, and far weaker ones for verifying old discoveries.

The main paper goes into some detail about why results are difficult to reproduce. One of the key factors is openness, which is why this is the first reproducibility effort to be documented so extensively. Cancer research was chosen largely because of previous reports published by Bayer and Amgen, see here and here. Those reports did not disclose exactly which replication studies had been attempted, hence the need for an open project.

The first step in a reproducibility project is deciding which articles to pick. The obvious choices are those that are most cited and have had the most publicity, and that is indeed what the project did: it chose 50 of the most impactful articles in cancer biology published between 2010 and 2012. The replication studies were not conducted by a single experimental group. Instead the project used Science Exchange, see here, a network of over 900 contract research organisations (CROs), so finding people with the right skills was not a problem.

One clear advantage of using a CRO rather than an academic lab is that a CRO has no reason to be biased either for or against a particular result. The other main advantage is time and cost: scaling up is more efficient. All the details of the experiments and the power calculations were placed on the Open Science Framework, see here. So how successful has the project been?

The first set of results is out and, as expected, they are variable. If you would like to read them in detail, follow this link. The five replication studies were:

  1. BET bromodomain inhibition as a therapeutic strategy to target c-Myc.
  2. The CD47-signal regulatory protein alpha (SIRPa) interaction is a therapeutic target for human solid tumours.
  3. Melanoma genome sequencing reveals frequent PREX2 mutations.
  4. Discovery and preclinical validation of drug indications using compendia of public gene expression data.
  5. Co-administration of a tumour-penetrating peptide enhances the efficacy of cancer drugs.

Two of the studies, (1) and (4), were largely successful, and one, (5), was not. The other two replication studies were deemed un-interpretable because the animal cancer models behaved oddly: the tumours either grew too fast or regressed spontaneously!

One of the studies deemed un-interpretable has already led to a clinical trial: the development of an anti-CD47 antibody. These early results highlight that there is an issue with reproducing preclinical oncology experiments, though many already knew this. (To be clear, reproducibility here is about the size and direction of effects, not about reproducing p-values.) The big question is how to improve the reproducibility of research, and there are many opinions on the matter. Clearly one step is to reward replication studies, which is easier said than done in an environment where novel findings are the ones that lead to riches!

The Money Formula – New Book By Paul Wilmott And David Orrell

The Money Formula: Dodgy Finance, Pseudo Science, and How Mathematicians Took Over the Markets

OUT NOW!!!

BUY ON AMAZON.COM | BUY ON AMAZON.CO.UK

Explore the deadly elegance of finance’s hidden powerhouse

The Money Formula takes you inside the engine room of the global economy to explore the little-understood world of quantitative finance, and show how the future of our economy rests on the backs of this all-but-impenetrable industry. Written not from a post-crisis perspective – but from a preventative point of view – this book traces the development of financial derivatives from bonds to credit default swaps, and shows how mathematical formulas went beyond pricing to expand their use to the point where they dwarfed the real economy. You’ll learn how the deadly allure of their ice-cold beauty has misled generations of economists and investors, and how continued reliance on these formulas can either assist future economic development, or send the global economy into the financial equivalent of a cardiac arrest.

Rather than rehash tales of post-crisis fallout, this book focuses on preventing the next one. By exploring the heart of the shadow economy, you’ll be better prepared to ride the rough waves of finance into the turbulent future.

  • Delve into one of the world’s least-understood but highest-impact industries
  • Understand the key principles of quantitative finance and the evolution of the field
  • Learn what quantitative finance has become, and how it affects us all
  • Discover how the industry’s next steps dictate the economy’s future

How do you create a quadrillion dollars out of nothing, blow it away and leave a hole so large that even years of “quantitative easing” can’t fill it – and then go back to doing the same thing? Even amidst global recovery, the financial system still has the potential to seize up at any moment. The Money Formula explores the how and why of financial disaster, and what must happen to prevent the next one.

PRAISE FOR THE MONEY FORMULA

“This book has humor, attitude, clarity, science and common sense; it pulls no punches and takes no prisoners.”
Nassim Nicholas Taleb, Scholar and former trader

“There are lots of people who’d prefer you didn’t read this book: financial advisors, pension fund managers, regulators and more than a few politicians. That’s because it makes plain their complicity in a trillion dollar scam that nearly destroyed the global financial system. Insiders Wilmott and Orrell explain how it was done, how to stop it happening again and why those with the power to act are so reluctant to wield it.”
Robert Matthews, Author of Chancing It: The Laws of Chance and How They Can Work for You

“Few contemporary developments are more important and more terrifying than the increasing power of the financial system in the global economy. This book makes it clear that this system is operated either by people who don’t know what they are doing or who are so greed-stricken that they don’t care. Risk is at dangerous levels. Can this be fixed? It can and this book full of healthy skepticism and high expertise shows how.”
Bryan Appleyard, Author and Sunday Times writer

“In a financial world that relies more and more on models that fewer and fewer people understand, this is an essential, deeply insightful as well as entertaining read.”
Joris Luyendijk, Author of Swimming with Sharks: My Journey into the World of the Bankers

“A fresh and lively explanation of modern quantitative finance, its perils and what we might do to protect against a repeat of disasters like 2008–09. This insightful, important and original critique of the financial system is also fun to read.”
Edward O. Thorp, Author of A Man for All Markets and New York Times bestseller Beat the Dealer

Survival prediction (P2P loan profitability) competitions

In a previous blog entry, see here, we discussed how survival analysis methods could be used to assess the profitability of P2P loans. The “trick” highlighted in that post was to focus on the profit/loss of a loan – which is what you actually care about – rather than on whether and when a loan defaults. In doing so we showed that even loans that default can be profitable, provided the interest rate is high enough and the loan period short enough; see the sketch below.
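
As a reminder of that “trick”, here is a back-of-the-envelope sketch (made-up loan terms, simple monthly amortisation, no recoveries or fees) of how a loan that defaults late in its schedule can still end up at or above break-even when the interest rate is high:

```python
# Toy example: profit/loss of an amortising loan that defaults part-way through.
# Loan terms are made up; no recoveries or fees are modelled.
principal = 1000.0
annual_rate = 0.36                     # 36% nominal annual interest rate
term_months = 12
r = annual_rate / 12                   # monthly rate

# Standard annuity (equal monthly payment) formula.
payment = principal * r / (1 - (1 + r) ** -term_months)

for default_month in [6, 10, 12]:      # 12 = loan runs to term, no default
    received = payment * default_month
    profit = received - principal
    print(f"payments stop after month {default_month:2d}: "
          f"received {received:7.2f}, profit {profit:+8.2f}")
# At a high enough rate, a loan that defaults close to the end of its term
# still ends up at or above break-even.
```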

Given that basic survival analysis methods can shed light on potentially profitable betting strategies, are there more advanced approaches in the healthcare community that the financial world could take advantage of? The answer is yes, and it lies in crowdsourcing, as we shall now discuss.

Over recent years there has been an increase in prediction competitions in the healthcare sector. One set of organisers has aptly named its competitions DREAM challenges; follow this link to their website. Unlike other prediction competition platforms such as Kaggle, see here, the winning algorithms are made publicly available through the website and are also published.

A recurring theme of these competitions, one that simply moves from one disease area to the next, is survival. The most recent involved predicting the survival of prostate cancer patients given a certain therapy; the results were published here. Unfortunately the paper is behind a paywall, but the algorithm can be downloaded from the DREAM challenge website.

The winning algorithm was essentially an ensemble of Cox proportional hazards regression models; we briefly explained what these are in our previous blog entry. Readers with a technical background will be thinking that this doesn’t sound like an overly complicated modelling approach. It isn’t – what was sophisticated was how the winning entry partitioned the data for exploratory analyses and model building. The strategy appeared to be more important than the development of a new method, an observation that resonates with the last blog entry on Big data versus big theory.
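
For readers who want to see the shape of such an approach, below is a minimal sketch of an ensemble of Cox proportional hazards models in Python using the lifelines library. The synthetic data, column names and bootstrap-averaging scheme are illustrative assumptions on our part, not the winning entry’s actual pipeline.

```python
# Minimal sketch of an ensemble of Cox proportional hazards models.
# Requires lifelines; the data frame and its columns are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "age": rng.normal(65, 8, n),            # hypothetical covariates
    "psa": rng.lognormal(2.0, 0.5, n),
    "time": rng.exponential(24, n),         # follow-up time (months)
    "event": rng.integers(0, 2, n),         # 1 = death observed, 0 = censored
})

# Fit Cox models on bootstrap resamples and average their risk scores.
n_models = 20
risk_scores = np.zeros(n)
for _ in range(n_models):
    boot = df.sample(frac=1.0, replace=True,
                     random_state=int(rng.integers(1_000_000)))
    cph = CoxPHFitter()
    cph.fit(boot, duration_col="time", event_col="event")
    risk_scores += np.asarray(cph.predict_partial_hazard(df)).ravel() / n_models

# Higher averaged score = higher predicted risk (shorter predicted survival).
print(pd.Series(risk_scores).describe())
```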

So what does all this have to do with the financial sector? Competitions like the one described above could quite easily be run on financial problems – such as the P2P loan profitability question we blogged about previously – where survival analyses are already being applied. The healthcare prediction arena is therefore a great place to look for the latest approaches to financial betting strategies.

Big data versus big theory

The Winter 2017 edition of Foresight magazine includes my commentary on the article Changing the Paradigm for Business Forecasting by Michael Gilliland from SAS. Both are behind a paywall (though a longer version of Michael’s argument can be read on his SAS blog), but here is a brief summary.

According to Gilliland, business forecasting is currently dominated by an “offensive” paradigm, which is “characterized by a focus on models, methods, and organizational processes that seek to extract every last fraction of accuracy from our forecasts. More is thought to be better—more data, bigger computers, more complex models—and more elaborate collaborative processes.”

He argues that our “love affair with complexity” can lead to extra effort and cost while actually reducing forecast accuracy. And while managers have often been seduced by the idea that “big data was going to solve all our forecasting problems”, research shows that complex models often fail to beat even a naive no-change forecast. His article therefore advocates a paradigm shift towards “defensive” forecasting, which focuses on simplifying the forecasting process, eliminating bad practices, and adding value.

My comment on this (in about 1200 words) is … I agree. But I would argue that the problem is less big data, or even complexity, than big theory.

Our current modelling paradigm is fundamentally reductionist – the idea is to reduce a system to its parts, figure out the laws that govern their interactions, build a giant simulation of the whole thing, and solve. The resulting models are highly complex, and their flexibility makes them good at fitting past data, but they tend to be unstable (or stable in the wrong way) and are poor at making predictions.

If however we recognise that complex systems have emergent properties that resist a reductionist approach, it makes more sense to build models that only attempt to capture some aspect of the system behaviour, instead of reproducing the whole thing.

An example of this approach, discussed earlier on this blog, relates to the question of predicting heart toxicity for new drug compounds, based on ion channel readings. One way to predict heart toxicity from these test results is to employ teams of researchers to build an incredibly complicated mechanistic model of the heart, consisting of hundreds of differential equations, with the ion channel readings as inputs. Or you can use a machine learning model. Or, most complicated of all, you can combine these in a multi-model approach. However Hitesh Mistry found that a simple model, which simply adds or subtracts the ion channel readings – the only parameters are +1 and -1 – performs just as well as the multi-model approach using three large-scale models plus a machine learning model (see Complexity v Simplicity, the winner is?).

Now, to obtain the simple model Mistry used some fairly sophisticated data analysis tools. But what counts is not the complexity of the methods, but the complexity of the final model. And in general, complexity-based models are often simpler than their reductionist counterparts.

I therefore strongly agree with Michael Gilliland that a “defensive” approach makes sense. But I think the paradigm shift he describes is part of, or related to, a move away from reductionist models, which we are realising don’t work very well for complex systems. With this new paradigm, models will be simpler, but they can also draw on a range of techniques that have been developed for the analysis of complex systems.

Misapplication of statistical tests to simulated data: Mathematical Oncologists join Cardiac Modellers

In a previous blog post we highlighted the pitfalls of applying null hypothesis testing to simulated data, see here. We showed that modellers who apply null hypothesis tests to simulated data can control the p-value, because they control the sample size. It is therefore not a great idea to analyse simulations with null hypothesis tests; modellers should instead focus on the size of the effect. This problem has been highlighted before by White et al. (2014), which is well worth a read, see here.
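
To recap why this is a problem, here is a small simulation (a generic illustration, not the analysis from any of the papers discussed): the “true” effect is fixed and tiny, yet simply simulating more samples drives the p-value of a t-test as low as you like, while the effect size stays put.

```python
# Illustration: with simulated data the p-value is under the modeller's control,
# because the sample size is. The true effect is fixed and small throughout.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_difference = 0.05        # difference in means, in units of the (unit) SD

for n in [10, 100, 1_000, 10_000, 100_000]:
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_difference, 1.0, n)
    _, p = stats.ttest_ind(a, b)
    print(f"n = {n:>7}: observed effect = {b.mean() - a.mean():+.3f}, p = {p:.2e}")
# The observed effect hovers around 0.05 regardless of n, but the p-value
# can be made arbitrarily small simply by simulating more samples.
```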

Why are we blogging about this subject again? Since that last post, co-authors of the original article we discussed there have repeated the same misdemeanour (Liberos et al., 2016), and a group of mathematical oncologists based at Moffitt Cancer Center has joined them (Kim et al., 2016).

The article by Kim et al., preprint available here, describes a combined experimental and modelling approach that “predicts” new dosing schedules for combination therapies that can delay onset of resistance and thus increase patient survival.  They also show how their approach can be used to identify key stratification factors that can determine which patients are likely to do better than others. All of the results in the paper are based on applying statistical tests to simulated data.

The first part of the approach taken by Kim et al. involves calibrating a mathematical model to certain in-vitro experiments. These experiments measure the number of cells over a fixed observation time under four different conditions: control (no drug), AKT inhibitor, chemotherapy and combination (AKT inhibitor plus chemotherapy). This was done for two different cell lines. When fitting the model to the data the authors found a range of parameter values. From this range they took forward one particular set – with no real justification for that choice – to test the model’s ability to predict different in-vitro dosing schedules. Unsurprisingly, the model predictions came true.

After “validating” their model against a set of in-vitro experiments, the authors proceed to use the model to analyse retrospective clinical data from a study involving 24 patients. The authors acknowledge that the in-vitro system is clearly not the same as a human system, so to account for the difference they run an optimisation to generate a humanised model. The optimisation is based on a genetic algorithm which searches the parameter space for parameter sets that replicate the observed clinical results. Again, as in the in-vitro situation, they found multiple parameter sets able to replicate the observed clinical results – a total of 3391 of them.

Having generated a distribution of parameters describing the patients within the clinical study of interest, the authors next set about deriving stratification factors. Each parameter set corresponds to a virtual patient exhibiting one of four possible response categories, so for each category there is a distribution of parameter values across the population. To assess the differences in these distributions across categories they perform a Student’s t-test to ascertain whether the differences are statistically significant. Since they can control the sample size, the authors can control the standard error and the p-value – exactly the issue raised by White et al. An alternative would be to report the size of the effect, that is the difference in the means of the distributions. If the claim is that a given parameter can discriminate between two types of response, then a ROC AUC (Receiver Operating Characteristic Area Under the Curve) value could be reported; this would allow readers to judge how strongly a given parameter discriminates between two response types, as sketched below.
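
To illustrate the alternative we are suggesting (with made-up numbers, not the paper’s parameter sets), the discriminatory power of a parameter can be summarised with a ROC AUC using standard tools:

```python
# Sketch: summarise how well a parameter separates two response categories
# with a ROC AUC instead of a t-test p-value. All values are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
param_responders = rng.normal(1.2, 0.5, 200)       # parameter values, group 1
param_non_responders = rng.normal(1.0, 0.5, 200)   # parameter values, group 2

values = np.concatenate([param_responders, param_non_responders])
labels = np.concatenate([np.ones(200), np.zeros(200)])   # 1 = responder

auc = roc_auc_score(labels, values)
print(f"ROC AUC = {auc:.2f}")   # 0.5 = no discrimination, 1.0 = perfect
```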

The application of hypothesis testing to simulated data continues throughout the rest of the paper, culminating in a log-rank test applied to simulated survival data, where again the authors control the sample size. Furthermore, they choose an arbitrary cancer cell number to dictate when a virtual patient dies, giving them two ways of controlling the p-value. In this final act the authors again misuse null hypothesis testing, to show that the schedule found by their modelling approach is better than the one used in the actual clinical study. Since the major results in the paper all involve this type of manipulation, we believe they should be treated with extreme caution until better verified.

References

Liberos, A., Bueno-Orovio, A., Rodrigo, M., Ravens, U., Hernandez-Romero, I., Fernandez-Aviles, F., Guillem, M.S., Rodriguez, B., Climent, A.M., 2016. Balance between sodium and calcium currents underlying chronic atrial fibrillation termination: An in silico intersubject variability study. Heart Rhythm 0. doi:10.1016/j.hrthm.2016.08.028

White, J.W., Rassweiler, A., Samhouri, J.F., Stier, A.C., White, C., 2014. Ecologists should not use statistical significance tests to interpret simulation model results. Oikos 123, 385–388. doi:10.1111/j.1600-0706.2013.01073.x

Kim, E., Rebecca, V.W., Smalley, K.S.M., Anderson, A.R.A., 2016. Phase I trials in melanoma: A framework to translate preclinical findings to the clinic. Eur. J. Cancer 67, 213–222. doi:10.1016/j.ejca.2016.07.024

 

The changing skyline

Back in the early 2000s, I worked a couple of years as a senior scientist at the Institute for Systems Biology in Seattle. So it was nice to revisit the area for the recent Seventh American Conference on Pharmacometrics (ACoP7).

A lot has changed in Seattle in the last 15 years. The area around South Lake Union, near where I lived, has been turned into a major hub for biotechnology and the life sciences. Amazon is constructing a new campus featuring giant ‘biospheres’ which look like nothing I have ever seen.

Attending the conference, though, was like a blast from the past – because unlike the models used by architects to design their space-age buildings, the models used in pharmacology have barely moved on.

While there were many interesting and informative presentations and posters, most of these involved relatively simple models based on ordinary differential equations, very similar to the ones we were developing at the ISB years ago. The emphasis at the conference was on using models to graphically present relationships, such as the interaction between drugs when used in combination, and compute optimal doses. There was very little about more modern techniques such as machine learning or data analysis.

There was also little interest in producing models that are truly predictive. Many models were said to be predictive, but this just meant that they could reproduce some kind of known behaviour once the parameters were tweaked. A session on model complexity did not discuss the fact, for example, that complex models are often less predictive than simple models (a recurrent theme in this blog, see for example Complexity v Simplicity, the winner is?). Problems such as overfitting were also not discussed. The focus seemed to be on models that are descriptive of a system, rather than on forecasting techniques.

The reason for this appears to come down to institutional effects. For example, models that look familiar are more acceptable. Also, not everyone has the skills or incentives to question claims of predictability or accuracy, and there is a general acceptance that complex models are the way forward. This was illustrated by a presentation from an FDA regulator, which concentrated on whether models are accepted as the gold standard rather than on whether they are accurate (see our post on model misuse in cardiac models).

Pharmacometrics is clearly a very conservative area. However this conservatism means only that change is delayed, not that it won’t happen; and when it does happen it will probably be quick. The area of personalized medicine, for example, will only work if models can actually make reliable predictions.

As with Seattle, the skyline may change dramatically in a very short time.

Time-dependent bias of tumour growth rate and time to tumour re-growth

The title of this blog entry refers to a letter published in the journal CPT: Pharmacometrics & Systems Pharmacology. The letter is open access, so those of you who are interested can read it online here. In this blog entry we will go through it.

The letter discusses a rather strange modelling practice which is becoming the norm within certain modelling and simulation groups in the pharmaceutical industry. There has been a spate of publications claiming that tumour re-growth rate (GR) and time to tumour re-growth (TTG), derived from models fitted to imaging time-series data, correlate with survival [1-6]. In those publications the authors show survival curves (Kaplan-Meier plots) highlighting a very strong relationship between GR/TTG and survival. They split either on the median value of GR/TTG or into quartiles and show very impressive differences in survival times between the resulting groups; see Figure 2 in [4] for an example (open access).

Do these relationships seem too good to be true? They may well be. In order to derive GR/TTG you need time-series data, so the values of these covariates are not known at the beginning of the study and only become available after a certain amount of time has passed. This type of covariate is therefore referred to as a time-dependent covariate. None of the authors in [1-6] describe GR/TTG as a time-dependent covariate, nor do they treat it as such.

When the correlations with survival were performed in those articles, the authors assumed that GR/TTG was known before any time-series data had been collected, which is clearly not true. Survival curves such as Figure 2 in [4] are therefore biased: they are based on survival times calculated from the study start to the time of death, rather than from the time at which GR/TTG becomes available to the time of death. The results in [1-6] should therefore be questioned, and GR/TTG should not be used for decision making, as the question of whether tumour growth rate correlates with survival remains open; one way of handling the covariate correctly is sketched below.
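
For completeness, here is a minimal sketch of one way the covariate could be handled properly, by treating it as time-dependent in a time-varying Cox model. It assumes the Python lifelines library, and the data, landmark time and column names are entirely made up for illustration.

```python
# Sketch: treating a model-derived covariate as time-dependent.
# The covariate (an estimated re-growth rate) is unknown until a landmark
# time, so each patient contributes one interval before it and one after.
# Data, landmark and column names are invented for illustration.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(3)
rows = []
for pid in range(60):
    landmark = 6.0                              # GR becomes available at 6 months
    gr = max(rng.normal(0.5, 0.2), 0.1)         # model-derived growth rate
    death = rng.exponential(12.0 / gr)          # faster growth -> earlier death
    if death <= landmark:
        # Patient dies before GR can even be estimated.
        rows.append({"id": pid, "start": 0.0, "stop": death,
                     "event": 1, "growth_rate": 0.0})
    else:
        rows.append({"id": pid, "start": 0.0, "stop": landmark,
                     "event": 0, "growth_rate": 0.0})
        rows.append({"id": pid, "start": landmark, "stop": death,
                     "event": 1, "growth_rate": gr})

long_format = pd.DataFrame(rows)
ctv = CoxTimeVaryingFitter()
ctv.fit(long_format, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()
```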

Could the GR/TTG correlation with survival simply be an illusion created by a flawed modelling practice? This is a question we shall answer in a future blog post.

[1] W.D. Stein et al., Other Paradigms: Growth Rate Constants and Tumor Burden Determined Using Computed Tomography Data Correlate Strongly With the Overall Survival of Patients With Renal Cell Carcinoma, Cancer J (2009)

[2] W.D. Stein, J.L. Gulley, J. Schlom, R.A. Madan, W. Dahut, W.D. Figg, Y. Ning, P.M. Arlen, D. Price, S.E. Bates, T. Fojo, Tumor Regression and Growth Rates Determined in Five Intramural NCI Prostate Cancer Trials: The Growth Rate Constant as an Indicator of Therapeutic Efficacy, Clin. Cancer Res. (2011)

[3] W.D. Stein et al., Tumor Growth Rates Derived from Data for Patients in a Clinical Trial Correlate Strongly with Patient Survival: A Novel Strategy for Evaluation of Clinical Trial Data, The Oncologist.  (2008)

[4] K. Han, L. Claret, Y. Piao, P. Hegde, A. Joshi, J. Powell, J. Jin, R. Bruno, Simulations to Predict Clinical Trial Outcome of Bevacizumab Plus Chemotherapy vs. Chemotherapy Alone in Patients With First-Line Gastric Cancer and Elevated Plasma VEGF-A, CPT Pharmacomet. Syst. Pharmacol. (2016)

[5] J. van Hasselt et al., Disease Progression/Clinical Outcome Model for Castration-Resistant Prostate Cancer in Patients Treated With Eribulin, CPT Pharmacomet. Syst. Pharmacol. (2015)

[6] L. Claret et al., Evaluation of Tumor-Size Response Metrics to Predict Overall Survival in Western and Chinese Patients With First-Line Metastatic Colorectal Cancer, J. Clin. Oncol. (2013)

Complexity v Simplicity, the winner is?

I recently published a letter with the above title in the journal Clinical Pharmacology and Therapeutics; unfortunately it is behind a paywall, so I will briefly take you through the key point. The letter describes a specific prediction problem around drug-induced cardiac toxicity mentioned in a previous blog entry (Mathematical models for ion-channel cardiac toxicity: David v Goliath). In short, we show that a simple model using only addition and subtraction (pre-school mathematics) performs just as well on the prediction problem as a multi-model approach using three large-scale models, consisting of hundreds of differential equations, combined with machine learning (university-level mathematics and computation)! The addition/subtraction model gave a ROC AUC of 0.97, very similar to the multi-model/machine learning approach, which gave a ROC AUC of 0.96. More detail on the analysis can be found on slides 17 and 18 of this presentation, A simple model for ion-channel related cardiac toxicity, which was given at an NC3Rs meeting.
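
To give a feel for what “pre-school mathematics” means in practice, here is a hedged sketch using made-up data, labels and channel weights (it is not the published model or dataset): a score built by adding and subtracting ion-channel readings, assessed with a ROC AUC.

```python
# Sketch: a +1/-1 "add and subtract" score for a binary toxicity label,
# assessed with a ROC AUC. Data, labels and weights are illustrative only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 200
# Hypothetical normalised block readings for three ion channels.
herg = rng.uniform(0, 1, n)
nav = rng.uniform(0, 1, n)
cav = rng.uniform(0, 1, n)

# Simulated "true" toxicity labels, constructed purely for this example.
toxic = ((herg - 0.5 * nav - 0.5 * cav + rng.normal(0, 0.2, n)) > 0.1).astype(int)

# The simple model: +1 weight on one channel, -1 on the other two.
score = herg - nav - cav

print(f"ROC AUC of the add/subtract model: {roc_auc_score(toxic, score):.2f}")
```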

The result described in the letter and presentation adds further weight to the evidence, within that field, that simple models can perform just as well as complex approaches for a given prediction task.

When is a result significant?

The standard way of answering this question is to ask whether the effect could reasonably have happened by chance (the null hypothesis). If not, then the result is announced to be ‘significant’. The usual threshold for significance is that there is only a 5 percent chance of the results happening due to purely random effects.

This sounds sensible, and has the advantage of being easy to compute, which is perhaps why statistical significance has been adopted as the default test in most fields of science. However, there is something a little confusing about the approach: it asks how unlikely the observed data would be if we adopted the opposite of the theory – the null hypothesis. But what we want to know is whether the theory is true. And that isn’t the same thing.

As just one example, suppose we have lots of data and after extensive testing of various theories we discover one that passes the 5 percent significance test. Is it really 95 percent likely to be true? Not necessarily – because if we are trying out lots of ideas, then it is likely that we will find one that matches purely by chance.
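
A quick simulation makes the point (a generic illustration, not tied to any particular study): test enough “theories” that are in fact pure noise, and roughly 5 percent of them will clear the significance bar by chance alone.

```python
# Illustration: if we test many "theories" that are all actually false
# (pure noise), roughly 5% of them will still pass the p < 0.05 test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)
n_theories, n_samples = 1_000, 50

false_positives = 0
for _ in range(n_theories):
    a = rng.normal(0, 1, n_samples)   # both groups drawn from the same distribution
    b = rng.normal(0, 1, n_samples)
    _, p = stats.ttest_ind(a, b)
    false_positives += p < 0.05

print(f"{false_positives} of {n_theories} null 'theories' were 'significant' at 5%")
```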

While there are ways of working around this within the framework of standard statistics, the problem is glossed over in the vast majority of textbooks and articles. For example, it is typical to say that a result is ‘significant’ without any discussion of whether it is plausible in a more general sense (see our post on model misuse in cardiac modelling).

The effect is magnified by publication bias – try out multiple theories, find one that works, and publish. Which might explain why, according to a number of studies (see for example here and here), much scientific work proves impossible to replicate – a situation which scientist Robert Matthews calls a ‘scandal of stunning proportions’ (see his book Chancing It: The laws of chance – and what they mean for you).

The way of Bayes

An alternative approach is provided by Bayesian statistics. Instead of starting with the assumption that data is random and making weird significance tests on null hypotheses, it just tries to estimate the probability that a model is right (i.e. the thing we want to know) given the complete context. But it is harder to calculate for two reasons.

One is that, because it treats new data as updating our confidence in a theory, it also requires that we have some prior estimate of that confidence, which may of course be hard to quantify – though the problem goes away as more data becomes available. (To see how the prior can affect the results, see the BayesianOpinionator web app.) Another problem is that the approach does not treat the theory as fixed, which means we may have to evaluate probabilities over whole families of theories, or at least over a range of parameter values. However this is less of an issue today, since the simulations can be performed automatically using fast computers and specialised software.
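
To see the role of the prior in the simplest possible setting, here is a small Beta-Binomial sketch (a toy example, not the web app itself): two different priors are pulled towards each other by the same data, and the disagreement shrinks as the data grows.

```python
# Sketch: Bayesian updating of a success probability with a Beta prior.
# Two analysts start from different priors; more data pulls them together.
from scipy import stats

priors = {"sceptical prior Beta(2, 8)": (2, 8),
          "enthusiastic prior Beta(8, 2)": (8, 2)}

for successes, trials in [(7, 10), (70, 100)]:       # same proportion, more data
    print(f"\nObserved {successes} successes in {trials} trials:")
    for name, (a, b) in priors.items():
        posterior = stats.beta(a + successes, b + trials - successes)
        print(f"  {name}: posterior mean = {posterior.mean():.2f}")
# With 10 trials the two posteriors disagree noticeably;
# with 100 trials they nearly coincide.
```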

Perhaps the biggest impediment, though, is that when results are passed through the Bayesian filter, they often just don’t seem all that significant. But while that may be bad for publications, and media stories, it is surely good for science.

The exponential growth effect

A common critique of biologists, and scientists in general, concerns their occasionally overenthusiastic tendency to find patterns in nature – especially when the pattern is a straight line. It is certainly notable how, confronted with a cloud of noisy data, scientists often manage to draw a straight line through it and announce that the result is “statistically significant”.

Straight lines have many pleasing properties, both in architecture and in science. If a time series follows a straight line, for example, it is pretty easy to forecast how it should evolve in the near future – just assume that the line continues (note: doesn’t always work).

However this fondness for straightness doesn’t always hold; indeed there are cases where scientists prefer to opt for a more complicated solution. An example is the modelling of tumour growth in cancer biology.

Tumour growth is caused by the proliferation of dividing cells. For example if cells have a cell cycle length td, then the total number of cells will double every td hours, which according to theory should result in exponential growth. In the 1950s (see Collins et al., 1956) it was therefore decided that the growth rate could be measured using the cell doubling time.

In practice, however, it is found that tumours grow more slowly as time goes on, so this exponential curve needed to be modified. One variant is the Gompertz curve, which was originally derived as a model for human lifespans by the British actuary Benjamin Gompertz in 1825, but was adapted for modelling tumour growth in the 1960s (Laird, 1964). This curve gives a tapered growth rate, at the expense of extra parameters, and has remained highly popular as a means of modelling a variety of tumour types.

However, it has often been observed empirically that tumour diameters, as opposed to volumes, appear to grow in a roughly linear fashion. Indeed, this has been known since at least the 1930s. As Mayneord wrote in 1932: “The rather surprising fact emerges that the increase in long diameter of the implanted tumour follows a linear law.” Furthermore, he noted, there was “a simple explanation of the approximate linearity in terms of the structure of the sarcoma. On cutting open the tumour it is often apparent that not the whole of the mass is in a state of active growth, but only a thin capsule (sometimes not more than 1 cm thick) enclosing the necrotic centre of the tumour.”

Because only this outer layer contains dividing cells, the rate of increase of the volume is proportional to the volume of the outer layer divided by the doubling time. If the thickness of the growing layer is small compared to the total tumour radius, it is easily seen that the radius grows at a constant rate, proportional to the thickness of the growing layer divided by the doubling time. The result is linear growth in radius. This translates to cubic growth in volume, which of course increases more slowly than an exponential curve at longer times – just as the data suggests.
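
A few lines of code are enough to see the consequence (a toy calculation with made-up parameters, not a fit to any data): a radius that grows linearly gives a volume that grows cubically, which falls further and further behind an exponential curve matched to the same initial growth rate.

```python
# Toy comparison: linear radius growth versus exponential volume growth.
# Parameters are illustrative and not fitted to any data set.
import numpy as np

t = np.linspace(0, 100, 11)            # time (days)
r0, k = 5.0, 0.05                      # initial radius (mm), radial growth rate (mm/day)

radius = r0 + k * t                    # linear growth in radius...
v_cubic = 4.0 / 3.0 * np.pi * radius ** 3      # ...gives cubic growth in volume

# Exponential volume growth matched to the same relative growth rate at t = 0
# (for the cubic curve, (dV/dt)/V at t = 0 equals 3*k/r0).
v0 = 4.0 / 3.0 * np.pi * r0 ** 3
v_exp = v0 * np.exp((3.0 * k / r0) * t)

for ti, vc, ve in zip(t, v_cubic, v_exp):
    print(f"t = {ti:5.1f} d   cubic V = {vc:9.0f} mm^3   exponential V = {ve:9.0f} mm^3")
# The exponential curve pulls further and further ahead of the cubic one,
# which is why real volume data look "slower than exponential".
```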

In other words, rather than use a modified exponential curve to fit volume growth, it may be better to use a linear equation to model diameter. The idea that tumour growth is driven by an outer layer of proliferating cells, surrounding a quiescent or necrotic core, has featured in a number of mathematical models (see e.g. Checkley et al., 2015, and our own CellCycler model). The linear growth law can also be used to analyse tumour data, as in the draft paper “Analysing within and between patient tumour heterogeneity via imaging: Vemurafenib, Dabrafenib and Trametinib.” The linear growth equation will of course not be a perfect fit for all tumours (no simple model is), but it is based on a consistent and empirically verified picture of tumour growth, and can be easily parameterised and fitted to data.

So why hasn’t this linear growth law caught on more widely? The reason is that what scientists see in data often depends on their mental model of what is going on.

I first encountered this phenomenon in the late 1990s when doing my D.Phil. in the prediction of nonlinear systems, with applications to weather forecasting. The dominant theory at the time said that forecast error was due to sensitivity to initial conditions, aka the butterfly effect. As I described in The Future of Everything, researchers insisted that forecast errors showed the exponential growth characteristic of chaos, even though plots showed they clearly grew with slightly negative curvature, which was characteristic of model error.

A similar effect in cancer biology has again changed the way scientists interpret data. Sometimes, a straight line really is the best solution.

References

Collins, V. P., Loeffler, R. K. & Tivey, H. Observations on growth rates of human tumors. The American journal of roentgenology, radium therapy, and nuclear medicine 76, 988-1000 (1956).

Laird A. K. Dynamics of tumor growth. Br J of Cancer 18 (3): 490–502 (1964).

W. V. Mayneord. On a Law of Growth of Jensen’s Rat Sarcoma. Am J Cancer 16, 841-846 (1932).

Stephen Checkley, Linda MacCallum, James Yates, Paul Jasper, Haobin Luo, John Tolsma, Claus Bendtsen. Bridging the gap between in vitro and in vivo: Dose and schedule predictions for the ATR inhibitor AZD6738. Scientific Reports 5, 13545 (2015).

Yorke, E. D., Fuks, Z., Norton, L., Whitmore, W. & Ling, C. C. Modeling the Development of Metastases from Primary and Locally Recurrent Tumors: Comparison with a Clinical Data Base for Prostatic Cancer. Cancer Research 53, 2987-2993 (1993).

Hitesh Mistry, David Orrell, and Raluca Eftimie. Analysing within and between patient tumour heterogeneity via imaging: Vemurafenib, Dabrafenib and Trametinib. (Working paper)