
Complexity versus simplicity in relating tumour size change to survival in oncology drug development

Every pharmaceutical company would like to predict the survival benefit of a new cancer treatment over an existing treatment as early as possible in drug development.  This quest for the “holy grail” has led to tremendous effort from the statistical modelling community to develop models that link variables describing change in disease state to survival times.  The main variable of interest, for obvious reasons, is tumour size measured via imaging.  The marker derived from imaging is called the Sum of Longest Diameters (SLD): the sum of the longest diameters of the target lesions, which tend to be the large lesions that are easy to measure.  The marker is therefore not representative of the entire tumour burden within the patient.  Nevertheless, the change in SLD within the first X weeks of treatment is used within drug development to decide whether or not to continue developing a drug.  Changes in SLD have therefore been the focus of most, if not all, statistical models of survival.
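As a concrete picture of what the marker is, here is a minimal sketch of how SLD and its early on-treatment change are computed.  The lesion diameters below (in mm) are invented purely for illustration.

```python
# A minimal sketch of how SLD and its early on-treatment change are computed.
# The lesion diameters below (in mm) are invented purely for illustration.

def sld(diameters):
    """Sum of Longest Diameters over the target lesions."""
    return sum(diameters)

baseline = [42.0, 18.5, 27.0]        # longest diameter of each target lesion at baseline
on_treatment = [35.0, 15.0, 22.5]    # the same lesions re-measured after X weeks of treatment

pct_change = 100.0 * (sld(on_treatment) - sld(baseline)) / sld(baseline)
print(f"SLD: {sld(baseline):.1f} mm -> {sld(on_treatment):.1f} mm ({pct_change:+.1f}%)")
```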

Two articles currently analyse the relationship between changes in SLD and survival, in quite different ways, across multiple studies in non-small cell lung cancer.

The first approach (http://www.ncbi.nlm.nih.gov/pubmed/19440187), by the Pharmacometrics (pharmaco-statistical modelling) group within the FDA, involved a combination of semi-parametric and parametric survival modelling techniques together with a mixed modelling approach to develop the final survival model.  The final model was able to fit all past data, but the authors had to generate different parameter sets for different sub-groups.  The technical ability required to generate these results is clearly beyond most scientists and requires specialist knowledge.  This approach can quite reasonably be described as complex.
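To give a flavour of what a parametric survival regression on a tumour-size metric looks like, here is a minimal sketch.  It is not the published model: the mixed-modelling (longitudinal) stage is skipped and a per-patient early-shrinkage metric is taken as given, the data are simulated, and the lifelines package is assumed to be available.

```python
# A minimal sketch of a parametric (Weibull) survival regression with an
# early tumour-shrinkage metric as covariate. This is not the published
# model: the longitudinal (mixed modelling) stage is skipped, the data are
# simulated, and the lifelines package is assumed to be available.

import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(0)
n = 200

# Per-patient relative SLD change over the first weeks of treatment
# (negative = shrinkage), and survival times that depend on it.
shrinkage = rng.normal(loc=-0.2, scale=0.15, size=n)
surv_time = rng.weibull(1.3, size=n) * 60.0 * np.exp(-shrinkage)
censoring = rng.uniform(20.0, 120.0, size=n)

df = pd.DataFrame({
    "time": np.minimum(surv_time, censoring),        # weeks
    "event": (surv_time <= censoring).astype(int),   # 1 = death observed
    "shrinkage": shrinkage,
})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="time", event_col="event")  # shrinkage enters as a covariate
aft.print_summary()
```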

The second approach (http://www.ncbi.nlm.nih.gov/pubmed/25667291), by the Biostatistics group within the FDA, involved a simple plotting approach!  The authors categorise on-treatment changes in SLD using a popular clinical approach to create drug response groups.  They then assess whether the ratio of drug response between the arms of clinical studies relates to the final outcome of the study.  The outcomes of interest were time to disease progression and survival.  The approach actually worked quite well!  A strong relationship was found between the ratio of drug response and the difference in disease progression.  Although not as strong, the relationship to survival was also quite promising.  This approach simply involves plotting data and can clearly be applied by most, if not all, scientists once the definitions of the variables are understood.
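The mechanics of that plotting approach can be sketched in a few lines.  The 30% cut-off is a RECIST-style assumption, and the trial-level numbers are invented for illustration; neither is taken from the paper.

```python
# A sketch of the plotting idea: classify each patient's SLD change into a
# response category, then compare the trial-level ratio of response rates
# between arms with the trial's outcome. The 30% cut-off is a RECIST-style
# assumption, and the trial-level numbers are invented for illustration.

import matplotlib.pyplot as plt

def is_responder(pct_change_sld):
    """RECIST-style rule of thumb: at least a 30% reduction in SLD."""
    return pct_change_sld <= -30.0

# (response rate in experimental arm, response rate in control arm, hazard ratio)
trials = [
    (0.35, 0.20, 0.70),
    (0.28, 0.25, 0.95),
    (0.45, 0.15, 0.55),
    (0.22, 0.21, 1.00),
    (0.40, 0.22, 0.65),
]

response_ratio = [exp / ctl for exp, ctl, _ in trials]
hazard_ratio = [hr for _, _, hr in trials]

plt.scatter(response_ratio, hazard_ratio)
plt.xlabel("Ratio of response rates (experimental / control)")
plt.ylabel("Hazard ratio for disease progression")
plt.title("Trial-level response ratio versus outcome (illustrative data)")
plt.show()
```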

The two approaches clearly differ in complexity: one involved plotting while the other required degree-level statistical knowledge!  It could also be argued that the results of the plotting approach are far more useful for drug development than those of the statistical modelling approach, as they answer the question of interest directly.  These studies show how thinking about how to answer the question through visualisation, and taking simple approaches, can sometimes be incredibly powerful.

When is a model a black box?

One of the issues which comes up frequently with mathematical modelling is the question of whether a model is a “black box”. A model based on machine learning, for example, is not something you can analyse just by peering under the hood. It is a black box even to its designers.

For this reason, many people feel more comfortable with mechanistic models which are based on causal descriptions of underlying processes. But these come with their problems too.

For example, a model of a growing tumour might incorporate a description of individual cells, their growth dynamics, their interactions with each other and the environment, their access to nutrients such as oxygen, their response to drugs, and so on. A 3D model of a heart has to incorporate additional effects such as fluid dynamics and electrophysiology. In principle, all of these processes can be written out as mathematical equations, combined into a huge mathematical model, and solved. But that doesn’t make these models transparent.
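To make that concrete, here is roughly what a single component of such a model looks like once written out and solved: a deliberately tiny, illustrative sketch of Gompertz tumour growth with a simple drug-kill term, with arbitrary parameter values. A real mechanistic model stitches together hundreds of pieces like this.

```python
# One small component of the kind such a model is assembled from: Gompertz
# tumour growth with a simple drug-kill term, written as an ODE and solved
# numerically. Parameter values are arbitrary and purely illustrative.

import numpy as np
from scipy.integrate import solve_ivp

def tumour_ode(t, v, growth=0.08, capacity=1e4, kill=0.05):
    """dV/dt = growth * V * ln(capacity / V) - kill * V"""
    return growth * v * np.log(capacity / v) - kill * v

sol = solve_ivp(tumour_ode, t_span=(0.0, 100.0), y0=[50.0],
                t_eval=np.linspace(0.0, 100.0, 11))
for t, v in zip(sol.t, sol.y[0]):
    print(f"day {t:5.1f}: tumour volume {v:8.1f} mm^3")
```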

One problem is that each component of the model – say an equation for the response of a cell to a particular stimulus – is usually based on approximations and is almost impossible to accurately test. In fact there is no reason to think that complex natural phenomena can be fit by simple equations at all – what works for something like gravity does not necessarily work in biology. So the fact that something has been written out as a plausible mechanistic process does not tell us much about its accuracy.

Another problem is that any such model will have a huge number of adjustable parameters. This makes the model very flexible: you can adjust the parameters to get the answer you want. Models are therefore very good at fitting past data, but they often do less well at predicting the future.
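A toy illustration of that flexibility, using nothing more sophisticated than polynomial fitting on synthetic data: the many-parameter model matches the data it was fitted to almost perfectly, but typically predicts new data from the same process worse than the simpler one.

```python
# A toy illustration of the flexibility problem: the many-parameter model
# fits the data it was trained on almost perfectly, yet typically predicts
# new data from the same process worse than the simpler model does.
# Entirely synthetic data.

import numpy as np

rng = np.random.default_rng(1)
truth = lambda x: np.sin(2 * np.pi * x)

x_train = np.linspace(0.0, 1.0, 12)
y_train = truth(x_train) + rng.normal(0.0, 0.2, x_train.size)
x_test = np.linspace(0.0, 1.0, 100)
y_test = truth(x_test) + rng.normal(0.0, 0.2, x_test.size)

for degree in (3, 9):   # few versus many adjustable parameters
    coeffs = np.polyfit(x_train, y_train, degree)
    fit_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    pred_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: error on past data {fit_err:.3f}, on new data {pred_err:.3f}")
```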

A complex mechanistic model is therefore a black box of another sort. Although we can look under its hood, and see all the working parts, that isn’t very useful, because these models are so huge – often with hundreds of equations and parameters – that it is impossible to spot errors or really understand how they work.

Of course, there is another kind of black box model, which is a model that is deliberately kept inside a black box – think for example of the trading algorithms used by hedge funds. Here the model may be quite simple, but it is kept secret for commercial reasons. The fact that it is a closely-guarded secret probably just means that it works.