
Oxford Quantitative Systems Pharmacology (QSP): Is there a case for model reduction?

The title of this blog-post refers to a meeting, sponsored by the UK QSP network, which I attended recently. Some of you may not be familiar with the term QSP; put simply, it describes the application of mathematical and computational tools to pharmacology. As the title suggests, the meeting was on the subject of model reduction.

The meeting was split into 4 sessions entitled:

  1. How to test if your model is too big?
  2. Model reduction techniques
  3. What are the benefits of building a non-identifiable model?
  4. Techniques for over-parameterised models

I was asked to present a talk in the first session; see here for the slide-set. The talk was based on a topic that has been mentioned on this blog a few times before: ion-channel cardiac toxicity prediction. The talk goes through how 3 models were assessed on their ability to discriminate between non-cardio-toxic and cardio-toxic drugs across the 3 data-sets currently available. (A report providing more details is being put together and will be released shortly.) The 3 models were a linear combination of block (the simple model – blog-post here) and 2 large-scale biophysical models of a cardiac cell: one termed the “gold-standard” (endorsed by the FDA and other regulatory agencies through CiPA [1]) and the other forming a key component of the “cardiac safety simulator” [2].
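For readers unfamiliar with the simple model, the idea is nothing more than a weighted sum of the fractional block of a few cardiac ion channels, with drugs scoring above a threshold flagged as risky. A minimal Python sketch of the idea (the channels, weights and threshold below are illustrative placeholders, not the values from the talk or the forthcoming report):

```python
def block_score(herg_block, nav_block, cav_block,
                weights=(1.0, -0.5, -0.5)):
    """Weighted sum of fractional block (0-1) of three cardiac ion channels
    at a clinically relevant concentration; higher score = higher predicted
    torsadogenic risk. Weights here are illustrative only."""
    return (weights[0] * herg_block
            + weights[1] * nav_block
            + weights[2] * cav_block)

# Flag a drug as cardio-toxic if its score exceeds a threshold that would,
# in practice, be chosen on training data (this one is invented).
THRESHOLD = 0.3
for name, blocks in {"drug_A": (0.6, 0.10, 0.05),
                     "drug_B": (0.3, 0.40, 0.50)}.items():
    label = "cardio-toxic" if block_score(*blocks) > THRESHOLD else "non-cardio-toxic"
    print(name, label)
```

A model this small is trivially identifiable, which is precisely what makes the comparison with the large biophysical models interesting.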

The results showed that the simple model does just as well as, and in certain data-sets out-performs, the leading biophysical models of the heart; see slide 25. Towards the end of the talk I discussed what drivers exist for producing such large models and whether we should invest further in them given the current evidence; see slides 24, 28 and 33. How does this example fit into all the sessions of the meeting?

In answer to the first session title, how to test if your model is too big, the answer is straightforward: if the simpler/smaller model outperforms the larger model, then the larger model is too big. Regarding the second session, on model reduction techniques: in this case there is no need for them. You could argue, from the results discussed here, that instead of pursuing model reduction techniques we may want to consider building smaller/simpler models to begin with. On to the 3rd session, on the benefits of building a non-identifiable model: it’s clear that in this situation there was no benefit in developing a non-identifiable model. Finally, regarding techniques for over-parameterised models, the lesson learned from the cardiac toxicity field is simple: don’t build these sorts of models for this question.

Some people at the meeting argued that the type of model depends on the question. This is true, but does the scale of the model also depend on the question?

If we now go back to the title of the meeting, is there a case for model reduction? Within the field of ion-channel cardiac toxicity the response would be: why bother reducing a large model when you can build a smaller and simpler model which shows equal or better performance?

Of course, as Green and Armstrong [3] somewhat sceptically point out (see slide 28), one reason for model reduction is that it gives researchers the best of both worlds: they can build a highly complex model, suitable for publication in a highly-cited journal, and then spend more time extracting a simpler version to support client plans. Maybe those journals should look more closely at their selection criteria.

References

[1] Colatsky et al. The Comprehensive in Vitro Proarrhythmia Assay (CiPA) initiative — Update on progress. Journal of Pharmacological and Toxicological Methods (2016) 81, pages 15-20.

[2] Glinka A. and Polak S. QTc modification after risperidone administration – insight into the mechanism of action with use of the modeling and simulation at the population level approach. Toxicology Mechanisms and Methods (2015) 25(4), pages 279-286.

[3] Green K. C. and Armstrong J. S. Simple versus complex forecasting: The evidence. Journal of Business Research (2015) 68(8), pages 1678-1685.

PDXdata – the gene-drug patient-derived xenograft (PDX) app

There is growing interest within the pharmaceutical industry in using a patient’s own cancer material to screen the effect of numerous treatments within an animal model. The reason for shifting away from standard xenografts, which are based on historical immortalised cell-lines, is that those models are considered to be very different from the patients’ tumours of today. PDX models are thus considered more relevant, as they are “closer” to the patient population in which you are about to test your new treatment.

Typically only a handful of PDX models are used, but recently there has been a shift in focus towards population PDX studies which mimic small-scale clinical trials. One of these studies, by Gao et al., also published the raw data as an Excel file in the supplementary information; this included not only the treatment-effect growth curves but also genomic data consisting of DNA copy number, gene expression and mutations. Using these data it is possible to explore correlations between treatment response and genomic features.

We at Systems Forecasting are always appreciative of freely available data sets, and have designed an equally free and available PDXdata app to browse through these data.
The app can read Excel files in the same form as the Novartis file “nm.3954-S2.xlsx”. It translates volume measurements to diameters and computes a linear fit to each tumour-growth time series. The user can then plot time series, organised by ID or by treatment, or examine statistics for the entire data set. The aim is to explore how well linear models can fit this type of data.
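As a rough illustration of those two steps, the sketch below converts volumes to diameters, assuming a spherical tumour (my assumption for illustration, not a documented choice of the app), and fits a straight line to one series:

```python
import numpy as np

def volume_to_diameter(volume_mm3):
    """Convert tumour volume to diameter for a sphere:
    V = (pi / 6) * d**3, so d = (6 * V / pi)**(1/3)."""
    return (6.0 * np.asarray(volume_mm3) / np.pi) ** (1.0 / 3.0)

def linear_fit(days, diameters):
    """Least-squares fit d(t) = intercept + slope * t to one time series;
    sigma is the residual standard deviation of the fit."""
    slope, intercept = np.polyfit(days, diameters, 1)
    residuals = diameters - (intercept + slope * np.asarray(days))
    return intercept, slope, residuals.std(ddof=2)

# One hypothetical tumour-growth time series: day of measurement, volume in mm^3
days = np.array([0, 7, 14, 21, 28])
volume = np.array([150.0, 220.0, 310.0, 450.0, 600.0])
print(linear_fit(days, volume_to_diameter(volume)))
```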

The “Diameters” page, shown in the figure below, is used to plot time series for the data. First read in the Excel data file; this may take a while, so a progress bar is included. Data can be grouped either by ID(s) or by treatment(s). Note that the Novartis data has several treatments for the same ID, and the data is filtered to include only those IDs with an untreated case. If there is only one treatment per ID, one can group by treatment and then plot the data for the IDs with that treatment; in this case the untreated fit is computed from the untreated IDs.

[Figure: the “Diameters” page of the PDXdata app]

As shown in the next figure, the “Copy Number” and “RNA” tabs allow the user to plot correlations between copy number or RNA expression and treatment efficacy, as measured by the change in the slope of linear growth, for individual treatments (provided data is available for the selected treatment).
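The kind of correlation plotted on these tabs can be computed in a few lines; in this sketch the numbers are invented, and delslope, the treatment-effect measure, is defined in the “Statistics” paragraph below:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical inputs: one copy-number value per PDX model for a selected gene,
# and the corresponding treatment effect (delslope) for the same models.
copy_number = np.array([1.8, 2.0, 2.4, 3.1, 4.0, 2.2])
delslope = np.array([-0.020, -0.010, 0.005, 0.030, 0.050, -0.015])

r, p = pearsonr(copy_number, delslope)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```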

Finally, the “Statistics” page plots histograms of quantities derived from the linear models. These include the intercept, slope, and sigma for the linear fit to each time series; the difference in slope between the treated and untreated cases (delslope); the growth from initial to final time of the linear fit to the untreated case (lingr); and the difference delgr = diamgr − lingr between the growth under treatment (diamgr) and lingr, which measures diameter loss due to drug.
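In code, these derived quantities could look something like the following (synthetic numbers; I take diamgr to be the diameter growth under treatment over the same period, which is my reading of the definitions above):

```python
import numpy as np

days = np.array([0, 7, 14, 21, 28])
d_untreated = np.array([6.5, 7.4, 8.2, 9.1, 10.0])  # diameters, mm
d_treated = np.array([6.4, 6.8, 7.1, 7.3, 7.6])

slope_un, intercept_un = np.polyfit(days, d_untreated, 1)
slope_tr, intercept_tr = np.polyfit(days, d_treated, 1)

span = days[-1] - days[0]
delslope = slope_tr - slope_un  # treated minus untreated slope
lingr = slope_un * span         # untreated linear-fit growth, initial to final time
diamgr = slope_tr * span        # growth under treatment over the same period
delgr = diamgr - lingr          # negative values indicate diameter loss due to drug
print(delslope, lingr, delgr)
```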

This app is very much a work-in-progress, and at the moment is primarily a way to browse, view and plot the data. We will add more functionality as it becomes available.

Hierarchical oncology combination therapy: which level leads to success?

The traditional preclinical combination experiment in Oncology for two drugs A and B is as follows. A cancer cell-line is exposed, for a fixed amount of time, to increasing concentrations of drug A alone, drug B alone, and various concentrations of the combination. That is, we determine what effect drugs A and B have as monotherapies, which subsequently helps us to understand what the combination effect is. There are many articles describing how mathematical/computational models can be used to analyse such data and possibly predict the combination effect using information on the monotherapy agents alone. Those models can be based on mechanism at the pathway or phenotype level (see CellCycler for an example of the latter), or they can be machine-learning approaches. We shall call combinations at this scale cellular, as they are mainly focussed on analysing combination effects at that scale.
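The simplest of these null models predicts the combination effect from the monotherapy effects alone. Bliss independence is the standard example (a generic reference model, not necessarily the one used in the articles alluded to above):

```python
def bliss_expected(fa, fb):
    """Bliss independence: expected fractional effect of the combination,
    given fractional effects fa and fb (each in [0, 1]) of the monotherapies."""
    return fa + fb - fa * fb

# If drug A alone kills 40% of cells and drug B alone kills 30%,
# the Bliss-expected combination kill is 58%.
print(bliss_expected(0.40, 0.30))  # observed kill above 0.58 suggests synergy,
                                   # below it antagonism, at the cellular scale
```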

What other scales are there? We know that human cancers contain more than one type of cell population, so the next scale up from the cellular level is the tissue level. At this level we may have populations of cells with distinct genetic backgrounds, either within one tumour or across multiple tumours within one patient. Here we may find, for example, that drug A kills cell type X and drug B doesn’t, while drug B kills cell type Y and drug A doesn’t. The combination can then be viewed as a cell-population enrichment strategy: it is still effective even though the two drugs do not interact in any way.

Traditional drug combination screens, as described above, are not designed to explore these types of combinations. There is another scale which is probably even less well known: the human population scale …

A typical human clinical combination trial in Oncology involves combining a new drug B with an existing treatment A and comparing that to A alone. A third arm looking at drug B alone is unlikely, because if an existing treatment is known to have an effect then it is unethical to withhold it. Yet unless one knows what effect the new drug B has on its own, it is difficult to assess what the effect of the combination is. Indeed, the combination may simply enrich the patient population: if drug A shrinks tumours in patient population X and drug B doesn’t, while drug B shrinks tumours in patient population Y and drug A doesn’t, then a trial containing both X and Y will still show a combination effect greater than drug A alone.
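A toy calculation makes the enrichment point concrete; the same arithmetic applies at the tissue scale of the previous section (all numbers are invented for illustration):

```python
# Toy trial: half the patients are type X (respond only to A),
# half are type Y (respond only to B).
p_X, p_Y = 0.5, 0.5
resp_A = {"X": 0.5, "Y": 0.0}  # response rate to drug A alone
resp_B = {"X": 0.0, "Y": 0.5}  # response rate to drug B alone

# Under independent action, a patient on A+B responds if they would
# respond to the better of the two drugs for their type.
arm_A = p_X * resp_A["X"] + p_Y * resp_A["Y"]
arm_AB = (p_X * max(resp_A["X"], resp_B["X"])
          + p_Y * max(resp_A["Y"], resp_B["Y"]))
print(arm_A, arm_AB)  # 0.25 vs 0.5: the combination doubles the response rate
                      # with no drug-drug interaction in any individual patient
```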

Many people reading this blog are probably aware that when we see positive combination effects in the clinic, they could be due to this type of patient enrichment. At a meeting in Boston in April of this year, a presentation by Adam Palmer suggested that two thirds of marketed combinations in Oncology can be explained in this way; see the second half (slide 27 onwards) of the presentation here. This includes current immunotherapy combinations.

We can now see why combinations in Oncology can be viewed as hierarchical. How widely the research community appreciates this is unknown. Indeed, one of the latest challenges from CRUK (Cancer Research UK), see here, suggests that even they may not be fully aware of it: that challenge focusses solely on the well-trodden path of the first, cellular level described here. Which level is the best to target? Is it easier to target the tissue and human-population levels than the cellular one? Only time will tell.