The title of this blog-post refers to a meeting I attended recently, sponsored by the UK QSP network. Some of you may not be familiar with the term QSP (quantitative systems pharmacology); put simply, it describes the application of mathematical and computational tools to pharmacology. As the title suggests, the meeting was on the subject of model reduction.
The meeting was split into 4 sessions entitled:
- How to test if your model is too big?
- Model reduction techniques
- What are the benefits of building a non-identifiable model?
- Techniques for over-parameterised models
I was asked to present a talk in the first session; see here for the slide-set. The talk was based on a topic that has been mentioned on this blog a few times before: ion-channel cardiac toxicity prediction. The latest talk goes through how 3 models were assessed on their ability to discriminate between non-cardio-toxic and cardio-toxic drugs across the 3 data-sets currently available. (A report providing more details is being put together and will be released shortly.) The 3 models were: a linear combination of block (the simple model – blog-post here) and 2 large-scale biophysical models of a cardiac cell, one termed the "gold-standard" (endorsed by the FDA and other regulatory agencies via the CiPA initiative; Colatsky et al., 2016) and the other forming a key component of the "cardiac safety simulator" (Glinka and Polak, 2015).
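For readers unfamiliar with what a "linear combination of block" model looks like in practice, here is a minimal, hypothetical sketch: a logistic score over the fractional block of three cardiac ion channels. The channel choices, weights, and example inputs below are illustrative assumptions for exposition only, not the actual model or data from the talk.

```python
import math

def risk_score(herg_block, nav_block, cav_block):
    """Linear combination of fractional channel block (0 = no block, 1 = full
    block), passed through a logistic link to give a probability-like score.
    Weights are purely illustrative: hERG block raises risk; sodium and
    calcium block partially offset it."""
    w_herg, w_nav, w_cav, bias = 4.0, -1.5, -1.5, -1.0
    z = w_herg * herg_block + w_nav * nav_block + w_cav * cav_block + bias
    return 1.0 / (1.0 + math.exp(-z))

def classify(herg_block, nav_block, cav_block, threshold=0.5):
    """Binary call: flag a compound when its score crosses the threshold."""
    label = "cardio-toxic" if risk_score(herg_block, nav_block, cav_block) >= threshold else "non-toxic"
    return label

# Synthetic examples (not real compounds):
print(classify(0.8, 0.05, 0.05))  # selective hERG blocker -> cardio-toxic
print(classify(0.5, 0.6, 0.6))    # balanced multichannel blocker -> non-toxic
```

In a real application the weights would be fitted to labelled drug data (e.g. by logistic regression); the point of the sketch is only that the whole model is a handful of parameters, in contrast to a biophysical cardiac-cell model with dozens of state variables.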
The results showed that the simple model does just as well as – and on certain data-sets out-performs – the leading biophysical models of the heart; see slide 25. Towards the end of the talk I discussed what drivers exist for producing such large models, and whether we should invest further in them given the current evidence; see slides 24, 28 and 33. How does this example fit into the sessions of the meeting?
In answer to the first session title, how to test if your model is too big, the answer is straightforward: if the simpler/smaller model outperforms the larger model, then the larger model is too big. Regarding the second session, on model reduction techniques: in this case there is no need for them. You could argue, from the results discussed here, that instead of pursuing model reduction techniques we may want to consider building smaller/simpler models to begin with. On to the 3rd session, on the benefits of building a non-identifiable model: it's clear that in this situation there was no benefit to developing a non-identifiable model. Finally, regarding techniques for over-parameterised models: the lesson learned from the cardiac toxicity field is simply not to build these sorts of models for this question.
Some people at the meeting argued that the type of model depends on the question. This is true, but does the scale of the model depend on the question?
If we now go back to the title of the meeting, is there a case for model reduction? Within the field of ion-channel cardiac toxicity the response would be: Why bother reducing a large model when you can build a smaller and simpler model which shows equal/better performance?
Of course, as Green and Armstrong (2015) point out within their (skeptical) framework (see slide 28), one reason for model reduction is that for researchers it is the best of both worlds: they can build a highly complex model, suitable for publication in a highly-cited journal, and then spend more time extracting a simpler version to support client plans. Maybe those journals should look more closely at their selection criteria.
Colatsky T. et al. The Comprehensive in Vitro Proarrhythmia Assay (CiPA) initiative — Update on progress. Journal of Pharmacological and Toxicological Methods (2016) 81, pages 15-20.
Glinka A. and Polak S. QTc modification after risperidone administration — insight into the mechanism of action with use of the modeling and simulation at the population level approach. Toxicology Mechanisms and Methods (2015) 25 (4), pages 279-286.
Green K. C. and Armstrong J. S. Simple versus complex forecasting: The evidence. Journal of Business Research (2015) 68 (8), pages 1678-1685.