
Complexity v Simplicity, the winner is?

I recently published a letter with the above title in the journal Clinical Pharmacology and Therapeutics; unfortunately it’s behind a paywall, so I will briefly take you through the key point raised. The letter describes a specific prediction problem around drug-induced cardiac toxicity mentioned in a previous blog entry (Mathematical models for ion-channel cardiac toxicity: David v Goliath). In short, what we show in the letter is that a simple model using only addition and subtraction (pre-school mathematics) performs just as well on a given prediction problem as a multi-model approach that combines three large-scale models, consisting of hundreds of differential equations, with machine learning (university-level mathematics and computation)! The addition/subtraction model gave a ROC AUC of 0.97, very similar to the multi-model/machine-learning approach, which gave a ROC AUC of 0.96. More detail on the analysis can be found on slides 17 and 18 of this presentation, A simple model for ion-channel related cardiac toxicity, which was given at an NC3Rs meeting.
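
To give a flavour of how such a comparison is scored (without reproducing the ion-channel data, which stays with the letter), here is a purely illustrative Python sketch: a made-up additive score and a made-up ‘complex’ score are compared on synthetic labels using ROC AUC. The inputs, the scoring rules, and the numbers that come out are stand-ins, not the models or results from the letter.

```python
# Illustrative only: compares a simple additive score with a stand-in for a
# more complex model's output on the same binary task via ROC AUC.
# The data below are synthetic, not the ion-channel data from the letter.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical per-compound inputs: block of three channels (0 = no block, 1 = full block)
n = 200
block_a = rng.uniform(0, 1, n)
block_b = rng.uniform(0, 1, n)
block_c = rng.uniform(0, 1, n)

# Synthetic "true" toxicity labels, generated so both models have something to detect
risk = block_a - 0.5 * block_b - 0.5 * block_c
labels = (risk + rng.normal(0, 0.2, n) > 0).astype(int)

# Simple model: addition and subtraction of the measured blocks
simple_score = block_a - block_b - block_c

# Stand-in for a complex model's output (e.g. a machine-learning score)
complex_score = risk + rng.normal(0, 0.15, n)

print("Simple model ROC AUC: ", round(roc_auc_score(labels, simple_score), 2))
print("Complex model ROC AUC:", round(roc_auc_score(labels, complex_score), 2))
```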

The result described in the letter and presentation adds further weight to the evidence within that field that simple models perform just as well as complex approaches for a given prediction task.

When is a result significant?

The standard way of answering this question is to ask whether the effect could reasonably have happened by chance alone (the null hypothesis). If not, then the result is declared to be ‘significant’. The usual threshold for significance is that a result as extreme as the one observed would have only a 5 percent chance of arising from purely random effects.
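
As a concrete (and entirely made-up) illustration of that recipe, the short Python sketch below runs a two-sample t-test on simulated data and applies the conventional 5 percent cut-off; the group sizes and effect size are arbitrary choices.

```python
# Minimal sketch of the standard significance recipe on made-up data:
# compute a p-value under the null hypothesis of "no difference" and
# compare it with the conventional 5 percent threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(loc=0.0, scale=1.0, size=50)   # e.g. untreated group
treated = rng.normal(loc=0.5, scale=1.0, size=50)   # e.g. treated group with a real effect

t_stat, p_value = stats.ttest_ind(treated, control)

print(f"p-value = {p_value:.4f}")
if p_value < 0.05:
    print("Declared 'significant' at the 5 percent level")
else:
    print("Not significant at the 5 percent level")
```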

This sounds sensible, and has the advantage of being easy to compute, which is perhaps why statistical significance has been adopted as the default test in most fields of science. However, there is something a little confusing about the approach: it asks whether, assuming the opposite of the theory (the null hypothesis), the observed data would be unlikely. But what we want to know is whether the theory is true, given the data. And that isn’t the same thing.

As just one example, suppose we have lots of data and after extensive testing of various theories we discover one that passes the 5 percent significance test. Is it really 95 percent likely to be true? Not necessarily – because if we are trying out lots of ideas, then it is likely that we will find one that matches purely by chance.
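
A quick simulation (illustrative only, not taken from any of the studies discussed here) makes the point: below, a hundred ‘theories’ are tested even though every one of them is pure noise, yet a handful still clear the 5 percent bar by chance.

```python
# Illustrative simulation: test many "theories" that are all false (pure noise)
# and count how many nonetheless pass the 5 percent significance test by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_theories = 100
false_positives = 0

for _ in range(n_theories):
    # Two groups drawn from the same distribution: the null hypothesis is true
    a = rng.normal(size=30)
    b = rng.normal(size=30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_theories} null 'theories' came out significant")
# Roughly 5 in 100 is expected on average, so scanning enough ideas
# essentially guarantees a few spurious 'discoveries'.
```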

While there are ways of working around this within the framework of standard statistics, such as correcting for the number of comparisons made, the problem is glossed over in the vast majority of textbooks and articles. It is typical, for example, to say that a result is ‘significant’ without any discussion of whether it is plausible in a more general sense (see our post on model misuse in cardiac modeling).
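
One such workaround is to adjust the p-values for the number of tests carried out. The sketch below applies the Bonferroni and Holm corrections from the statsmodels library to a batch of simulated null results; it is meant only as an illustration of the general idea, not as the specific fix alluded to above.

```python
# One standard workaround: adjust p-values for the number of tests performed.
# Sketch using the Bonferroni and Holm corrections from statsmodels.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)

# Twenty independent tests in which the null hypothesis is true
p_values = []
for _ in range(20):
    a = rng.normal(size=30)
    b = rng.normal(size=30)
    p_values.append(stats.ttest_ind(a, b).pvalue)

for method in ("bonferroni", "holm"):
    reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(f"{method}: {reject.sum()} of {len(p_values)} tests still significant")
```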

The effect is magnified by publication bias – try out multiple theories, find one that works, and publish. This might explain why, according to a number of studies (see for example here and here), much scientific work proves impossible to replicate – a situation which the scientist Robert Matthews calls a ‘scandal of stunning proportions’ (see his book Chancing It: The laws of chance – and what they mean for you).

The way of Bayes

An alternative approach is provided by Bayesian statistics. Instead of starting with the assumption that the data are random and running somewhat weird significance tests on null hypotheses, it just tries to estimate the probability that a model is right (i.e. the thing we actually want to know), given the data and the wider context.
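
To put some (invented) numbers on the difference, suppose that only 10 percent of the theories we test are actually true, that our experiments have 80 percent power, and that we use the usual 5 percent false-positive rate. Bayes’ theorem then gives the probability that a theory flagged as ‘significant’ is really true – and it is nowhere near 95 percent.

```python
# Worked example (illustrative numbers): what Bayes' theorem says about the
# probability that a 'significant' result reflects a true theory.
prior_true = 0.10   # assumed fraction of tested theories that are actually true
power = 0.80        # assumed probability of a significant result when the theory is true
alpha = 0.05        # probability of a significant result when the theory is false

# P(significant) = P(sig | true) P(true) + P(sig | false) P(false)
p_significant = power * prior_true + alpha * (1 - prior_true)

# Bayes' theorem: P(true | significant)
posterior_true = power * prior_true / p_significant

print(f"P(theory true | significant result) = {posterior_true:.2f}")
# With these assumptions the answer is 0.64, well short of 95 percent.
```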

This is harder to calculate, however, for two reasons. One is that, because the approach treats new data as updating our confidence in a theory, it requires that we have some prior estimate of that confidence, which may of course be hard to quantify – though the problem goes away as more data become available. (To see how the prior can affect the results, see the BayesianOpionionator web app.) The other is that the approach does not treat the theory as fixed, which means we may have to evaluate probabilities over whole families of theories, or at least over a range of parameter values. However, this is less of an issue today, since the simulations can be performed automatically using fast computers and specialised software.
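
Coming back to the first point, here is a minimal sketch of how the prior washes out as data accumulate, using a simple conjugate Beta prior on a success probability (a toy example of my own, nothing to do with the web app’s internals): two very different priors end up giving almost the same answer once enough data arrive.

```python
# Minimal sketch of Bayesian updating with a conjugate Beta prior on a
# success probability (a stand-in for 'confidence in a theory').
# Two very different priors converge once enough data arrive.
from scipy import stats

data_sizes = [0, 10, 100, 1000]
true_rate = 0.7  # assumed underlying success rate generating the data

for n in data_sizes:
    successes = round(true_rate * n)  # idealised data for clarity
    # Sceptical prior Beta(1, 9) vs enthusiastic prior Beta(9, 1)
    sceptic = stats.beta(1 + successes, 9 + n - successes).mean()
    optimist = stats.beta(9 + successes, 1 + n - successes).mean()
    print(f"n = {n:4d}: sceptic's estimate = {sceptic:.2f}, optimist's = {optimist:.2f}")
```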

Perhaps the biggest impediment, though, is that when results are passed through the Bayesian filter, they often just don’t seem all that significant. While that may be bad for publications and media stories, it is surely good for science.