This blog entry will focus on a rather long-standing debate around model complexity and predictivity for a specific prediction problem from drug development. A typical drug project starts off with thousands of drugs for a certain idea. All but one of these drugs are eventually weeded out through a series of experiments, which explore safety and efficacy, with the final drug being the one that enters human trials. The question we will explore concerns a toxicity experiment performed rather early in the development (weeding-out) process, which determines the drug's effect on the cardiac system.
Many years of research have identified certain proteins, ion-channels, which, if affected by a drug, can lead to dire consequences for a patient. In simple terms, ion-channels allow ions, such as calcium, to flow in and out of a cell. Drugs can bind to ion-channels and disrupt their ability to function, thus affecting the flow of ions. The early experiment we are interested in essentially measures how many ions flow across an ion-channel as the amount of drug increases. The cells used in these experiments are engineered to over-express the human protein of interest and so do not reflect a real cardiac cell. The experiment is largely automated and so allows one to screen thousands of drugs a year against certain ion-channels. The output of the system is an IC50 value: the concentration of drug needed to reduce the flow of ions across the ion-channel by 50 percent.
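To make the IC50 idea concrete, here is a minimal sketch of how such a value is typically extracted from a concentration-response screen by fitting a Hill-type curve. The equation form, concentrations and responses below are illustrative assumptions, not the actual assay analysis.

```python
# A minimal sketch of extracting an IC50 from a concentration-response screen.
# The Hill equation form and the example numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, hill_coef):
    """Fraction of ion flow remaining at a given drug concentration."""
    return 1.0 / (1.0 + (conc / ic50) ** hill_coef)

# Hypothetical screen: fraction of current remaining at each concentration (uM)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
response = np.array([0.97, 0.90, 0.72, 0.45, 0.20, 0.08])

(ic50_est, hill_est), _ = curve_fit(hill, conc, response, p0=[1.0, 1.0])
print(f"Estimated IC50: {ic50_est:.2f} uM, Hill coefficient: {hill_est:.2f}")
```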
A series of IC50 values is generated for each drug against a number of ion-channels. (We are actually only interested in three.) The reason a large screening effort is made is that we cannot test all the compounds in an animal model, nor can we take all of them into man! So we can't measure the effect of these drugs in real cardiac systems, but we can measure their effect on certain ion-channel proteins that are expressed in the cardiac system we are interested in. The question is then: given a set of IC50 values against certain ion-channels for a particular drug, can we predict how that drug will affect a cardiac system?
As mentioned earlier, drug development involves performing a series of experiments over time. The screening experiment described above is one of many used to look at cardiac toxicity. The next experiment in the pipeline, which could occur one or maybe two years later, explores the remaining drugs in an intact cardiac system. This could be a single cardiac cell taken from a dog, a portion of the ventricular wall, or something else entirely. After that, even fewer compounds are taken into dog studies before entering human trials. So the prediction question could relate to any one of these cardiac systems. The inputs to the prediction problem are the set of IC50 values, three in the cases we will look at, whereas the output, which we want to predict, is some measure taken from the cardiac systems described. A toy sketch of this framing is given below.
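Here is a minimal sketch of how that prediction problem can be laid out as a data-set, assuming (purely for illustration) three ion-channel IC50 columns per drug and one measured cardiac endpoint; the column names and numbers are made up.

```python
# Toy framing of the prediction problem: three IC50 inputs, one cardiac output.
# Column names and values are hypothetical.
import pandas as pd

data = pd.DataFrame({
    "ic50_channel_1_uM": [0.5, 12.0, 3.3, 40.0],
    "ic50_channel_2_uM": [8.0, 1.5, 22.0, 5.5],
    "ic50_channel_3_uM": [30.0, 7.0, 0.9, 15.0],
    "cardiac_endpoint":  [0.42, 0.10, 0.55, 0.05],  # e.g. a normalised change in some measure
})

X = data[["ic50_channel_1_uM", "ic50_channel_2_uM", "ic50_channel_3_uM"]]
y = data["cardiac_endpoint"]
```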
At this point some of you may be thinking: if we want to predict what will happen in a real cardiac system, why don't we build a virtual version of that system using a large mathematical (biophysical) model? Indeed, people have done this. However, others (especially those who follow this blog) might also be thinking: I have three inputs and one output, and given that we screen lots of these compounds, surely the relationship is not that difficult to figure out, so I can do something simpler and more cost-effective! Again, people have done this too. If I refer to the virtual system (which consists of >100 parameters) as Goliath and the simple model (3 parameters) as David, some of you can guess what the outcome is! A paper documenting the story in detail can be found here and the model used is available online here. I will just give a brief summary of the findings in the main paper.
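For a sense of scale, here is a minimal sketch of what a "David"-style three-parameter model could look like: a linear model with one weight per channel, applied to the fraction of each channel blocked at a reference concentration. This is an assumption for illustration, using the same toy numbers as the framing sketch above; the model in the paper may be parameterised differently.

```python
# A hypothetical three-parameter model: one weight per ion-channel,
# acting on the fraction of block at a reference concentration.
import numpy as np
from sklearn.linear_model import LinearRegression

def fraction_block(ic50, conc):
    """Standard 1 / (1 + IC50/C) transform: fraction of a channel blocked at concentration C."""
    return 1.0 / (1.0 + ic50 / conc)

# Hypothetical IC50 values (uM) for three channels across four drugs, plus a cardiac endpoint
ic50s = np.array([[0.5, 8.0, 30.0],
                  [12.0, 1.5, 7.0],
                  [3.3, 22.0, 0.9],
                  [40.0, 5.5, 15.0]])
endpoint = np.array([0.42, 0.10, 0.55, 0.05])

ref_conc = 1.0  # hypothetical reference concentration (uM)
X_block = fraction_block(ic50s, ref_conc)

model = LinearRegression(fit_intercept=False)  # exactly three parameters, one weight per channel
model.fit(X_block, endpoint)
print("Channel weights:", model.coef_)
```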
The data-sets explored in the article involve making predictions in both animal and human studies. Something noticeable about the biophysical models used in the original articles is that a different structural model was needed for each study. This was not the case for the simple model, which uses the same structure across all data-sets. Given that the simple model gave the same, if not better, performance than the biophysical models, a question arises: why does the biophysical modelling community need a different model for different studies? In fact, for two human studies, A and B, different human models were used. Why? The reason may be that the degree of confidence the people using these models have in them is actually quite low, hence the lack of consistency in the models used across the studies. Another issue, not discussed anywhere in the biophysical modelling literature, is the reproducibility of the data used to build such models. Given the growing skepticism about the reproducibility of preclinical data in science, this adds further doubt to the suitability of such models for industrial use.
Given the points raised here (as well as a previous blog entry highlighting the misuse of these models by their own developers), can the biophysical modelling community deliver a modelling solution that is both trustworthy and reliable? This is an important question, as regulatory agencies are now considering using these biophysical models, together with some quite exciting new experimental techniques, to change the way people assess the cardiac liability of a new drug.