Cancer Reproducibility Project

At a recent meeting at a medical school faculty, researchers were asked to nominate their favourite papers. One person, instead of nominating a paper, nominated a whole project website: the Reproducibility Project in Cancer Biology, see here. This person was someone who had left the field of systems biology to re-train as a biostatistician. In case you were wondering, it wasn't me! In this blog post we will take a look at the project, the motivation behind it and some of the emerging results.

The original paper setting out the aims of the project can be found here. The initiative is a joint collaboration between the Center for Open Science and Science Exchange. The motivation is likely to be obvious to many readers, but for those who are unfamiliar, it comes down to the fact that there are many incentives for producing exciting new results and far fewer for verifying previous discoveries.

The main paper goes into some detail about why results are difficult to reproduce. One of the key factors is openness, which is why this is the first replication effort of its kind to be extensively documented. The project chose cancer research largely because of earlier reports from Bayer and Amgen, see here and here. Those reports did not disclose exactly which replication studies had been attempted, hence the need for an open project.

The first step of a reproducibility project is to decide which articles to pick. The obvious choices are the ones that are cited the most and have received the most publicity, and indeed that is what the project did: it chose 50 of the most impactful articles in cancer biology published between 2010 and 2012. The replication studies were not conducted by a single experimental group. Instead the project used Science Exchange, see here, a network of over 900 contract research organisations (CROs), so finding people with the right skills was not a concern.

One clear advantage of using a CRO rather than an academic lab is that a CRO has no reason to be biased for or against a particular experiment. The other main advantage is time and cost, since scaling up is more efficient. All the experimental details and power calculations for the original studies were placed on the Open Science Framework, see here.
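For readers unfamiliar with power calculations, here is a minimal sketch of the kind of calculation involved, assuming a simple two-sample t-test design and a purely hypothetical effect size; the actual designs and effect sizes are the ones specified in the protocols on the Open Science Framework.

```python
# A minimal sketch of a power calculation for a replication study,
# assuming a two-sample t-test design.  The effect size below is purely
# hypothetical and is not taken from any of the original studies.
from statsmodels.stats.power import TTestIndPower

original_effect = 1.2   # hypothetical Cohen's d reported in an original paper
analysis = TTestIndPower()

# Solve for the number of samples per group needed to detect the
# original effect with 80% power at a 5% significance level.
n_per_group = analysis.solve_power(effect_size=original_effect,
                                   alpha=0.05,
                                   power=0.8,
                                   alternative='two-sided')
print(f"Samples per group required: {n_per_group:.1f}")
```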

So how successful has the project been? The first set of results is out and, as expected, they are mixed. If you would like to read the results in detail, follow this link here. The five replication studies were:

  1. BET bromodomain inhibition as a therapeutic strategy to target c-Myc.
  2. The CD47-signal regulatory protein alpha (SIRPa) interaction is a therapeutic target for human solid tumours.
  3. Melanoma genome sequencing reveals frequent PREX2 mutations.
  4. Discovery and preclinical validation of drug indications using compendia of public gene expression data.
  5. Co-administration of a tumour-penetrating peptide enhances the efficacy of cancer drugs.

Two of the studies, (1) and (4), were largely successful, and one, (5), was not. The other two replications were judged uninterpretable because the animal cancer models behaved oddly: the tumours either grew too fast or regressed spontaneously!

One of the studies deemed uninterpretable has already led to a clinical trial: the development of an anti-CD47 antibody. These early results highlight that there is a genuine problem with reproducing preclinical oncology experiments, though many already knew this. (Note that reproducibility here is judged on the size and direction of effects, not on reproducing p-values; a sketch of that kind of comparison is given below.) The big question is how to improve the reproducibility of research, and there are many opinions on the matter. Clearly one step is to reward replication studies, which is easier said than done in an environment where novel findings are the ones that lead to riches!
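To illustrate what judging on "size and direction of effects" means in practice, here is a minimal sketch that compares a standardised effect size (Cohen's d) between a hypothetical original experiment and its replication. All numbers are made up for illustration; this is not the project's actual statistical protocol.

```python
# Compare a replication to the original on effect size and direction
# rather than on p-values.  All data below are simulated placeholders.
import numpy as np

def cohens_d(treated, control):
    """Standardised mean difference (Cohen's d) between two groups."""
    nt, nc = len(treated), len(control)
    pooled_sd = np.sqrt(((nt - 1) * np.var(treated, ddof=1) +
                         (nc - 1) * np.var(control, ddof=1)) / (nt + nc - 2))
    return (np.mean(treated) - np.mean(control)) / pooled_sd

rng = np.random.default_rng(0)
# Hypothetical tumour-volume measurements: original study vs replication
original_treated    = rng.normal(50, 10, 8)
original_control    = rng.normal(70, 10, 8)
replication_treated = rng.normal(62, 10, 20)
replication_control = rng.normal(70, 10, 20)

d_orig = cohens_d(original_treated, original_control)
d_rep  = cohens_d(replication_treated, replication_control)

same_direction = np.sign(d_orig) == np.sign(d_rep)
print(f"Original d = {d_orig:.2f}, replication d = {d_rep:.2f}, "
      f"same direction: {same_direction}")
```

A replication can "succeed" in this sense even when its p-value crosses a different threshold than the original, which is why the project reports effect estimates rather than a simple significant/not-significant verdict.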