
Cytocentric Visionaries: Elizabeth Iorns

Measuring Scientific Reproducibility: The Only Way to Check It Is to Run the Studies


Elizabeth Iorns is a Co-Founder and CEO of Science Exchange, a scientific service provider network and outsourcing management software platform.

Here Alicia Henn, CSO of BioSpherix, interviews Dr. Iorns about Science Exchange and how it is working to improve scientific reproducibility. The transcript has been edited for length.

Alicia Henn: We see you as a Cytocentric Visionary because you are working to measure scientific reproducibility. What made you want to establish Science Exchange?

Elizabeth Iorns: The underlying mission of Science Exchange is to improve the quality and efficiency of scientific research. We have pre-qualified and pre-contracted over 3,000 contract research organizations and manufacturers, academic core facilities, and government facilities. These are specialized providers of scientific services on a fee-for-service basis. They’re available through our software platform, which has a search interface and a project management component. All of the intellectual property belongs to the requester, and there’s a confidentiality agreement in place. This drives significant efficiency, because an organization no longer has to manage hundreds of different service provider contracts. They have a central platform for managing their outsourced R&D.


AH: Why would somebody go to an outside research organization rather than just do the study themselves?

EI: Organizations are increasingly moving to an outsourced model. In the pharmaceutical industry, around 40% of R&D dollars are spent at external facilities. There are two major factors that drive this. One is cost efficiency: working with external partners means that you have somebody on demand, and you don’t have to bring expertise and instrumentation in-house. That’s obviously very cost-effective.

The second reason is that research has become much more specialized. To work with the most innovative or highest-quality researchers for specific types of experiments, using an external network is often a better option than trying to bring everything in-house. So as organizations shift to this outsourced model, it has a lot of benefits.

However, compiling qualifications and contracting often takes months. This drives inefficiency and transaction costs into outsourcing. Science Exchange removes that downside with a single central platform, so the company can track everything that’s happening.


AH: How does this improve scientific reproducibility?

EI: We’ve worked on many of the best-known projects aimed at addressing reproducibility by trying to actually understand the replication rate of biomedical studies. Because the Science Exchange network is a very efficient way to identify a particular service provider who can conduct a certain experiment, this created a unique opportunity. Science Exchange has run the Reproducibility Project: Cancer Biology studies in collaboration with the Center for Open Science. We’ve also run many private replication studies for pharmaceutical companies.

The actual definition of a reproducible result is that you obtain a similar result through a replication study, but very few replication studies are ever actually conducted or published. One of the arguments against doing replication studies was the time and cost involved. As we’ve run these projects, it has become apparent that doing replication studies is actually very efficient, because we can run them very cost-effectively. The first five studies published from the Reproducibility Project cost $27,200 each.


AH: Is there an advantage for academic researchers to use this platform?

EI: Most researchers don’t really have any incentive to run these studies. There are more incentives for venture capitalists who might be investing in a biotech company, or pharmaceutical companies who are considering building a program based on published results. They have a large incentive to check that a result is robust and can be reproduced before they invest.

However, if anybody is to actually examine the rates of reproducibility and design a system for measuring reproducibility through replication studies, it will have to come from the funders.


AH: Why should big funders like NIH or NSF be willing to help pay for replication studies?

EI: There’s a significant return on investment if you’re only spending $20,000–30,000 to replicate key experimental results, but we have not seen funders really willing to fund replication studies. This is despite intense scrutiny and focus on irreproducibility, particularly at the NIH.

The NIH has actively said that they are going to fund education programs and a number of different initiatives to attempt to improve the rate of reproducibility. However, the only way to scientifically determine the rate of reproducibility is to conduct replication studies. So without that measurement, I’m not actually sure how they will be able to determine whether any of their interventions have been successful.



AH: So individual researchers aren’t coming to you to replicate studies?

EI: We definitely haven’t seen individual researchers invest funding in replication studies. It would be quite difficult for them to do that, given the funding environment and what is included in a grant application. If you use grant funding for those studies, you’re not using the funding for what you originally wrote into your exploratory research grant. Until the funders actively say, “Here is the money for replication studies,” I don’t think researchers will be able to conduct them.

I definitely struggle with the fact that Science Exchange has now run and published more replication studies than the NIH. There is a big opportunity for funders to actually invest in running these types of studies, given how much interest there is in improving the quality and reproducibility of published work.


AH: So if funders could set aside separate funds specifically for replication studies, this would be much more doable for the average researcher?

EI: Exactly. For the average researcher, there is a real benefit to doing things like validating reagents, recording full protocols, and publishing raw data. These things are kind of obvious. They definitely improve not only our ability to replicate published results but also to build upon those results.


AH: Every time the results from a couple of these reproducibility studies are released, it seems like the media jumps on that one set and wants to make a judgment. Thumbs up or down, is science okay or not? Do you get that impression?

EI: Yeah, definitely. The project team has really tried to stay away from that. The goal of the Reproducibility Project: Cancer Biology is to build the largest public data set of replication studies so that we can actually understand more about how difficult it is to reproduce published results, and then have a data set that other people can build upon.

Focusing on individual results and saying, “This result is right or wrong,” is absolutely not the goal of the project. Even the way we interpret replication studies is still being determined. One of the things we’ve done, instead of comparing the two studies and saying one is right and one is wrong, is to combine the results that are obtained. As you sample from the population, you get one estimate of the effect; sample again, and you get another estimate. Then you can combine those estimates and get more confidence in the true effect size that you’re observing. That’s a much more useful way to look at the results.
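The combining Dr. Iorns describes is essentially a fixed-effect (inverse-variance) meta-analysis: each study's estimate is weighted by its precision, and the pooled estimate is more precise than either study alone. A minimal sketch in Python, with made-up numbers; the function name and effect values are illustrative, not taken from the Reproducibility Project:

```python
import math

def combine_estimates(effects, std_errors):
    """Fixed-effect (inverse-variance) pooling of independent effect estimates.

    Each estimate is weighted by 1 / SE^2, so more precise studies
    contribute more. The pooled SE is always smaller than any single
    study's SE, reflecting the added confidence from combining.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical original study and its replication: (effect, standard error)
original = (1.20, 0.40)
replication = (0.60, 0.30)

effect, se = combine_estimates(
    [original[0], replication[0]],
    [original[1], replication[1]],
)
print(f"pooled effect = {effect:.2f} +/- {se:.2f}")  # → pooled effect = 0.82 +/- 0.24
```

Note that the pooled standard error (0.24) is smaller than either study's, which is the point of combining rather than declaring one study right and the other wrong.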


AH: Is the project fulfilling its initial goals?

EI: The project is incredibly interesting, actually. We learn more and more as the results come in. One particularly interesting question is what happens if you do the experiment exactly as originally written in the protocol and publish all of the results. What do those results really look like? They look quite different from most published papers.

This tells you that there's a lot of variability in biological assays, which is not unexpected, but people don’t really talk about it. If I ran a sample in an assay in my lab and then you ran it in your lab, would you expect the results of the actual control to be very similar? The answer is “No.” That, in itself, tells us that our assays have more variability than we may be aware of, and as a result, sometimes the effects we see might not reflect a true effect. They might instead reflect the variability of the assay.

The way we publish results is often framed as a story: “We did this because we saw this result.” I'm not sure that’s particularly helpful, because when you tell a story, you leave out information. At Science Exchange, we publish everything that we've collected. That’s obviously not the case for research laboratories; the vast majority of their experiments are never published.

So the actual data sets are interesting because they give you a real snapshot of what it is like when you just run experiments. How variable are the results? What issues come up during data collection? All of that is available from the project.

A publication is really a story rather than a way to communicate exact results from a laboratory. That’s a big difference, and it is itself part of the issue with publication: results that don’t fit the story are removed, but those results might be really important for actually understanding the true phenomenon.


AH: So you are getting interesting results and everyone could benefit from these replication studies, but no one wants to be the one to fund them?

EI: Yeah, well, it’s more that nobody wants to fund it. Funders would like researchers to work on the exciting results. Researchers want to be the first to publish, but if 80% of the work can’t be translated because it’s not reproducible, then it’s useless.

We've talked a lot about reproducibility issues for quite a long time now, but we don’t have any systematic investigation of the rate of reproducibility. The only way to check reproducibility is to run replication studies.

We, particularly the funders, have an obligation to try to improve our ability to translate the results that are being generated. They have to look at the return on their investments and how best to use those funds to move the needle on taking these advances forward. It makes me frustrated that they are unwilling to invest in things like replication studies that would clearly bring enormous benefit to everyone.


AH: Thank you so much for your time and insight. We will do our best here to keep these issues in the forefront and look forward to seeing more from your efforts as well.           


If you would like to be featured in our Cytocentric Visionary Series, please contact us. We would love to hear about your work.