Publication Bias and Outcome Switching: Threats to Evidence Assessment

Publication bias is a growing problem in evidence-based practice. In the hierarchy of evidence, systematic reviews and meta-analyses sit at the top of the pyramid because they are regarded as the most rigorous form of evidence for clinical decision-making. But publication bias can color the results of those reviews and meta-analyses in ways that are not easily seen or understood.

Publication bias occurs when the results of published trials differ from those of unpublished trials. But, wait, you ask, what do you mean by unpublished trials? Consider this example: Perhaps an institution conducts a trial on a clinical question but does not enroll enough participants to adequately power the study. Or perhaps the study examines a new investigational drug that a company has just spent $100,000,000 developing, but the results show no difference between the new drug and the existing standard-of-practice medication, or much higher rates of side effects. Because of these results, the company may decide not to publish the study. There is also the possibility that a journal editor will simply reject a paper because it does not provide exciting findings, which can happen when no difference is found between the study drug and the comparison.

Please consider the ramifications. Say we are conducting a systematic review, but we cannot access data that run counter to the data that are easier to find. Our assessment of the literature will then be skewed in favor of the more accessible data on the study drug or intervention; that is, it will not depict the reality of clinical effectiveness. And if we are doing a meta-analysis, those hard-to-find data never enter our aggregate dataset, so our estimate of the effect size will also be skewed in favor of the experimental intervention, as the small sketch below illustrates. This is a real and highly significant issue.
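
To make that arithmetic concrete, here is a rough sketch in Python. The trial numbers are entirely made up, and the pooling is a simple inverse-variance (fixed-effect) average rather than any particular meta-analysis package; the point is only to show how leaving unpublished null trials out of the pool inflates the apparent effect.

    # Toy illustration: how missing null trials skew a pooled estimate.
    # All numbers are hypothetical; the pooling is a simple
    # inverse-variance fixed-effect average, just to show the arithmetic.

    # Each trial: (mean difference favoring the drug, variance of that estimate)
    published = [(0.45, 0.04), (0.38, 0.05), (0.50, 0.06)]   # "positive" trials we can find
    unpublished = [(0.02, 0.05), (-0.05, 0.04)]              # null trials left in the file drawer

    def pooled_effect(trials):
        """Inverse-variance weighted average of the trial effect estimates."""
        weights = [1.0 / var for _, var in trials]
        weighted = [w * effect for w, (effect, _) in zip(weights, trials)]
        return sum(weighted) / sum(weights)

    print("Published trials only:", round(pooled_effect(published), 2))                 # about 0.44
    print("All trials:           ", round(pooled_effect(published + unpublished), 2))   # about 0.25

With these invented numbers, the pooled effect roughly halves once the file-drawer trials are included; the specific figures do not matter, but the direction of the bias does.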

Shining Light on Unpublished Trials

One answer to the problem is to develop clinical trial registries. This has been done, and chiropractic clinical trials are registered in the https://clinicaltrials.gov/ database. Ideally, every trial would be registered. The idea is that once a trial is registered, there is at least a record that a clinician can consult to see that it was actually carried out, and we could then seek information about its results. But even in the United States, not all trials wind up registered.

Ben Goldacre has been quite outspoken about this; his TEDMED talk on the topic is well worth watching.

I raise this issue here because it is a threat to evidence-based practice. It means that despite our best efforts, we do not get access to all the information we need to make an informed decision; some of that information is hidden from us for benign, or not-so-benign, reasons.

Goldacre leads the AllTrials campaign (www.alltrials.net), which is designed to let people know that this problem may affect patient care. The AllTrials website states that about half of all clinical trials have never been reported. Goldacre has asked for trial registration, trial summary results and a clinical study report for every study that is completed, and he wants all of these placed in the public domain. This would certainly increase transparency in publication and in data analysis.

Goldacre also leads a second effort, known as COMPARE (http://compare-trials.org/). This project tracks outcomes in clinical trials, looking for what are termed switched outcomes. This is a subtle problem, easily missed by readers not deeply steeped in research methodology. In a trial where the outcomes are switched, the authors report findings, but not the specific outcomes outlined in the trial registry or directly tied to the research question. As you can imagine, studies collect reams of data, and much of it can be interesting to look at. Only some of it, however, is designed to answer the research question, and reporting whichever outcomes happen to look best is a problem in its own right, as the sketch below shows.
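
To see why that matters, here is a rough simulation sketch in Python, with invented parameters: a treatment that does nothing, measured on 20 outcomes, each tested at the usual 5 percent significance threshold. If authors are free to highlight whichever outcome "worked," a spurious finding is more likely than not.

    # Toy simulation: outcome switching under a treatment with NO real effect.
    # Hypothetical setup: 20 outcomes per trial, 5% false-positive rate each.
    import random

    random.seed(1)
    TRIALS = 10_000      # simulated trials
    OUTCOMES = 20        # outcomes measured per trial
    ALPHA = 0.05         # conventional significance threshold

    trials_with_a_hit = 0
    for _ in range(TRIALS):
        # Under the null, each outcome is "significant" with probability ALPHA.
        if any(random.random() < ALPHA for _ in range(OUTCOMES)):
            trials_with_a_hit += 1

    print(f"Null trials with at least one 'significant' outcome: "
          f"{trials_with_a_hit / TRIALS:.0%}")   # roughly 1 - 0.95**20, about 64%

This is why the outcomes named in the registry, chosen before the data are seen, are the ones that matter.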

COMPARE looked at every trial published in five top medical journals to see whether the outcomes noted in the trial registry were the ones reported in the final paper. It examined 67 trials. Only nine reported their outcomes perfectly. Three hundred pre-specified outcomes were not reported at all, and 357 new outcomes were added. In essence, many papers are selecting different outcomes than the ones they set out to measure, perhaps because those outcomes produced better results than the ones the study was designed around.

As a follow-up, the COMPARE group wrote to the medical journals in question, raising the issue in letters to the editors. In some cases, editors admitted they had made a mistake. In others, they admitted nothing, and the arguments continue.

Again, I bring this up because you need to know what affects the information you read. You need to understand the threats to that information and take them into account, because for you the issue comes down to making a clinical decision about a patient's care.

This is important, don’t you think?

Dr. Lawrence is senior director for the Center for Teaching and Learning and Continuing Education at Parker University, Dallas, Texas. He also serves as chairman of the ACA Editorial Advisory Board.