News from the Journals Library

Clinical trial data and reports linked for the first time

Date: 26 May 2016

The NIHR Journals Library is one of a small number of publishers that have begun depositing the clinical trial registration numbers associated with their publications to Crossref. These data appear in the new CrossMark box, which now integrates the linked clinical trials feature. In a section called “Clinical Trials”, the feature uses new metadata fields to link together all of the known publications that reference a particular clinical trial.

Daniel Shanahan of BioMed Central explains in his blog post: ‘By adapting the existing CrossMark standard to capture the TRN in the metadata of an article, the Linked Clinical Trials project has made it possible to link all publications related to an individual clinical trial using their DOIs. This means that readers will be able to pull a list of both clinical trials relating to that article, and all other articles related to those clinical trials, all at the click of a button.’
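
For readers who want to see this in action, the deposited registration numbers are exposed through Crossref's public REST API alongside the rest of an article's metadata. The short Python sketch below fetches a work's record and prints any attached trial numbers. The DOI here is a made-up placeholder, and the 'clinical-trial-number' field name is our reading of the API's JSON output, so treat both as assumptions rather than a definitive recipe.

    import json
    import urllib.request

    # Placeholder DOI: substitute the DOI of any article whose publisher
    # has deposited trial registration numbers with Crossref.
    doi = "10.3310/example"

    with urllib.request.urlopen(f"https://api.crossref.org/works/{doi}") as resp:
        work = json.load(resp)["message"]

    # "clinical-trial-number" is our assumed name for the metadata field;
    # each entry should identify a registry and a registration number.
    for trial in work.get("clinical-trial-number", []):
        print(trial.get("registry"), trial.get("clinical-trial-number"))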

Providing easy access to all publications resulting from a trial aids the transparent and full reporting of the research. It gives researchers greater context for the results and means that research can be more easily reproduced. The Linked Clinical Trials initiative is representative of the Journals Library's commitment to publishing comprehensive accounts of its research.

Daniel’s blog post, ‘The need for transparency in research reporting’, originally posted on the NIHR blog website, is reproduced here:

The need for transparency in research reporting

Science should be testable, falsifiable and reproducible. It’s a pithy one-liner, but what does it actually mean? Well, in practice, this means that the quality of research lies in the question it asks and the processes it uses, not in the outcome seen.

However, you certainly wouldn’t get that impression from reading the literature. Sensationalism often trumps science, with a marked bias towards significant results. This has led to many problems with the evidence base, including publication bias, selective reporting, HARKing (Hypothesising After the Results are Known) and significance chasing, among others. Rather than investigating carefully constructed hypotheses, this gives the impression of “shoot first and, whatever you hit, call it the target.”

There are those who might protest that this shouldn’t matter – after all you still saw what you saw – but the simple truth is that if you look long and hard enough, you’ll always find something. Science is a game of averages; even with a vanishingly small chance of a false positive, if you run enough tests you are all but guaranteed to see one. That is, unless anyone truly believes that per capita cheese consumption is causally related to bed sheet strangulations?
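
To make that concrete: if each individual test carries the conventional 5% false positive rate, the chance of at least one false positive across n independent tests is 1 - (1 - 0.05)^n, which already exceeds 99% by n = 100. A minimal Python illustration (the significance level and the test counts are arbitrary choices for the example):

    # Chance of at least one false positive across n independent tests,
    # each run at significance level alpha.
    alpha = 0.05

    for n in (1, 10, 20, 100):
        p_any = 1 - (1 - alpha) ** n
        print(f"{n:>3} tests: P(at least one false positive) = {p_any:.3f}")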

See-through science

Problems like publication bias and selective reporting reduce the usability of the literature, as other researchers are unable to replicate the results, meaning they cannot build on them or use them in practice. This rather defeats the purpose.

Pre-registration of clinical trials was first suggested in the 1980s as a possible way of counteracting these problems, and it is becoming increasingly commonplace, with evidence that it reduces publication bias. Even so, its uptake remains limited, with up to 67% of published clinical trials registered retrospectively.

Similarly, the impact of trial registration on selective reporting remains unclear. The COMPare project set out to evaluate selective reporting by comparing trials published in the top five medical journals against their trial registry records, and found a large number of discrepancies.

Methods in the madness

This issue is not limited to clinical research. John Ioannidis and colleagues set out to repeat a number of microarray gene expression analyses to see if they could replicate the results. For nearly 60% of the studies they could not, primarily because the data and methods were not available.

Traditionally, word limits in journals meant authors presented a précis of the methods used, with only summary data provided in the results. They often simply cited a previous paper where the technique was used, which cited a previous paper, and so on. But in order to replicate a study, a researcher needs to know exactly what you did; the smallest variation could lead to huge differences in the results.

Providing the raw data allows readers to see if the analysis and conclusions are accurate for that dataset, as well as presenting the opportunity for reuse and extension of the research results. However, it is the methods themselves that are fundamental to the usability of the research. Before a researcher can trust your conclusions, even if they seem supported by the data, they need to know that your approaches were methodologically sound.

Facilitating reproducibility

Reporting guidelines, such as the CONSORT statement, attempt to address this by setting out a minimum set of items required for a complete and transparent account of the research, and they have seen some success. But this still comes too late in the process: well-reported bad research is still bad research.

Instead, full, detailed reports of the methods and analyses used in a study need to be evaluated and published prospectively, and then linked to the full data from the study, so that readers can reliably evaluate any potential bias. Recognising this, journals such as Trials have been calling for the prospective publication of study protocols for nearly 15 years, a concept that has since been extended to statistical analysis plans and to other study types, such as systematic reviews. There are even reporting guidelines available specifically for this type of publication.

While this concept of evaluating research based on the methods is gaining traction, it has led to a new issue in the context of the current model of publication. A single clinical trial can result in multiple publications – the trial record, study protocol, traditional results paper and data, as well as secondary analyses and, eventually, systematic reviews – often published in different journals, years apart.

Bringing it all together

Researchers need access to all of these if the research is to be of most use, but actually finding them all is like looking for a needle in a haystack, when you don’t know how many needles there are and the haystack is growing.

We need to bring all these outputs together, linking them centrally at a study level so that they are easily identifiable and accessible. The advent of trial registration has presented the opportunity to do just that, by providing a unique identifier associated with every clinical trial – the trial registration number (TRN).
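
In data terms, the TRN simply acts as a grouping key: every output carrying the same registration number (protocol, results paper, dataset, secondary analysis) belongs to the same study. Here is a toy Python sketch of such a study-level index; the records and field names are invented purely for illustration.

    from collections import defaultdict

    # Invented records; in practice these would come from publisher metadata.
    publications = [
        {"doi": "10.1000/protocol.1", "trn": "ISRCTN00000001", "type": "protocol"},
        {"doi": "10.1000/results.1", "trn": "ISRCTN00000001", "type": "results"},
        {"doi": "10.1000/review.9", "trn": "ISRCTN00000002", "type": "review"},
    ]

    # Group every output by its trial registration number to give a
    # single study-level view of the literature.
    by_trial = defaultdict(list)
    for pub in publications:
        by_trial[pub["trn"]].append(pub["doi"])

    for trn, dois in sorted(by_trial.items()):
        print(trn, "->", ", ".join(dois))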

By adapting the existing CrossMark standard to capture the TRN in the metadata of an article, the Linked Clinical Trials project has made it possible to link all publications related to an individual clinical trial using their DOIs. This means that readers will be able to pull a list of both clinical trials relating to that article, and all other articles related to those clinical trials, all at the click of a button.

There is a lot of focus currently on the ‘reproducibility crisis’ and what can be done to ‘fix’ it. While not a panacea, promoting transparency and reuse of research facilitates reproducibility, and is perhaps the more pragmatic aim, at least in the current system.

The views and opinions expressed in this blog are those of the author and do not necessarily reflect those of the NIHR, NHS or the Department of Health.

Daniel Shanahan
Associate Publisher at BioMed Central