Leicester academic exposes reporting bias flaws using novel methods

Posted by pt91 at Feb 10, 2011 11:45 AM
New report illustrates methods of identifying research bias

Issued by University of Leicester Press Office on 10 February 2011

Influential studies into subjects such as the safety and effectiveness of medicines or class size in schools could be called into question by a new report into ways of identifying research bias.

The report by a leading statistician identifies the danger of relying solely on published work during systematic reviews of literature – a common approach to research worldwide, which is often used to inform public policy.

Such literature reviews may be flawed because research with positive findings is more likely to be published than work that is inconclusive or disproves a hypothesis, says Alex Sutton, Professor of Medical Statistics at the University of Leicester.

Using extensive data on trials into the efficacy of anti-depressants, Professor Sutton has evaluated statistical tools he developed to identify and compensate for the missing findings.

Each year, millions of pounds are spent on systematic reviews of evidence on a wide range of issues of interest to policy makers and academics. But according to Professor Sutton, from Leicester’s Department of Health Sciences, the selection of material for publication and the way it is presented can lead to bias in such studies.

“Perhaps the greatest threat to the validity of a systematic review is the threat of publication bias,” he says.

In his recent Inaugural Lecture entitled Analysing the data you haven’t got, Professor Sutton illustrated how he developed tools for identifying and quantifying bias in systematic reviews through work done on anti-depressants.

When he compared the research findings submitted to the US Food and Drug Administration (FDA) with results of the same trials published in scientific journals, he found many trials with less favourable results that had not been published. In other trials, the reporting of findings was biased towards positive results.

The statistical tools Professor Sutton developed performed very well in identifying and correcting the bias in the journal data.

“This gives confidence that the tools will be beneficial in topics where gold-standard data is not available,” he said.

“My work has been in the field of pharmaceutical products because I am in the School of Medicine, but the same bias can affect systematic reviews of published material in any sphere, be it the effect of class size in schools or the impact of divorce on children.”

For more information, please contact:

Ather Mirza
Press Office
Division of Corporate Affairs and Planning
University of Leicester
tel: 0116 252 3335
email: pressoffice@le.ac.uk
Twitter: @UniofLeicsNews

Notes to editors

According to Professor Sutton, publication bias can occur in at least three ways:
Firstly, positive findings are more likely to be selected for publication than negative ones, with the latter then excluded from any future systematic reviews.

Secondly, there is selective reporting of published outcomes. For example, when the evidence for one outcome was found to be statistically insignificant, researchers are strongly suspected in some cases of having shifted the emphasis of the study to outcomes with more positive findings.

Thirdly, data “massaging” can occur, through the exclusion of some patients from the analysis or the drawing up of timelines that exclude certain material.

Professor Sutton’s report suggests that bias in the published literature could be “corrected” by the use of novel contour-enhanced funnel plots and a regression-based adjustment method, which would improve the reliability of systematic reviews of research findings in a given field.
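The idea behind a regression-based adjustment can be illustrated with a short simulation. This is a minimal sketch in the general spirit of Egger-type regression methods, not Professor Sutton’s actual method or code: study effects are regressed on their standard errors, and the fitted intercept predicts the effect of a hypothetically infinitely precise study, giving a bias-adjusted estimate. All numbers below (50 trials, a true effect of 0.3, the bias strength) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a meta-analysis of 50 trials with a true effect of 0.3,
# where small-study / publication bias inflates reported effects in
# proportion to each study's standard error.
n_studies = 50
se = rng.uniform(0.05, 0.5, n_studies)      # per-study standard errors
true_effect = 0.3
bias_slope = 1.0                            # strength of the simulated bias
effects = true_effect + bias_slope * se + rng.normal(0.0, se)

# Naive inverse-variance-weighted pooled estimate: biased upward,
# because every study's reported effect carries the bias term.
w = 1.0 / se**2
naive = np.sum(w * effects) / np.sum(w)

# Regression-based adjustment: weighted least squares of effect on
# standard error. The intercept is the predicted effect at se = 0,
# i.e. for an infinitely precise study, which the bias cannot reach.
X = np.column_stack([np.ones(n_studies), se])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effects)
adjusted = beta[0]

print(f"naive pooled estimate:  {naive:.3f}")
print(f"regression-adjusted:    {adjusted:.3f}")
print(f"true simulated effect:  {true_effect:.3f}")
```

In this simulation the adjusted intercept falls below the naive pooled estimate and closer to the true effect, which is the pattern such methods exploit: if reported effects grow with study imprecision, extrapolating back to zero standard error strips out much of the small-study bias.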
