Study highlights difficulties in comparing patient safety data
Published in the leading US policy journal Milbank Quarterly, the research found wide variability in how English hospitals collected, recorded and reported their rates of central line infections to a patient safety programme. The researchers found that most of the variability arose for what they called “mundane” reasons. These included challenges in setting up data collection systems, differing practices in sending blood samples for analysis, and difficulties in deciding the source of infections.
The study dismissed “gaming” as the explanation for the variations the researchers found. Previous studies have reported deliberate manipulation of performance data to “look good”, but the study, led by Professor Mary Dixon-Woods of our Department of Health Sciences, found little evidence of this.
Dr Elaine Maxwell, Assistant Director at the Health Foundation, suggested that the research highlights the very real difficulties in measuring safety in healthcare. Adverse events such as infections are increasingly being used in performance management, and this study demonstrates that accurate data collection is more complex than may at first have been imagined.
- “What Counts? An ethnographic study of infection data reported to a patient safety program”, published in the Milbank Quarterly