Its performance was in fact remarkably good, as shown by the very death statistics that have otherwise been used to condemn its hospitals and the NHS.
Mid-Staffs maintained and then bettered its death rate even while it absorbed a large increase in admissions of patients with serious illnesses.
This sudden increase occurred between 2005 and 2007, the controversial period that became the subject of the Francis Inquiry. Fewer of Mid-Staffs' patients had actually died than might reasonably have been expected.
The blue line in the graph below shows the number of patients admitted to Mid-Staffs with a serious illness every year between 1996 and 2013. It shows admissions rapidly increased after 2004.
The red line shows predicted deaths, derived from national averages by Imperial College and used by the Francis Inquiry to show that Mid-Staffs had performed poorly.
The black line shows the actual number of seriously-ill people who died at Mid-Staffs between 1996 and 2013.
The graph reveals two important things: as patient numbers went up, deaths did not; and the difference between average and recorded deaths at Mid-Staffs was mostly negligible over the last 17 years.
These two revelations are important because they further undermine the widespread and grossly misleading conclusion that there was something horribly amiss at Mid-Staffs – that it was some sort of dirty hovel where patients went in their hundreds to die needlessly at the hands of cold-hearted nurses. It was rather a hospital that was succeeding against the odds.
Mid-Staffs received 20 per cent more patients with serious conditions between 2004 and 2007. Yet deaths increased only 7 per cent.
This was clear in evidence submitted to the Francis Inquiry, from which Computer Weekly obtained its data.
Mid-Staffs admitted 13,780 seriously-ill patients in 2004. By 2007, such admissions had increased by 2,654 cases, to 16,434. Yet Mid-Staffs' death-rate for these high-risk patients actually fell in that time, from 6.4 per cent to 5.8 per cent.
Remarkably, such deaths continued to fall at Mid-Staffs even while admissions of high-risk patients carried on rising. Such admissions rose to higher levels than ever before. And Mid-Staffs' death-rate kept falling.
Deaths of seriously-ill patients decreased 36 per cent at Mid-Staffs in the following years, between 2007 and 2010. That amounted to 340 people who did not die after the statisticians said they would.
Actual deaths fell from 946 to 606 in that time, even while admissions increased another 15 per cent, to 19,237 patients. This was the time when Mid-Staffs was subject to regulatory scrutiny and the first public inquiry.
In all, in the years between 2004 and 2010 that were relevant to the public inquiry and its follow-up, Mid-Staffs received 40 per cent more seriously-ill patients. Yet deaths fell 31 per cent. Some 277 fewer seriously-ill patients died than statisticians predicted, in a period when the hospital admitted 5,457 more high-risk patients.
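The arithmetic behind those figures can be checked in a few lines. One caveat: the article does not state the 2004 death count directly, so the sketch below derives it from the quoted 6.4 per cent rate on 13,780 admissions and should be read as an approximation.

```python
# Quick check of the Mid-Staffs figures quoted above (2004 vs 2010).
# deaths_2004 is derived from the article's 6.4% rate, not stated directly.

adm_2004, adm_2010 = 13_780, 19_237
deaths_2004 = round(0.064 * adm_2004)   # ~882, derived from the quoted rate
deaths_2010 = 606                       # stated in the article

adm_rise = (adm_2010 - adm_2004) / adm_2004 * 100
death_fall = (deaths_2004 - deaths_2010) / deaths_2004 * 100

print(f"Admissions rose {adm_rise:.0f}%")          # 40%
print(f"Deaths fell {death_fall:.0f}%")            # 31%
print(f"Extra admissions: {adm_2010 - adm_2004}")  # 5457
```

Both headline percentages fall out of the quoted raw numbers, which suggests the article's figures are at least internally consistent.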
When, on 9 June 2010, Andrew Lansley MP, the then newly appointed Conservative Secretary of State for Health, announced a second public inquiry into deaths at Mid-Staffs, the hospital had already got deaths well below the national average, while treating high-risk patients in record numbers.
The truth was that the stats showed how close Mid-Staffs kept to the national average for deaths of seriously-ill patients.
A gap did appear between Mid-Staffs' death rate and the national average before 2007.
Statisticians at Imperial College magnified this gap using the now infamous Hospital Standardised Mortality Ratio (HSMR), a statistical indicator.
The HSMR magnifies the differences in a way that can make them look terrifying, like bugs and insects under a microscope. It arguably distorts the data, and frightens little children and politicians. This is in essence what the death statistics at the centre of the NHS scandal have done.
Deaths at Mid-Staffs were about 50 to 200 people a year more than average between 1996 and 2007. That is a difficult number to swallow. But it was less than one per cent of the 154,661 patients admitted with serious conditions in that time.
There was really not much difference between Mid-Staffs and the national average. Seriously-ill patients died at a rate of between 2.8 per cent and 7 per cent at Mid-Staffs between 1996 and 2013. The national average was between 3.5 per cent and 7.1 per cent.
Between 2004 and 2007, when the difference was highest, Mid-Staffs’ death rate was 5.9 per cent. This was a mere 1 percentage point higher than statisticians had forecast it would be. Their calculations, based on the national average, had predicted it would be 4.9 per cent.
They predicted that of 60,837 patients admitted to Mid-Staffs with a serious illness in those four years, about 2,967 would die. The number of actual deaths was 3,561.
That is still 594 more people who died than forecast. But their significance was a vague bellwether at best: they constituted only 0.9 per cent of seriously-ill patients admitted to Mid-Staffs in that time.
Even in the decade between 1997 and 2007, when Mid-Staffs' death-rate was higher than average, the excess amounted to only 0.8 per cent of the 143,573 seriously-ill patients who attended the hospital. Statisticians reckoned 7,904.5 patients should have died in that decade. Actual deaths were 9,097 – that is, 1,192.5 above average.
In the years after 2007, when Mid-Staffs was doing better than average, its margin of fewer deaths amounted to 0.5 per cent of admissions. The difference over the last 17 years (the entire period for which records are available) was 0.3 per cent of admissions.
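As a sanity check, those gaps can be re-expressed as a share of admissions using only figures already quoted in this article. (Strictly, 594 out of 60,837 is 0.98 per cent, which the article rounds down to 0.9.)

```python
# Re-expressing the death gaps as a share of admissions, using the
# figures quoted in the article (2004-07 and the 1997-2007 decade).

excess_2004_07 = 3_561 - 2_967            # actual minus forecast deaths
share_2004_07 = excess_2004_07 / 60_837 * 100

excess_decade = 9_097 - 7_904.5           # 1997-2007
share_decade = excess_decade / 143_573 * 100

print(f"2004-07: {excess_2004_07} excess deaths, {share_2004_07:.2f}% of admissions")    # 594, 0.98%
print(f"1997-2007: {excess_decade} excess deaths, {share_decade:.2f}% of admissions")    # 1192.5, 0.83%
```

Either way, the excess stays below one per cent of the patients actually admitted.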
Looked at this way, it is easier to imagine a hospital battling against the odds to save people’s lives than it is to shout fire and brimstone about a failed National Health Service.
From the perspective of a 0.3 per cent difference, it is easier to imagine a hospital's day-to-day struggle to save lives: more successes on one day than another, perhaps more failures in one year than another, accumulating over a whole 17 years into a total that might, perhaps, have been better.
But at a hospital whose successes fell below the national average by 0.3 per cent of all seriously-ill patients it treated, perhaps staff did as well as could be expected. Perhaps now with hindsight granted by data tools like those developed by the sober, benevolent academics at Imperial College, even that narrow margin might be cut further.
This sort of death-rate leaves a lot of room to imagine the day-to-day complexities of human strife.
What we have instead is the HSMR indicator – the thing that turned death at Mid-Staffs into a frightening, microscopic image.
It did this by expressing the difference not in relation to the actual numbers of seriously-ill people, but as a proportion of itself – as a proportion of the forecast deaths – a statistic of a statistic.
So for example, statisticians forecast there would be 721.2 deaths among 14,139 seriously-ill patients admitted to Mid-Staffs in 2005.
Looked at not as an HSMR but as a simple death-rate, forecast deaths amounted to 5.1 per cent of actual admissions in 2005. Actual deaths amounted to 879, at 6.2 per cent of admissions. The difference of 157.8 people – the amount by which Mid-Staffs was worse than average that year – was 1.1 per cent of admissions.
To get the HSMR, you show actual deaths as a percentage of forecast deaths. So in 2005, 879 actual deaths equated to 122 per cent of those 721.2 deaths that had been predicted. Actual deaths were 22 per cent more than forecast. Or they were 1.1 per cent of admissions. It depends how you look at it.
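The two ways of looking at the same 2005 figures can be put side by side in a short sketch:

```python
# The 2005 figures computed both ways: as an HSMR (actual deaths as a
# percentage of forecast deaths) and as a simple share of admissions.

admissions = 14_139
forecast_deaths = 721.2
actual_deaths = 879

hsmr = actual_deaths / forecast_deaths * 100
excess_share = (actual_deaths - forecast_deaths) / admissions * 100

print(f"HSMR: {hsmr:.0f}")                                  # 122, i.e. 22% above forecast
print(f"Excess deaths: {excess_share:.1f}% of admissions")  # 1.1%
```

The same 157.8 above-forecast deaths read as "22 per cent" through the HSMR lens and "1.1 per cent" through the admissions lens.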
The point of the HSMR is to show how an individual hospital's performance compares to the average. It therefore magnifies the differences between hospitals.
Just how much the indicator magnifies reality can be understood when you look at it from another perspective. Take that 1.1 per cent of admissions that in 2005 constituted Mid-Staffs' above-forecast deaths.
1.1 per cent was the difference between the actual deaths, which were 6.2 per cent of admissions, and predicted deaths, which were 5.1 per cent of admissions. The difference is meaningful only in relation to admissions. But consider it as a proportion of the forecast death-rate anyway and you again get 22 per cent.
Magnified this way and then considered in cretinous isolation, the Mid-Staffs death-rate sounds horrifying.
Link the death-rate unscrupulously to isolated incidents of poor care or medical error such as those highlighted by the Francis Inquiry (isolated incidents gathered over ten years and, roughly speaking, numbering about 300 cases among 500,000), and you get a concoction more harmful than would normally be allowed near any hospital, let alone academic institution.
Let us meanwhile not forget the living – those couple of hundred thousand patients who went to Mid-Staffs with a serious illness in the last decade and were cured or, as is more likely with the seriously ill, had their deaths put off a while.
Of the 242,688 patients who were taken to Mid-Staffs with serious illnesses between 1996 and 2013, 229,800 survived. That was a 95 per cent survival rate. Not bad when you think about it.