US Hospital Ranking Wars: The Beat Goes On…

As mentioned in an earlier post, we will delve a little more deeply into the topic of hospital rankings: what purpose they serve, how they are created, and whether they can be trusted. We start with an introduction from the Healthcare Association of New York State (HANYS), which produced a report on report cards with an apt subtitle: “Are You Confused Yet?”

“Health care providers and patients face a proliferation of publicly available reports rating the quality of health care provided in hospitals. Supporters of hospital “report cards” promote them as a means to improve the overall quality of care and help people make more informed health care choices. However, these goals are thwarted by multiple reports with conflicting information and dramatically different ratings. Despite the confusion that contradictory reports create, hospital report cards continue to garner attention from consumers and hospitals engaged in quality improvement efforts. The Healthcare Association of New York State (HANYS) developed the Report on Report Cards as an educational resource for hospital leaders and their boards; it serves as a primer for evaluating and responding to publicly available consumer report cards. Building on academic research and the recommendations of the National Priorities Partnership convened by the National Quality Forum (NQF), HANYS developed a set of guiding principles to which report cards should adhere. They include the use of:

1. A transparent methodology;

2. Evidence-based measures;

3. Measure alignment;

4. Appropriate data source;

5. Most current data;

6. Risk-adjusted data;

7. Data quality;

8. Consistent data; and

9. Hospital preview.


HANYS supports the availability of hospital quality and safety information to help patients make choices and assist providers in improving care. However, the information must be based on a standard set of measures that have been proven to be valid, reliable, and evidence-based.”

We agree with the HANYS key recommendation and will use it as a jumping-off point for further discussion. First of all:

What is the Objective of Creating the Rankings in the First Place?

There are three parties with an interest in the scorecards: first, patients, who want to know where they can find the appropriate level of quality and safety, the definition of which has increasingly moved to an exclusively clinical domain (more on this later); second, hospitals, which want to convince patients that they are safer and provide higher quality care; and third, the producers of the rankings, who have a financial or other motive for creating and releasing the scorecards. We feel the overriding objective should be to measure safety and quality in service of patient-centered care first, through long-term improvement in process, measurement, and evidence-based reporting, which was well covered in the HANYS report.

What is missing?

Ironically, it is the US News and World Report rankings that seem to be touted most widely by hospitals, even as US News receives the lowest ratings on methodology from HANYS. As mentioned in our earlier post on the subject, we approached US News and were directed to RTI to answer questions on methodology, keeping us at arm’s length. We wrote to RTI with some of the same methodological concerns raised by HANYS and suggested additional criteria be considered when determining the “best” and “top” hospitals. A round of emails produced nothing.

Aside from the dubious methodology and ulterior motives behind the safety and quality rankings for hospitals, we have three main contentions with the status quo:

1) Lack of any social indicators in the determination of which facilities are “best” and “safest” reinforces and rewards bad behavior on the part of medical facilities and groups,

2) Lack of adequate emergency preparedness creates a false sense of public security that reality can shatter in the blink of an eye,

3) Existing deeming mechanisms, which have been accepted as adequate standards for the level of All Hazards Readiness preparation, have demonstrably and repeatedly failed, causing unnecessary death and destruction. Serious efforts to improve the situation have been blocked by industry lobbyists, maintaining a voluntary compliance framework that does not reflect the increasing threat levels of modern life.

We will discuss these issues in detail in upcoming posts, along with some recommendations on what might be done to improve the current situation.