Quality: Three Concepts for Radiology Reports by Timothy Myers

Introduction

Quality is something we put into each interpretation and each report.

Radiologists who put a great deal of time and careful thought into rendering their interpretations often do not take that same level of care in presenting those thoughts.

Regardless of how carefully we review the images or how difficult a diagnosis we make, if we do not put that information into an actionable format by creating an understandable document, neither the patient nor the clinician will benefit from our thoughts or reasoning.


Three concepts to adhere to:

  1. Relevance
  2. Simplicity
  3. Brevity

Relevance

Everything we say should have some meaning for the clinician. Pertinent positive findings, as well as pertinent negatives, need to be stated and discussed as needed. Many times it is the negative that is most important as the clinician tries to evaluate the patient. We frequently do not know what is causing the symptoms, but we can definitively say what is NOT causing them.

Example: Not describing a right ovarian cyst on a CT may lead to a patient immediately having an unnecessary ultrasound to continue the workup of right lower quadrant pain when the CT is otherwise normal. The ultrasound is not necessary in this case: the CT answers the clinical question, "Why is this female patient without an elevated white count having right lower quadrant pain?", with a potential etiology for the patient's condition.

Simplicity

If something can be said simply in a declarative sentence, it should be said that way.

Example: An impression that says, “There do not appear to be areas of consolidation or infiltrate,” does not carry the same meaning or weight with the clinician as “There is no evidence of pneumonia” in a patient with a cough and a fever.

Brevity

We need to be appropriately brief.

Example: A report that says “Nonspecific abdomen” tells the clinician nothing; it leaves them with a symptomatic patient and no help from the interpretation provided.

I used to argue with a staff radiologist who hated this type of interpretation for an abdomen series. He would say that every film or study is specific to some degree, even if it is inherently limited (e.g., a plain film vs. a CT) in what it can tell us.

A ‘nonspecific bowel gas pattern’ on a film does tell us something: there are no dilated bowel loops, there is no significant accumulation of stool, and there is no overtly obstructive pattern. With these findings we know there is a differential that can still include ileus or enteritis, depending on the exact picture before us and the clinical findings.

What is appropriately brief should be determined by the examination and by the need to help the clinician know what we see, not by a radiologist who simply says something that allows them to move on to the next study. When we say "nonspecific," what we mean is that there is no evidence of obstruction or another potentially specific diagnosis. When the clinician hears "nonspecific," they are left in doubt as to what may or may not be seen on the images and what clues there may be to the etiology of the patient's symptoms.

Summary: Quality is our value-added input to patient care.

Quality is not accuracy. Quality is our value-added input to patient care. It includes accuracy, but accuracy alone does not help the clinician. If we are to remain relevant in the patient care chain, we must maintain the highest quality reporting and interpretations. We must tell the clinicians not just what we see but also what we think. And we have to include information that will help them most easily determine an etiology for the patient's problems or symptoms, so that treatment can begin promptly with the lowest cost and the highest likelihood of success.

Who Took the ‘Quality’ Out of Quality Assurance? by Timothy Myers

Introduction

Healthcare is changing. Programs are being developed and put into action based on the industry’s movement from a quantity structure to a quality structure. We have all heard of this shift from fee-for-service to fee-for-quality in radiology, and on the face of it, these concepts are simple. Truly enacting them, however, is proving to be problematic at best.

At the present time, radiology quality assurance programs are designed to measure error. Deriving an error rate for a radiologist does nothing to improve their performance or accuracy. Knowing the percentage of errors does no more to improve quality than knowing a batting average can improve the percentage of home runs in baseball.

Radiology interpretations have two components: accuracy and effectiveness. In measuring an error rate, only accuracy is evaluated. The other, potentially more important piece of the puzzle is the effectiveness with which the report delivers its message.

If a radiologist finds an abnormality, yet doesn’t describe it effectively, the clinician may make a mistake at the bedside that can be as devastating to patient care as acting on inaccurate information. Being clear, addressing the clinical situation and providing recommendations for next steps are important factors in being a functioning member of the patient care team, rather than just being a reporter of findings.

To be viable and oriented toward clinicians and patients, quality assurance programs must be able to evaluate report quality as well as interpretation accuracy. To be maximally effective, they need to be designed to measure and evaluate quality and accuracy separately, and an objective process for review must be put in place.

Peer Review 

Within quality assurance systems, concerns of bias and subjectivity are always present. There are always reviewers who want to reform or protect the system and/or the radiologist by being overly aggressive or overly passive in their reviews. To remove concerns of bias, the reviewer must be unknown to the radiologist being reviewed, and vice versa.

Reviewers should also be considered experts in their area of review to ensure respect for their opinion when a dispute over a discrepancy arises. Reviewers need to focus on areas where they are proficient to the level of being considered experts. Finally, reviews must be performed by individuals who are dispassionate about any individual outcome. There must be no perceived or real stake in the effects of the reviews, regardless of the findings.

Report Accuracy Assessment

TABLE 1 - REPORT ACCURACY
Reports are assigned an accuracy category of 1-5 based on the effect on patient care.
1. No variance | Expert reviewer agrees with the primary interpretation
2. Interpretation variance | Variance is unlikely to affect patient care and/or outcome
3. Interpretation variance | Variance may affect patient care and/or outcome or require further assessment
4. Interpretation variance | Variance is, or would be expected to, directly affect patient care
5. Sentinel event | Event that may cause a serious unexpected or unanticipated outcome, death or serious physical or psychological injury, or the risk thereof

To create a program that evaluates accuracy, rather than an error rate, with an emphasis on patient care, the focus must be on patient outcomes and not on the ability of a radiologist to perceive an abnormality. The specific effect of an error on a particular patient may be difficult to determine prospectively; however, generalizations about effects can be made based on known results from similar cases, reference standards, and performance standards. The actual patient outcome can also be known, and the real effects on patient care can then be evaluated (e.g., the patient went home, was admitted, or went to surgery). (Table 1)
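As an illustration of how such a program might record reviews against patient-care impact rather than a raw error count, here is a minimal sketch in Python. Everything in it (the enum, the helper name outcome_profile, the sample data) is an assumption made for this example, not part of any existing QA system:

```python
from collections import Counter
from enum import IntEnum

class AccuracyCategory(IntEnum):
    """Table 1 accuracy categories, keyed to effect on patient care."""
    NO_VARIANCE = 1        # expert reviewer agrees with the primary interpretation
    VARIANCE_UNLIKELY = 2  # unlikely to affect patient care and/or outcome
    VARIANCE_POSSIBLE = 3  # may affect care/outcome or require further assessment
    VARIANCE_DIRECT = 4    # is, or would be expected to, directly affect care
    SENTINEL_EVENT = 5     # serious unexpected outcome, injury, or the risk thereof

def outcome_profile(reviews: list[AccuracyCategory]) -> dict[str, float]:
    """Summarize reviews as a distribution over care-impact categories,
    rather than collapsing them into a single error rate."""
    counts = Counter(reviews)
    return {cat.name: counts[cat] / len(reviews) for cat in AccuracyCategory}

# Hypothetical usage: three reviewed reports for one radiologist.
print(outcome_profile([AccuracyCategory.NO_VARIANCE,
                       AccuracyCategory.NO_VARIANCE,
                       AccuracyCategory.VARIANCE_POSSIBLE]))
```

Summarizing a radiologist's reviews as a distribution over these categories preserves the outcome information that a single error percentage throws away.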


Report Effectiveness Assessment

The main issue in addressing this area is that the concept of “quality” is difficult to assess. Quality is defined as an inherent feature or distinguishing characteristic of an individual, product or process. Within radiology, as in all cases, those defining features and characteristics are determined by the end-user and not the creator, or in other words, they are defined by the clinician and patient, not the radiologist. This view is completely foreign to current radiology quality assurance programs.

George Bernard Shaw once said, “The single biggest problem in communication is the illusion that it has taken place.” From an end-user perspective, effective communication within a report is more than just describing the findings. Creating an interpretation or impression based on the findings, the history, and the current indications for the examination is about communicating with the clinician and participating in patient care. Successful communication includes the findings, as well as recommendations for follow-up or next steps.

TABLE 2 - REPORT QUALITY
Each report is evaluated against five report quality criteria, numbered 1-5.
1. Does the report include a conclusion or impression?
2. Does the report address clinical history and/or symptoms or provide differential diagnosis?
3. Does the report address the relevant positive and/or negative findings?
4. Is the report organized in a structured format?
5. Does the report text contain no errors that would impact patient care or outcome?

It is possible to generate an evaluation of the report based on these concepts (Table 2). Report effectiveness can be reviewed at the time of the report accuracy review. The same reviewers suggested above, the experts in their field, are uniquely qualified to determine the effectiveness of the communication provided. These specialists have developed insights through years of experience working with, and talking to, clinicians in their areas of expertise (i.e., they know what the clinicians want to hear).
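To show how the Table 2 criteria could sit alongside, but separate from, the accuracy category, here is a second illustrative sketch (again Python; the class and field names are assumptions made for this example):

```python
from dataclasses import dataclass, fields

@dataclass
class QualityChecklist:
    """Table 2: five yes/no report-quality criteria."""
    has_impression: bool       # 1. conclusion or impression present
    addresses_history: bool    # 2. clinical history/symptoms or differential addressed
    addresses_findings: bool   # 3. relevant positive and/or negative findings addressed
    structured_format: bool    # 4. organized in a structured format
    no_impactful_errors: bool  # 5. no text errors that would affect care or outcome

    def score(self) -> int:
        """Number of criteria met (0-5), reported separately from accuracy."""
        return sum(getattr(self, f.name) for f in fields(self))
```

Keeping this checklist score and the accuracy category as separate fields on a review record mirrors the argument above that quality and accuracy must be measured and evaluated separately.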


Conclusion

It is sometimes difficult to demonstrate the value-add of radiologists to patient care. Dr. Lawrence Muroff puts it this way: “The future of radiology is bright. The future for radiologists is far less certain.” To be of value in patient care, our reports have to be about more than just speed (i.e. efficiency). Our reports have to be accurate, and they must also communicate our thoughts and conclusions; they have to provide an effective bridge between the images and the patient.

These changes have to be instituted by practicing radiologists, but they must also be taught. Our residency and fellowship programs frequently do not teach effective communication. Providing relevant information in a clear and concise manner, above and beyond the use of terms like “negative” or “normal”, must be emphasized for medical students.

Quality in radiology involves the combination of accuracy and communication. Quality assurance programs should be designed to ensure these are foremost in the reports and interpretations we provide. Patients and clinicians have no yardstick to measure quality in radiology; still, we have to remember that it is the end-user, the patient and the clinician, who will define what quality looks like. It is the responsibility of radiologists to deliver that quality.

About Fortis Qualitas ...

Fortis Qualitas is a quality assurance program designed to help hospitals, radiology groups and radiologists shift from quantity to quality.

Fortis Qualitas:

  • Is an industry-leading quality assurance program that is designed with patient care as its focus.
  • Is a quality assurance program that puts measurable quality first.

Fortis Qualitas uses proprietary technology, peer review experts, and a leading-edge process that reviews radiology interpretations across five levels of both accuracy and quality, ensuring complete and effective communication between the radiologist and the clinician.