I know: a SHOCKING revelation. Someone had to say it.
Let’s walk through one simple example of this situation that I recently encountered, so you can understand the crazy-making effect it had on me — and more important, so you can avoid making the same or a similar mistake.
Let’s take a look at what this data may (or may not) be telling us.
- In the “Current Month” displayed, 125,000 patients were seen at the clinic; of these, 30,000 (24%) were new. (I do wonder whether 125,000 unique patients really were seen, or if this metric in fact represents visits (perhaps more than one per patient in some cases), not patients.)
- Of the 30,000, 45% were seen within 15 days and 58% were seen within 30 days, percentages that total 103. Logic suggests, however, that it’s pretty much impossible to see 900 more scheduled patients than are actually scheduled (see what I mean?). Cue the question “What’s the denominator here?” (I’m guessing that some patients have been included in both the 15- and the 30-day categories.)
- The last block of data seems to show that 23% of patients had their appointments cancelled, 5% were bumped, and 7% were no-shows, which together account for 35% of some group of patients. (If this were Boston, I’d think these were the folks trying to find parking.) In any case, what happened to the other 65% of the patients in this category, and can I have them back? (A quick check of this arithmetic follows the list.)
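If you (or your analysts) would rather reach for a few lines of code than a calculator, the arithmetic problems above are easy to surface. Here’s a minimal sketch in Python; the numbers come straight from the display described above, and the variable names are simply my own labels:

```python
# Sanity-check the percentages reported in the original display.
new_patient_buckets = {"seen within 15 days": 45, "seen within 30 days": 58}
outcome_buckets = {"cancelled": 23, "bumped": 5, "no-show": 7}

print(sum(new_patient_buckets.values()))  # 103 -- over 100%, so the buckets must overlap
print(sum(outcome_buckets.values()))      # 35  -- the other 65% of this group is unaccounted for
```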
By adding a few additional pieces of information and changing the labels a bit, I’ve created a new and more useful display:
- First, because one patient may have several appointments, I changed the term “patients” to “visits” to more accurately reflect what’s being counted. If I felt it was important to display a number for unique patients, I could of course add it.
- For the Current Month, 168,750 clinic visits were scheduled, and patients were seen in 125,000 (74%) of those visits.
- Of those 125,000 visits, 30,000 (24%) were new visits to the clinic. Of the new visits, 45% of patients were seen within 1-15 days and 55% within 16-30 days.
- There were, however, 43,750 scheduled visits (26%) in which patients either never arrived or were never seen. What happened? 23% of the visits were cancelled, 5% were bumped, 7% were no-shows, and an astonishing 65% were for myriad other reasons. (The arithmetic now reconciles, as the quick check after this list shows.)
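That quick check, for those keeping score at home, is just a few more lines of the same kind of arithmetic; again, only a sketch using the figures above:

```python
# Confirm that the revised figures reconcile.
scheduled = 168_750
seen = 125_000
not_seen = scheduled - seen               # 43,750

print(round(seen / scheduled * 100))      # 74
print(round(not_seen / scheduled * 100))  # 26

# The two distributions now account for everyone.
print(45 + 55)          # 100 -- new visits seen within 1-15 vs. 16-30 days
print(23 + 5 + 7 + 65)  # 100 -- cancelled, bumped, no-show, other
```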
The problems with the original display were easy to identify and address thanks to a few simple questions I asked and a couple of steps I added; you can do the same.
- When I see a rate on any report, I look for the value used to calculate it, and quickly check the math. I ask, “What’s the number used in this calculation? Can I quickly replicate it?” (There’s a small sketch of this check, and the next one, after this list.)
- When I see any distribution of data (i.e., multiple parts of some whole), I check to see if the percentages add up to 100%. If they don’t, I know something is missing or wrong.
- Out loud, and in complete sentences, I tell the story I am trying to convey using the values displayed. Does it flow easily, or do I hear myself stumbling over the metrics? (Yes, some folks worry about the crazy lady talking to herself all the time. I don’t care. It works.)
- I ask my report recipients to use this same method. Can they explain to me, in complete sentences, what the metrics are telling them? Do I see the light-bulb of understanding appear over their heads without an enormous struggle and intense coaching? I really listen to what I hear from them, and use the information to improve my displays.
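For those who like to automate these habits, here is one small, purely illustrative way to code the first two checks: recompute a reported rate from its numerator and denominator, and confirm that the parts of a distribution add up to 100%. The function names and tolerance are my own inventions, not part of any particular reporting tool:

```python
def check_rate(numerator, denominator, reported_pct, tol=0.5):
    """Recompute a reported percentage and flag it if it doesn't match."""
    actual = numerator / denominator * 100
    ok = abs(actual - reported_pct) <= tol
    if not ok:
        print(f"Reported {reported_pct}% but {numerator}/{denominator} is {actual:.1f}%")
    return ok


def check_distribution(parts, tol=0.5):
    """Confirm that the parts of a whole add up to (about) 100%."""
    total = sum(parts.values())
    ok = abs(total - 100) <= tol
    if not ok:
        print(f"Distribution sums to {total}%, not 100%: {parts}")
    return ok


# Applied to the original display:
check_rate(30_000, 125_000, 24)                                   # True: 24% new checks out
check_distribution({"<=15 days": 45, "<=30 days": 58})            # False: sums to 103%
check_distribution({"cancelled": 23, "bumped": 5, "no-show": 7})  # False: sums to 35%
```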
Here’s the bottom line. Charts and tables exist to tell a story, so the viewer won’t have to wade through masses of raw data looking for patterns and understandable information. If your data presentation raises more questions than it answers, you’ve got some work to do. Use the simple steps I’ve outlined here to stop, look, think, and test — to see if you can understand and easily explain the story your data reveals.
And always, always ask, “What’s the damn denominator?” You’ll be amazed what that simple question will tell you.