Dear Crucial Skills,
How can I use the Six Sources of Influence to improve patient satisfaction scores at our hospital? Our scores average in the 70s, but when I talk to patients they tell us we are wonderful. I’d like patients to put that on the survey. We often see scores of 8 but get comments that suggest they think we deserve a 10. Any ideas?
Signed,
Seeking a Source
Dear Seeking a Source,
If it’s alright with you, I’d like to think through this together, and I invite our readers to think through it with us in the comments.
First, let’s consider the survey. I’m going to assume that the purpose of your survey is to gather reliable data. Reliable data is uncorrupted, which means it’s free of errors and bias.
Second, let’s consider the Six Sources of Influence. They are personal, social, and structural levers that can be used to affect motivation and ability.
So, there’s a bit of irony in your question. In essence you’re asking how you can influence people in a situation where, if you want reliable data, they shouldn’t be influenced at all—unless it is to tell the truth as they see it.
And that is what I hear you asking—how can I get patients to report their feedback honestly? But you also said you want to “improve patient satisfaction scores” using influence, if possible.
Do you see the difference? I’m not accusing you of wrongdoing. I’m simply trying to think through this with you.
And here’s what I’m thinking.
Check Your Assumptions
There’s nothing wrong with having an assumption—that’s your hypothesis. But remember it’s a hypothesis only.
You hypothesize that patients are more satisfied with treatment than they’re reporting. You have two data sources: the verbal feedback, and the numerical feedback.
Given that they don’t match, it’s natural to conclude one source doesn’t accurately reflect patient perspectives. You assume the faulty data is the numerical feedback, and it may be. But I encourage you to consider the possibility that the faulty data is the verbal feedback.
Do you see how that’s possible? Why might a patient tell a nice healthcare professional his or her staff is wonderful and then mark up an anonymous survey with 7s?
Seek Better Data, Not Better Scores
Given the wording of your question, it sounds like you want both reliable data and better scores—who doesn’t! But when it comes to your survey, make only reliable data your goal.
The cardinal sin of any survey is bias. And while that’s obvious on paper, it’s not always clear in practice. Surveys should be used to gather data that allow us to make a reasonable inference about a population, but frequently they are used to gather data to support a belief or proposition, which results in bias.
So, commit to improving the survey, not the survey results. Truth in the data makes it possible to do the work of improving results.
Define Your Terms (Source 6: Structural Ability)
One of the first things I learned as a philosophy undergraduate was “define your terms.” The point of all communication is to convey meaning. We can only confirm whether meaning is accurately conveyed through feedback. And, when it isn’t, that feedback can be anything from a scrunched eyebrow to a survey anomaly.
The fact that you’re encountering discrepancies between verbal and numerical feedback could mean your survey questions are unclear. For you, a 10 may mean “medical care provided without error by qualified healthcare professionals.” For some of your patients, a 10 may mean “attractive nurses, plenty of morphine, and 200 cable channels on a big-screen TV.”
So, define your terms. This is particularly important with ratings. Make it clear what you mean by an 8 or a 10 or a 5 by providing examples of the care those ratings represent. In your definitions, avoid interpretations like “timely care” and provide factual examples like “nurses responded to calls within 10 minutes on average.”
In short, your respondents should never feel confused. You might add a question to your survey: “Did you find any questions in this survey confusing or difficult to answer? If so, please explain.” Use that feedback to continually improve the clarity of your survey and your respondents’ ability to answer accurately.
Remind Respondents of the Impact (Sources 1 and 3)
While I’m inclined to think that the discrepancies you cite have to do with ability and not motivation, here’s an idea just in case.
Remind patients of the personal and social value of complete and reliable data. You might do so on the survey itself and verbally when administering it.
“While this survey is optional, your honest and anonymous feedback helps us provide quality care for the community. When you take the time to answer objectively, you help us provide better care for you and other patients.”
Hire an Agency (Source 6: Structural Ability)
Finally, consider hiring an agency. We sometimes do so with our own surveys. Professionals in survey design and analysis understand how various factors can contribute to discrepancies or bias, from wording to question types to the order of the questions themselves.
I hope this has sparked a few ideas. Certainly there are other ways you could apply the Influencer Model to your situation, and I invite readers to share their ideas.
Good luck,
Ryan
I have often said that I can get the results I want from a survey I author by what questions are asked and how they are asked. This includes the order the questions are in. However, when doing a customer satisfaction survey, I don’t want to see the results I want. I want to see the results that are true and accurate. When designing a survey, ask yourself: What is my goal with this survey? Then write questions that ONLY work toward that goal. For example: If my goal is to improve customer satisfaction, I ask myself: What comprises customer satisfaction? The answers may be response time on a call; how long customers wait to be served; whether the service resulted in the customer receiving the products or services they need; etc. Then ask questions designed to get at these needs:
> How long did you wait before your call was answered? Was this time excessive?
> How long did you wait in the waiting room before your name was called? Was this time excessive?
> Did your agent answer your question?
Etc.
Ryan mentioned hiring an agency to do your surveys. I agree with this, unless you are prepared to hire someone onto your staff who is trained in survey development and execution.
Several thoughts from nearly 4 decades of putting up with surveys:
1. You are not obligated to use a commercial survey service. Generic surveys aren’t very useful for specialized applications. Consider making this a quality improvement project. For instance, on a six-month rotating basis you could query waiting time, phone courtesy, speed of getting follow-ups, adequacy of explanations and educational material, utility of consultants, and the structure of your place (comfort, furniture, parking lot, etc.).
2. From a statistics perspective, expecting or getting a 10 on all surveys makes the survey useless. It’s much better to have a survey that gives you a fairly Gaussian curve with the bulk of responses toward the high end. Unfortunately, most of us in medicine aren’t accustomed to this; we think we always have to be an 11 on a 10-point scale. That makes the scales useless.
I think this is a thoughtful reply with a laudable push toward integrity. It makes me grin a little, because on more than one occasion I’ve been asked to fill out a survey from this group that seemed to be angling for a particular press-worthy result about a problem they could address, as opposed to collecting unbiased data. Don’t get me wrong, I think Crucial Learning is great. But it’s worth remembering: physician, heal thyself!
🙂
When you look at the sources of influence, wanting to be nice or be liked (Source 3, social motivation) could play a role and explain the discrepancy between a direct question during a live interaction and an anonymously completed survey. Timing might play a role too: we humans want to reciprocate kindness right after we’ve been helped, but when we reflect on the overall care a while later, we might adjust our opinion.
The survey Seeking a Source is referring to is most likely a CAHPS survey. CAHPS stands for Consumer Assessment of Healthcare Providers and Systems, and these surveys are conducted under strict rules established by AHRQ, the Agency for Healthcare Research and Quality. A company I used to work at administered millions of these surveys. We investigated many reports where clients suspected a problem because a rating and a verbal comment seemed to disagree. Respondents/patients did give what appeared to be inconsistent ratings and responses. Now what?
You rightfully point the reader to the goal of getting good information rather than good scores. That’s great advice. Going a step further to say work on improving the survey is where the advice is less on point. It is exceedingly unlikely that a reader will be able to make any change to the survey. The uniformity of the survey is what gives it its power: everyone is being assessed by the same yardstick. The CAHPS surveys do change in response to suggestions, but the process is long and involved and unlikely to address the issue of apparently inconsistent respondents.
Instead of focusing on the survey, why not focus on improving customer/patient satisfaction? If you improve patient satisfaction, you will likely improve scores. One way to improve scores is to look to institutions that do satisfy their patients. As a patient at Johns Hopkins Hospital, it was obvious I was at a place that took my satisfaction seriously. Everywhere you looked there were signs on the walls letting you know they wanted to hear from you if there was any problem. I had a colonoscopy there twenty years ago, and I remember to this day receiving a card signed by all the people in the clinic thanking me for giving them the opportunity to care for me. Time well spent.
Am I the only one who gets texts and emails from the car dealership after a service visit telling me that “anything less than a 10 is considered a failure”? I have refused to provide feedback because that kind of statement just means they don’t want honest feedback. I think the issue here is that if you are using a survey to judge someone’s performance, it will be biased and people will try to manipulate the results. But if you are genuinely looking for ways to improve, you spend the time and effort to try to remove bias and get honest feedback. The last thing you want when you are looking for ways to improve is people telling you that you are perfect!
Surveys are surveys. There are too many factors to consider regarding “feedback” from untrained, unprofessional survey-takers, especially when the survey response is voluntary. Results should only be used as a data point… and taken with a huge grain of salt.
I used to work for a company that believed zero events was the only acceptable safety rate, regardless of how many people were involved, how many entries were made, or how many times the activity was performed. This is pretty much a ridiculous perspective or expectation. No one and nothing is 100% perfect, ever.
Surveys are only voluntary. Time and effort, attitudes, and experiences will influence participation. With that, I’d submit that a higher percentage of people who experienced a “less than perfect encounter” would be likely to submit feedback, while the 100% satisfied would be less likely to respond to a survey, because they’re happy. That skews results to trend lower rather than higher (e.g., 7 instead of 10). Also, people who respond to 0–10 scale questions aren’t always going to mark the extremes, 0 or 10, but will reasonably land somewhere in between. How was the environment for your visit? The one noisy kid in the waiting room (which is way beyond anyone’s control) results in a score of 7, even though everything else about the environment was perfect. My advice would be: don’t put too much weight on a survey, don’t use a 0–10 scale, consider face-to-face feedback with every client, and if you must use a survey, do what some of the people here say and ask the correct and reasonable questions.
Well written, Ryan! I don’t recall seeing your name before. Bravo, Crucial Skills, for your pantheon of contributors!
Thank you. 😊