Motivating Learners to Respond to Learner Surveys
This Post is from a chapter of Will Thalheimer’s Forthcoming Book, Performance-Focused Learner Surveys: A Radical Rethinking of a Dangerous Art Form (2nd Edition). It is made available now only to members of the Learning Development Accelerator.
This chapter provides the first research-based recommendation
for response rates for the learning-and-development field.
Please Do Not Share. Thank you!
NOTE: This is a rough draft, so please excuse misspellings, grammar issues, etc.
Chapter 12 – Motivating Learners to Respond
If you create the best learner survey ever, but nobody answers your questions, you've got nothing! That's obvious. But what if you get 20% or 50% or 80% of people responding? Are these percentages good enough? That's an extremely difficult question, so it's important to examine the issue in more depth.
Ideally, we’d get everybody to respond. And let’s not forget, we want them not just to respond but to respond to every question with full attention and candor. In some sense, we don’t want to maximize the number of people who answer our questions. We want to maximize the number of people who answer our questions with genuine thoughtfulness.
In the research world, survey experts are worried about “nonresponse bias,” the situation where people who don’t respond to a survey are different in some way from those who do respond. Let me illustrate the problem with a trivial example. Imagine surveying people about their hair color. The people who respond generate the following results:
· 66% — Dark Black/Brown
· 18% — Brown
· 7% — Blonde
· 5% — Red
· 3% — Gray
· 1% — Bald
If 99% of the whole population had responded, we could be very confident in our overall results. Even if all the non-respondents had been bald, it wouldn't change our results very much, except for the bald category, which we would have undercounted by about half. But what if the numbers above were from only 50% of everyone we surveyed? What if the other 50%—the non-respondents—had all been bald? Our survey results would have been way off. Where our survey found 1% of people with bald heads, the actual number would be about 50%. Where our survey found Dark Black/Brown hair at 66%, the actual number would be near 33%.
But what if we got a 50% response rate and the non-respondents had the exact same percentages as those who did respond? All would be good because our conclusions from our survey results would not change with the addition of the non-responding 50%.
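To make the arithmetic concrete, here is a minimal sketch of the blending behind these two scenarios. The hair-color percentages are the hypothetical survey results above, and the 50% response rate and the all-bald non-respondent group are illustrative assumptions, not real data.

```python
# A minimal sketch of the blending arithmetic behind nonresponse bias.
# The hair-color percentages are the hypothetical survey results above;
# the 50% response rate and "all non-respondents are bald" scenario
# are illustrative assumptions, not real data.

respondents = {
    "Dark Black/Brown": 0.66, "Brown": 0.18, "Blonde": 0.07,
    "Red": 0.05, "Gray": 0.03, "Bald": 0.01,
}

def blended_estimate(resp_dist, nonresp_dist, response_rate):
    """Weight each group's distribution by its share of the population."""
    categories = set(resp_dist) | set(nonresp_dist)
    return {
        c: response_rate * resp_dist.get(c, 0.0)
           + (1 - response_rate) * nonresp_dist.get(c, 0.0)
        for c in categories
    }

# Scenario 1: 50% response rate, every non-respondent is bald.
print(blended_estimate(respondents, {"Bald": 1.0}, 0.5))
# Bald rises from 1% to about 50%; Dark Black/Brown falls from 66% to 33%.

# Scenario 2: 50% response rate, non-respondents match respondents exactly.
print(blended_estimate(respondents, respondents, 0.5))
# Identical to the survey results: the low response rate did no harm.
```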
Note what I did there. I used the extremes to illustrate a very critical point. If our non-respondents are like our respondents, the response rate doesn’t matter. But if our non-respondents are different from our respondents, we are likely to have a problem.
So back to our example. What if we surveyed people about hair color but only people under 45 years of age answered our survey? Obviously, we would undercount gray hair and bald heads. Look back through the results above to get a sense of how far off we might have been.
Are low response rates always a problem? No! They are not! Indeed, in political polling, response rates on telephone surveys in the United States have dropped from 35% in 1997 to 6% or less today[1]—and yet polling has become more and more accurate over time.[2] But let’s be careful here! Political pollsters have the resources and expertise to make statistical adjustments that help correct for survey biases. We do not have the same capability, nor is it likely we will ever have access to resources that would enable us to do the same sort of analyses. More importantly, there is no reason to think we need to invest in advanced statistical techniques to get the data we need. More about this later.
A 6% response rate is very low. At the higher end, let's use the U.S. Bureau of Labor Statistics as an example. Most of its surveys achieve response rates between 50% and 90%.[3] Here are some examples. The American Time Use Survey (ATUS) averaged response rates of about 50% from 2010 to 2012, but now it averages closer to 40%. The Current Employment Statistics Survey (CES) has averaged response rates of about 60% since 2010. The Consumer Price Index Housing survey (CPI-Housing) usually falls between 60% and 70%.
But Dr. Thalheimer, none of this is even remotely related to learning and development! Yes! You are right, dear reader! What are our numbers? The truth is that we just don't have good enough data to say. I've talked with training-evaluation vendors who claim to get at least 70% on average, and I've talked with hundreds of practitioners who complain about getting 20% or less. The best current research on learner-survey response rates comes from university course evaluations—where a lot of research is done—but even this research is unlikely to be directly relevant for us, and it is not comprehensive either. One recent review of university student evaluations found response rates ranging from 17% to 83% across both online and paper surveys.[4]
Before I get to my recommendations, we should first look more skeptically at the importance of response rates. There is evidence from survey researchers that response rates are a poor predictor of nonresponse bias. Indeed, in their review of the literature, Hendra and Hill (2019) found "little relationship between response rate and nonresponse bias," and they concluded that there was no optimal response rate to target—that is, no minimum threshold response rate. But again, we should be careful in interpreting these results for our purposes. The surveys they reviewed were focused on employee retention and advancement—not learning and development—and they focused on surveys that aimed for an 80% response rate. Still, their conclusions should open our minds to the possibility that a low response rate may not be the end of the world for us.
Let's also consider other complications regarding response rates. How do we count a completed survey? Is a survey complete only if a person answers every question, or every required question? What if people answer 75% of the questions, or 50%, or 25%? Response rates can also differ between online surveys and paper surveys. The U.S. Census—done every ten years to count the number of people in the United States—reports a response rate for mailed census forms (about 60-70%) and then an overall response rate of 99.9% after people have been tracked down through phone calls and door-to-door canvassing.[5] Finally, response rates of surveys in general seem to have fallen over the last few decades, as people become less willing to engage with surveys and as they disperse their attention over multiple communication channels, making it harder for survey providers to reach them with survey requests.
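Because the "what counts as complete?" decision can move the reported number considerably, here is a rough sketch of how different completion thresholds change the response rate you report. The answered-question counts are made-up illustration data.

```python
# A rough sketch of how the "what counts as complete?" decision changes
# the reported response rate. The answered-question counts are made-up data.

total_questions = 20
answered = [20, 20, 18, 15, 10, 9, 5, 3, 0, 0]  # questions answered per invitee
invited = len(answered)

for threshold in (1.00, 0.75, 0.50, 0.25):
    completed = sum(1 for n in answered if n >= threshold * total_questions)
    print(f"Counting >= {threshold:.0%} of questions as complete: "
          f"{completed / invited:.0%} response rate")
```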
How to Think about Response Rates
Please Will Thalheimer! Tell us what all this means for learner surveys! Dear reader, let me say that my head is exploding too! There is simply not—at this time—definitive research about response rates for learner surveys. Still, there is enough wisdom from research and practice to make recommendations.
First, there is no magic threshold for response rates. This point has been emphasized by researchers focused on survey methodology in general,[6] and by researchers focused on university teaching evaluations.[7] The latter group generally recommends that more be done to increase response rates, yet very few have offered a clear minimum response rate. Some have made recommendations, but their recommendations are inconsistent, varying from about 45% to 80%[8]—and these thresholds, designed for university teaching, may not relate directly to our work in learning and development.
Second, we ought to do what we can to improve our response rates. Even if our data is subject to zero nonresponse bias, it doesn’t look great if we only get 10% of people completing our surveys. Low response rates are a failure on our part! Below I offer a whole host of things we can do to increase our response rates.
Third, we should look at our results even when we get a low response rate. Almost always we can find some nugget of interest—something to investigate further. Written responses are especially valuable when response rates are low. Where results from a forced-choice question might be skewed due to the bias of our sample, several comments highlighting a problem or a success factor are usually illuminating when taken as a whole. Also, there is some evidence from university teaching that lower-rated courses get lower response rates.[9] So it could be that a low response rate is indicative—in and of itself—of a problem in the learning design or facilitation. We can't assume this causal directionality, but it may be useful to investigate low-response courses for signs of problems.
Fourth, we should be open to using online surveys. While it's true that online surveys have historically gotten lower response rates than paper surveys, online surveys have several advantages, and we can increase their response rates even if they start out low. In a recent university study, online survey response rates averaged about 64%.[10] Online surveys prompt learners to provide longer written responses.[11] Learners, at least university students, often prefer online surveys and feel they offer more confidentiality.[12]
There are also logistical advantages to online surveys, especially in data collection. On the other hand, when we have our learners in a training room and we tell them to fill out the evaluations, it’s very unlikely that they will avoid responding. They are a captive audience. They may not pay full attention. They may not respond with sincere thoughtfulness, but they are responding. This response-rate advantage fades somewhat when we teach online, but there are ways to increase rates of responding.
Paper surveys' response-rate advantage is not because people prefer pencils to keyboards, but because paper surveys are delivered to a captive audience. Some trainers have figured this out and ask their learners to complete their online surveys while still in the classroom or online meeting. My guess is that this negates the advantage of getting longer, more thoughtful comments—and I wouldn't recommend it unless learners have computers with keyboards and are given a relatively long time to complete their surveys.
Fifth, we must consider the purposes our survey data will be used for. If we are using our learner surveys for improvement—to improve our training program or improve an instructor’s performance—then a robust response rate is important but not critical. On the other hand, if we are evaluating an instructor or a learning architect who designed the training—and we use the results to impact their employability—then a much higher response rate is warranted, and perhaps legally required. It is simply unacceptable to use invalid or unreliable data for job reviews, employment status, promotions, pay grades, or career trajectories. For any of these personnel decisions a much higher response rate is required.
I know what you’re thinking. You’re thinking, Damn you Thalheimer! Just give us a number! Please dear goodness! Make a recommendation! But reader, I am worried! The research is not definitive and so I’m not fully confident! Also, pushing up response rates may lead to funkier data as people comply but do so with less attention and less forthrightness. I honestly don’t know what the right balance is. Also, I am unqualified to render legal advice. Dr. Thalheimer, we don’t care. We want a number! I know. I know, but should I give you a number or will that cause more damage than help? Sigh!
Okay, I'm going to conjecture some recommendations. I'm basing them on faint whiffs of intuition from reading many studies reporting response rates for learner surveys, from years collecting my own data and working with clients who collect data from their learners, from the work of expert survey masters like those at the U.S. Bureau of Labor Statistics, from Hendra and Hill's (2019) article on nonresponse bias, and from Nulty's (2008) thoughtful if imperfect conjectures.[13] I am providing these recommendations not because I think they are indisputable but because I think they will motivate us to raise our expectations. I guess, ultimately, I feel that leaving you without recommendations is a worse sin than leaving you with feeble recommendations. Treat them as tentative until more research is available. Have I been clear and honest enough that these are based on my weak and flimsy intuitions?
Let me add some more clarifications. The recommendations above—flimsy as they are—assume we are attempting to survey 100% of our learners, not a subset. If we sampled a subset from the full population of our learners, we would introduce more noise into our data—and we would then likely need even higher response rates. Fortunately, most of us in learning and development are interested in surveying all our learners.
Let me be clear about something else. If learners drop out, we should survey them too! If we can't survey them, we should still count them as part of our learner population and calculate our response rate with those dropouts included. Here's an example. Assume that 100 learners start our MOOC workshop and 20 drop out along the way, disappear, and can't be contacted. We survey the 80 remaining learners; 50 of them complete our survey and 30 don't. What's our response rate? It's 50%. We lost 20% to dropout and 30% to survey nonresponse.
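Here is a quick sketch of that arithmetic, using the numbers from the example above. The key point the code illustrates is that dropouts stay in the denominator.

```python
# A quick check of the arithmetic above: dropouts stay in the denominator.
started = 100          # learners who began the MOOC workshop
dropped_out = 20       # left along the way and could not be contacted
completed_survey = 50  # of the 80 learners we could reach

print(f"Response rate: {completed_survey / started:.0%}")       # 50%
print(f"Lost to dropout: {dropped_out / started:.0%}")          # 20%
nonrespondents = started - dropped_out - completed_survey
print(f"Lost to nonresponse: {nonrespondents / started:.0%}")   # 30%
```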
Worse yet, our dropouts are probably different in some way from our non-dropouts—adding bias to our data. Ideally, we’d figure out a way to survey them. If we can’t, we should expect that our survey data is suffering from nonresponse bias. We can’t be sure of the direction of that bias. A likely hypothesis is that the dropouts would have rated the workshop more harshly. A competing hypothesis—one we’d like to believe—is that dropouts were perfectly happy and got what they needed from the learning.[14] Here’s the point: When we can’t survey our learners, we can’t know for sure what they thought—so we ought to do what we can to increase our survey response rate.
There is also evidence that learners may be most likely to opt out of responding when the learning experience is seen as average or within the normal range of expectations. In other words, they may be most likely to respond if they've had a great experience or a bad one. This has been found in at least one research study, which asked learners why they didn't respond.[15] It is also a common refrain from learning and development professionals; I've heard it from hundreds of them over the years, though I can't verify whether it's actually true. Why might we believe that it's mostly the happy and unhappy learners who respond? Probably because we remember the most salient comments from our open-ended questions—comments that praise and comments that criticize. Here's the difficulty. Even if this is true, it's hard to know whether it's a problem. We can make the case that it's most important to hear from the extremes, but we can also make the case that it's the folks who felt the learning met expectations that we most need to hear from. My conclusion is that we should do what we can to raise our response rates—so we can worry less about these issues.
Let me add one other complication. All this response-rate complexity assumes that learners have the flexibility to decide for themselves whether to complete the learner surveys they are asked to take. Some organizations get almost full compliance by requiring learners to complete their learner surveys. Of course, completion and full survey engagement may be two different things. It's possible that forcing people to complete a learner survey creates other problems. Forcing compliance has two major risks. First, learners may act with haste, inattentiveness, or outright sabotage—answering questions willy-nilly, creating aberrant data.
Second, learners may come to see their learner-survey efforts as compelled rather than freely given. We may thus be training them to see learner surveys not as a constructive practice that supports improved learning for everyone, but as useless drudgery to be tolerated or avoided. On the other hand, perhaps if we require our learners to complete learner surveys, they will be more likely to identify themselves as people who virtuously give feedback to help maintain and improve their organizations' learning interventions. While I know of no direct research on this relating to learner surveys, my guess, based on Ryan and Deci's seminal research on motivation,[16] is that enticing people rather than requiring compliance will produce better results. Learner surveys that are freely engaged will lead more people to see them as valuable and worthy of their attention—and thus create long-term benefits.
How to Get Better Responses and Higher Response Rates
· Use questions that are well designed—ones that learners can answer with confidence, that ask about important information, and that provide clear and meaningful answer choices (not Likert-like or numeric scales). Also avoid obviously biased questions, as these may alienate your learners.
· Where possible, have a trusted person make a personal request to the learners to complete the survey (ideally an instructor whom the learners have come to trust and respect). Just below this list is a separate list of useful talking points.
· Write a brief introduction to the survey using a conversational tone. Avoid formal, impersonal language.
· Early in the training (and more than once), mention that the learner survey is coming—so that learners know it is an integral part of the learning, not just an add-on.
· Tell learners that the data will be confidential and that it is aggregated so that an individual's responses are never shared.
· Tell learners how feedback from previous learners has improved the current learning program—or previous learning programs if this is the first run of the program.
· Tell learners how and when you'll let them know the results of their feedback.
· After the data is compiled and analyzed, share the results of everyone's feedback with learners, telling them the themes in the data and what improvements you and the learning team are considering.
· Don't put demographic questions at the beginning of a survey. Doing so has two negative impacts. First, it can scare people away from answering the survey. Second, it may prompt people to be less forthright as they answer.
· Ensure that survey distribution and collection are independent and free from tampering or review by interested parties like instructors. Learners may be hesitant to be forthright if they think their responses can be linked back to them.
· Monitor instructor response rates, perhaps even acknowledging or rewarding instructors who hit the response-rate goal, and/or punishing or sanctioning those who fall below it.
· Design online course flows that include the learner survey as a regular learning task, rather than as an add-on. Don't make the survey the very last item on the learning agenda.
· Ask questions that encourage constructive criticism. Ask for both praise and criticism directly. Provide both positive and negative answer choices. Avoid question stems that assume positive answers.
· Let learners make some decisions about which questions to answer. You can do this simply by making some questions optional or by having learners choose which of several questions to answer.
· Keep the learner survey relatively brief and aligned with the length or importance of the learning. For example, for a five-day leadership-development intensive, a 20-minute survey may be acceptable, whereas for a one-hour awareness training, 5 minutes is probably the maximum.
· If you've got a long learning program, consider breaking your learner survey into shorter interim surveys.
Options for Making a Personal Appeal to Learners
Some learning programs have an instructor, facilitator, or coach. Others are self-study programs—for example, some elearning programs are unfacilitated, and some learning programs simply provide a reading list, a video list, and a recommended set of exercises, reflections, or tasks to complete. When there is an instructor, facilitator, coach, or manager, a personal appeal to learners can be very helpful. The list below offers talking points that may be useful.
· As the survey is being introduced and delivered, make a modestly impassioned request to the learners to complete the survey.
· Tell learners what you and the organization will do with the data—for example, how the data will be used to improve the weakest parts and maintain the strongest parts of the learning.
· Tell learners how personally valuable the feedback is to you, your future performance, and your continual improvement, as well as to others on the learning team.
· Acknowledge the effort that they—your learners—will be making, maybe even commiserating with them: you know how hard it can be to give full attention at the end of the day or when they are back at work.
· Put the time devoted to the survey in perspective, for example, "We spent 7 hours today in learning, that's 420 minutes, and now we're asking you for 10 more minutes."
· Tell learners that their feedback will be part of a whole cycle of feedback and improvement that began as the course was being developed.
· Remind learners that providing feedback to create improvement is part of the organizational culture—"it's what we do here; it's who we are."
Note that while each of these talking points may be potent, it’s probably not smart to go on too long. Don’t use all of these talking points. Instead, prioritize and select messaging that will be most persuasive with your audience.
Special Instructions for Online Surveys
· Ensure that the survey technology is stable, easy to access, and easy to use.
· Send your survey request immediately after the learning event or during it. Don't wait more than an hour after the learning to send the survey.
· Write a short survey introduction that motivates learners to fully engage with the survey. Include the value of their feedback, a promise of confidentiality, and an appreciation of their efforts. Keep it short!
· To maximize the benefits of comment questions, encourage the use of computers with keyboards rather than phones or tablets. This assumes that phones and tablets will discourage comments—an assumption worth testing if you think your audience is especially comfortable writing comments on their devices.
· Let your learners know the percentage of people like them who typically complete the survey (caveat: only if it's relatively high).
· On the survey itself, use a progress bar or similar mechanism to discourage learners from dropping out partway through.
· Where necessary, give learners a preview of the survey technology, making them comfortable with access, navigation, and answering.
· Send personalized follow-up reminders. If possible, send these only to folks who have not yet completed the survey (see the sketch after this list). You may need two or more reminders.
· When sending reminders, give your learners a sense of how many of their fellow learners have already completed the survey. "Already, 46% of your fellow learners have completed the survey, with some intriguing tentative results. Please add your perspective so we have a full picture." In addition, consider sharing the job titles of some of the people who have already completed the survey. Or, if you can, share the percentage of people from each learner's unit who have already responded, or share a comparison across units.
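Here is a minimal sketch of the reminder logic suggested above: filter out anyone who has already completed the survey and quote the current completion rate in a personalized message. The learner records and the send_email helper are hypothetical stand-ins for whatever survey platform and mail system you actually use.

```python
# A minimal sketch of reminding only non-completers and quoting the current
# completion rate. The learner records and send_email helper are hypothetical
# placeholders, not a real survey-platform API.

learners = [
    {"name": "Ana",  "email": "ana@example.com",  "completed": True},
    {"name": "Ben",  "email": "ben@example.com",  "completed": False},
    {"name": "Chen", "email": "chen@example.com", "completed": False},
]

def send_email(address: str, message: str) -> None:
    print(f"To {address}: {message}")  # placeholder for a real mail call

completion_rate = sum(l["completed"] for l in learners) / len(learners)

for learner in learners:
    if not learner["completed"]:
        send_email(
            learner["email"],
            f"Hi {learner['name']}, {completion_rate:.0%} of your fellow "
            "learners have already completed the survey. Please add your "
            "perspective so we have a full picture.",
        )
```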
Special Instructions for Follow-up Learner Surveys
Sometimes we'll want to survey our learners well after a learning event, for example two to four weeks later. Follow-up learner surveys are perfectly positioned to find out more about how the learning is relevant to the actual work or to our learners' post-learning application efforts. Unfortunately, prompting action—that is, getting learners to engage with our follow-up learner surveys—can be particularly difficult when we ask for this favor well after the learning has ended. Still, there are some things we can do—in addition to the list above—that can make a difference.
· During the learning, tell the learners that a follow-up survey is coming to gauge how useful the learning is to their work, that different questions will be asked than those on the regular learner survey, and that their feedback is especially valuable because it's about the ultimate goal of the learning—whether the learning helps them back on the job.
· Send out the follow-up surveys two to four weeks after the learning ends and expect that people will respond within a week or so. This strikes a nice balance between too little and too much time. We want enough time to have elapsed that learners have had a chance to start using what they've learned. On the other hand, we want the delay to be short enough that learners will still feel motivated to fulfill obligations to the learning event or instructor.
· Tell learners what you learned from the end-of-learning learner survey they previously completed.
· Where applicable, ask instructors—or others who bonded with the learners—to send the request (instead of an unknown person from the learning unit).
· Send multiple requests, preferably using a mechanism that only sends these requests to those who still need to complete the survey.
· Select an official end date for the learning experience that falls AFTER the follow-up learner survey is completed, even if that is largely just a perception. Note that multiple-event learning experiences lend themselves to this approach, whereas single-event learning experiences do not.
What about INCENTIVES?
When I ask audiences for their ideas for improving responses and increasing response rates, they often mention some sort of incentive, usually based on some sort of lottery or raffle: "If you complete the survey, your name will be entered for a chance to win the latest tech gadget, a book, time off, lunch with an executive, etc."
I'm a skeptic. I'm open to being wrong, but I'm still skeptical about the cost/benefit calculation. Certainly, for some audiences an incentive will increase rates of completion. And for some audiences, that benefit may be worth the harms that come with incentives.
What harms, you might ask? When we provide an external incentive, we might be sending a message to learners that we know the task has no redeeming value or is tedious or difficult. People who see their own motivation as caused by external incentives are potentially less likely to seriously engage with our questions, producing bad data. We're also not just influencing the current learner survey. When we incentivize people today, they may be less willing next time to engage in answering our questions. They may also be pushed into believing that learner surveys are difficult, worthless, or worse.
Ideally, we’d like our learners to want to provide us with data, to see answering our questions as a worthy and helpful exercise, one that is valuable to them, to us, and to our organization. Incentives push against this vision.
Summary: How to Motivate Learners to Respond
Our main goal in surveying learners is to get data we can use to evaluate and improve our learning interventions. If the data is biased, our decision making will be flawed. If our response rates are low, we are at a greater risk of surveying an unrepresentative sample of our learners. For this reason, we should endeavor to increase our response rates using better questions, positive appeals to our learners, and behavioral prompts that promote survey engagement. At the same time, we should avoid punitive measures, forced compliance, extrinsic incentives, and other mechanisms that may skew results or color our learner surveys as unimportant drudge work.
In this chapter, I've provided over thirty concrete steps you and your organization can take to improve your learner-survey response rates. Of course, as always, you must use your own wisdom about which steps will work best in your organization, with different types of learning interventions, and with different learners. Don't follow a recipe! Instead, take a learning approach. First, use what's written here and take your best guess as to which methods will increase your response rates. Then do a series of A-B tests to determine what works best for you. Be careful about short- and long-term effects. You don't want to increase response rates for your current course at the expense of the next five courses. Take a long-term approach, experimenting, tweaking, improving.
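If you do run A-B tests, here is a rough sketch of one way to compare response rates between two survey approaches, using a hand-rolled two-proportion z-test. The counts and the "standard email versus instructor appeal" framing are hypothetical, and small differences should be interpreted cautiously.

```python
# A rough sketch of comparing response rates from an A-B test of two survey
# approaches, using a hand-rolled two-proportion z-test. Counts are hypothetical.
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    between two response-rate proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Version A: standard email request. Version B: instructor's personal appeal.
z, p = two_proportion_ztest(success_a=42, n_a=120, success_b=63, n_b=118)
print(f"A: {42/120:.0%}  B: {63/118:.0%}  z={z:.2f}  p={p:.3f}")
```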
Feedback is at the heart of every improvement effort. We should take pride in our learner surveys and work to imbue learners with that same sense of pride and accomplishment.
===========================================
When will the book be published?
Not sure, probably before July 2021.
Thanks for your patience!
[1] Kennedy and Hartig (2019).
[2] Deane, Kennedy, & Keeter (2019).
[3] See BLS response rates webpage: https://www.bls.gov/osmr/response-rates/
[4] Ahmad (2018).
[5] Available as of November 2020: https://www.2020census.gov/en/response-rates/nrfu.html
[6] Hendra & Hill (2019).
[7] Berk (2012).
[8] Luo (2020); Nulty (2008).
[9] Luo (2020).
[10] Luo (2020).
[11] Ahmad (2018).
[12] Khorsandi, Kobra, Ghobadzadeh, Kalantari, & Seifei (2012).
[13] Nulty’s conjectures in his 2008 article are thoughtful and intriguing, but I don’t buy into them fully because (1) they are based on statistical musings the author admits are not warranted and (2) they represent only a subset of the full range of considerations that impact the integrity, validity, and reliability of the data we are collecting.
[14] Of course, if we remember that learners are not always good judges of learning, we might worry that their assessments of the value of learning might be inaccurate. For example, given that learners are often overconfident in their ability to remember, they tend to avoid the repetitions and practice they need to reinforce memory. Dropouts may think they’ve gotten what they need but leave without strong enough memories for what they learned.
[15] Guder & Malliaras (2013). Note, of course, that learners' reasoning could be post hoc and inaccurate. They could say they didn't respond because the learning was not exceptionally bad or good, but that might not be the primary causal impetus for their nonresponse.
[16] Ryan & Deci (2017, 2019).