Good Surveys Come With Subtraction, Not Addition: How Questionnaire Length Leads to Breakoffs

Oct 18, 2021 | Survey Methodology, Survey Operations, Survey Research

We often hear the question: so, how long can our questionnaire be? While we love surveys, the reality is that most respondents don't share our enthusiasm, which means long questionnaires can be bad for data quality. Today we are going to look at how long surveys can lead to breakoff and, in turn, increase potential item nonresponse error (more on this below).

What is breakoff?

Breakoff is exactly what it sounds like: it occurs when a respondent begins, but doesn't complete, a survey. This can happen in both interviewer-administered and self-administered surveys. However, without the social pressure (or presence) of an interviewer, breakoffs are more likely to plague self-administered surveys, such as web-based surveys.

Let's take a closer look at the causes and consequences of breakoffs in web-based surveys, and then we will offer some expert tips on ways to reduce breakoffs due to survey length.

Nobody likes a long survey

Breakoff usually happens when a respondent decides they don't want to complete a survey; perhaps they don't like the topic, are concerned about confidentiality (given the topic), don't like the questionnaire layout, or lose interest because the questionnaire is taking too long to complete.1 Breakoff can also be triggered by external technological issues (a power outage) or by technical issues with the survey itself (programming errors or browser incompatibility).

How is breakoff measured?

Breakoff is measured by the completion rate, which is the total number of participants who complete the survey divided by the total number of eligible participants who start it. Completion rates for most of our surveys hover around 90%, which means that of all respondents who start the survey, 90% make it to the end.

With a solid data collection effort including multiple contacts, we like to see completion rates over 90%. Any completion rate under 80% indicates that there may be a problem to explore.
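To make the arithmetic concrete, here is a minimal sketch of that calculation in Python; the counts below are hypothetical, not from an actual study.

```python
# Minimal sketch of the completion-rate calculation described above.
# The counts below are hypothetical, not from a real study.

def completion_rate(completes: int, eligible_starts: int) -> float:
    """Share of eligible participants who start the survey and also finish it."""
    if eligible_starts == 0:
        raise ValueError("No eligible participants started the survey.")
    return completes / eligible_starts

starts = 1_000     # eligible participants who started the survey
completes = 880    # participants who made it to the end

rate = completion_rate(completes, starts)
print(f"Completion rate: {rate:.0%}")  # 88% -- under 90%, worth a closer look
```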

So someone broke up with us, now what?

While we aim to reduce breakoff, it happens in every survey, so when it does, you have a decision to make: do you value "partial" data or not? Most researchers do. Missing data in a survey shows up as item nonresponse error.

Item nonresponse occurs when a respondent does not complete or answer a specific question (i.e., item) in the survey but completes other parts of the survey.

While some analyses may require a complete set of responses, often it is fine to use a response that has some missing data as a result of a breakoff. This is why many researchers load important questions near the start of their questionnaire—the earlier they are, the less likely they are to be missed due to breakoff.

Statisticians can do things to help reduce the impact of missing data, like imputation of missing responses, which is a process of predicting missing values based on the data we do have. This can be complicated, and we recommend discussing these options with people who have the skills to do it.
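As a rough illustration of the simplest form of imputation, here is a sketch using pandas and scikit-learn's SimpleImputer. The data and column names are hypothetical, and real studies typically rely on more careful, model-based approaches chosen with a statistician.

```python
# A rough sketch of mean imputation for item nonresponse, using pandas and
# scikit-learn. The data and column names are hypothetical.
import pandas as pd
from sklearn.impute import SimpleImputer

responses = pd.DataFrame({
    "age":         [34, 51, None, 29, 47],    # None = item nonresponse
    "hours_sleep": [7.0, None, 6.5, 8.0, None],
})

# Replace each missing value with the mean of the observed values for that
# item. More sophisticated approaches (e.g., model-based or multiple
# imputation) condition on other variables as well.
imputer = SimpleImputer(strategy="mean")
responses[["age", "hours_sleep"]] = imputer.fit_transform(
    responses[["age", "hours_sleep"]]
)
print(responses)
```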

Tips to reduce breakoffs

First and foremost, reduce the length of the questionnaire. I know, it isn't easy, but it is the most direct way to reduce breakoffs caused by long questionnaires. Reduce. Shorten. Make your survey more efficient and effective!

Ask yourself: what will I do with the results of this question? If you don’t have a specific need for that data, consider removing it.

While the lowest-hanging fruit is length, here are a few additional suggestions that could improve the quality of your next survey:

1. Try to set the right expectation for length.

We know that people will take very long surveys if they have a realistic expectation of the length. When participants expect a survey to be shorter than it actually is, you see an increase in breakoff.2 

2. Generally keep web-based surveys to under 15 minutes.

While we routinely conduct longer surveys, they often include additional design features, such as incentives, that make the longer engagement palatable to the participant.

3. Improve the design of the survey with the intent to keep the survey burden down.
4. Avoid widgets that may lead to an incorrect expectation of the length and burden.

In a previous blog post concerning web survey progress indicators, we discussed how using progress bars can actually be counterproductive and lead to breakoff. 

5. Reduce repetitive text and long instructions.

Part of the length in surveys is in the explanatory text. If you are tempted to say TL;DR (too long; didn't read), your respondent will be too.

6. Consider a study design that breaks up the survey over multiple data collections instead of all at once.

You may find that your population is willing to take three separate 5-minute questionnaires spread over three weeks instead of one 15-minute questionnaire.

7. Consider auxiliary data.

If you already have data collected about the respondents, consider merging it in so you don't have to repeat questions with static responses, such as "Date of Birth", across multiple waves.

So what do we do about breakoffs?

Here at SoundRocket, during survey development we can test the survey with a small representative sample and monitor total survey completes. We can track at what point in the survey respondents break off and then adjust the survey accordingly. If we track the time spent taking the survey and find it is taking respondents longer than we intended, we can reword the survey instructions or consider removing parts of the survey. We can use survey design platforms that measure how much time respondents spend on the survey, and even how much time is spent on each question. If the survey is longitudinal in nature, tracking breakoff is especially important and gives us an opportunity to update the study for future waves.
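As one hedged example of what that tracking might look like, here is a small Python sketch that locates common breakoff points from a hypothetical paradata export; the column names and data are made up, and the real export format depends on the survey platform you use.

```python
# Minimal sketch of locating where breakoffs occur, assuming a paradata
# export with one row per respondent and the last question they answered.
# Column names and data are hypothetical.
import pandas as pd

paradata = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104, 105],
    "completed":     [True, False, True, False, False],
    "last_question": ["Q40", "Q12", "Q40", "Q12", "Q27"],
})

# Count breakoffs by the last question answered to find trouble spots.
breakoff_points = (
    paradata.loc[~paradata["completed"], "last_question"]
    .value_counts()
)
print(breakoff_points)  # e.g., Q12 appears twice -- inspect that question
```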

We’d love to hear your experiences with questionnaire length. How long are you willing to engage with a questionnaire when you are asked to do so? Drop us a line.

KEY LITERATURE

1 Peytchev, Andy. "Survey Breakoff." The Public Opinion Quarterly, vol. 73, no. 1, Oxford University Press / American Association for Public Opinion Research, 2009, pp. 74–97.

2 Crawford, Scott D., Mick P. Couper, and Mark J. Lamias. "Web Surveys: Perceptions of Burden." Social Science Computer Review, vol. 19, no. 2, 2001, pp. 146–162.

About the Author

Robert A. Schultz

Robert Schultz is currently a graduate student in the University of Michigan's Program in Survey and Data Science. Prior to Michigan, he earned an MA in Economics at Wayne State University. When not busy studying, you can find him trekking across Ann Arbor and Detroit, MI, in search of the best sushi.