Introducing Error Through Use of Web Survey Progress Indicators

by Scott D. Crawford | Aug 24, 2021 | Survey Methodology, Survey Research

In 1998, fascinated with the potential for the web to serve as a data collection platform, I wondered what sources of error might crop up with this new technology. Not surprisingly, we noticed early on that if participants did not finish a web-based survey in one sitting, they were not very likely to return and complete it later. It was clear we needed tools to maintain a participant’s engagement during a web-based survey. This inspired my colleagues and me to conduct a series of experiments to better understand the quality implications of web-based survey design elements. Among those were progress indicators.

Knowing where you are

Progress indicators are design elements that tell the respondent where they are in the survey. They are sometimes framed as a count of pages (e.g., “Page 3 of 10”), often presented as percentages (e.g., “30% complete”), and frequently supported by a graphical element that displays one’s progress.
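As a concrete illustration, here is a minimal sketch of those two textual formats in TypeScript (the function names are hypothetical and not tied to any particular survey platform):

```typescript
// Minimal sketch of the two common textual progress formats.
// Hypothetical helpers; not any particular survey platform's API.

function pageProgress(currentPage: number, totalPages: number): string {
  return `Page ${currentPage} of ${totalPages}`;
}

function percentProgress(currentPage: number, totalPages: number): string {
  const percent = Math.round((currentPage / totalPages) * 100);
  return `${percent}% complete`;
}

console.log(pageProgress(3, 10));    // "Page 3 of 10"
console.log(percentProgress(3, 10)); // "30% complete"
```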

I recall thinking (at the time) that progress indicators were an obvious solution—they would most certainly keep study participants engaged.

This turned out to be one of the most incorrect assumptions I have made since starting my career in survey methodology. 

Too much information

Unexpectedly, our study found that progress indicators gave study participants too much information—and ultimately led to a higher break-off rate1 when they were used. In our first experiments, the presence of a progress indicator was associated with an almost 6% lower survey completion rate than when we did not display progress at all. This effect was even stronger (i.e., the completion rate was even worse) if the participant was told at the outset that the survey would take 20 minutes rather than 8-10 minutes.

The research suggested that progress indicators were shaping participants’ perception of the burden of completing a survey. If the progress indicator suggested that the burden was limited (or at least less than expected), it could help keep participants engaged. However, if the progress indicator suggested that the burden was greater than expected, it could backfire. We believed that participants would mentally estimate the total time based on their experience so far, combined with the progress shown.

Playing with pace and perception

In follow-up studies,2 my colleagues conducted a series of experiments manipulating the pace of the progress indicator itself, regardless of the participant’s actual progress. They designed three versions of a progress indicator: a) one that communicated a linear pace through the questionnaire (a real-time progress indicator), b) one that showed a rapid initial pace that slowed down as the survey progressed (a rapid-to-slow progress indicator), and c) one that showed a slow initial pace that sped up as the survey progressed (a slow-to-rapid progress indicator).
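One simple way to model the three conditions (an illustrative sketch, not necessarily how the original experiments implemented their indicators) is a power function that maps true progress to displayed progress. The exponents below are assumptions chosen so that, on a roughly 40-question instrument, the three indicators show about 25%, 65%, and just over 5% complete after 10 questions, matching the figures described next.

```typescript
// Illustrative model of the three pacing conditions as power functions of
// true progress. The exponents are assumptions, not values from the study.

type Condition = "linear" | "rapidToSlow" | "slowToRapid";

const exponents: Record<Condition, number> = {
  linear: 1.0,       // displayed progress tracks true progress
  rapidToSlow: 0.31, // races ahead early, then crawls
  slowToRapid: 2.1,  // crawls early, then races ahead
};

function displayedProgress(
  questionsDone: number,
  totalQuestions: number,
  condition: Condition
): number {
  const trueProgress = questionsDone / totalQuestions;
  return Math.pow(trueProgress, exponents[condition]);
}

// After 10 of 40 questions (true progress = 25%):
for (const c of ["linear", "rapidToSlow", "slowToRapid"] as Condition[]) {
  console.log(c, `${Math.round(displayedProgress(10, 40, c) * 100)}%`);
}
// linear 25%, rapidToSlow 65%, slowToRapid 5%
```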

This experiment was designed to test our belief that participants would use the information provided by the progress indicator to estimate the overall burden of the survey.

After 10 questions, the linear progress indicator suggested that the participant was about 25% complete; the rapid-to-slow progress indicator suggested about 65% complete; and the slow-to-rapid progress indicator suggested just over 5% complete. If the participant did the mental arithmetic (estimating total time as elapsed time divided by the proportion shown as complete) and it had taken two minutes to reach the 10-question mark, the participant would come up with very different estimates of the remaining time to complete the survey: six minutes, just over one minute, or nearly 40 minutes, respectively.
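A minimal sketch of that back-of-the-envelope arithmetic (assuming, as we did, that participants scale elapsed time by the displayed proportion complete):

```typescript
// Estimate remaining time the way we assumed participants do:
// estimated total = elapsed / displayed progress; remaining = total - elapsed.
function estimatedRemainingMinutes(
  elapsedMinutes: number,
  displayedProgress: number
): number {
  return elapsedMinutes / displayedProgress - elapsedMinutes;
}

// Two minutes elapsed at the 10-question mark:
console.log(estimatedRemainingMinutes(2, 0.25).toFixed(1)); // "6.0"  (linear)
console.log(estimatedRemainingMinutes(2, 0.65).toFixed(1)); // "1.1"  (rapid-to-slow)
console.log(estimatedRemainingMinutes(2, 0.05).toFixed(1)); // "38.0" (slow-to-rapid)
```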

The results supported their hypothesis: about 14% of participants in the linear group broke off before finishing, compared to nearly 22% in the slow-to-rapid group and just over 11% in the rapid-to-slow group. Interestingly, the control group with no progress indicator broke off at a rate of just under 13%. So even the linear group performed worse than no progress indicator.

A neutral role 

These results have been supported by numerous additional studies3,4 conducted by a range of researchers since these seminal experiments. While the evidence does not always point towards progress indicators increasing survey break-off, there is no evidence to suggest that progress indicators improve study engagement either. At best, progress indicators play a neutral role in some situations. These later studies also found that longer questionnaires and the early placement of complex or difficult questions may increase the negative effect of a progress indicator.

For these reasons, I do not recommend the use of detailed progress indicators.

What we do in practice

We have found, however, that providing some level of context for a web survey can at least give the study participant a feeling of knowing where they are, without triggering the burden-estimation math and the resulting break-off. We do this through the use of sectional tabs within a web survey. Sectional tabs are visual labels (i.e., survey topics) that display at the top of the screen, allowing participants to see which topics (and how many) are yet to come. We have experimented extensively with these to understand their impact on data quality, and while we do not find any evidence that they help, they do not appear to hurt. We do have anecdotal and qualitative evidence suggesting that study participants like them. Our hope is that, in using them, we are positively impacting some unseen measure of quality.
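For illustration, here is a minimal sketch of how sectional tabs might be rendered: the topics and the current position are visible, but there is no percentage or page count to feed the burden arithmetic. The topic names and rendering are hypothetical, not our production implementation.

```typescript
// Minimal sketch of sectional tabs: show the survey's topics and which one is
// active, with no percentage or page count. Names are illustrative only.
function renderSectionTabs(sections: string[], activeIndex: number): string {
  return sections
    .map((label, i) => (i === activeIndex ? `[ ${label} ]` : label))
    .join("  |  ");
}

const topics = ["Background", "Health", "Employment", "Housing", "Wrap-up"];
console.log(renderSectionTabs(topics, 1));
// Background  |  [ Health ]  |  Employment  |  Housing  |  Wrap-up
```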

Beware: web survey product features may contaminate

Despite the lack of evidence to support it, survey software vendors continue to advertise and offer the progress indicator as a platform “feature.” To someone who cares about data quality, this feels much like a petri dish that comes with preloaded streaks of bacteria on the edge of the agar—sold as a “feature” to encourage your bacteria to grow.

The research has been around now for over 20 years and remains consistent—maybe it’s time for the feature to be retired. It certainly is for any high-quality survey organization.


KEY LITERATURE

1Crawford, Scott D., Mick P. Couper, and Mark J. Lamias. “Web surveys: Perceptions of burden.” Social Science Computer Review 19, no. 2 (2001): 146-162.

2Conrad, Frederick G., Mick P. Couper, Roger Tourangeau, and Andy Peytchev. “The impact of progress indicators on task completion.” Interacting with Computers 22, no. 5 (2010): 417-427.

3Villar, Ana, Mario Callegaro, and Yongwei Yang. “Where am I? A meta-analysis of experiments on the effects of progress indicators for web surveys.” Social Science Computer Review 31, no. 6 (2013): 744-762.

4Liu, Mingnan, and Laura Wronski. “Examining completion rates in web surveys via over 25,000 real-world surveys.” Social Science Computer Review 36, no. 1 (2018): 116-124.

About the Author

Scott D. Crawford

Scott D. Crawford is the Founder and Chief Vision Officer at SoundRocket. He is also often found practicing being a husband, father, entrepreneur, forever-learner, survey methodologist, science writer & advocate, and podcast lover. While he doesn’t believe in reincarnation, he’s certain he was a Great Dane (of the canine type) in a previous life.