Satisficing: Why Survey Participants Give Quick, Inaccurate Answers

by | Dec 2, 2021 | Survey Methodology, Survey Research

In a previous post we discussed how to minimize survey breakoff. Surveys are a great way to collect data, but when designing one it's important to consider how much is too much. Today we'll discuss satisficing, how to keep respondents engaged, and how the way we ask personal questions can affect data reliability.

I can’t get no… satisficing

In survey science, satisficing is when respondents complete just enough of the survey to get by. In other words, they do the bare minimum to satisfy the demands of the question, usually in an effort to race through to the end and collect their prize. As you might imagine, this isn’t good for data quality.

Respondent satisficing has been widely studied, and while there are several theories about it (some of which we will explore), it generally results from respondents answering questions from quick recall rather than investing the time to fully comprehend the question and provide a quality response.1

How do you know when someone is satisficing?

There are multiple ways a respondent may satisfice; the following examples are not exhaustive. The respondent may simply skip questions, creating item non-response. Respondents may also select the same answer for every question without considering each one independently, including choosing “I don’t know” or “N/A” throughout. This is called straightlining, and the visual nature of grid-style questions makes them especially susceptible to this pattern-driven responding.
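As an illustration (not from the original post), straightlining in a grid question can be flagged programmatically. Here is a minimal Python sketch, assuming each respondent's grid answers are stored as a simple list of selected options; the function name and data layout are hypothetical:

```python
def is_straightlining(grid_answers):
    """Flag a respondent who chose the same option for every item
    in a grid-style question (a common satisficing signature)."""
    # A grid with fewer than two items cannot exhibit the pattern.
    if len(grid_answers) < 2:
        return False
    # One unique answer across all items means the row was straightlined.
    return len(set(grid_answers)) == 1

# Hypothetical respondents answering a 5-item agreement grid
print(is_straightlining(["Agree", "Agree", "Agree", "Agree", "Agree"]))      # True
print(is_straightlining(["Agree", "Neutral", "Disagree", "Agree", "Agree"])) # False
```

In practice you would pair a check like this with other evidence (completion time, item non-response) before discarding a case, since a respondent can legitimately hold the same view on every item.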

Here at SoundRocket, we thoroughly test questions and question layout to explore and minimize potential burden, and use strategies to reduce the risk of straightlining.

One way is to be mindful of survey layout: it is best to avoid placing multiple matrix-style questions back to back on the same page. Page breaks are another simple method; they require the respondent to make a small effort to continue and may reset their focus as they move to a new page.

Other methods in longer surveys include placing screens in a web survey that remind the respondent how important and valued their responses are. 

Another type of satisficing, one that is a little harder to notice, is speeding: racing through the survey so fast that the respondent is unlikely to have taken the time to read each question and its response options, consider their answer, and respond.
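Speeding lends itself to a simple screening rule: compare each respondent's completion time against a minimum plausible duration. A hedged Python sketch, where the function name, threshold, and timing data are all hypothetical:

```python
def flag_speeders(durations_sec, min_plausible_sec):
    """Return indices of respondents who finished faster than a
    minimum plausible completion time (a basic speeding check)."""
    return [i for i, t in enumerate(durations_sec) if t < min_plausible_sec]

# Hypothetical completion times (seconds) for a survey expected to take ~10 minutes;
# anyone under 3 minutes is flagged for review.
times = [612, 95, 540, 610, 70]
print(flag_speeders(times, min_plausible_sec=180))  # [1, 4]
```

A flat cutoff is crude; many practitioners instead set per-page or per-item thresholds, since a fast reader on a short page is not necessarily satisficing.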

Respondents use all of these shortcuts to get to the next question or stage of the survey, and they can introduce error into the data, because the true value of what the respondent thinks or does differs from the response they share with the researchers.

Let’s get personal.

While satisficing is often employed simply to complete the survey, other times it's done intentionally, as a way for the respondent to avoid answering certain questions; perhaps they feel uncomfortable discussing some topics in a survey.

In some instances, the respondent may think about the question and then choose to not share a true value. This is different from the respondent scanning a question, seeing a word that they recognize, and then picking an answer that feels like it fits well enough. 

Keep it short.

If the respondent knows they must complete the survey in its entirety to receive their incentive, they may satisfice just to get through it. This is different from breakoff, where the respondent simply stops taking the survey.

Satisficing often occurs when the respondent is bored or burdened by the survey's length or requirements. They may begin rushing through the survey, selecting whatever comes to mind first or choosing answers at random, which leads to poor data quality. In some instances the respondent may notice that certain answers trigger follow-up questions and begin selecting answers that avoid the extra work.

Have you considered mode?

While we’ve mostly explored online web surveys, satisficing can happen in any survey, including those conducted through in-person interviews or over the phone. 

This is where a skilled interviewer, one who can pick up on respondent cues that may signal boredom or waning attention, is paramount. Research has shown that different interviewers can induce interviewer effects on the data, even impacting things such as response rate. Changing the mode of a survey may yield data closer to the true value.2

Expert tips on how to keep people engaged:

  1. Be aware of survey length – Keep the survey short to reduce the overall burden on the respondent.
  2. Create realistic expectations and incentives – Tell the respondent in the invitation email how long the survey should take, and offer incentives that match the time spent. A $5 gift card for a 40-minute survey may not be your best bet.
  3. Use auxiliary data – If you have outside information about respondents, consider merging it into your dataset rather than asking for it again, reducing survey length.
  4. Carefully consider question sensitivity – Allow the respondent to “opt out” of personal questions. If you are concerned about the sensitivity of your demographic questions, consider placing them at the end of the questionnaire. 
  5. Randomize the order of responses – This can reduce primacy and recency effects (the respondent simply selecting the first or last option they read).
  6. Test question order – Mix up the question order. Five matrix-style questions in a row may induce straightlining because the respondent feels overwhelmed by how much they have to respond to.
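Tip 5 can be implemented with a per-respondent seeded shuffle, so each respondent sees options in a different order but the same respondent always sees a stable order (useful for debugging and re-display). A minimal Python sketch; the function name and respondent-ID scheme are assumptions:

```python
import random

def randomized_options(options, respondent_id):
    """Shuffle response options per respondent, seeding the RNG with the
    respondent ID so each respondent's order is stable across page loads."""
    rng = random.Random(respondent_id)  # deterministic per respondent
    shuffled = options[:]               # copy; leave the master list intact
    rng.shuffle(shuffled)
    return shuffled

options = ["Strongly agree", "Agree", "Neutral", "Disagree", "Strongly disagree"]
print(randomized_options(options, respondent_id=42))
```

Note that scale-type options like these are often kept in a fixed order and only the scale's direction is reversed for half the sample; full randomization is more typical for unordered (nominal) response lists.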
KEY LITERATURE

 1 Groves, Robert M., et al. Survey Methodology. Vol. 561. John Wiley & Sons, 2011.

 2 West, Brady T., and Annelies G. Blom. “Explaining interviewer effects: A research synthesis.” Journal of Survey Statistics and Methodology 5.2 (2017): 175-211.

To learn more about how to keep your survey participants interested and answering authentically, schedule time with our experts!

About the Author

Robert A. Schultz

Robert Schultz is currently a graduate student in the University of Michigan's Program in Survey and Data Science. Prior to Michigan, he attained a MA in Economics at Wayne State University. When not busy studying, you can find him trekking across Ann Arbor and Detroit MI, in search of the best sushi.