Survey Data Collection Part 1: 12 Pre-launch Steps to Quality Data

by Scott D. Crawford | Aug 3, 2015 | Business Leadership, Social Science Business, Survey Methodology, Survey Operations

Ready to collect survey data?  It’s just a press of a button, right?

How I wish that were the case!  As with any science, care must be taken to ensure consistent application.  Inconsistencies in study implementation introduce unwanted bias, and survey researchers can minimize that bias by reducing the impact of unintended errors.

It is important to state that I believe no complex survey data collection is ever perfectly implemented.  We are humans studying humans, so there will always be unintended biases.  Our goal should be reduction, not the unrealistic ideal of perfection.

Today I would like to share the first six (of twelve) areas to consider prior to data collection launch.  Look for the remaining items in my next post.

1. Establish Collaborative Relationships
Research is best done with collaborative working relationships that extend within and beyond research teams.  This collaboration is the basis for ensuring a consistent and full understanding of the goals and trade-offs required to execute a quality study and collect quality survey data.  Establish common ground and understanding with all collaborators before proceeding.

2. Build a Detailed Schedule
Implementation of survey research is a logistical problem that calls for a logistical solution.  This means a detailed schedule that is built on a foundation of the work already done and provides a path for all efforts before, during, and after data collection.  Even the best methodologies cannot contribute anything to a study if they are not implemented in a timely manner and in an appropriate context.
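One simple way to make such a schedule concrete is to work backward from the launch date. The sketch below does exactly that; the milestones, lead times, and launch date are illustrative assumptions, not a prescribed plan:

```python
from datetime import date, timedelta

# Illustrative backward schedule from an assumed launch date.
# Milestones and lead times are hypothetical examples.
launch = date(2015, 9, 1)
lead_times = [
    ("Finalize questionnaire", 45),       # days before launch
    ("Program and test instrument", 30),
    ("Draw and verify sample", 14),
    ("Soft launch / pilot", 7),
]

# Compute a due date for each milestone by counting back from launch.
schedule = [(task, launch - timedelta(days=days)) for task, days in lead_times]
for task, due in schedule:
    print(f"{due.isoformat()}  {task}")
```

Writing the schedule down this way forces each upstream task to have an explicit deadline, which is the point: the plan exists before the pressure of launch does.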

3. Invest in Survey Methodology Expertise
Survey methodologists are valuable (and can be expensive!) for a reason – they have expertise in the conduct of surveys, and they can not only help avoid quality problems but also contribute effectively to trade-off discussions (e.g., “I have $12,000 left in my study budget for incentives – how could we use that most effectively?”).  For your results to be solid, the survey methodology must be sound, with trade-offs communicated and determined purposefully.
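To make that incentive trade-off concrete, here is a sketch of the back-of-envelope arithmetic such a discussion might use. The incentive amounts and response rates are invented assumptions for illustration, not empirical figures:

```python
# Hypothetical incentive trade-off arithmetic for a $12,000 budget.
# Costs per case and assumed response rates are invented examples.
BUDGET = 12_000

options = {
    "$5 prepaid":   {"cost_per_case": 5,  "assumed_rr_pct": 25},
    "$10 promised": {"cost_per_case": 10, "assumed_rr_pct": 30},
}

results = {}
for name, opt in options.items():
    cases_funded = BUDGET // opt["cost_per_case"]            # cases the budget covers
    completes = cases_funded * opt["assumed_rr_pct"] // 100  # expected completes
    results[name] = completes
    print(f"{name}: {cases_funded} cases funded, ~{completes} expected completes")
```

Under these assumed rates, the cheaper prepaid incentive funds more cases and yields more completes overall; a methodologist's job is to supply defensible response-rate assumptions for exactly this kind of comparison.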

4. Develop a Solid Sampling Plan
The best questionnaire fielded with the best data collection methodology is useless if the sample used for a study does not meet the study goals.  With the design documented, including limitations, a solid sampling plan and its execution are critical for any study’s success.
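As a minimal sketch of documented, reproducible sample selection, here is an equal-probability draw from a hypothetical frame. The frame size, sample size, and seed are assumptions chosen for illustration:

```python
import random

# Hypothetical 1,000-unit sampling frame; member IDs are illustrative.
frame = [f"member_{i:04d}" for i in range(1, 1001)]

SEED = 20150803  # document the seed so the draw can be reproduced
random.seed(SEED)
sample = random.sample(frame, k=100)  # equal-probability, without replacement

# Record the selection probability alongside the design documentation.
selection_prob = len(sample) / len(frame)
print(f"n={len(sample)}, selection probability={selection_prob}")
```

Recording the frame, seed, and selection probability is what turns a draw into a documented design that can be audited and repeated.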

5. Think About Your Data Early and Often
Often overlooked by researchers, thinking about the data early and frequently is critical to a successful study.  Discuss data needs before the questionnaire is even written, again before the survey is programmed or designed, and again after the survey is tested.  Check data frequently and critically in the early phases of data collection, and periodically thereafter.  The end of data collection should not be the first time you look at your data.
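Early data checks can be as simple as validating incoming records against expected ranges. A minimal sketch, with hypothetical field names and validity rules:

```python
# Hypothetical early-fielding check: flag survey records whose values
# fall outside expected ranges. Field names and ranges are invented.
records = [
    {"respondent_id": 1, "age": 34,  "q1_satisfaction": 4,    "complete": True},
    {"respondent_id": 2, "age": 131, "q1_satisfaction": 3,    "complete": True},
    {"respondent_id": 3, "age": 28,  "q1_satisfaction": None, "complete": False},
]

def check_record(rec):
    """Return a list of issues found in one survey record."""
    issues = []
    if rec["age"] is None or not (18 <= rec["age"] <= 120):
        issues.append("age out of range")
    if rec["complete"] and rec["q1_satisfaction"] not in {1, 2, 3, 4, 5}:
        issues.append("invalid q1_satisfaction for a complete")
    return issues

flagged = {r["respondent_id"]: check_record(r) for r in records}
flagged = {rid: issues for rid, issues in flagged.items() if issues}
print(flagged)
```

Running even a lightweight check like this against the first day of cases can surface instrument bugs while there is still time to fix them.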

6. Craft a Great Questionnaire
The questions and answers included in a social science research study are the tools by which researchers measure many elements of human behavior.  However, the questionnaire does not stop at the questions—it includes any other mechanism by which we capture data from humans.  From mobile geolocation captures to innovative links with Bluetooth bio-collection devices, a great questionnaire is a key to a great dataset.

Keep checking back to see items seven through twelve of our pre-launch steps to quality survey data!


About the Author

Scott D. Crawford

Scott D. Crawford is the Founder and Chief Vision Officer at SoundRocket. He is also often found practicing being a husband, father, entrepreneur, forever-learner, survey methodologist, science writer & advocate, and podcast lover. While he doesn’t believe in reincarnation, he’s certain he was a Great Dane (of the canine type) in a previous life.