Quality Control

04/18/2017

Trust, but Verify

To have confidence in our conclusions and recommendations, we, as researchers, need to have confidence in the data. When fielding online surveys, a thorough quality control process is critical to ensuring the data are accurate before analysis begins. This is especially true when respondents are part of a paid panel or have been offered an incentive to participate, such as entry into a drawing: we need to make sure they took the survey seriously and answered honestly and legitimately. Even when respondents belong to a group that is motivated and engaged, a thorough quality control review is an important research best practice. The following is a step-by-step review of the process Ideas in Focus follows on every survey project.

Completion Time

We begin by reviewing completion times. If a survey is completed in less time than the benchmark we set, we flag that response. Many survey tools have algorithms that suggest a completion time, but we also consider how long the survey took in our own testing, as well as the number and types of questions. As a rule of thumb, three closed-end questions or one open-end question takes about a minute to complete. If we expect a survey to take ten minutes and a respondent completes it in three, we question whether they actually read the questions and answered them legitimately. These “speedy” surveys are flagged and reviewed in more detail.
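
For teams that review survey exports programmatically, the check might look something like the minimal sketch below, written in Python with pandas. The column names, question counts, and the 40% cutoff are illustrative assumptions, not a prescription.

```python
import pandas as pd

# Hypothetical export: one row per response, with a duration column.
responses = pd.DataFrame({
    "respondent_id": [101, 102, 103],
    "duration_seconds": [612, 175, 540],   # 175 seconds is roughly 3 minutes
})

# Rule of thumb: about three closed-end questions or one open-end question per minute.
closed_end_questions, open_end_questions = 24, 2
expected_seconds = (closed_end_questions / 3 + open_end_questions) * 60   # 10 minutes here

# Flag anything finished in well under the expected time; the 40% cutoff is a judgment call.
responses["speed_flag"] = responses["duration_seconds"] < 0.4 * expected_seconds
print(responses[responses["speed_flag"]])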

Quality of Responses

If an online survey includes open-end questions, the responses can also give you a quick sense of the survey taker’s intentions. Are the responses complete? Are they legitimate (i.e., did they answer the question, or did they type in nonsense)? Did a question that should be answered in a sentence get a one-word response? Does the response make sense? We flag and set aside any questionable open-end responses and then review that respondent’s entire set of answers more thoroughly.
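
A simple screen can queue the most obvious problems, such as blank, one-word, or repetitive answers, for a human read; genuinely nonsensical responses still have to be judged by eye. The sketch below assumes a pandas DataFrame with a hypothetical open-end column.

```python
import pandas as pd

responses = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104],
    "q9_open_end": [
        "The checkout process was slow and confusing on mobile.",
        "good",               # one-word answer to a sentence-length question
        "asdf asdf asdf",     # the same token typed over and over
        "",                   # blank
    ],
})

def open_end_flag(text):
    """Flag blank, one-word, or obviously repetitive open-end answers for review."""
    if not isinstance(text, str) or not text.strip():
        return True
    words = text.strip().split()
    if len(words) < 2:
        return True
    if len(set(words)) == 1:
        return True
    return False

responses["open_end_flag"] = responses["q9_open_end"].apply(open_end_flag)
print(responses[responses["open_end_flag"]])
```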

Straight Lining

A third tactic we use is to look for “straight-lined” survey responses, in which the respondent clicked the same answer for every question. Straight-lined surveys are a serious threat to data quality and often, but not always, have a short completion time as well. Another flag to watch for: these respondents will often answer “not familiar” or “don’t know” even when they should be familiar with the survey topic. Some survey platforms have tools to help identify straight-lined responses, but we find the “eyeball” test is effective as well.
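
If you want to back up the eyeball test with code, a grid of rating questions can be checked for identical answers in a single line of pandas. The rating column names below are hypothetical.

```python
import pandas as pd

responses = pd.DataFrame({
    "respondent_id": [101, 102, 103],
    "rating_q1": [4, 3, 3],
    "rating_q2": [4, 5, 3],
    "rating_q3": [4, 2, 3],
    "rating_q4": [4, 4, 3],
})

grid_columns = ["rating_q1", "rating_q2", "rating_q3", "rating_q4"]

# nunique(axis=1) == 1 means the respondent picked the same point on every row of the grid.
responses["straight_line_flag"] = responses[grid_columns].nunique(axis=1) == 1
print(responses[responses["straight_line_flag"]])
```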

IP Address

Another effective quality control tool is reviewing respondents’ IP addresses. Some respondents will attempt to take a survey multiple times using different email addresses to qualify for more rewards, and occasionally a respondent who does not qualify on the first attempt will try again to take advantage of the reward. Most survey tools record an IP address for each response, and identifying identical IP addresses helps determine which respondents took the survey more than once. Your survey tool may also offer a setting that limits responses to one per computer, which reduces this risk. If that setting is not available, however, IP addresses are a great way to verify that an individual did not complete the survey more than once.
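
A duplicate-IP review is straightforward to script if the export includes an IP column. The sketch below assumes a pandas DataFrame with a hypothetical 'ip_address' field; the flag simply groups repeated addresses for a closer look.

```python
import pandas as pd

responses = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104],
    "ip_address": ["203.0.113.7", "198.51.100.22", "203.0.113.7", "192.0.2.14"],
})

# Mark every response whose IP address appears more than once, then review those together.
responses["duplicate_ip_flag"] = responses.duplicated(subset="ip_address", keep=False)
print(responses[responses["duplicate_ip_flag"]].sort_values("ip_address"))
```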

Name Submitted and Email Address

Oftentimes, a respondent’s email address, whether personal or work-related, will include some part of their name. Another useful quality control technique is to review respondents’ names and email addresses. If a respondent’s name and email don’t match, they may have attempted to take the survey more than once. A response with a questionable name or email address should be flagged and reviewed to determine whether it belongs in the data.
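
This check is harder to automate because many legitimate email addresses contain no part of the owner’s name, so any script should only flag responses for a human look rather than reject them. The sketch below uses a simple substring match and assumes hypothetical 'name' and 'email' columns.

```python
import pandas as pd

responses = pd.DataFrame({
    "respondent_id": [101, 102, 103],
    "name": ["Maria Lopez", "Dan Woods", "Dan Woods"],
    "email": ["maria.lopez@example.com", "dwoods@example.com", "kfisher82@example.com"],
})

def name_email_mismatch(name, email):
    """Flag responses where no part of the submitted name appears in the email's local part."""
    local_part = str(email).split("@")[0].lower()
    return not any(part.lower() in local_part for part in str(name).split())

responses["name_email_flag"] = [
    name_email_mismatch(n, e) for n, e in zip(responses["name"], responses["email"])
]
print(responses[responses["name_email_flag"]])
```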

Conclusions

We acknowledge that evaluating online survey responses involves some subjectivity. We cannot know a respondent’s intentions: maybe the person can speed-read, or maybe answers that appear straight-lined are legitimate. We start from the assumption that a response is good and go through these steps until we feel we cannot, in good conscience, use the data. If the sample is large and we are on the fence about a single response, keeping it is unlikely to affect the overall results.

The steps we have outlined help identify the most blatant cheats and dishonest respondents, but the final decision on whether to keep a response is up to you and the client. These quality control tactics help us give the client peace of mind and have faith that the conclusions and recommendations we provide are based on facts.