SIX TYPES OF BAD RESPONDENTS RESEARCHERS ENCOUNTER

Researchers collect reams of data in a single survey from thousands upon thousands of participants. A survey may have a total of 10,000 respondents who opted in, but only 1,000 who make the final cut to qualify for the sample. When surveying thousands of participants per research project, you’re likely to run into a variety of respondents, and not all of them are good. It’s important to work with a research team experienced in aggregating your sample into a strong, representative group. Here are the six types of bad respondents you may encounter in market research, and how to steer clear of them.

The Straight Liner/The Christmas Tree

This participant answers questions with the same response throughout a survey. If your survey includes Likert scale items, they’re likely to select the same answer for every item. Researchers should be able to easily spot participants who answer “strongly agree” for 10 questions in a row. We call these respondents “straight liners.” Sometimes, participants get a little creative and will choose “strongly agree,” then “neutral,” then “strongly agree.” Rinse. Repeat. Researchers have a nickname for them as well because of the zigzag pattern their responses make: “Christmas trees.”
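To make the idea concrete, here is a minimal sketch of how a check like this might work. It assumes Likert responses coded 1 through 5 and uses illustrative thresholds; real panels layer many such checks together.

```python
# Illustrative sketch: flagging straight liners and Christmas trees.
# Assumes Likert responses coded 1-5, stored as one list per respondent.

def is_straight_liner(responses, max_run=10):
    """Flag respondents who give the same answer many times in a row."""
    run = 1
    for prev, curr in zip(responses, responses[1:]):
        run = run + 1 if curr == prev else 1
        if run >= max_run:
            return True
    return False

def is_christmas_tree(responses, min_length=8):
    """Flag respondents who strictly alternate between two answers."""
    if len(responses) < min_length:
        return False
    a, b = responses[0], responses[1]
    if a == b:
        return False
    return responses == [a if i % 2 == 0 else b for i in range(len(responses))]

print(is_straight_liner([5] * 10))    # True: ten "strongly agree" in a row
print(is_christmas_tree([5, 3] * 5))  # True: agree/neutral zigzag
```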

The Rebel

The rebel is a rule-breaker…a saboteur of sorts. For some reason (or no reason whatsoever), they have it out for your survey. This person’s main goal is to throw your research off course. Why? Because why not. One person won’t disrupt the data too terribly, but rebels can be devastating in large numbers. A rebel respondent intentionally chooses inconsistent and/or disagreeable answers. Look out for their open-ended responses (their write-ins for “other, please specify” answer choices); they tend to leave behind colorful answers. Good researchers can weed these respondents out by running internal consistency checks on the back end.
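One way to run such a check is sketched below, under the assumption that a few items are asked twice in reversed wording; the item IDs and scale are hypothetical.

```python
# Illustrative sketch: a simple internal consistency check.
# Assumes a few items are asked twice, once in reversed wording
# (e.g., "I enjoy X" vs. "I dislike X"), coded on a 1-5 scale.

def inconsistency_score(answers, reversed_pairs):
    """Sum deviations across item/reversed-item pairs.

    On a 1-5 scale, a consistent respondent's answer to an item and
    its reversed twin should add up to roughly 6 (e.g., 5 + 1).
    """
    return sum(abs(answers[item] + answers[rev] - 6)
               for item, rev in reversed_pairs)

answers = {"q3": 5, "q17": 5, "q8": 2, "q21": 4}  # hypothetical item IDs
pairs = [("q3", "q17"), ("q8", "q21")]            # q17 reverses q3, etc.

# Agreeing with both a statement and its opposite (q3 + q17 = 10) is suspicious.
print(inconsistency_score(answers, pairs))  # 4 -> flag for review
```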

The Bot

This respondent is…well…not a real respondent. Powered by lines of code and an insatiable drive to complete as many questionnaires as possible, this “respondent” has the sole purpose of taking your survey to collect the participation incentive. Survey bots were very easy to detect initially, but improvements in artificial intelligence and machine learning have helped these pests become more sophisticated over the years. Luckily, strong research teams also embrace the power of AI and machine learning, implementing safeguards and advanced detection algorithms that keep these bots from slipping through the cracks.
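Production-grade bot detection is far more involved, but two simple screens give a feel for the idea. The sketch below assumes the survey logs completion time and embeds a trap item with an instructed answer; the threshold and item ID are hypothetical.

```python
# Illustrative sketch: two common bot screens, deliberately simplified.
# Assumes we log completion time and include a "trap" question that a
# human following instructions answers in a known way
# (e.g., "Select 'Somewhat disagree' for this item").

MIN_SECONDS = 120       # assumed floor for a genuine completion
TRAP_QUESTION = "q12"   # hypothetical trap item ID
TRAP_ANSWER = 2         # the instructed response

def looks_like_bot(completion_seconds, answers):
    if completion_seconds < MIN_SECONDS:
        return True                          # implausibly fast completion
    if answers.get(TRAP_QUESTION) != TRAP_ANSWER:
        return True                          # failed the trap item
    return False

print(looks_like_bot(45, {"q12": 2}))    # True: too fast
print(looks_like_bot(300, {"q12": 5}))   # True: missed the trap
print(looks_like_bot(300, {"q12": 2}))   # False
```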

The One Who Doesn’t Belong

Researchers target specific people to participate in research, and some groups demand a larger incentive for participation due to their unique sample definitions. For example, a sample consisting of doctors would include a higher incentive compared to a sample of the general population because qualified participants are harder to find. Larger incentives sometimes attract the wrong participants for the wrong reasons. Some respondents try to figure out the intended target sample and will answer questions hoping to avoid being screened out of the survey. These participants want to pass through the screener questions in order to collect the incentive for completing the entire survey. Participants aren’t always dishonest; mistakes happen. A participant may accidentally select the wrong response choice when completing the filtering/screener questions. Again, a reputable research team will use proprietary methods and fail-safes to identify and remove these participants.
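Those methods stay proprietary, but one common fail-safe is easy to illustrate: re-ask a qualifying question in different wording later in the survey and compare the two answers. The sketch below assumes occupation is the screening criterion; the alias table is hypothetical.

```python
# Illustrative sketch: comparing a screener answer with a later re-ask.
# Assumes equivalent responses are grouped into an alias table.

def screener_mismatch(screener_answer, reask_answer, aliases):
    """True when the re-asked answer doesn't map back to the screener answer."""
    group = aliases.get(screener_answer, {screener_answer})
    return reask_answer not in group

aliases = {"physician": {"physician", "doctor", "MD"}}      # hypothetical
print(screener_mismatch("physician", "MD", aliases))        # False: consistent
print(screener_mismatch("physician", "teacher", aliases))   # True: flag
```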

The Dropout

This is the participant who quits the survey without making it all the way through. The problem here often relates more to the design of the questionnaire than to the participant.

Sometimes surveys are too long and too boring, so the participant may decide that it’s not worth the time to make it to the finish line. Researchers have several solutions to encourage participants to complete a survey the whole way through.

Here are a few tips:

  • Use graphics and more interactive survey item components (e.g., drag and drop elements, sliding scales, etc.)
  • Include progress bars that let participants know how close they are to completing the survey
  • Limit the number of questions in a survey to prevent fatigue

The Socially Desirable

Some research questions can get a bit personal for some participants. Questions surrounding topics such as drug/alcohol use, sexual activity, political ideology, or even income may make some of your participants uncomfortable. The “socially desirable” participant will answer questions according to how they think they ought to respond. Even if they know that the survey will not collect any identifying information, they still doubt that their embarrassing or personal answers will remain anonymous.

Researchers have several solutions available. They can include a “prefer not to answer” option for more sensitive questions, or they can implement a social desirability measure. Social desirability measures use embedded items and scoring algorithms to identify participants who are selecting the responses they believe to be desirable. Once detected, researchers can eliminate these respondents from the sample.
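As a rough illustration, a short embedded scale might be scored like the sketch below. The item IDs, keyed answers, and cutoff are all hypothetical; validated instruments define their own items and norms.

```python
# Illustrative sketch: scoring a short social desirability scale.
# Assumes a few embedded true/false items where the "desirable" answer
# is rarely true of anyone (e.g., "I have never told a lie").

DESIRABLE_ANSWERS = {"sd1": True, "sd2": True, "sd3": False}  # hypothetical
CUTOFF = 3  # flag respondents who match on every embedded item

def social_desirability_score(answers):
    return sum(1 for item, desirable in DESIRABLE_ANSWERS.items()
               if answers.get(item) == desirable)

answers = {"sd1": True, "sd2": True, "sd3": False}
if social_desirability_score(answers) >= CUTOFF:
    print("Flag: responses may reflect impression management")
```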

We’re Here to Help

Bad data can come from bad respondents who were not properly screened out. 4media group’s in-house research company, Atomik Research, deals with the bad respondents so you don’t have to. Reach out to our team to discover how we can help you avoid the Christmas trees and the bots, and instead reach the right respondents for your next survey.
