7 Examples of Survey Bias and How to Minimize It

5 minute read | By: Scott Hutchins

Survey bias is the tendency for some extraneous factor to affect survey results in a general, systematic way, so that the results are “pushed” or “pulled” in a specific direction that differs from the target population as a whole (Alreck & Settle, 2004; Fowler, 2013).

How questions are asked can lead respondents to answer in ways that are not truly representative of their feelings and characteristics, or the feelings and characteristics of the population researchers are investigating. In turn, these biases in questions and answers can lead researchers to act based on inaccurate conclusions.

While it’s challenging to eliminate all bias from survey questions and responses, researchers can understand where biases most commonly occur and then employ techniques to minimize the biases within their research.

Some common types of survey bias, along with examples of each, are as follows.

Loaded questions (assumptions)

Where do you like to go to eat pizza?

Instead: Don’t assume that all respondents like to eat pizza. Give them a way to respond honestly, or make sure that previous questions confirm that they like pizza.

Leading questions

How much do you like pizza?

Instead: How would you best describe your feelings about pizza?

  1. I really dislike pizza
  2. I somewhat dislike pizza
  3. I neither like nor dislike pizza
  4. I somewhat like pizza
  5. I really like pizza

Don’t use words that push users toward a particular feeling.

Using prompts

What is your favorite pizza topping? (ex. cheese, pepperoni, sausage, etc.)

Prompts (examples) can get in the way of respondents’ ability to think about their answers and respondents will usually take the easiest way of answering questions.  If answers are provided, they won’t think much further than those answers.  The example answers may be the most common and favorite toppings, but researchers won’t be able to determine whether respondents would have come up with the answers on their own if the examples weren’t provided.

Instead: Avoid offering examples that respondents might simply adopt as their answers.

Overdemanding recall

How many times did you order pizza in 2017?

Instead: In the past two months, approximately how many times per month did you order pizza for pick-up or delivery?

Don’t ask respondents to remember anything too specific or taxing on their memory.  They won’t be able to remember it accurately and will have to estimate anyway.  Let them know that you don’t need an exact answer and that estimations are fine.  Their estimations will be better for more recent months.

Double-barreled question

Do you limit how much pizza you eat because it is unhealthy?

  1. Yes
  2. No

Instead:  Do you limit how much pizza you eat?  Why or why not?

If respondents don’t limit how much pizza they eat or if they limit it for reasons other than health, they won’t be able to answer the question.  Limit questions to one clear piece of information.  Asking more than one question within a single question can confuse respondents.  They won’t know which question to answer or how to answer the question honestly, especially if choices only allow them to answer one of the questions.

Response Bias

Unlike instrumental bias which is caused by poorly worded questions and incomplete or skewed scales, response bias is bias that is introduced by the behavior and tendencies of the respondents. While there are many types of response bias, the types that are most likely to be encountered in online surveys are:

  • Prestige – respondents may choose answers they think make them seem smarter or more respectable (ex. inflating salaries or education levels)
  • Acquiescence – respondents may choose answers they think are desirable to the people conducting the survey (ex. inflating product use or product awareness)
  • Yea- and nay-saying – respondents’ tendencies to choose more positive or more negative answers
  • Order – respondents may suffer from answer fatigue, failing to evaluate questions independently or being influenced by previous answers and questions
  • Auspices – respondents may let their feelings about who is conducting the survey sway their answers

Response bias is often more challenging to avoid than instrumental bias because researchers don’t have much control over their respondents.


It’s important to remind respondents that their truthful responses are desired and that their answers will not be shared with anyone. You may also wish to inform respondents that answers will only be analyzed in aggregate and that no individual respondent’s answers will be highlighted or viewed independently. Reverse coding (flipping questions from positive to negative) may help avoid biases introduced by yea- or nay-sayers. Taking advantage of answer and question randomization within the survey programming platform will help prevent some types of order bias, such as respondent fatigue and first/last answer bias. If there are questions that need to be asked of an uncontaminated sample, ask those questions first, or in a separate survey, if there is a chance that asking them will change the answers to subsequent questions (Alreck & Settle, 2004).
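The reverse-coding and randomization techniques above can be sketched in a few lines of code. This is a minimal illustration, assuming a 1–5 Likert scale and a hypothetical list of question texts; it is not the API of any specific survey platform.

```python
import random

def reverse_code(response, scale_max=5):
    """Reverse-code a response on a 1..scale_max Likert scale.

    For a negatively worded item, a 5 ("really like") should score
    as a 1, a 4 as a 2, and so on, so yea-sayers and nay-sayers
    don't inflate one end of the scale across all items.
    """
    return scale_max + 1 - response

def randomized_order(questions, seed=None):
    """Return a per-respondent shuffled copy of the question list,
    spreading order effects (fatigue, first/last answer bias)
    evenly across items. `seed` allows reproducible orders."""
    rng = random.Random(seed)
    shuffled = list(questions)  # copy; don't mutate the caller's list
    rng.shuffle(shuffled)
    return shuffled

# Example: score a negatively worded item, and shuffle question order.
print(reverse_code(5))  # -> 1
print(reverse_code(2))  # -> 4
print(randomized_order(["Q1", "Q2", "Q3"], seed=7))
```

Reverse coding is applied at analysis time, so the recoded values can be averaged alongside positively worded items on the same scale.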

Experimenter Demand Bias

Experimenter demand bias may be especially prevalent in online surveys when there is an incentive to qualify for the survey based on the screener. Respondents may try to guess the purpose of the survey or the qualifications for continuing in it, and their answers may then change in minor or major ways depending on what they think the survey is about (Orne, 1962). Many respondents from online panels have become wary of researchers trying to deceive them by stating a survey topic that is different from the types of questions actually presented. Generally speaking, there is no reason to deceive respondents in most surveys.

Instead: Remember, the more truthful researchers are about why they are asking questions, the more truthful respondents will be in their answers.

Recruiting Bias

One other place bias may sneak into a study is through recruiting methods. Researchers should carefully consider where they are recruiting respondents and how the shared characteristics of those respondents may differ from, or be more concentrated than in, the population they want to study. Even how surveys are conducted can introduce bias into the results through respondent inclusion.


If it’s important that a study includes respondents from a variety of age groups with a mix of computer skills, online surveys and online recruiting methods may not give you a representative sample.  Older respondents or respondents who are not comfortable using technology may be unintentionally left out of the survey.
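A quick way to catch this kind of recruiting gap is to compare the demographic mix of the sample against the target population before analysis. The age brackets, proportions, and tolerance below are illustrative assumptions, not figures from any real study.

```python
def underrepresented(sample, population, tolerance=0.10):
    """Return the groups whose share of the sample falls short of
    their share of the population by more than `tolerance`
    (expressed as an absolute proportion)."""
    return [group for group in population
            if population[group] - sample.get(group, 0.0) > tolerance]

# Hypothetical age mix: an online panel skews young, so the 55+
# group is under-represented relative to the target population.
population = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample     = {"18-34": 0.45, "35-54": 0.40, "55+": 0.15}

print(underrepresented(sample, population))  # -> ['55+']
```

Groups flagged this way can be targeted with supplemental recruiting (for example, phone or in-person) or down-weighted during analysis.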


As demonstrated, developing surveys that eliminate all types of bias is extremely challenging.  However, identifying the most common types of biases and knowing where they occur most frequently is a tremendous step toward minimizing bias in research and addressing it during analysis.



References:

Alreck, P. L., & Settle, R. B. (2004). The survey research handbook. Boston: McGraw-Hill/Irwin.

Fowler Jr., F. J. (2013). Survey research methods. Sage Publications.

Orne, M. T. (1962). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17(11), 776.