
Acquiescence bias is a real issue in survey research. Learn how you can spot it using our examples and read our 5 top tips to avoid it.


What is acquiescence bias?

Acquiescence bias, also known as agreement bias, is the tendency for survey respondents to agree with research statements, regardless of whether the answer truly reflects their own position or the question itself.

Typically, the issue comes up when you ask a participant to confirm a statement, or when the question is answered with opposing pairs such as ‘Agree / disagree’, ‘True / false’ and ‘Yes / no’.

For example, in a Yes / no question that asks the participant to confirm a statement, a participant affected by acquiescence bias is more likely to answer ‘yes’, whatever their true position.

That means that, regardless of the importance of the question being asked, the participant’s results might be skewed to favor the ‘Yes’ answer.
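To make that skew concrete, here is a minimal Python sketch showing how answers to a single confirmation question can be tallied to see whether ‘Yes’ dominates. The responses and the 80% warning threshold are invented for illustration, not taken from any real study:

```python
from collections import Counter

# Hypothetical responses to a single Yes/No confirmation question.
responses = ["Yes", "Yes", "No", "Yes", "Yes", "Yes", "Yes", "Yes", "Yes", "Yes"]

counts = Counter(responses)
yes_share = counts["Yes"] / len(responses)

print(counts)                                   # Counter({'Yes': 9, 'No': 1})
print(f"Share answering 'Yes': {yes_share:.0%}")

# A heavy skew toward 'Yes' across many unrelated confirmation questions
# is one warning sign of acquiescence bias (it is not proof on its own).
if yes_share > 0.8:
    print("Warning: responses may be skewed toward agreement.")
```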

This raises serious concerns about the accuracy and quality of your research data. It can lead to teams acting on incorrect information, which in turn leads to poorly informed actions and decisions.

What causes acquiescence bias?

Participants have different backgrounds

There are some elements in a person’s background that may influence whether they succumb to acquiescence bias. One study found that 15% of acquiescence variance came from country-level variations in corruption levels and collectivism, showing that external factors can influence susceptibility to the bias.

Education is also a factor. The same research found that participants with a low education level and a low degree of conservatism showed a higher tendency towards acquiescence bias.

Participants are impacted by their ideal version of themselves

Participants can be influenced to change their behavior simply by agreeing to be part of a survey. The very act of taking part could alter first how they think of themselves, and then how they answer the questions.

Another research study found that participants who took a survey ended up using it as a way to share the views of their own ideal version of themselves. In that way, the survey was “transformed, from an inquiry about ‘what I do,’ to ask about ‘who I am.’”

Participants are influenced by the researcher

You might expect participants to show higher acquiescence bias on interviewer-administered surveys than on self-administered questionnaires (such as surveys sent by email or hosted on a website). However, this isn’t the case.

Participants on self-administered surveys may try to see behind the questions to the researcher’s intent when creating the survey. This effect could be increased if they are given prior knowledge about the survey, or if they contribute to its design. This could result in unnatural responses that please the researcher, or that aim to fulfill what they believe the researcher wants.

(American psychologist Lee Cronbach opposed this theory, instead saying that a participant uses their own memories to form an opinion on the statement. Cronbach’s theory explains why there could be two contradictory answers in a survey response: One answer is based on pleasing the researcher, while the other is gained from the participant’s position based on their own memories - the two answers are truthful, but just happen to be at odds with each other.)

Participants lack the motivation to engage with the survey

Some respondents, including those who are not highly motivated to think through the questions, take mental shortcuts when responding. This is more likely if the survey:

  • Is not aimed at the right audience
  • Contains questions that are not clear enough
  • Does not engage the participant enough to make them want to complete it
  • Is too long
  • Demands more time than the participant has available

This tendency to answer positively is one of those common shortcuts, along with selecting ‘middle-ground’ answers without variance or thought.
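One practical way to spot these shortcuts in collected data is to look at how little each respondent’s answers vary across a block of scaled questions. The sketch below is a simplified illustration; the respondent IDs, ratings, and the 0.25 variance threshold are assumptions rather than established cut-offs:

```python
import statistics

# Hypothetical 1-5 ratings from four respondents across six questions.
responses = {
    "resp_01": [4, 4, 4, 4, 4, 4],   # straight-lining: no variation at all
    "resp_02": [3, 3, 3, 3, 3, 3],   # always the middle option
    "resp_03": [2, 4, 5, 1, 3, 4],
    "resp_04": [5, 2, 4, 1, 3, 5],
}

for respondent, ratings in responses.items():
    variance = statistics.pvariance(ratings)
    flag = "review" if variance < 0.25 else "ok"
    print(f"{respondent}: variance={variance:.2f} -> {flag}")
```

Low variance on its own does not prove a participant took shortcuts, but it highlights which responses are worth a closer look.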


Participants want to conform to socially desirable attitudes

When participants are aware that there will be someone looking at their answers, or that the results can be tied back to them, this could cause anxiety about how their data will be used.

In these cases, when a question asks the participant to confirm a socially unpopular research statement, they will tend to select the answer that appears more agreeable to the social norm.

For example, asking ‘would you consider yourself to be a sociable person?’ could get a stronger ‘yes’ reaction than ‘no’, even if that answer isn’t true. This is because the participant thinks being sociable is better than the opposite.

Participants don’t believe a ‘middle-ground’ answer exists

There are some questions that can bring out extreme beliefs in a participant who holds a strong view on an issue. For example, a question asking ‘Do you agree that reducing greenhouse gas emissions is everybody’s responsibility?’ leads the participant to answer either ‘agree’ or ‘disagree’, depending on their view, even when a ‘slightly agree’ or ‘neither agree nor disagree’ option is available.

Acquiescence bias occurs when a participant holds such a strong belief in one view that it overrides the rating stage of the decision-making process. They don’t see a middle ground as existing, which produces results skewed in favor of the ‘extreme’ answers at each pole.

5 ways to avoid acquiescence bias in your survey

1. Reformulate the question

By reformulating the response formats and options to correspond more closely with the subject of your question, you also make interpretation easier for respondents.

Consider a question that asks participants to confirm a statement about their experience, for example by agreeing or disagreeing that it was satisfactory. The risk of acquiescence response bias can be reduced if the question is reformatted to simply ask how satisfied they feel about their experience.

Reformulated this way, the question avoids asking for agreement: instead of confirming a statement, participants choose the answer option that best describes their own level of satisfaction.
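As an illustration of the difference, the sketch below defines the same item in an agreement format and in a reformulated, item-specific format. The field names and wording are placeholders rather than the syntax of any particular survey tool:

```python
# Agreement-style item: invites acquiescence because the options are framed
# relative to the statement rather than the experience itself.
agreement_item = {
    "question": "Do you agree that your experience was satisfactory?",
    "options": ["Agree", "Disagree"],
}

# Item-specific reformulation: asks directly about satisfaction, so each
# option describes the respondent's own position instead of the statement.
reformulated_item = {
    "question": "How satisfied were you with your experience?",
    "options": [
        "Very dissatisfied",
        "Somewhat dissatisfied",
        "Neither satisfied nor dissatisfied",
        "Somewhat satisfied",
        "Very satisfied",
    ],
}

for item in (agreement_item, reformulated_item):
    print(item["question"])
    for option in item["options"]:
        print(f"  - {option}")
```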

2. Introduce measures to help the participant’s focus

It’s possible to include more information about the question itself by adding subtitle text under the question. This helps your participant focus on what’s being asked.

This method improves understanding and reduces the likelihood that they will skim over the content of the question, or fail to understand what is being asked.

3. Plan your survey to include the right participants

Susceptibility to acquiescence bias can vary when participants from more than one country are surveyed at the same time, due to cross-national differences. When creating your survey, think about which participants would be best to include and where they are located.

Being conscious of your participants’ demographic information can give you more accurate results in the context of their backgrounds. This helps you when analyzing your results, as you can judge them with knowledge of the background factors.
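For example, once responses are collected, you could compare how strongly different background groups tend to agree. The sketch below uses pandas with invented data and column names, purely to illustrate the kind of breakdown that helps at analysis time:

```python
import pandas as pd

# Hypothetical data: one row per respondent, with the share of 'Agree'
# answers they gave across a block of agree/disagree questions.
df = pd.DataFrame({
    "country":        ["A", "A", "B", "B", "B", "C", "C", "A"],
    "education":      ["low", "high", "low", "low", "high", "high", "low", "high"],
    "agreement_rate": [0.90, 0.55, 0.85, 0.80, 0.60, 0.50, 0.75, 0.58],
})

# Comparing average agreement rates across background groups lets you
# judge later results with those factors in mind.
print(df.groupby("country")["agreement_rate"].mean().round(2))
print(df.groupby("education")["agreement_rate"].mean().round(2))
```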


4. Be sensitive in your role as the researcher

Bias introduced by the researcher is a core ethical consideration for any study, and it can still occur in self-administered surveys.

Be careful when asking for confirmation of overly emotional positions. This helps you avoid scenarios where your participants feel cornered into a single position when several exist. Instead, consider how your question can be positioned in a neutral and non-alarming way, so that each answer is given due consideration.

If you need to ask sensitive questions, try a different research format to collect the data, such as video interviews or in-person survey collection, which give the participant the opportunity to expand on their answers and explain the rationale behind them.

5. Reduce anxiety about the survey with transparency

The social norm of appearing agreeable and polite can bias your respondents, so consider if you can keep participants anonymous to gain a high level of truthfulness. This might be easier with topics that are considered socially taboo, or if there is likely to be social harm done to the participant if their answers are made public.

You can also prevent anxiety about the survey by clearly stating how the participant’s data will be used, and confirming that they can opt out if they don’t feel comfortable completing the survey. This can prevent the results from being diluted by inconsistent or incorrect data, and it gives your participants a choice about whether they want to go ahead. Having chosen to continue, they’ll be more likely to answer honestly, as they’ll feel in control.
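As a simple illustration of keeping participants anonymous while still being able to de-duplicate repeat responses, the sketch below replaces direct identifiers with a salted one-way hash. The field names are hypothetical, and real anonymization usually needs more than this (for example, removing indirect identifiers too):

```python
import hashlib

# Hypothetical raw record that links answers to a respondent.
raw_record = {
    "email": "participant@example.com",
    "name": "Jane Doe",
    "answers": {"q1": "Yes", "q2": "Somewhat satisfied"},
}

def anonymize(record: dict, salt: str) -> dict:
    """Drop direct identifiers, keeping only a salted one-way hash so repeat
    responses can be de-duplicated without identifying anyone."""
    pseudonym = hashlib.sha256((salt + record["email"]).encode()).hexdigest()[:12]
    return {"respondent_id": pseudonym, "answers": record["answers"]}

print(anonymize(raw_record, salt="rotate-this-salt-per-project"))
```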

Acquiescence bias is one form of a broader phenomenon: response bias. Response bias refers to our tendency to provide inaccurate, or even false, answers to self-report questions, such as those asked on surveys or in structured interviews.


Researchers who rely on participant self-report methods for data collection are faced with the challenge of structuring questionnaires in a way that increases the likelihood of respondents answering honestly. Take, for example, a researcher investigating alcohol consumption on college campuses through a survey administered to the student population. In this case, a major concern would be ensuring that the survey is neutral and non-judgmental in tone. If the survey comes across as disapproving of heavy alcohol consumption, respondents may be more likely to underreport their drinking, leading to biased survey results.


When this bias occurs, we come up with an answer based on external factors, such as societal norms or what we think the researcher wants to hear. This prevents us from taking time to self-reflect and think about how the topic being assessed is actually relevant to us. Not only is this a missed opportunity for critical thinking about oneself and one’s actions, but, in the case of research, it results in the provision of inaccurate data.

Researchers need to proceed with caution when designing surveys or structured interviews in order to minimize the likelihood of respondents committing response bias. If they fail to do so, this systematic error could be detrimental to the entire study. Instead of progressing knowledge, biased results can lead researchers to draw inaccurate conclusions, which can have wide implications. Research is expensive to conduct and the questions under investigation tend to be of importance. For these reasons, tremendous effort is required in research design to ensure that all findings are as accurate as possible.

Response bias can occur for a variety of reasons. To categorize the possible causes, different forms of response bias have been defined.

Social desirability bias

First is social desirability bias, which refers to when sensitive questions are answered not with the truth, but with a response that conforms to societal norms. While there’s no real “right” answer to the survey question, social expectations may have deemed one viewpoint more acceptable than the other. In order to conform with what we feel is the appropriate stance, we tend to under- or over-report our own position. 

Demand characteristics

Second are demand characteristics. This is when we attempt to predict how the researcher wants us to answer, and adjust our survey responses to align with that. Simply being part of a study can influence the way we respond. Anything from our interactions with the researcher to the extent of our knowledge about the research topic can have an effect on our answers. This is why it’s such a challenge for the principal investigator to design a study that eliminates, or at least minimizes, this bias.

Acquiescence bias

Third is acquiescence bias, which is the tendency to agree with all “Yes/No” or “Agree/Disagree” questions. This may occur because we are striving to please the researcher or, as posited by Cronbach,1 because we are motivated to call to mind information that supports the given statement. He suggests that we selectively focus on information that agrees with the statement, and unconsciously ignore any memories that contradict it.

Extreme responding 

A final example of a type of response bias is extreme responding. It’s commonly seen in surveys that use Likert scales - a scaled response format with several options ranging from the most negative to the most positive. Responses are biased when respondents select the extreme options almost exclusively; that is to say, if the Likert scale ranges from 1 to 7, they only ever answer 1 or 7. This can happen when respondents are disinterested and don’t feel like taking the time to actively consider the options. Other times, it happens because demand characteristics have led the participant to believe that the researcher desires a certain response.
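A simple way to screen for extreme responding in collected data is to measure how often each respondent answers at the endpoints of the scale. The sketch below uses invented 1-to-7 ratings; what counts as “too many” endpoint answers is a judgment call rather than a fixed rule:

```python
# Hypothetical 1-7 Likert ratings for three respondents across eight items.
responses = {
    "resp_A": [7, 7, 1, 7, 1, 7, 7, 1],   # answers only at the poles
    "resp_B": [4, 5, 3, 6, 4, 5, 4, 3],
    "resp_C": [7, 6, 7, 7, 7, 7, 6, 7],
}

SCALE_MIN, SCALE_MAX = 1, 7

for respondent, ratings in responses.items():
    extreme = sum(r in (SCALE_MIN, SCALE_MAX) for r in ratings)
    share = extreme / len(ratings)
    print(f"{respondent}: {share:.0%} of answers at the scale endpoints")
```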

In order to conduct well-designed research and obtain the most accurate results possible, academics must have a comprehensive understanding of response bias. However, it’s not just researchers who need to understand this effect. Most of us have participated, or will go on to participate, in research of some kind, even if it’s as simple as filling out a quick online survey. By being aware of this bias, we can work on being more critical and honest in answering these kinds of questions, instead of responding automatically.

By knowing about response bias and answering surveys and structured interviews actively, instead of passively, respondents can help researchers by providing more accurate information. However, when it comes to reducing the effects of this bias, the onus is on the creator of the questionnaire.

Wording is of particular importance when it comes to combating response bias. Leading questions can prompt survey-takers to respond in a certain way, even if it’s not how they really feel. For example, in a customer-satisfaction survey a question like “Did you find our customer service satisfactory?” subtly leans towards a more favorable response, whereas asking the respondent to “Rate your customer service experience” is more neutral.

Emphasizing the anonymity of the survey can help reduce social desirability bias, as people feel more comfortable answering sensitive questions honestly when their names aren’t attached to their answers. Utilizing a professional, non-judgmental tone is also important for this.

To avoid bias from demand characteristics, participants should be given as little information about the study as possible. Say, for example, you’re a psychologist at a university investigating gender differences in shopping habits. A question on this survey might be something like: “How often do you go clothing shopping?”, with the following answer choices: “At least once a week”, “At least once a month”, “At least once a year”, and “Every few years”. If your participants figure out what you’re researching, they may answer differently than they otherwise would have.

Many of us resort to response bias, specifically extreme responding and acquiescence bias, when we get bored. This is because it’s easier than putting in the effort to actively consider each statement. For that reason, it’s important to factor in length when designing a survey or structured interview. If it’s too long, participants may zone out and respond less carefully, thereby giving less accurate information.

Interestingly, response bias wasn’t originally considered much of an issue. Gove and Geerken claimed that “response bias variables act largely as random noise,” which doesn’t significantly affect the results as long as the sample size is big enough.2 They weren’t the only researchers to try to quell concerns over this bias, but more recently it has become increasingly recognized as a genuine source of concern in academia. This is due to the overwhelming amount of research that has come out supporting the presence of an effect, for example, Furnham’s literature review.3 Knäuper and Wittchen’s 1994 study also demonstrates this bias, specifically in the context of standardized diagnostic interviews administered to the elderly, who engage in a form of response bias by tending to attribute symptoms of depression to physical conditions.4

An emotion-specific response bias has been observed in patients with major depression, as evidenced by a study conducted by Surguladze et al. in 2004.5 The results of this study showed that patients with major depression had greater difficulty discriminating between happy and sad faces presented for a short duration of time than did the healthy control group. This discrimination impairment wasn’t observed when facial expressions were presented for a longer duration. On these longer trials, patients with major depression exhibited a response bias towards sad faces. It’s important to note that discrimination impairment and response bias did not occur simultaneously, so it’s clear that one can’t be attributed to the other.

Understanding this emotion-specific response bias allows for further insight into the mechanisms of major depression, particularly into the impairments in social functioning associated with the disorder. It’s been suggested that the bias towards sad stimuli may cause people with major depression to interpret situations more negatively.6

Researchers working outside of mental health should be aware of this bias as well, so that they know to screen for major depression should their survey include questions pertaining to emotion or interpersonal interactions. 

Social media is a useful tool, thanks to both its versatility and its wide reach. However, while most of the surveys used in academic studies have gone through rigorous scrutiny and have been peer-reviewed by experts in the field, this isn’t always the case with social media polls. 

Many businesses will administer surveys over social media to gauge their audience’s views on a certain matter. There are many reasons why the results of these kinds of polls should be taken with a grain of salt - for one thing, the sample is most certainly not random. In these situations, response bias is also likely at play.

Take, for example, a poll conducted by a makeup company, where the question is “How much did you love our new mascara?”, with the possible answers: “So much!” and “Not at all.” This is a leading question, which essentially asks for a positive response. Additionally, respondents may be prone to commit acquiescence bias in order to please the company, since there’s no option for a middle-ground response. Even if results of this survey are overwhelmingly positive, you might not want to immediately splurge on the mascara. The positive response could have more to do with the structure of the survey than with the quality of the product.

Response bias describes our tendency to provide inaccurate responses on self-report measures.

Why it happens

Social pressures, disinterest in the survey, and eagerness to please the researcher are all possible causes of response bias. Furthermore, the design of the survey itself can prompt participants to adjust their responses. 

Example 1 - Major depression

People with major depression are more likely to identify a given facial expression as sad than people without major depression. This can impact daily interpersonal interactions, in addition to influencing responses on surveys related to emotion-processing.

Example 2 - Interpreting social media surveys

Surveys that aren’t designed to prevent response bias provide misleading results. For this reason, social media surveys, which can be created by anyone, shouldn’t be taken at face value.

How to avoid it

When filling out a survey, actively considering each response, instead of answering automatically, can decrease the extent to which we engage in response bias. Anyone conducting research should take care to craft surveys that are anonymous, that are neutral in tone, that provide sufficient answer options, and that don’t give away too much about the research question.


The Framing Effect

The framing effect describes how factors such as wording, setting, and situation influence our choices and opinions. The way survey questions are framed can lead to response bias by causing respondents to over- or under-report their true viewpoint. The implications of the framing effect are powerful and widespread.