What's the most critical factor determining the success of a survey? You got it: the types of questions you ask.
From email to SMS surveys, the common denominator that determines effectiveness is the questions. Different question and answer types can elicit different answers, even to similar questions.
This guide covers the types of survey questions available and looks at what makes good survey questions. We'll also explore examples and give you access to sample survey questions as a template for writing your own.
Effectively using different question and answer types leads to more engaging surveys. Properly incorporating the different types gives you more complete and accurate results.
1. Dichotomous questions
A dichotomous question is generally a "yes/no" question. It's often used as a screening question to filter out respondents who don't fit the needs of the research. Dichotomous question example:
For example, say you want to learn about the people who use your products. This type of question screens respondents to determine whether they own your products; those who have yet to buy are moved to the end of the survey.
Dichotomous questions can also separate respondents by a specific value. For example, this might be those who “have purchased” and those who “have yet to purchase” your products.
The survey then asks different question sets to the two groups. You may want to know the satisfaction of the “have purchased” group. On the other hand, you'll want to know why you're missing sales from the other group.
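The screening-and-branching flow described above can be sketched in code. This is a minimal illustration of the routing logic, not any particular survey tool's API; the question wordings are hypothetical placeholders.

```python
def route_respondent(has_purchased: bool) -> list[str]:
    """Pick a question set based on a dichotomous screener (hypothetical questions)."""
    purchased_questions = [
        "How satisfied are you with the product?",
        "How likely are you to recommend it?",
    ]
    not_purchased_questions = [
        "What has kept you from purchasing so far?",
        "What would make you consider purchasing?",
    ]
    # The yes/no answer to the screener decides which branch the respondent sees.
    return purchased_questions if has_purchased else not_purchased_questions

# A "yes" respondent sees the satisfaction questions,
# while a "no" respondent is asked about the missed sale.
print(route_respondent(True)[0])
print(route_respondent(False)[0])
```

In a real survey platform the "end of survey" jump for screened-out respondents would be configured the same way: the screener's answer selects the next block.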
2. Multiple choice
Questionnaire design – Pew Research Center Methods
Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public.
Accurate random sampling and high response rates will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions.
Creating good measures involves both writing good questions and organizing them to form the questionnaire.
Questionnaire design is a multistage process that requires attention to many details at once.
Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions.
Researchers also are often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.
Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions.
For many years, surveyors approached questionnaire design as an art, but substantial research over the past thirty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.
There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey.
For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media.
We also track opinion on a variety of issues over time, so we update these trend questions on a regular basis to understand whether people’s opinions are changing.
At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. After the questionnaire is drafted and reviewed, we pretest every questionnaire and make final changes before fielding the survey.
Measuring change over time
Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time.
A cross-sectional design, the most common one used in public opinion research, surveys different people in the same population at multiple points in time. A panel or longitudinal design, frequently used in other types of social research, surveys the same people over time.
Pew Research Center launched its own random sample panel survey in 2014; for more, see the section on the American Trends Panel.
Many of the questions in Pew Research surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or African Americans).
When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see question wording and question order for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current poll and previous polls in which the question was asked.
Open- and closed-ended questions
One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.
In one Pew Research experiment, respondents were asked which issue mattered most in their vote, either as an open-ended question or as a closed-ended question listing five issues. When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy.
Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read; by contrast fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question.
All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see “High Marks for the Campaign, a High Bar for Obama” for more information.)
Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking or how they view a particular issue.
When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered and the order in which options are read can all influence how people respond.
One example of the impact of how categories are defined can be found in a Pew Research poll conducted in January 2002: When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy.
When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.
In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time.
When the question is asking about an objective fact, such as the religious affiliation of the respondent, more categories can be used. For example, Pew Research Center’s standard religion question includes 12 different categories, beginning with the most common affiliations (Protestant and Catholic).
Most respondents have no trouble with this question because they can just wait until they hear their religious tradition read to respond.
What is your present religion, if any? Are you Protestant, Roman Catholic, Mormon, Orthodox such as Greek or Russian Orthodox, Jewish, Muslim, Buddhist, Hindu, atheist, agnostic, something else, or nothing in particular?
In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”).
Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized (when questions have two or more response options) to ensure that the options are not asked in the same order for each respondent.
For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents.
Randomization of response items does not eliminate order effects, but it does ensure that this type of bias is spread randomly.
Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question.
Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents.
For example, in one of the Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
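The two practices described above, fully randomizing nominal response options and merely reversing ordinal scales for half the sample, can both be sketched in a few lines. This is a minimal illustration assuming a simple even/odd respondent split; the option lists echo the examples in the text and are not Pew's actual survey programming.

```python
import random

nominal_options = ["the economy", "health care", "education", "terrorism", "energy"]
ordinal_scale = ["legal in all cases", "legal in most cases",
                 "illegal in most cases", "illegal in all cases"]

def presented_options(respondent_id: int) -> tuple[list[str], list[str]]:
    """Return the option orders one respondent would see."""
    # Nominal options: fully randomized per respondent, so no issue
    # systematically benefits from a recency effect.
    shuffled = nominal_options[:]
    random.shuffle(shuffled)
    # Ordinal scale: the order carries meaning, so it is never shuffled;
    # instead it is reversed for half the sample (even/odd split assumed here).
    if respondent_id % 2 == 0:
        scale = ordinal_scale[:]
    else:
        scale = list(reversed(ordinal_scale))
    return shuffled, scale

issues, scale = presented_options(respondent_id=1)
print(scale[0])  # odd-numbered respondents hear "illegal in all cases" first
```

As the text notes, neither technique removes the recency effect; both just spread it evenly across the sample instead of letting it favor one option.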
The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.
An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action.
However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule even if it meant that U.S. forces might suffer thousands of casualties,” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.
There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space. Here are a few of the important things to consider in crafting survey questions:
First, it is important to ask questions that are clear and specific and that each respondent will be able to answer.
If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.).
Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive).
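For numeric response categories, exhaustiveness and mutual exclusivity can even be checked mechanically. A small sketch under the assumption of inclusive integer brackets (the bracket boundaries are hypothetical):

```python
def check_brackets(brackets: list[tuple[int, int]], lo: int, hi: int) -> list[str]:
    """Flag gaps and overlaps in inclusive numeric answer brackets covering [lo, hi]."""
    problems = []
    brackets = sorted(brackets)
    # Exhaustive: the first bracket must start at lo and the last must reach hi.
    if brackets[0][0] > lo or brackets[-1][1] < hi:
        problems.append("not exhaustive: range edges uncovered")
    for (a_lo, a_hi), (b_lo, b_hi) in zip(brackets, brackets[1:]):
        # Mutually exclusive: adjacent brackets must not share any value.
        if b_lo <= a_hi:
            problems.append(f"overlap: {a_lo}-{a_hi} and {b_lo}-{b_hi}")
        # Exhaustive: no value may fall between adjacent brackets.
        elif b_lo > a_hi + 1:
            problems.append(f"gap: nothing covers {a_hi + 1}-{b_lo - 1}")
    return problems

# "18-25" and "25-34" overlap at 25, so a 25-year-old fits two answer choices:
print(check_brackets([(18, 25), (25, 34), (35, 65)], lo=18, hi=65))
```

The same two tests apply conceptually to non-numeric options: every respondent should fit some choice, and no respondent should fit two.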
It is also important to ask only one question at a time.
Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.
In general, questions that use simple and concrete language are more easily understood by respondents.
It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose not allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.
Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke.
For example, in a 2005 Pew Research survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”
How to write good survey questions – Pollfish Resources
Good survey questions lead to good data. But what makes a survey question “good” and when is the right time to use different types?
At Pollfish, we have distributed tens of thousands of surveys and manually reviewed them all, so we know a thing or two about writing good survey questions. Our experts have compiled the essentials below into a sort of questionnaire template to make sure you have what you need to create great surveys and get the highest-quality data.
1. Have a goal in mind.
Consider what you’re trying to learn by conducting this survey. Do you have an idea that you want to validate, or are you hoping that you can disprove an assumption you’ve been operating under? Surveys work best when they are focused on one specific objective. When building the questionnaire for your survey, it is important to offer questions that support your goal.
2. Eliminate jargon.
Just because a concept is clear to you doesn’t mean your target audience is on the same page. Good questionnaire design starts with good survey questions, to be sure.
But they also use plain language (not jargon) to explain concepts or acronyms that customers may be unfamiliar with and offer an opt-out for those who are unsure.
Don’t be afraid to use more than one question or to offer an example to ensure clarity on complex information in your questionnaire template: a confused audience leads to frustration and low-quality responses.
3. Make answer choices clear and distinct.
When multiple-choice answers are presented, the respondent is required to make a selection. If these responses overlap or are confusing for the respondent, the quality of the data decreases because they aren’t sure what is being asked of them. Make sure answers are distinct and specific whenever possible so the respondent can confidently choose the best answer.
4. Give users an “other” option.
10 Tips For Crafting Good Survey Questions
Online survey tools have made it easy for marketers to conduct their own research. But while it may be easy to create a survey, surveying requires careful planning if you want to collect meaningful results that you can act on.
When crafting your survey questions, consider these ten dos and don’ts.
What Makes a Good Survey Question
Start with a clear survey goal.
DO: Stick to your goal
Only ask questions that pertain to your goal or to an objective that will help you achieve it. No matter how nice something might be to know, don’t ask about it if it does not help you achieve your goal. Set a clear goal for what you want to achieve and don’t stray from it.
DO: Use the right survey question type
To get clean data, you need to use the right question type.
Qualitative questions are open-ended and are great for asking “why”. Use these when exploring an issue. Use them sparingly as they are fatiguing for respondents and subject to interpretation bias.
Quantitative questions are closed-ended. These are far less fatiguing and easier to measure, and they offer simple answer options for answering how, what and when. They often appear as:
- Radio buttons
- Check boxes
- Drop-down menus
How To Write A Good Survey
Above all, your questionnaire should be as short as possible.
When drafting your questionnaire, make a mental distinction between what is essential to know, what would be useful to know and what would be unnecessary.
Retain the essential, keep the useful to a minimum and discard the rest. If a question is not important enough to include in your report, it probably should be eliminated.
Use simple words
Survey recipients may have a variety of backgrounds, so use simple language. For example, “What is the frequency of your automotive travel to your parents' residence in the last 30 days?” is better understood as, “About how many times in the last 30 days have you driven to your parents' home?”
Relax your grammar
Relax your grammatical standards if the questions sound too formal. For example, the word “who” is appropriate in many instances where “whom” is technically correct.
Assure a common understanding
Write questions that everyone will understand in the same way. Don't assume that everyone has the same understanding of the facts or a common basis of knowledge. Identify even commonly used abbreviations to be certain that everyone understands.
Start with interesting questions
How to write awesome survey questions – Part 1
Most international development programs involve one or more surveys – whether it’s baseline surveys, endline surveys, needs assessment surveys, or feedback forms from participants.
This guide explains how to write clear, concise survey questions that will collect accurate data.
The inspiration for many of these tips comes from The Survey Research Handbook by Pamela Alreck and Robert Settle.
This advice is for:
- Basic quantitative surveys such as feedback forms, needs assessments, simple baseline and endline surveys, etc.
- Written surveys completed by individuals who are literate.
This advice is NOT for:
- Complex baseline and endline surveys or research studies.
- Developing new measurement instruments for use in research (e.g. psychological instruments for measuring concepts such as confidence, motivation, etc).
- Graphical surveys completed by people with low literacy.
- Qualitative focus groups or interviews.
Don’t write new questions unless you have to
There are thousands of NGOs, UN agencies and donors doing surveys all the time. Chances are you aren’t the first person to do this type of survey.
So before you even consider creating new survey questions, do some research to find out if other people have done something similar which you could use. For example, if you’re planning to measure poverty then the Grameen Foundation already have a standard tool to do this.
Or if you want to measure the prevalence of diarrhea, the Bristol Stool Scale was created specifically for this purpose.
The other benefit of using standard questions is that you can compare the data you collect to what other people have collected. For example, if you use the same questions as a national survey then you can compare the results in your target area to the national results.
Write the questions in the local language first
In international development it’s quite common to have managers or technical advisers who don’t speak the local language. If you’re in this position it can be tempting to write the survey questions in your native language (such as English or French) and then have someone translate it.
If you’ve ever been to another country and seen poorly translated signs you’ll understand why this is a very bad idea. Even the best translator can have difficulty getting exactly the right meaning for a question. I’ve even had cases where the translator completely reversed the meaning of the question by accident, making the data useless.
To avoid this problem the questions should be written first in the local language, and then translated into other languages, such as English. If you don’t speak the local language then work with someone who does.
Discuss each of the questions verbally in a language you both understand, and then have them write the question in the local language.
Once they’ve written the question they can translate it into a language you understand.
Sometimes it’s not possible to write the questions in the local language, particularly if you’re using standard questions from another survey. In that case your best option is to use a process called “back translation”. Start by asking one translator to translate the question into the local language.
Then ask a different translator (who hasn’t seen the original) to translate it back again. Then have both translators compare all three versions to identify any discrepancies or problems with meaning (see example below).
If it’s a very important survey then you may want to have four translators working independently – two to translate, and two to back translate.