Surveys
Time
⏲
7+ Days
Difficulty
🕹
Hard
Materials
📦
Survey Software
Spreadsheet to track responses
People
🕴
1 Researcher
5+ Users
Overview
Surveys are an investigation of the traits and opinions of a group of people, based on a series of questions
What
Survey – an investigation of the traits and opinions of a group of people, based on a series of questions.
Surveys do not measure behavior. They measure people’s perceptions of their behavior, so they are an indirect method of research.
Why
Useful for ascertaining:
- User demographics: How old are your users? What kind of Internet connection do they have? Is your user population homogenous, or is it a number of distinct groups?
- User opinions: What do they want? What do they like and dislike about the product?
- Competitive intel: What other work management products do people use? How long have they been using them? What compelling features do they have?
Many managers like surveys because they provide data that is easy to count and visualize in reports.
However, this data is easily misleading, so surveys must be designed and analyzed with extreme thoughtfulness.
Also, despite their popularity, remember that surveys are only one method of research. Sometimes the answers you need can (or can only) be found in other methods. So make sure you’re using the best method for your needs before investing the resources a survey requires.
When
A user profile could be done anytime you want a snapshot of your current user population. A satisfaction survey could be run before a major redesign in order to make sure the redesign resolves the main problems people are experiencing. A value survey, investigating what people find important, could be run before a major marketing campaign to shape how it describes and promotes the product.
Use quantitative (closed, such as multiple choice or Likert scale) survey questions to determine “how many” and “how much.” Example: How many of our users are executives? *Note: most surveys rely primarily on quantitative questions.
Use qualitative (open-ended) survey questions to determine how or why to fix a problem. Example: Why did you choose to use Trello? *Note: Use sparingly. It’s hard to accurately code qualitative data in large samples.
Steps
Step 1 Make a research plan
Determine the specific goal of your research.
Examples:
- Create a demographic, technological, and web use profile of our audience.
- Get a prioritized rating of the utility of our main features to the survey audience.
- Get a list of other tools users commonly use.
Set the schedule. Most of your time will be spent preparing the survey.
Write the survey.
Brainstorm your questions. With a group, use casual language to write down every question you can possibly think of relating to the goals.
Examples:
- Characteristics – What do you do for a living? What kind of mobile phone do you own?
- Behaviors – What product features do they (claim to) use? What are the reasons they come to your site? Or your competitor’s?
- Attitudes – Does the product do what they expected? What do they consider unnecessary or distracting?
Keep in mind there are 2 things surveys ultimately do:
- Describe the audience’s characteristics, what they own, what they want, how they claim to behave.
- Explain people’s beliefs and behaviors by uncovering *relationships between the descriptive traits.* Example: how income affects the features they prefer. Once you know this, you can understand the core reason your product is worthwhile to people.
Write your questions. Each question must directly fulfill a goal from your research plan and be completely unambiguous. Focus on closed-ended (quantitative) questions.
- Common types of questions:
- Yes/no (“Have you ever used Workfront?”)
- Multiple choice (“Which one of the following…”)
- Checklist (“Check all that apply”)
- Likert scale (“Rate the following statements according to the scale: strongly disagree, disagree, neutral, agree, strongly agree”)
- Matrix (ranking 5 items in order of 1-5)
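If you ever need to build or export a survey programmatically, the question types above map naturally onto simple data structures. This is an illustrative sketch only; the field names and labels are hypothetical, not any particular tool’s format:

```python
# Hypothetical schema for the common closed-ended question types.
LIKERT = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

questions = [
    {"id": "q1", "type": "yes_no", "prompt": "Have you ever used Workfront?"},
    {"id": "q2", "type": "multiple_choice",
     "prompt": "Which one of the following tools do you use most?",
     "options": ["Tool A", "Tool B", "Other: ______"]},
    {"id": "q3", "type": "checklist", "prompt": "Check all that apply",
     "options": ["Site A", "Site B", "None of the above"]},
    {"id": "q4", "type": "likert", "prompt": "The product is easy to use.",
     "options": LIKERT},
]

def validate(questions):
    """Sanity checks: ids must be unique; every closed question needs options."""
    assert len({q["id"] for q in questions}) == len(questions)
    for q in questions:
        if q["type"] != "yes_no":
            assert q.get("options"), f"{q['id']} has no answer options"
```

A schema like this also makes it easy to enforce the guidelines below (mutually exclusive answers, opt-out options) mechanically rather than by eye.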
Answers must be specific, exhaustive, and mutually exclusive.
Ask more general questions first.
Don’t ask people to recall the distant past or to predict the future.
Avoid loaded questions that contain embedded assumptions or preferred answers, such as “Which Jeff Goldblum character is the most charming?”
Avoid negative questions (“Which are you not interested in?”).
Questions must only contain one concept.
Use specific, quantifiable words and concepts (“How much time did you spend reading news last week? None, fewer than 5 hours, 5-10 hours, more than 10 hours”). Avoid terms with vague meanings (“a lot,” “Is this design appealing?”), jargon, and abbreviations.
Ask questions the same way every time.
Avoid extremes (“every time” or “never”).
Make questions relevant to things the participant would actually know and care about.
Create follow-up questions. If one question asks, “Check all the sites that you read regularly,” a follow-up can then contain a list of sites that were marked as being read regularly with the question “Rate how important each of the following sites is to you, from Crucial to Unimportant.”
Include an “other: _______” option. Also include opt-out options (“None of the above” or “I don’t use work management tools.”). This reduces participant frustration, as it gives them more accurate options to choose from.
Edit and order your questions. Pare down the questions to take 20 minutes or less. The question order is as important as the wording. Selectively reveal information to the participant in a logical way – earlier topics may influence people’s expectations and thoughts. Whenever possible, randomize the list of answers inside a question to reduce bias.
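Randomizing answer order is easy to script. A minimal sketch, assuming you want opt-out choices such as “Other” and “None of the above” pinned to the end rather than shuffled (the option labels are illustrative):

```python
import random

def randomized_options(options, anchored=("Other", "None of the above")):
    """Shuffle answer options to reduce order bias, keeping opt-out
    choices anchored at the end in their original order."""
    body = [o for o in options if o not in anchored]
    tail = [o for o in options if o in anchored]
    random.shuffle(body)
    return body + tail

randomized_options(["Daily", "Weekly", "Monthly", "Other", "None of the above"])
```

Most full survey services can do this for you; a sketch like this is mainly useful if you assemble questionnaires from your own templates.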
Surveys have 4 parts:
- An introduction that presents the purpose of the survey, instructions, duration, and contact info if a question arises.
- A beginning with teaser questions that draw people in (not boring demographic questions).
- A middle, where you keep things moving by alternating questions that are likely to be interesting with those that are not. Group questions thematically.
- The end, which concludes with demographic questions, provides an open-ended field for general response, and reiterates the contact info.
Write the instructions. 1 paragraph max explaining: that the survey is important, what it’s for, why people’s answers are safe, what the reward is, who is responsible for the survey, how long it takes to complete, how long the survey will run, and who to contact with questions. Example:
“We want to make [product name] a better product for you. Your participation in this survey is very important to us. The survey is to help us understand the needs and desires of the people using [product name]. All of your answers are confidential and will be used strictly for research. There will be no sales or marketing follow-up because of your participation in this survey. By completing this survey, you will have our gratitude and a 1 in 100 chance of winning an iPad. This survey is administered for [product name] by YourCompany LLC. The survey will take approximately 5 minutes to complete and will run from [start date] to [end date]. If you have any questions or comments about this survey, you may enter them into the form at the bottom of the survey or email them to Jane Smith at janes@yourcompany.com.”
Web survey tips. You’ll probably want to administer your survey online. You’ll have to pay for full-fledged services like Qualtrics and SurveyMonkey, but you can experiment with free services like Google Forms for practice (like surveying where your team wants to go to lunch). When you buy a subscription to a full survey service, make sure it has these features:
- Error checking that prevents respondents from continuing with errors in their answers.
- Functionality across devices, operating systems, etc.
- Usability.
- Timing that records when answers were submitted, so you know which were submitted within the study timeframe.
- Mortality to keep track of the people who dropped out of the survey, and at what point.
- Response rate that counts how many people completed the survey, compared to how many were offered it.
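If your tool doesn’t report mortality and response rate directly, both are easy to derive from raw responses. A sketch, assuming each response exports as a dict mapping question id to answer (this data shape is an assumption, not any specific tool’s format):

```python
from collections import Counter

def fielding_stats(invited, responses, questions):
    """Response rate and per-question drop-off ("mortality").

    `responses` is one dict per respondent mapping question id -> answer;
    a respondent who abandoned the survey has no entries for later questions.
    """
    completed = sum(1 for r in responses if all(q in r for q in questions))
    reached = Counter()
    for r in responses:
        for q in questions:
            if q not in r:
                break          # respondent dropped out at this question
            reached[q] += 1
    return {
        "started": len(responses),
        "completed": completed,
        "response_rate": completed / invited,
        "reached": dict(reached),  # how many respondents got to each question
    }
```

A sharp drop in `reached` between two adjacent questions is a strong hint that the earlier question is confusing, tedious, or off-putting.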
Test the survey. Run a pilot test with 5-10 people who are representative of those in your sample. Don’t tell them it’s a pre-test. Watch 2-3 take the survey in person and note how long it takes and any problems or questions that arise. You can also follow up with pilot participants, asking them to discuss how the survey went.
Complete your report template, too, to ensure you’ll be able to do everything you set out to do.
The incentive. It’s important to reward people for their time, though the appropriate reward is steeper for an hour of an executive’s time than for 10 minutes of a teenager’s. You can pay them directly or offer an entry in a drawing.
Field the survey.
The sampling frame and the sample. The sampling frame is everyone in your universe of users who you could offer the survey to. The sample is the randomly chosen subset of people who actually take the survey. Unfortunately, there are many, many ways to sample badly and end up with the wrong sampling frame. So:
- Discuss your methods with your colleagues to identify blind spots!
- Define the users you’re most interested in (ex: usage frequency). Now add other characteristics that are important and that can affect the way you recruit people to your survey (ex: people who work in Marketing, power users).
Sample size. How many people should be in your sample? Use [this calculator](https://www.surveysystem.com/sscalc.htm) to determine sample size. Confidence level = 95%, confidence interval = 5, and population = # of people who would use the product.
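If you’d rather compute it yourself, calculators like the linked one use the standard formula for estimating a proportion, with a finite population correction. A sketch in Python (z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the most conservative assumed response distribution):

```python
import math

def sample_size(population, margin_of_error=5, z=1.96, p=0.5):
    """Required sample size for estimating a proportion, where
    margin_of_error is the confidence interval in percentage points."""
    e = margin_of_error / 100
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population estimate
    # finite population correction
    return math.ceil(n0 / (1 + (n0 - 1) / population))

sample_size(10000)  # 95% confidence, interval of 5 -> 370 respondents
```

Note that this is the number of completed responses you need; given typical response rates, you will have to invite far more people than this.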
Bias. “Sampling bias is the Great Satan of survey research.” – Observing the User Experience. This is another time it’s crucial to discuss your potential blind spots with your colleagues. For a sample to provide useful information about the whole population, it has to resemble that population. If it does not, then certain subgroups or views will be over-represented while other groups get shortchanged. Since the purpose of the survey is to understand your users, this undermines your whole endeavor and can lead to costly product errors. Common biases include:
- Non-responder bias: when there’s a pattern to those who choose not to participate in your survey (ex: a survey about time usage that takes 45 minutes… you just excluded all busy people from taking this survey).
- Timing bias – when you ask the question affects the answer (ex: asking someone how much they enjoy shopping… at midnight on Christmas Eve)
- Invitation bias – How did you invite people to participate in the survey? The places, times, incentives, and wording all affect who is going to respond. (ex: inviting skateboarders with the terms “Dear Sir or Madam, you are cordially invited to…” )
- Self-selection: A special kind of invitation bias and a common mistake in web surveys is letting people choose whether they want to participate in a survey without inviting them (ex: “Click here to take our survey!”). Who will take the survey? Why will they take it? You have no idea who the people NOT clicking the link are. Plus, these surveys tend to attract people with strong opinions and specialized interests, which is rarely who you want to attract.
- Presentation bias: the survey should have the same visual polish as the site it links from, or credibility is compromised.
- Expectation bias: when people think they know what the survey is going to be about and it actually asks some other questions, they may abandon it. This is why you should keep the survey invitation and description broad.
Fielding methods. The two most common ways to field a survey are intercept surveys (triggered during the use of a site or application) and email surveys (in which participants are recruited from an email message).
Analyze survey responses.
There are two basic ways of analyzing survey responses: counting (tallying how many people gave each answer) and comparing (looking at how answers differ across subgroups or over time).
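A sketch of both, using made-up responses: counting tallies answers per category, while comparing looks at how one answer varies across subgroups:

```python
from collections import Counter, defaultdict

# Hypothetical responses: each respondent's role and satisfaction rating (1-5).
responses = [
    {"role": "executive", "satisfaction": 4},
    {"role": "executive", "satisfaction": 5},
    {"role": "analyst",   "satisfaction": 2},
    {"role": "analyst",   "satisfaction": 3},
    {"role": "analyst",   "satisfaction": 2},
]

# Counting: how many respondents fall into each category?
role_counts = Counter(r["role"] for r in responses)

# Comparing: how does satisfaction vary by role?
by_role = defaultdict(list)
for r in responses:
    by_role[r["role"]].append(r["satisfaction"])
avg_by_role = {role: sum(v) / len(v) for role, v in by_role.items()}
```

Real analyses usually happen in a spreadsheet or a stats tool, but the two operations are the same: frequency tables and cross-tabulations.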
Drawing conclusions. Pitfalls to avoid:
- Confusing correlation with causation: Just because two things happen simultaneously, it doesn’t mean one caused the other. Just because a group of people likes a product and uses it a lot doesn’t mean that liking the product makes people use it more or that frequent use makes people like it better (ex: consider a company in which everyone is required to use the same email program whether they like it or not – the two phenomena could be unrelated.)
- Not differentiating between subpopulations: Sometimes what looks like a single trend is actually the result of multiple trends in different populations. To see if this is the case, look at the way answers are distributed rather than just the composite figures. For example, if you’re doing a satisfaction survey and half the people say they’re “extremely satisfied,” and the other half are “extremely dissatisfied,” looking only at the mean will not give you a good picture of your audience’s perceptions.
- Confusing belief with truth: There are many cognitive biases ([188 currently known](https://en.wikipedia.org/wiki/List_of_cognitive_biases)), and even more random psychological phenomena. They all blind the human mind to reality. So just because survey participants report knowing or experiencing something, it doesn’t mean they did. Example: “ad blindness” is a phenomenon in which people stop noticing advertisements. So if you ask, “Have you ever seen this ad?” you’ll get an answer about what people believe to be true, not reality.
- Not taking human nature into account: People want everything. People exaggerate. People will choose an answer even if they don’t feel strongly about it. People say what they imagine the survey-maker wants to hear. People lie.
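The subpopulation pitfall above is easy to demonstrate: a polarized audience and an indifferent one can produce the same mean, so always look at the distribution. With made-up ratings:

```python
from collections import Counter
from statistics import mean

# Hypothetical satisfaction ratings: half "extremely dissatisfied" (1),
# half "extremely satisfied" (5).
ratings = [1] * 50 + [5] * 50

average = mean(ratings)          # 3.0 -- the mean suggests a "neutral" audience
distribution = Counter(ratings)  # {1: 50, 5: 50} -- two opposed camps
```

The composite figure (3.0) would describe almost nobody in this sample; the distribution shows the audience is actually two distinct groups.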
Reporting:
Once you have the statistical results, you can plug the information into the report template you originally created.
Follow-up and ongoing research.
- Follow-up qualitative research. Surveys help us know WHAT people believe about themselves and the product, but they can’t tell us WHY people believe that. For WHY data, use interviews, focus groups, usability testing (with think-alouds), and observations onsite.
- Tracking surveys. You can run the same survey in the same way at regular intervals to track how your product’s audience and perceptions change as the product grows in popularity.
- Refined surveys. You can use a survey to determine core characteristics that define your audience, then conduct additional research to deepen your knowledge. (Ex: if you know that the most important factors defining your audience are frequency of computer usage, level of computer experience, and what software they use… now you can field surveys that further probe their preferences, satisfaction, common ways they use your product, etc.)
- Pre/post surveys. Run an identical survey before and after a certain event to see what, if any, effect the changes had on your population.
Step 2 Ready your participants
- Schedule an onboarding session to set expectations
- Create a cheat sheet (who to contact, when to log, etc.)
- Share the rewards structure (participants should be paid for their time)
Step 3 Log and Process
- Frequently communicate with your participants
- Send reminders
- Provide guidance
- Acknowledge entries coming in
- As data comes in…
- Take notes
- Transcribe videos or audio entries
- Write any follow-up questions
Step 4 Follow-up and Learn More
- Schedule follow-up interviews with your most engaged participants
- Ask for participant feedback
- Turn your qualitative data into quantitative data with follow-up surveys, usability tests, A/B tests, etc.
Step 5 Analyze and share
- Synthesize the data
- Create a summary of findings
- Tag like items to find patterns (e.g., “distracting” or “hard” were common terms used)
- Share insights with stakeholders
Resources
Tools
None