Congratulations! Your prototype/product is out into the world.
Now you can just kick your feet up and relax as the revenue comes rolling in.
Well, not if you want to keep the end users happy.
To keep users happy you need to listen to them. And a great way to listen to them is with user experience surveys.
Also known as feedback surveys, they gather both qualitative and quantitative data that helps you:
Keep a pulse on customers and identify churn points.
Persuade stakeholders and investors who highly value data.
Decide on changes that will increase user satisfaction.
When should you send a user experience survey?
With functionality being the highest priority, you should send user experience surveys at regular intervals throughout a user's journey.
Think of them like road signs:
10 miles from Exit 29
5 miles from Exit 29
2 miles from Exit 29
1 mile from Exit 29
If you had a road sign every 50 feet, you would get sick of them immediately. The same rule applies to these questionnaires.
You DON’T want your end users so turned off by your surveys that they stop using your product.
However, sending surveys periodically helps you make sure your product’s usability is keeping customers satisfied.
The intervals at which you send surveys can vary. They could be time gated, based on the number of actions performed on screen, consecutive days signed in, etc.
Along with intervals, your surveys should have cooldown periods, during which the survey won’t pop up no matter what triggers are hit.
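To make the interval-plus-cooldown idea concrete, here is a minimal sketch of how a survey prompt might be gated. The trigger names, thresholds, and class name are all illustrative assumptions, not taken from any particular survey tool:

```python
from datetime import datetime, timedelta

class SurveyScheduler:
    """Hypothetical sketch: show a survey when a usage trigger fires,
    but never within the cooldown window after the last prompt."""

    def __init__(self, min_actions=25, min_consecutive_days=3,
                 cooldown=timedelta(days=30)):
        self.min_actions = min_actions                # illustrative threshold
        self.min_consecutive_days = min_consecutive_days
        self.cooldown = cooldown
        self.last_shown = {}                          # user_id -> last prompt time

    def should_show(self, user_id, actions, consecutive_days, now=None):
        now = now or datetime.now()
        last = self.last_shown.get(user_id)
        # Cooldown wins over every trigger: never re-prompt too soon.
        if last is not None and now - last < self.cooldown:
            return False
        if actions >= self.min_actions or consecutive_days >= self.min_consecutive_days:
            self.last_shown[user_id] = now
            return True
        return False
```

The key design point is that the cooldown check runs before any trigger check, so no combination of triggers can annoy a recently surveyed user.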
Scale types and questions
One popular type of online survey template is the rating scale. Rating scales break down into two main groups: ordinal and interval.
Ordinal - answer sets with a logical order but no fixed distance between points; supplies qualitative (ranked) data.
Interval - answer sets where the distance between each point is equal and meaningful; supplies quantitative data.
Many subtypes of these scales exist. Some work best as ordinal scales, others as interval scales, so find what works best for you and your end user. You can check out the subtypes below.
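The ordinal/interval distinction matters when you analyze responses. A minimal sketch, using made-up 1-5 responses: treated as ordinal, you summarize with the median (order matters, distances don't); treated as interval, you can take the mean (each step is assumed equal):

```python
from statistics import mean, median

# Illustrative, made-up 1-5 Likert-style responses.
responses = [5, 4, 4, 3, 5, 2, 4]

# Ordinal treatment: the median is robust to uneven gaps between labels.
ordinal_summary = median(responses)

# Interval treatment: the mean assumes "agree" -> "strongly agree" is one equal unit.
interval_summary = mean(responses)
```

If the gap between "neutral" and "agree" isn't really the same size as the gap between "agree" and "strongly agree", the mean can mislead, which is why the scale type should drive the analysis.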
Likert scale example from GeoPoll
Asks for the end user’s level of agreement with a given statement.
Example: “How much do you agree with this statement?”
Frequency scale example from ProProfs Survey Maker
The frequency scale helps you understand customer behavior and patterns, though respondents often interpret frequency terms differently, so combing through the data may be difficult.
Example: “When [performing action], how often do you [end result]?”
Comparative scale example from QuestionPro
The comparative scale asks for user preferences. This scale is most useful when comparing two linked items.
Example: “Which feature do you like most, [UX format #1] or [UX format #2]?”
Semantic differential scale:
Semantic differential scale compared to Likert scale from MBA in Simple Words
Users indicate where the product lies on a scale whose two ends are polar opposites. This scale is often criticized for not giving clarity to its midpoints.
Example: “How would you rate your experience with X product?” (extremely satisfied to extremely unsatisfied)
Graphic scale example from QuestionPro
The graphic scale has users pick values based on a graphical representation. This is useful where product adoption is worldwide, as it doesn’t need translation.
Example: Do you like our product?
Slide scale example from QuestionPro
Unlike the previous options, the slider scale doesn’t limit users to five options; they can use the whole scale to give feedback. But by not limiting feedback to easily understandable values, it makes the feedback harder to interpret, so use it sparingly.
Example: How satisfied are you with the website?
Design process questions
Going deeper than scales, you can ask users questions based on where your team is in the development process. These survey questions are broken up into three categories: THINK, MAKE, and CHECK.
Asking the right questions here can: A) help your team think of new ideas, B) avoid developing new products/features that won’t be utilized, and C) show how recent updates have fared with users.
THINK: user experience survey questions
THINK user experience questions ask for customer feedback that encourages users to think deeper. These gather qualitative data with the express purpose of helping UX designers generate new ideas.
1. Question: If you could improve one thing about this product, what would it be?
Why it's useful: Brings up the user's biggest pain point by narrowing the answer down to one change.
2. Question: What is one thing you wish this product could do that it doesn’t do already?
Why it’s useful: Could bring to light a feature the team hasn’t thought of. On the flip side, the answer could validate an idea already being discussed.
3. Question: If you knew that we would make one change to our product the next time you logged in, what would you want it to be?
Why it's useful: Get ideas for improvements, uncover possible user pain points, and prioritize changes.
4. Question: What is one thing you wish this product could do that it doesn’t do already
Why it's useful: Get ideas for new tools or features to add to your product roadmap.
5. Question: What would you change in our product if you had a magic wand?
Why it's useful: Doesn’t constrain the user to “realistic” changes.
6. How did you first learn about our product/website?
Why it’s useful: Helps you find where most of your user base comes from.
7. What comes to your mind when you think about our product/website?
Why it’s useful: Usually points to your product’s biggest feature or flaw, allowing you to build on the good or fix the bad.
8. Which of the following best describes your role in the purchase process?
Why it’s useful: Helps you understand your end user’s role in their org. Do they have to get approval to buy your product, or are they the executive?
9. What are the main problems you want to solve with [Product]?
Why it’s useful: Allows you to see if your product is being used to solve problems you didn’t anticipate.
10. Which other options did you consider before choosing our [Product name]?
Why it’s useful: May bring new competitors to light.
11. Please tell us about your experience with [Product]?
Why it’s useful: Simple honest feedback.
12. What do you like the least/most about [Product]?
Why it’s useful: Similar to #7; shifts how you see features and flaws in your product.
13. What stopped you today from completing the purchase?
Why it’s useful: Use if your product has add-ons or tiers. You can see why they didn’t find upgrading valuable.
14. Please share with us the one time you found our product/service to be highly satisfying?
Why it’s useful: Allows you to double down on what’s working.
15. What could we have done better?
Why it’s useful: If one person has this issue there are bound to be others.
16. Please name three crucial features you think we are missing in [Product]?
Why it’s useful: Input could be injected into the next ideation session.
17. What are the three features most valuable to you in our product?
Why it’s useful: Prevents you from unknowingly axing features your users find valuable.
MAKE: user experience survey questions
Using UX surveys in the MAKE phase of development can help you prioritize aspects of your product roadmap.
18. Question: On a scale of 1-10, how would your use of our product be impacted by [feature/change]?
Why it's useful: Understand the potential impact of changes from the users' perspective.
19. Question: Please state your agreement with the following: "[feature/change] would make my job easier.”
Why it's useful: See whether a change will benefit the designer or the user.
20. Question: Is there anything here you never use?
Why it's useful: Putting some features on the chopping block makes for easier usability and room for new, more valuable features.
21. Question: What products do you use other than ours to perform similar tasks?
Why it’s useful: Gives deeper knowledge into how other tools compare to yours.
22. Question: Is our pricing clear to you?
Why it’s useful: Identifying if the pricing description is too lengthy or too vague can help encourage users to move from the free track to the paid one.
23. Is there something missing on this page?
Why it’s useful: Could point out features users want, or reveal a coding bug.
24. Do you supplement our app with another to get a full experience?
Why it’s useful: The features users go elsewhere for could be improved upon and added to your product.
25. Is there anything you would like us to help you with?
Why it’s useful: Exposes potential bottlenecks in your product.
26. Which payment/delivery method do you prefer?
Why it’s useful: Simple user preference.
27. Which features were confusing to you?
Why it’s useful: Brings to light UX/UI issues.
CHECK: user experience survey questions
CHECK UX questions help you understand whether product changes have improved user experience. You can use a mixture of closed- and open-ended questions during this phase:
28. Question: On a scale of 1 to 10, how upset would you be if [feature name/product/service] was no longer available?
Why it’s useful: Helps you remove fluff from your product. The only thing worse than an unused feature is an underutilized one that makes other features tiresome to navigate.
29. Question: Rate your agreement with the following: "[feature/change] has made my job easier.”
Why it’s useful: Pairs with #19; confirms whether a shipped change actually made users’ jobs easier.
30. Question: How has [feature/change] affected the way you use our product?
Why it's useful: Get voice of the customer (VoC) feedback that directly relates to a recent change or update to your product.
31. Question: Is there anything you would change about how [feature/change] works?
Why it’s useful: May uncover hidden bottlenecks that can be removed. Or integration with other features.
32. Question: Which of our competitors, both online and offline, did you consider before choosing our [Product name]?
Why it’s useful: Allows you to see how you fare against competitors through your users’ eyes.
33. Which of the following words would you use to describe our product/service?
Why it’s useful: Categorizes customers based on their answer.
34. How would you rate our service on a customer satisfaction scale of 1 – 10?
Why it’s useful: Combined with other quantitative data, you can see which customer profiles are satisfied and which aren’t.
35. What stopped you from getting our subscription?
Why it’s useful: Alerts you to hurdles in the user buying experience.
36. Was the app easy to pick up and use?
Why it’s useful: Straightforward UX/UI question.
37. If we ask you to rate our product/website out of 10, what score would you give us?
Why it’s useful: Great post-update question. See whether the score increased or decreased since the update.
38. Did you find what you were looking for?
Why it’s useful: Find out if your product is easy to navigate.
Tips for making the most of your UX survey
Use these tips to make the most of your user experience survey:
1. Conduct a UX audit
If UX surveys act as car sensors, then a UX audit is taking your website/app to the mechanic. You can conduct a UX audit for a variety of reasons, such as UX surveys alerting you to a high-churn spot, or simple routine maintenance. These can be done in house if your team has the capability, or you can hire a user experience architect.
2. Create a user persona.
While you probably already have a user persona from the design thinking process, your product wouldn’t be the first to find popularity among a different crowd.
Users evolve and change, so updating (or creating) a user persona makes it easier to format the user experience survey in a way that attracts them. Because what’s the point in creating a survey that isn’t relevant to users and won’t have viable response rates?
3. Augment your findings with product analytics
It’s important to match your findings across multiple sets of user research data, for example, coupling churn analysis with customer feedback. Doing this helps you avoid red herrings while fixing the biggest problems that will have the most impact.
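As a minimal sketch of what "coupling churn analysis with customer feedback" can look like in practice, here is a toy cross-reference of churned users against their last survey scores. The user IDs and scores are entirely made up for illustration:

```python
# Hypothetical data: which users churned, and each user's last 1-10 survey score.
churned = {"u1", "u4"}
last_scores = {"u1": 3, "u2": 9, "u3": 8, "u4": 2, "u5": 10}

churned_scores = [s for u, s in last_scores.items() if u in churned]
retained_scores = [s for u, s in last_scores.items() if u not in churned]

def avg(xs):
    return sum(xs) / len(xs)

# A large gap suggests low survey scores really do precede churn,
# so the feedback is pointing at a real problem rather than a red herring.
gap = avg(retained_scores) - avg(churned_scores)
```

In a real product you would pull both datasets from your analytics tooling, but the principle is the same: check that the complaints in your surveys line up with the behavior in your churn data before acting on either alone.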
4. Avoid question bias
Creating questions that skew results gives you less-than-valuable information. And it often happens by accident! UX Planet has a great breakdown of the types of biases.
However, don't nitpick your questionnaire while writing it. After you finish, look it over and see if you and your team can spot any biases. Then make changes based on that feedback.