10 Types of User Testing Questions
Learn 10 types of user testing questions that get you real feedback. Includes example questions, a comparison table, and tips for your next test session.

Nafis Amiri
Co-Founder of CatDoes

TL;DR: The 10 types of user testing questions are: task-based, satisfaction/NPS, open-ended perception, usability, feature prioritization, information architecture, emotional response, competitive comparison, accessibility, and motivation/goals. Mix 3-4 types per session based on what you need to learn. This guide covers all 10 with example questions you can use today.
You already know user testing matters. The problem is most sessions produce vague, unusable feedback because the questions are wrong. Asking “Do you like this?” gets you a polite “yes” and nothing you can act on.
This guide breaks down 10 types of user testing questions with examples for each. Whether you are a product manager, UX researcher, or founder running your first usability study, you will know which types to use, when to use them, and how to phrase them so you get answers worth building on.
Table of Contents
What Are User Testing Questions?
1. Task-Based Completion Questions
2. Satisfaction and NPS Questions
3. Open-Ended Perception Questions
4. Usability and Design Questions
5. Feature Prioritization Questions
6. Information Architecture Questions
7. Emotional Response Questions
8. Competitive Comparison Questions
9. Accessibility Questions
10. User Motivation and Goals Questions
Comparison of All 10 Question Types
How to Run a User Testing Session
FAQ
What Are User Testing Questions?
User testing questions are prompts you give participants during a usability test, survey, or interview to understand how they experience your product. The type of question you ask determines the type of feedback you get.
According to Nielsen Norman Group, the biggest mistake in user testing is asking leading or closed-ended questions that confirm your own assumptions instead of revealing real behavior. Getting the question type right fixes this at the source.
1. Task-Based Completion Questions
Task-based questions ask users to do something specific in your interface while you watch. Instead of asking for opinions, you observe actions. The metric is simple: can they do it, and how long does it take?
When we tested CatDoes’ onboarding flow, we asked 6 users to “create a new app project and add a login screen.” 4 of them could not find the template gallery within 2 minutes. That single finding led to a layout change that cut average onboarding time by 40%.
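If you log each attempt, the core metrics take only a few lines to compute. Here is a minimal sketch, assuming each session is recorded as a (completed, seconds) pair; the numbers are made up for illustration:

```python
from statistics import median

# Hypothetical results from a six-person task-based test:
# (did they complete the task?, time on task in seconds)
sessions = [(True, 48), (False, 120), (True, 63),
            (False, 120), (False, 120), (True, 55)]

successes = [s for s in sessions if s[0]]
completion_rate = len(successes) / len(sessions)
median_time = median(seconds for done, seconds in successes)

print(f"Completion rate: {completion_rate:.0%}")       # 50%
print(f"Median time (successes): {median_time:.0f}s")  # 55s
```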
Example questions:
“Imagine you want to change your password. Show me how you would do that.”
“You just received a promo code. Try to apply it to your order.”
“Find the most recent invoice in your account.”
Tip: Ask users to think out loud while they work. The moments of hesitation and wrong clicks are where your best insights live.
2. Satisfaction and NPS Questions
Satisfaction and NPS (Net Promoter Score) questions measure how users feel about their experience on a numeric scale. NPS specifically asks: “On a scale of 0 to 10, how likely are you to recommend this product to a friend?” Users scoring 9-10 are Promoters, 7-8 are Passives, and 0-6 are Detractors.
A single NPS score tells you very little on its own. The trend over 3-6 months tells you whether your product is getting better or worse. Track it quarterly at minimum.
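The arithmetic behind the score is simple: the percentage of Promoters minus the percentage of Detractors. A quick sketch, with made-up survey responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical responses on the 0-10 scale
responses = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
print(nps(responses))  # 40% promoters - 30% detractors = 10
```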
Example questions:
“On a scale of 0-10, how likely are you to recommend this product?”
“How satisfied are you with the checkout experience? (1-5)”
“What is the main reason for your score?”
Tip: Always pair a scale question with an open-ended follow-up. The number gives you the “what.” The follow-up gives you the “why.”
3. Open-Ended Perception Questions
Open-ended questions let users describe their experience in their own words. You are looking for language, emotions, and mental models that structured questions miss entirely.
Dropbox used questions like “What problem does this solve for you?” early on to understand how users actually thought about the product. The answers shaped both the feature roadmap and the marketing copy.
Example questions:
“What is your first impression of this page?”
“How would you describe this product to a friend?”
“Was anything confusing or unexpected?”
Tip: When a user gives a short answer, follow up with “Why do you say that?” or “Can you tell me more?” The real insight usually comes on the second or third prompt.
4. Usability and Design Questions
Usability questions assess whether your interface is clear and intuitive. They focus on visual hierarchy, label clarity, and whether users can figure out what to do without instructions.
These work best when paired with observation. If a user says “this looks easy” but then clicks the wrong button three times, you have found the gap between perception and reality.
Example questions:
“Looking at this screen, what do you think you can do here?”
“What is the first thing that catches your eye?”
“Where would you tap to go back to the home screen?”
Tip: Use the System Usability Scale (SUS) after testing to get a quantitative score you can track across iterations. For more on building intuitive interfaces, see our guide on app design best practices.
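SUS scoring is mechanical enough to automate. The standard formula: for the 10 statements (each answered 1-5), odd-numbered items contribute (score − 1), even-numbered items contribute (5 − score), and the sum is multiplied by 2.5 to give a 0-100 score. A sketch with one participant's made-up answers:

```python
def sus_score(answers):
    """System Usability Scale: 10 answers on a 1-5 scale -> 0-100 score."""
    assert len(answers) == 10
    total = 0
    for i, a in enumerate(answers, start=1):
        # Odd items are positively worded, even items negatively worded
        total += (a - 1) if i % 2 == 1 else (5 - a)
    return total * 2.5

# Hypothetical answers from one participant
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```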
5. Feature Prioritization Questions
Feature prioritization questions force users to rank what matters to them. Instead of asking “Is this feature useful?” (everyone says yes), you ask users to choose between options. This forces tradeoffs and reveals real priorities.
One method that works well: give users a list of 5-7 features and ask them to pick the top 2 and bottom 2. The results usually surprise you.
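Tallying those picks takes a few lines if you record them by feature name. A minimal sketch with invented features and five participants' picks:

```python
from collections import Counter

# Hypothetical top-2 / bottom-2 picks from five participants
top_picks = ["export", "search", "search", "offline", "export",
             "search", "themes", "export", "offline", "search"]
bottom_picks = ["themes", "badges", "badges", "themes", "badges",
                "export", "badges", "themes", "themes", "badges"]

print("Most wanted:", Counter(top_picks).most_common(2))
# [('search', 4), ('export', 3)]
print("Most expendable:", Counter(bottom_picks).most_common(2))
# [('badges', 5), ('themes', 4)]
```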
Example questions:
“From this list, which two features are most important to you?”
“If you could only keep three features, which would they be?”
“Which of these would you remove without missing it?”
Tip: Segment results by user type. New users and power users almost always have different priorities. This matters when defining your minimum viable product.
6. Information Architecture Questions
Information architecture (IA) questions test whether your navigation and labels make sense to users. The classic methods are card sorting (users group items into categories) and tree testing (users find items in a text-only menu). Both test whether your structure works on its own, independent of visual design.
If users consistently look for “Billing” under “Profile” but you put it under “Settings,” that is an IA problem no amount of visual polish will fix.
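One common way to analyze open card-sort results is a co-occurrence count: how often each pair of items landed in the same group across participants. A minimal sketch, with invented sort data:

```python
from collections import Counter
from itertools import combinations

# Hypothetical open card sorts: each participant's groupings of four items
sorts = [
    [{"Billing", "Profile"}, {"Settings", "Notifications"}],
    [{"Billing", "Settings"}, {"Profile", "Notifications"}],
    [{"Billing", "Profile", "Notifications"}, {"Settings"}],
]

pairs = Counter()
for groups in sorts:
    for group in groups:
        pairs.update(frozenset(p) for p in combinations(sorted(group), 2))

for pair, n in pairs.most_common(3):
    print(sorted(pair), f"grouped together by {n} of {len(sorts)} participants")
```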
Example questions:
“Where would you expect to find your billing information?”
“What do you think the ‘Resources’ section contains?”
“Group these 15 features into categories that make sense to you.”
Tip: Run tree tests early, before you invest in visual design. Fixing IA problems after launch costs significantly more. For a deeper look at structural foundations, read our UX design guide.
7. Emotional Response Questions
Emotional response questions capture how your product makes users feel. A user can complete a task successfully and still walk away frustrated. Task success alone does not tell you if someone will come back.
Headspace is a good example. The app has to function correctly, but it also has to feel calm. If the meditation timer works but the interface feels stressful, the product has failed at its core job.
Example questions:
“Pick 3 words from this list that describe your experience.” (Give options like: easy, frustrating, confusing, fast, trustworthy, fun)
“At what point did you feel most confident? Least confident?”
“How did the checkout process make you feel?”
Tip: Emoji scales work well for quick post-task reactions, especially on mobile. A simple row of faces from frowning to smiling captures sentiment faster than a written prompt.
8. Competitive Comparison Questions
Competitive comparison questions ask users to evaluate your product against alternatives they already use. The feedback reveals blind spots that internal testing never catches.
A feature you think is unique might be standard in a competitor’s product. A minor friction point in your app might be the exact reason users prefer a rival. You will not know unless you ask directly.
Example questions:
“You just did the same task in both products. Which felt easier, and why?”
“What does [Competitor] do better than us?”
“What would it take for you to switch from [Competitor] to this product?”
Tip: Recruit people who actively use competing products. Their existing knowledge makes the comparison meaningful rather than hypothetical.
9. Accessibility Questions
Accessibility questions check whether your product works for people with different abilities, including those using screen readers, keyboard navigation, voice control, or screen magnifiers. This is not just about compliance. It is about whether real people can actually use your product.
A button that is not labeled for screen readers is invisible to a blind user. A click target smaller than 44x44 pixels is unusable for someone with motor impairments.
According to the WHO, roughly 16% of the global population lives with some form of disability. These are not edge cases.
Example questions:
“Using only your keyboard, complete the signup process.”
“Using your screen reader, find and add two items to the cart.”
“Can you read all the text on this page comfortably?”
Tip: Test with people who use assistive technology daily. Simulating disabilities does not produce the same quality of feedback as working with someone who actually relies on these tools.
10. User Motivation and Goals Questions
Motivation questions dig into why users want your product in the first place. This is the “Jobs to Be Done” angle: people do not buy a product for its features. They hire it to solve a problem in their life.
Peloton figured out that their customers were not buying exercise equipment. They were buying motivation, convenience, and a sense of community. That insight shaped everything from product design to marketing.
Example questions:
“Tell me about the last time you tried to solve this problem. What happened?”
“What would success look like for you with this product?”
“What are you trying to avoid by using a tool like this?”
Tip: Use the “Five Whys” technique. When a user describes a behavior, keep asking “Why?” to get from the surface action to the root motivation.
Comparison of All 10 Question Types
| Type | Complexity | Best for | What you get |
|---|---|---|---|
| Task-Based | Moderate | Usability validation, onboarding | Completion rates, error paths, time on task |
| Satisfaction/NPS | Low | Tracking sentiment over time | Loyalty scores, trend data |
| Open-Ended | Moderate | Exploratory research, messaging | User language, unknown issues |
| Usability/Design | Moderate | Interface clarity, prototype checks | Design fixes, learnability data |
| Feature Prioritization | Moderate | Roadmap planning, MVP definition | Ranked feature lists |
| Information Architecture | Moderate | Navigation, label validation | Label and structure fixes |
| Emotional Response | Moderate | Brand feel, UX perception | Emotional triggers, sentiment |
| Competitive Comparison | Low-Moderate | Market positioning | Strengths, gaps, switching barriers |
| Accessibility | High | Inclusive design, compliance | Barriers, WCAG issues |
| Motivation/Goals | High | Product strategy, positioning | Jobs to be done, core needs |
How to Run a User Testing Session
Knowing the question types is step one. Combining them into a useful session is step two.
Define your goal first. Are you testing a new feature, validating a redesign, or exploring a new market? Your goal determines which 3-4 question types to use.
Start broad, then get specific. Open with perception questions to build rapport. Move to task-based and usability questions. End with satisfaction scores.
Keep sessions under 60 minutes. After an hour, participants get tired and give shallow answers. Five to seven participants per round is enough to surface about 85% of usability issues.
Look for patterns, not individual opinions. If 3 out of 5 users struggle with the same flow, that is a real problem. One user’s personal preference is not.
For a deeper look at combining testing with rapid iteration, see our guide on prototyping and testing.
Ready to build a testable app without writing code? CatDoes turns your ideas into working mobile apps using plain language. Get a prototype in front of users fast and start collecting feedback that actually moves your product forward.
FAQ
How many user testing questions should I ask per session?
Aim for 5-10 questions per session, mixing 3-4 question types. More than that leads to participant fatigue and lower-quality answers. A focused 45-minute session with 7 strong questions beats a 90-minute marathon every time.
What is the difference between open-ended and closed-ended testing questions?
Closed-ended questions have fixed answers like scales, yes/no, or rankings. Open-ended questions let users respond in their own words. Use closed-ended for tracking metrics over time and open-ended for discovering problems you did not know existed.
When should I use NPS vs. task-based questions?
Use NPS for periodic health checks across your entire user base. Use task-based questions when you need to find specific usability problems in a flow.
NPS measures loyalty and sentiment. Task-based questions measure whether the interface actually works. They answer different questions.
How many participants do I need for user testing?
Jakob Nielsen’s research at Nielsen Norman Group shows that 5 users find roughly 85% of usability issues. For quantitative studies like NPS or satisfaction scores, you need larger samples of 20 or more to get statistically meaningful results.
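The 85% figure comes from a simple probability model: if each participant independently finds a given issue with probability p (Nielsen's research put p around 31% for typical studies), the share of issues found by n participants is 1 − (1 − p)^n. A quick check of the math:

```python
# Share of usability issues found by n participants, assuming each
# finds a given issue with probability p (~0.31 per Nielsen's data)
p = 0.31
for n in (1, 3, 5, 15):
    print(n, f"{1 - (1 - p) ** n:.0%}")  # 5 participants -> ~84%
```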
Can I combine multiple question types in one session?
Yes, and you should. A typical session might start with open-ended perception questions, move into task-based scenarios, include emotional response checks after key tasks, and finish with an NPS score. Order them so broad questions come before specific ones.
