Crafting good UX research questions for qualitative research
There is no better feeling than getting good results from participants easily. Whether it’s confirming the prototype you’re working on is useful and exciting to potential customers, or getting a better understanding of how individuals view your current website, getting good responses can make your work feel worthwhile.
So how do you hit it out of the park with every study? A great place to start is to focus on crafting quality questions for every study you run!
In this article, I’ll be going through the steps to take before writing your questions, how to find the right type of study for your project, what types of questions yield the best results, and how to stay as unbiased as possible in the process!
There is definitely an art to crafting good, unbiased research questions – however, you first need to focus on finding the goal for your study and the type of study you should run for the project you are working on.
So to get started, let’s talk about goals.
Recently here at PlaybookUX, I have been working on updating our tester experience. I wanted to get an idea of what testers thought of our landing page and the experience of signing up for an account. But before I could do anything, I needed to stop and ask: “Why?” What is the goal of this? Sure, it’s nice to have a clean, updated website, but I needed to understand the purpose of this page.
The landing page, if successful, should be informative enough to entice someone to want to join the platform. So, with that in mind, I can come up with a goal for my redesign: I want to increase the number of recruited testers. This gives me a great jumping-off point for my research, and an idea of progress that I can track in the long run.
Finding a goal to work towards is fundamental to good research – you want to have something guiding you forward and informing your decisions. Going into a study blind or without a purpose can risk biased research – just because something is important to you doesn’t mean it is important to your customers, and designing and building a feature without a valid purpose can be a huge time-waster if it backfires.
So goals are extremely important, and they can come from different places. In the previous example, I wanted to increase testers on our platform to improve the customer experience. Other companies might want to decrease the number of drop-offs in a shopping-cart-to-checkout flow. Some companies might want to decrease the number of support tickets and drive more volume to their help center by improving their chatbot… these are all great examples of different goals that can be established before starting your research.
Now there are two other things that need to be decided before writing questions: (1) who are you looking to target and get feedback from, and (2) what type of test should you run?
Your participants will vary depending on the goal you set – for example, if you are working on a banking app for businesses and you want to drive traffic to your startup program, choosing an audience of small business owners, CEOs, accountants, etc. will most likely be helpful for your study. You can even dig deeper with screener questions – for example, if you are a grocery store trying to revamp the shopping cart/checkout experience, you might look for individuals who regularly use grocery apps on their phone to have groceries delivered.
In the tester experience example, I want to look for a wide variety of individuals who either have little experience with user testing or have done many user tests in the past, to get opinions from both sides – so I will leave the audience more broad.
Finally, you need to determine what type of study to run.
There are a variety of different types of studies that all serve different purposes and have different ways of asking questions. One of the most common types is called a first impression test – this study will give you a better idea of how a participant thinks and feels about the website/prototype/product they are looking at. First impression tests are a great first step to designing early-stage prototypes or getting a baseline of what people think of your current design – because they will be judging the content based on how it looks and is presented to them, the functionality of the product does not have to be perfect.
Now if you want to know whether your product works and makes sense, you will want to run a usability study – another commonly used test. In this study, the participant will walk through the steps of using the website/prototype/product and communicate their understanding of how it works. Compared to first impression tests, usability studies are best done in the middle or at the end of project development, as you will only want to test designs that have already passed participants’ first impressions.
I’ll talk more about these two types of studies momentarily, but I want to stress that they are just two of many different types of studies you can run. If you are looking to understand your audience more – their needs, their experiences on and off your platform – you can run a persona test. You can dive deeper into experiences like a chatbot, shopping cart, or phone app, and specialize studies to test whether those programs work. If you are not quite sure what type of test you want to run, feel free to check out the PlaybookUX Academy, where we have many common types of studies and questions you can utilize.
So let’s go back to my example of the tester landing page. I first want to make sure that my landing page entices people to sign up to be a tester, so I want to run a first impression test to see how people perceive the site as it is now – this will help me understand areas that are working well and others that need improvement as I move forward.
Now I’m going to say a statement that is going to sound very obvious, but I think it’s important to highlight: the audience cannot read the researcher’s mind.
The reason I highlight this is that one of the most common mistakes I see in research studies is vague questions that do not provide direction. How often have you had a participant race through a prototype because they didn’t realize it was not a real app/website/program yet? This has absolutely happened to me, and I realized it was because my questions were not specific enough.
So, my rule of thumb #1 when crafting questions: be as specific as possible with your questions, and don’t be afraid of context and directions.
Think about the type of feedback you want and how you would go about obtaining it. Do you want the participants to answer questions before they start the study? Do you want them to walk through the study slowly? Both are possible, but you will want to make sure the participant understands what they are doing so they don’t instinctively move forward.
When coming up with questions, give instructions to direct participants through each step. Use phrases like “without clicking” or “scroll up and down, but don’t click” on questions where you want them to look something over – this will help the user slow down and not charge forward through each webpage.

For my tester landing page, I want to know where they think the tester information would live on the website. So I might ask, “Where would you go to find more information on becoming a tester?” My participant, after reading the prompt, may take a moment to look for the correct response and then click on the area they think would take them to that spot. I don’t want this for many reasons: if I had follow-up questions about that experience, they would now be ahead of me and potentially reviewing the information without instruction. Or worse, they may have clicked on the wrong spot, which will only make the next few questions confusing if they can’t navigate back.

Now, if I ask the same question but add “Without clicking, where would you go to find more information on becoming a tester?” – this gives the participant a specific instruction not to click anything, while still asking them to locate something so they can answer the question without moving forward. This keeps them on the correct path and in a spot where they can answer potential follow-up questions.
Going back to my example about prototypes – as a researcher, it is best to assume that your participants may not have experience looking at prototypes or websites, so providing context can help set the stage for parts of the study that may otherwise confuse them. If I have a Figma prototype that I want to show my participants, I might set up a task/question that says, “On the next page, you’ll be taken to a prototype of our design. Once the prototype is loaded, the following questions will prompt you on how to move forward!” This gives my participants advance notice that their screen will change and may not behave the way they are used to, but that the questions will provide instructions. You’re building trust with your participants this way, and making the research better as a result.
I do want to point out that some researchers like to keep things vague to see how people interact without instruction – this is completely at your discretion and is a valid form of research! Depending on the nature of your product, your audience, and the information you are looking to gather, you can ask more open-ended questions and watch how a participant flows through a site, which can test how intuitive or easy to use it is. That being said, I recommend doing this type of study in a moderated session, where conversation and flow can happen more naturally, rather than in an unmoderated study.
So, for my tester landing page, I want to run a first impression test. What types of questions should I ask? I want to learn more about how the participant feels about the design, so I want to stick to questions about what they think and feel about certain aspects of the page.
- “Do you find this website helpful?”
- “Describe this section in 3 keywords.”
- “On a scale of 1-10, how easy is it to navigate through the site?”
- “What would you improve about the website?”
You’ll notice that my questions are not necessarily asking if it’s functional but rather looking at it on a surface level – these are all great questions for first impression tests. However, I am still not crafting good questions. Why?
My rule of thumb #2: you do not want to lead your participant down a path – this will create bias.
Listen – we’re all human; we all have preferences and ideas of what looks good or works for us. But in research, it’s important to get as much of your own opinion out the door as possible so participants can make up their own minds about the content they are seeing. Using keywords like ‘helpful’ or ‘easy’ in a question, or asking a yes/no question, might put a thought into the participant’s head before they’ve had a chance to form their own – or even worse, it might make them uncomfortable sharing their opinion if they didn’t find the information ‘helpful’ or ‘easy’. Suddenly there’s a secret expectation that the participant has to make you feel good about your research and your design. Yes/no questions also tend to lack substance, as most individuals won’t volunteer the reasoning behind their choice.
So you want to frame questions in a way that doesn’t presuppose what the person is going to say, and you want to find a balance. If you are going to ask, “What part of the webpage was the easiest to understand?” follow it up with, “What part of the webpage was the most challenging to understand?”
Instead of asking, “How easy is this website to navigate?” say, “On a scale of one to ten, how did you find the process of navigating the website?” and save ‘easy’ for the scale.
As you start to refine your questions and work on being less biased, you may encounter a different type of issue – vague or very open-ended questions can often confuse and overwhelm your participants. “Tell me what you think of this page” is a question that can lead participants to either overthink the page they are looking at, or prompt them to describe what they are seeing without giving much of an opinion. I’ve seen studies where, on questions like this, participants answer with phrases like “It’s fine” or “I like this” without really diving into what the page evokes. Questions like “Describe this page in 3 words” or “What draws your attention the most on the webpage, and why?” give the participant a little more direction to answer thoughtfully, and often lead to more interesting and detailed insights.
So my questions are starting to come together with just a few edits:
- “What is the most helpful part of the website?”
- “What is the least helpful part of the website?”
- “Describe this section in 3 keywords.”
- “On a scale of 1-10, how did you find the process of navigating the website?”
- “What would you improve about the website?”
You’ll see I added more questions to my list here. But how do these questions differ for a usability test?
With a usability test, the same rules of thumb apply, but your questions will be geared more towards how the product works. In this case, providing instructions becomes much more important as you want to make sure your participant is moving through the product in the order that you choose.
For example: “Where would you find information on startup banking accounts?”
When I run usability studies, I also like to ask questions before clicking and follow up after clicking to see if it met expectations: “Without clicking, what information do you expect to find on the ‘Create an account’ page?”
Followed by: “Go ahead and click on ‘Create an account’ – what do you think of the way it is laid out, compared to your previous expectations?”
This helps me understand how intuitive my program is or isn’t. Of course, with usability tests, you can also use questions that work for first impression tests – asking things like “Did you find anything surprising about this page?” or “What draws your attention the most on the webpage, and why?” can help you understand how the user is making sense of the product.
Now, coming up with questions can be challenging for anyone, so don’t be discouraged if you are having trouble wording questions or coming up with ideas. We are always happy to help you, and there are a couple of ways you can utilize PlaybookUX to your advantage. First, you can click on “Popular Questions” – this is a list of questions that researchers often use in a variety of different types of studies. You can browse through and select as many questions as you’d like to add to your study, and all of these questions can be reworded, duplicated, moved around, and deleted at your leisure. If you want a good jumping-off point, you can even import one of PlaybookUX’s templates – we’ve compiled a list of common types of tests (including both usability and first impression tests), and you can select a full template to add to your study and edit as you please. And if you and your colleagues launch lots of successful tests on the platform, you can save those questions in a workspace template to import into future studies – so check in with your colleagues and browse their templates for reusable questions!
Once you have your questions put together, you might be asking yourself: how do I know if these questions are usable? The best way to check if your questions work is by running a study! Many researchers will create what is called a “pilot test” – a 1-person study that will give you an idea of how successful the questions are. Once you get your participant’s response, you can analyze how they answered the questions, make notes on what worked and what didn’t, and then duplicate your study, making edits as you go. If you find that the study went really well and you don’t need to make any changes, you can add participants to the same study and work on getting more responses! This pilot test process is a great way to make sure you are not wasting time and energy analyzing research that won’t serve your goals.
Finally, if you find that your study goes well and the questions were successful, consider saving your study as a workspace template – not only will this make it easy to find good questions later when you launch other studies, but you’ll be helping out your colleagues in the process!
I want to briefly talk about moderated studies and asking questions, which I consider a whole different ballgame. With moderated studies, you are face to face with the participant as they work through each question, so you can have more natural conversations and ask follow-up questions as you go. But staying unbiased in a moderated study is a lot more difficult than you might expect. There’s a lot of pressure to fill the silence or help your participant as they struggle through a part of the study – but these are times when saying something can actually hurt your research. Don’t feel like you have to jump in on uncomfortable silences: remember, you are testing your product to see how easy it is to navigate or understand, and by providing too much guidance you are not allowing the participant to work at their own pace. If they are struggling with a section that is not directly relevant to your research, you can lightly set them back on track, but remember to leave room for them to figure things out on their own.
In addition, be careful with how you ask follow-up questions. When asking participants to compare two things, you might find yourself wanting to ask something like, “Did you find it more challenging than the other part?” Just like with the keywords from before, you are now putting an idea in the participant’s head – that the second part was more challenging – whether they think so or not. Sometimes giving the participant space to answer, even if it takes a little longer, can get you a better response. Also avoid answering your own question within the question, for example, “Do you think maybe you did it this way because ____?” This presumes the participant’s answer and makes it harder to get their genuine perspective.
I’m not going to lie to you – staying unbiased in a moderated interview is a lot harder than you think, and leading participants is a habit you have to unlearn – so if you slip up, just make a note to work on it in the future.
By employing these rules of thumb and working on how you present questions to your participants, you’ll start to see the fruit of your labor in the responses you receive. Better questions reveal better results and get you closer to accomplishing your goal!