You've probably heard that you should focus first on understanding your user and the problem they're trying to solve. This is true*. You've also heard that you shouldn't just ask customers "hey, do you want XYZ?", because they are very likely to say yes, and also very likely to never actually use that feature. If you have unlimited time, money, and authority for every research project you ever do, you can avoid ever having to ask. But if you don't, there are times when it makes sense to say yes to a survey-based evaluation of possible ideas: maybe you're about to exhaust your funding and need to pick a direction today; maybe you're building the case for user research at a new company and have approval for one quick, cheap research project but nothing more; maybe you need to convince a stakeholder who is comfortable with statistically significant numbers but isn't ready to trust you on qualitative work yet. Here are three tactics I use to make "Do customers want it?" surveys better, not perfect.
What you practice: Prioritization, neutrality, survey design, evaluative research
When to use: When your CEO asks you to figure out if customers want XYZ feature before tomorrow's board meeting, when you have 50 ideas and resources to prototype 5 of them, when you need quantitative data to back up your qualitative insights
Tactic 1: Ask which problems people experience, then which are important
This two-question format strikes a great balance between information density and respondent fatigue, and lets you compare several concepts without significantly driving up your question count. Present all of the options in the first question; ask respondents to choose only the "highly important" ones in the second.
Example
Q1. Which of the following problems do you experience, at least some of the time? Choose all that apply.
Q2. Of those, which problems are highly important to you? Choose all that apply. [Pipe in Q1 selections]
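Most survey tools support this kind of piping natively, so you rarely have to build it yourself. But if you're scripting your own screener, the logic is just a filter: the second question's options are the first question's selections. A minimal sketch in Python - the problem list and prompts are hypothetical:

```python
# Minimal sketch of Q1 -> Q2 piping (hypothetical problem list).
PROBLEMS = [
    "Bills are hard to understand",
    "Support wait times are too long",
    "Coverage drops at home",
    "Plan changes require a phone call",
]

def ask(prompt: str, options: list[str]) -> list[str]:
    """Show a choose-all-that-apply question; return the selected options."""
    print(prompt)
    for i, opt in enumerate(options, 1):
        print(f"  [{i}] {opt}")
    raw = input("Enter numbers, comma-separated: ")
    picks = {int(tok) for tok in raw.split(",") if tok.strip().isdigit()}
    return [options[i - 1] for i in sorted(picks) if 1 <= i <= len(options)]

experienced = ask(
    "Q1. Which of the following problems do you experience, at least some of the time?",
    PROBLEMS,
)
# Q2 only shows the options selected in Q1 - that's all "piping" means here.
important = ask(
    "Q2. Of those, which problems are highly important to you?",
    experienced,
)
```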
Tactic 2: Never test one idea at a time
Mix in multiple features, some of which you're actually evaluating and some of which you aren't.
Add a few features you've already launched to use as a reference point. Include a $ discount so you can compare ideas to plain money (everyone always likes money).
Example
Imagine your home insurance provider is considering offering new services with your plan. Which of the following would you be interested in?
[a] An idea you're really evaluating
[b] An idea you're really evaluating
[c] A monetary discount
[d] A feature you already offer
[e] Another idea you're really evaluating
[f] A tangentially related feature you're not actually considering
etc.
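Two details worth getting right when you field this: randomize the option order per respondent (so no idea benefits from always appearing first), and tag each option so you can compare your real candidates against the benchmarks at analysis time. A rough sketch, assuming a DIY survey pipeline - the option names and role tags are all invented:

```python
import random

# Hypothetical option list; "role" tags separate real candidates
# from the benchmarks (discount, already-shipped feature) and decoys.
OPTIONS = [
    {"text": "Free annual roof inspection", "role": "candidate"},
    {"text": "Smart leak detector",         "role": "candidate"},
    {"text": "$10/month discount",          "role": "benchmark_money"},
    {"text": "24/7 claims hotline",         "role": "benchmark_shipped"},
    {"text": "Pet photography sessions",    "role": "decoy"},
]

def options_for_respondent() -> list[str]:
    """Shuffle per respondent so position bias doesn't favor one option."""
    shuffled = random.sample(OPTIONS, k=len(OPTIONS))
    return [o["text"] for o in shuffled]

def summarize(responses: list[set[str]]) -> None:
    """Print selection rates; read candidates *relative* to the benchmarks."""
    n = len(responses)
    for opt in OPTIONS:
        rate = sum(opt["text"] in r for r in responses) / n
        print(f"{opt['role']:18} {opt['text']:30} {rate:.0%}")

# Example: three fake respondents' selections.
fake = [
    {"Smart leak detector", "$10/month discount"},
    {"$10/month discount", "24/7 claims hotline"},
    {"Free annual roof inspection", "$10/month discount"},
]
summarize(fake)
```

Read the output relatively, not absolutely: a candidate idea is interesting when it holds its own against the discount and the feature you've already shipped, not when its raw percentage merely looks high.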
Tactic 3: Pair hypotheticals with past behavior
It's hard for people to answer questions about their hypothetical future actions accurately. If you can't avoid asking, pair your hypothetical with a question about what people have done in the past. Earlier questions affect people's answers to later ones (priming) - use this to your advantage by reminding people of what they actually did before asking what they would do.
Example
Q1. Thinking back to the last time you changed your cell phone service provider, what best describes the main reason you switched providers?
Q2. How likely would you be to switch cell phone providers [to get our cool new feature idea]?
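The payoff comes at analysis time: cross-tab the stated likelihood against what people actually did. A sketch using pandas, with invented column names and data:

```python
import pandas as pd

# Invented column names and values; your survey export will differ.
df = pd.DataFrame({
    "last_switch_reason": ["price", "coverage", "price", "never_switched",
                           "customer_service", "never_switched"],
    "likelihood_to_switch": [5, 4, 3, 2, 4, 1],  # 1-5 scale from Q2
})

# Respondents who have never switched, but claim they're very likely to,
# deserve extra skepticism - their past behavior contradicts the claim.
print(pd.crosstab(df["last_switch_reason"], df["likelihood_to_switch"]))
```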
*Fun fact: if you don't reference Henry "faster horses" Ford at least once a quarter, they revoke your user research license. Sorry, I don't make the rules.
What you practice: Increase comfort with uncertainty; futures thinking; making falsifiable hypotheses; identifying measurable outcomes of complex situations
When to use: Long range planning, early in a project, before quarterly/annual planning, before developing OKRs
Materials: A large piece of paper and a marker for each participant.
Define a topic you want the team to be thinking about - this exercise works best with mid-to-large scope topics, like the broader industry your team works in. Examples: "How will AI adoption change UX research in large American companies?", "What will the market for home internet look like in 1 year?". Give a concrete time frame - about a year usually works, although the exercise can be run with shorter time frames for smaller topics. With very long time frames - e.g., 10 years - participants lose the chance to learn from the results.
Individually, each participant must write one prediction they think has a 50% chance of happening within the chosen time frame.
50% means "equally likely to happen as to not happen". Do not write predictions you are virtually sure will happen.
Predictions must be concrete and measurable. At the end of the time frame, there should be no remaining uncertainty about whether your prediction happened, or didn't happen. This means you'll need to avoid qualitative assessments ("...will be the most important", "...will be outdated"), and instead commit to a single quantifiable measure ("20% of households will own...", "3+ companies will IPO with market valuations >1bn").
Give participants 5 minutes to generate their predictions - 10 if you find you need to help individuals 1:1.
Break into groups of ~5. Each participant shares their prediction and the rest of the team must take either the "Over" (more than 50% chance of happening) or "Under" (less than 50% chance of happening). Record the over/under votes on the same sheet with tally marks or stickers.
Variant: You can also have the group hang their predictions on the wall, then give everyone time to walk around and vote individually. This is good for reducing groupthink, but it reduces the facilitator's opportunity to course-correct in real time if someone presents a prediction too vague to be evaluated.
Debrief: Overall, how does the distribution of Over/Under votes look? Because the goal is 50% predictions, most predictions should have a fairly even split. If not, why not? In general, does your team trend more towards high likelihoods, or lower ones? As a group, how would you adjust them to be closer to 50%?
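If you transcribe the tally marks afterward (or collect votes digitally), the debrief math takes one loop: compute each prediction's "over" share and flag anything far from an even split. A small sketch with invented predictions and vote counts:

```python
# Invented data: (prediction, over_votes, under_votes)
votes = [
    ("3+ companies IPO at >$1bn valuation", 9, 1),
    ("20% of households own a 5G phone",    5, 5),
    ("A new carrier takes >10% share",      2, 8),
]

for text, over, under in votes:
    over_share = over / (over + under)
    # A well-calibrated 50/50 prediction should split the room evenly;
    # flag anything where the vote is more lopsided than 70/30.
    flag = "  <- revise toward 50/50" if abs(over_share - 0.5) > 0.2 else ""
    print(f"{over_share:.0%} over  {text}{flag}")
```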
Consider keeping the predictions and returning to them in a year. As a group, how did you do? Remember, the goal was 50% - if all or most of your team's predictions actually happened, consider it a failure.
You might notice that two of the core skills required for this exercise - defining a concrete thing to measure in an uncertain future; and identifying probabilities other than 0%/100% - are also common struggles for teams trying to adopt OKRs. This game is an excellent warm-up prior to writing OKRs, or an intro to begin a conversation about how these tendencies impact your team's ability to develop OKRs.
If your OKRs are too conservative
Quarter after quarter, your team meets 100% of their OKRs. The team only commits to goals they are 100% sure they can meet. In this exercise, you'll likely see your team struggle most with the "50/50" rule - they'll want to suggest predictions that are nearly certain to happen. Some variants that can be particularly helpful for this kind of team:
Have each participant present a 1-sentence argument-for (why their prediction is likely to happen) and argument-against (why it is unlikely to happen). Ask them to reflect on which was easier to generate - if the argument-against was harder to formulate, ask them to revise their prediction until both arguments are equally easy to make.
Instead of small-group discussions, have the whole group vote over/under on every prediction (to keep the timing reasonable, skip discussion - the predictions will have to stand on their own as written). Identify the predictions that came closest to 50/50 and discuss them as examples - what made them equally likely and unlikely to occur? As a team, how would you modify some of the overly-likely predictions (those that garnered significantly more "over" votes than "under") to be more balanced?
Try a challenge round aimed at 33% predictions (half as likely to happen as to not happen).
If your OKRs are too vague
Your OKRs might be ambitious - but they're so vague the team never agrees on whether you're meeting them. You might see phrases like "be the best-in-class provider" or "delight customers" (great things to aim for! but terrible OKRs!). Your team needs practice translating those big-picture goals into concrete items to measure - how, specifically, will you know if that overall goal is coming true? Try any of these variants:
Pretend it's the end of the time frame. You need to evaluate whether the prediction happened. What are you going to do? There should be almost no room for debate about how to proceed. (Got time for the long game? Actually keep the predictions and evaluate them at the end - don't worry about whether the predictions did or didn't happen; only whether they are straightforward to evaluate.)
Eliminate the time frame - write statements you think have a 50/50 chance of being true right now (no googling yet). Then evaluate in real time (googling now!) - is the team quickly able to reach consensus about whether each statement is true, or are you bogged down in debates over definitions (what does "best in class" even mean?!?)
Example
Prediction: "Customers will prioritize speed and reliability in an increasingly complex mobile provider market"
✖️ - This is too likely to happen - do you really think there's a 50% chance customers STOP caring about whether their cell phone service is fast or reliable?
✖️ - Not measurable enough - imagine having a debate about whether this prediction has been true over the *past* year. Reasonable people could cite different facts in support of both yes and no. You want a prediction whose outcome is uncertain now, but not at all uncertain at the end of the time frame.
Better Revisions
✅ "More than 50% of Americans will own a 5G capable smart phone."
✅ "A new entrant in the US mobile market will capture >10% of market share in the next year."
✅ "Average monthly cell phone bill in the US will exceed $100 by Dec 2025."
These are my recommendations for teams or individuals who think the idea of talking to customers before building something sounds interesting, but are truly starting from zero. These activities are designed for teams or individual contributors without institutional backing - things you can do, even at a large, bureaucratic company, without anyone's approval. If you have more authority, or some dedicated user research time and funding, there are lots of better guides to instituting qualitative user research in a more rigorous way (or you can hire me, and I'll set it up for you!).
How do I find someone to interview?
If you have the ability to get in contact with current or potential customers, use that. Set up an uninterrupted hour with your interviewee - a current or potential user of your product is best. But if you can't get in touch with "real customers", it's ok to start with friendlies. Try:
Your friend, neighbor, family member
A colleague - from a different team if possible. If you're an engineer, ask non-engineering teams: HR, Legal, Operations.
Book a conference room for the afternoon. Put up a sign that says "Free Cookies". Dragoon whoever walks in first.
Still stuck? Trade interviews with whoever sits nearest to you. Talking to anyone is better than talking to no one.
How do I know what to say?
Write down the questions you want to ask in advance - focus on open-ended questions, and questions that let the interviewee tell you a story. Some good questions to consider:
● Tell me about the most recent time you...
● Tell me about the most recent problem you had with ...? Tell me about the most frustrating problem you've had with ...?
● What do you use ... for?
● On a scale of 1-10, how satisfied are you with __? Why did you choose that number? What would a score of 10 look like / what would a score of 1 look like?
● What is your favorite __? What are 3 reasons it’s your favorite? (people will need to think about this one, give them some time!) You mentioned [reason], why is that important to you?
● Tell me more about ...
You'll notice that none of these questions are "Would you use [new feature I want to build]?". Don't ask that.
During the interview, ask open-ended questions and listen much more than you talk - 90/10 is a good ratio to aim for. Record the interview.
Then what?
After the interview, review your recording and make notes. Focus particularly on things that surprised you, problems or pain-points the user experiences, and the why behind what people are doing.
Take the 10 most interesting direct quotes and write them down verbatim. Stick a few in the next time you write a project update or feature description, or pitch your idea.