
10 Sample Questions to Ask Users of Data Science Solutions to Solicit Needs and Get Problem Clarity

This is a two-part article focused on "what" to ask users of data science solutions and data products, and how to ask/conduct these types of research sessions. In part one, we will look at the "what," and part two will cover the "how."

Human-centered design for data products and data science solutions doesn't happen without the right conversations with the right people. Empathy is the root of good human-centered design, and this type of qualitative UX research, in the form of 1x1 customer or stakeholder interviews, can help you develop that empathy. If you can build a habit of doing this kind of research before and while you're building out data products and data science solutions, you're much more likely to develop solutions and services that customers will trust and engage with.

Part I: 10 Sample Questions to Ask During User Interviews

If you look closely at these questions, most are intentionally open-ended and should not elicit "yes or no" responses. They were chosen to get your users to open up and have a dialog with you so you can discover possible blockers to adopting your tool, as well as possible opportunities to delight your customer. These are not in any particular order, and I hope they will inspire you to come up with your own related questions.

  1. Can you show/tell me about the last time you used [our tool/solution]? Walk me through it.
  2. How does [solution] fit into your day-to-day work now? How would you like it to in the future?
  3. How does [solution] help you do your job or make it more difficult?
  4. You asked us to create a predictive model for [x]. What would you think or do next if we found out it [insert unexpected prediction example]?
  5. Are you able to make better decisions with this data? If not, what impedes you from doing that?
  6. What other people are involved in your use of data, in making decisions, or in getting approvals? How does [our solution] help or hinder that process?
  7. What would the data/tool/service need to do or show to be useful to your work?
  8. Do you recall a time you felt you didn’t trust the information in [solution]? Why was that the case?
  9. Do you have concerns about making the wrong decision using data? What could go wrong?
  10. If I told you that we could probably build a predictive model to answer [insert business question], but that there may not be a way for the software to show how it arrived at the prediction, how might this influence or change how you would use this information in your work?

Part II: The How

Here are some actionable tips for doing "shadowing" or "ride-along" style research. This type of 1-on-1 research is something you can do with a customer to tease out how well (or not) an existing product, a competitor product, a prototype, or a design mockup is working for your audience. As opposed to a pass-fail style usability study, I like to use these when I'm curious exactly how people are using the service (i.e., you don't yet know exactly what they want to do, typically do, or might do). As with any activity, you can always get better at it with practice, but for most of you, the biggest challenge will be the activation energy required to simply start your first 1x1 customer research session. One of the blockers to starting may be that you feel like you don't know how to facilitate the actual study itself. So, today, I want to give you a set of probing questions and tactics you can use to help run your first session. You do not need a degree in human-computer interaction to have a fruitful conversation that starts to inform you of where there may be problems with a given data product, service, or application.

Assumptions:

I will assume for this article that you have already:

  • Scheduled a study session with a customer (ideally a realistic user, not a proxy, but a proxy is better than no study at all)
  • Chosen a product, application, mockup, MVP, or some other asset you want to get information about
  • Populated this tool/product with realistic data. This is important: don't use data that feels "way off" unless you are intentionally trying to test people's reaction to surprising results. (For example, if you think a future predictive model might provide some really unexpected results, you may intentionally populate your prototype with this type of data to tease out customer reactions.)
  • Set up a room or conference call to conduct the study (you can do this remotely with screen-sharing software, but in person can be nice)
  • Identified at least one primary use case you want to learn about, and possibly use to anchor or start the study

10 Tips and Tactics To Use During a Data Product Research Study

  1. Encourage "thinking aloud." While you want to focus on what they do, you also want a running narrative of the conversation they're having with themselves in their head to accompany their use of your service. Don't let them run silent too long!
  2. Deflect questions. If they ask you how to do something or what something means, respond with "Is there a way to find out using the tool? What would you do if I wasn't here? What do you think it means?" (Try to avoid answering them, unless they get totally blocked, at which point make a note that they would have failed and then move on.) Yes, you might feel like you're now the psychologist, but you're not there to train or help them. You're there to learn about the efficacy of the service/app you are studying.
  3. Focus on what participants are DOING more than what they are SAYING. What people say is often not reflective of what they're going to do, and sometimes you'll hear some really unexpected things. For example, they may completely struggle with your service, and perhaps even make a wrong assumption, but then tell you that "this was super easy and helpful." Or vice versa. If you have the resources, get a second person to help you whose role is simply to take notes (not to speak) about what they observed. I strongly recommend this as you will miss things while facilitating, but again, I believe that getting started, even if it's alone, is better than waiting to get a "team" in place to run your research.
  4. Avoid using the word “test” or saying this aloud with participants. We don’t want them to feel like rats in the maze. Position this as a “study” e.g. “we think there may be ways to make [name of service] easier for everyone to use so we’re studying what’s working well and what isn’t. We’re evaluating the tool itself, not you, so you won’t hurt our feelings giving us candid feedback. Please be honest and say what you think as you go through the tool." These sentences also make a nice "intro" to the study when you first sit down.
  5. Give praise as needed, e.g. "that was great feedback!" or "it was great seeing how you go about doing that task. This is really informative for me/my team." (Avoid discussing what's right/wrong with the tool, and try to stay neutral, at least until the end.)
  6. Be alert to the participant talking on behalf of others in hypotheticals. Be very aware of people talking about what "others" like them might do/need/want. If they start offering that advice, turn it back on them with this: "That's great feedback. However, can you show me what you yourself would do next in this particular situation?" (This is to avoid lots of conjecture and keep people focused on their own tasks rather than role-playing as the designer, which doesn't help us get good qualitative feedback about what's actually happening in the wild.) Fun fact: almost everyone thinks that the way they do things or use a software application is unique. So, you will likely hear things like, "I saw this prediction/number/thing/button, and I know what it means, but I doubt anybody else would. I would probably make it bigger and put it at the top with a chart so other people can understand it better." Your goal is to get THEIR INPUT so you have primary-source data to work with, not hypothetical data. This does not mean the customer is necessarily wrong, but take their feedback with a grain of salt. That button or number may be just fine where it is, despite them thinking it was "hard to find."
  7. Ignore your parents and interrupt! Want to tease out how intuitive the design of a data product is? Before participants interact with an affordance, probe with "What do you expect will happen next?" If they are about to click on a button or do something "next," feel free to regularly interrupt and say, "Before you click on that, can you tell me what you think is going to happen when you do?" (This is to probe whether the interface actually fits their mental model of how it should work.) After they interact with something, especially if it exposes a new mode/UI/screen/task, feel free to follow up with, "Is this what you expected to happen?" This helps validate or invalidate their expectation.
  8. Be alert to any mentions of other people, departments, or tools. You may find out your customer is using your service in an interesting way, alongside another service, or perhaps is routinely sharing it with or involving another department/person/team. Consider asking your participant for an introduction to any "connected" people, particularly if they're part of the overall process of ensuring your data product will actually create value. You can then open the next study with something like, "I heard from [name of participant 1] that they sometimes share [x] data with you, and they said you'll probably do [y] with it. Can you show me how you do this using this tool/service, assuming that's even true?" Imagine you have a predictive model that can tell the sales team "who should we follow up with each week?" If you want to test out "will people actually use this new model?," this is a way to "follow" the prediction through the people and departments it touches and see where it generates excitement, resistance, or confusion, or hits a wall. Perhaps you went to a salesperson first, but then found out that they want to first check in with their regional sales manager to "get approval" to use the prediction, especially because "it seems off." Now you can have a conversation with the regional sales manager and see how they'd react to both the model and this participant's request to "get approval." What does it actually mean to "get approval"? If you and they don't know, it may be hard to get your model into production successfully, which means the analytics won't create value.
  9. Use these questions if the session "stalls" or they go quiet. Two great things you can ask repeatedly are, "What are you thinking about right now as you look at this?" and "What would you do next?" You can also ask, "Would you like to take a break?" if they seem uncomfortable.
  10. Have fun! Tell them you're learning too. At the end, ask if you can follow up in the future if you plan to iterate and improve. Most people enjoy doing these sessions and—particularly for internal products and tools—this is a great way to quietly start building trust and a relationship with a department/group/stakeholder/customer who is a linchpin in terms of your service actually generating business value.

Good luck!

Photo courtesy of Chase Elliot Clark via Flickr.

