This week, I’m chatting with Steve Portigal, who is the Principal of Portigal Consulting and the Author of Interviewing Users. We discuss the changes that prompted him to release a second edition of his book 10 years after its initial release, and dive into the best practices that any team can implement to start unlocking the value of data product UX research. Steve explains that the key to making time for user research is knowing what business value you’re after, not simply having a list of research questions. We then role-play through some in-depth examples of real-life experiences we’ve seen from both end users and leadership when it comes to implementing a user research strategy. Throughout our conversation, we come back to the idea that even taking imperfect action towards doing user research can lead to increased data product adoption and business value.
Highlights / Skip to:
- I introduce Steve Portigal, Principal of Portigal Consulting and Author of Interviewing Users (00:38)
- What changes caused Steve to release a second edition of his book (00:58)
- Steve and I discuss the importance of understanding how to conduct effective user research (03:44)
- Steve explains why it’s crucial to understand that the business challenge and the research questions are two different things (08:16)
- Steve and I role-play a common scenario that comes up in user research, and Steve explains an optimal workflow for user research (11:50)
- The importance of provocation in performing user research (21:02)
- How Steve would handle a situation where a member of leadership is preventing research from being done with end users (24:23)
- Why a consultative approach is valuable when getting buy-in for conducting user research (35:04)
- Steve shares some of the major benefits of taking imperfect action towards starting user research (36:59)
- The impact and value that even easy wins in user research can have (42:54)
- Steve describes the exploratory nature of user research and how to maximize the chance of finding the most valuable insights (46:57)
- Where you can connect with Steve and get a copy of v2 of his book, Interviewing Users (49:35)
Quotes from Today’s Episode
- “If you don’t know what you’re doing, and you don’t know what you should be investing effort-wise, that’s the inexperience in the approach. If you don’t know how to plan, what should we be trying to solve in this research? What are we trying to learn? What are we going to do with it in the organization? Who should we be talking to? How do we find them? What do we ask them? And then a really good one: how do we make sense of that information so that it has impact that we can take away?” — Steve Portigal (07:15)
- “What do people get [from user research]? I think the chance for a team to align around something that comes in from the outside.” – Steve Portigal (41:36)
- On the impact user research can have if teams embrace it: “They had a product that did a thing that no one [understood], and they had to change the product, but also change how they talked about it, change how they built it, and change how they packaged it. And that was a really dramatic turnaround. And it came out of our research, but [mostly] because they really leaned into making use of this stuff.” – Steve Portigal (42:35)
- "If we knew all the questions to ask, we would just write a survey, right? It’s a lower time commitment from the participant to do that. But we’re trying to get at what we don’t know that we don’t know. For some of us, that’s fun!" – Steve Portigal (48:36)
Links Referenced:
- Interviewing Users (use code DATA20 to get 20% off the list price): https://rosenfeldmedia.com/books/interviewing-users-second-edition/
- Personal website: https://portigal.com
- Publisher website: https://rosenfeldmedia.com
- LinkedIn: https://www.linkedin.com/in/steveportigal/
Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill. Today, I have Steve Portigal on the line, who is a user researcher and an author who just published the second edition of a text about Interviewing Users by that very name. And, Steve, what I want to know is… you published this book ten years ago, what changed in ten years such that we need a second edition? Like, what happened?
Steve: Yeah, I was resistant for a long time to the idea of updating the book because I felt like, well, nothing’s changed. This is a book about a fundamental human-to-human activity about asking questions and listening and following up and, you know, dealing with your own biases, and all that stuff that are, you know, part and parcel of interviewing people. And, you know, I think as the 10th year anniversary came up, I sort of forced myself to look at, well, how was I going to mark that anniversary? And—because it was significant to me personally. And I think in doing so, you know, I asked myself that very question, like, “What has changed?”
And I found myself coming up with stuff to change—just to kind of categorize it—things that I could explain better because I keep making that mistake over and over again, or things that I can explain better because I’ve been teaching for ten years since the first edition came out, and I’ve learned from people how they solve this problem and how they think about it. So, I think, you know, same sort of principles about listening, and talking to people, and kind of making a connection and understanding what their lives are about, but being able to better articulate that. But I think that’s a preamble to the bigger issue, which is that the context in which we do user research, in which we interview people, has changed. When I wrote this book ten years ago, there weren’t, for example, as many, sort of, in-house teams of researchers working inside organizations. There weren’t as many people who didn’t have the title of user researcher but were still doing this work.
It was—it just—the work has grown, it’s become formalized, there are operations departments devoted to research in organizations, even if it’s one person. When you bring an operations mindset in, now you’re formalizing processes, you’re kind of setting goals, there are things like legal compliance stuff that we kind of collectively as a field, you know, ignored. I think I said, you know, ten years ago, “Hey, these kinds of things are a good idea,” but now they’re legally mandatory, so the context for talking about some of the logistics of research and how to set up the operations has changed. I think there are lots of questions being asked more now about—research is a process you go through where you learn something, but how do we activate this inside of the organization so it actually has the end result of influencing decision-making? So, you know, some of those are things that I could have talked about ten years ago, and maybe I left off, some of those are external things that changed in kind of the business, some of those are things that kind of changed in me by living and working with this material and these practices for ten years.
Brian: Yeah, yeah. I want to step back and just give some context to some of our listeners, you know, our data science community, and some of our data product leaders that maybe aren’t coming out of a software tech industry. And so, just for framing, and please correct me if I’m wrong, primarily when we’re talking about your book and your work, when we’re talking about user research, we’re usually talking about this in the context of delivering digital applications, tech products, things like this, and user research is a subset of the user experience design or user experience profession. It’s gotten more and more distinct from the design side, so a lot of times user research—sometimes you’ll see UXR is an abbreviation—this lives as kind of its own team in large organizations, separate from the design team, usually rolls up into UX, which usually rolls up into product management. Did I get any of that wrong, just to set context for our audience that doesn’t know as much about this field?
Steve: That’s a great starting place, I think, yes.
Brian: Yeah, yeah.
Steve: Let’s not pick at it anymore; let’s use that as a baseline. I think that’s really good.
Brian: Cool. So, for additional context, why is Steve here? Well, I think Steve can teach us—and some of you listening have heard me talk about this; I’ve written articles about, like, types of questions to go ask, and in my training, I talk about how you go do some of this work. Steve’s written an entire book on this and revised it after ten years. It sounds very simple to go out and talk to people.
You got a lot of—teams aren’t doing this, and sometimes what I hear from people that don’t do this work professionally, it’s like, “Okay, we did that, and we got all this random stuff back.” And it’s like, they don’t know what they want, and it’s all over the place, and, “I know we’re supposed to, like, synthesize this, but at some point, we need to start building something, and when are we going to get to [laugh] that?” And the nervousness starts to build up because they know they need to be doing this work, but they don’t have the resources in-house to do it.
But they’re aware of—they know that their data products are suffering because the adoption thing has killed them in the past. We’ve spent all this time, we’ve made all this stuff, it’s technically right, but effectively wrong when it comes off the factory line at the end. We want to get better at understanding the problem space early. What are we doing wrong when we go out and do this but we’re not getting consistent information? We’re not hearing the same stuff. Like, how does a non-professional that doesn’t have a dedicated team get better at that? Like, what are they doing wrong?
Steve: And just, let’s set a little bit more context that builds on that kind of outline that you said because I think you’re right, there are organizations that have the teams, and the structures, and the roles that you talked about, and one of the—I mentioned operations in research, and one of the people that’s really championed research operations is Kate Towsey, and Kate came up with the term ‘people who do research.’ So, she talks about researchers—which, and you described kind of that… sort of org structure thing—and people who do research, which she calls PWDR. So, I think the folks that, you know, maybe we’re talking to today, like, there’s a term for that. It’s a legitimate use of, kind of, research methods and approaches that just may be different the way someone like me who’s a quote, “Researcher,” does it. And so, you think about, yeah, we want to help researchers do better, we want to help people who do research to do better.
Yeah, and I think you’re—you had this, like, frenetic scenario that I think kind of pulls a lot of pieces in: like, we don’t have time to do it, we don’t have the confidence in our approach, and those two things already start to kind of exacerbate each other. If you don’t know what you’re doing, and you don’t know what you should be investing, like, effort-wise, that’s sort of the inexperience in the approach. If you don’t know how to plan—like, what should we be trying to solve in this research? What are we trying to learn? What are we going to do with it in the organization? Who should we be talking to? How do we find them? What do we ask them? And then you had a really good one: how do we make sense of that information so that it has impact that we can take away?
Those are all areas of expertise that take practice or take guidance or take, you know, going to a training and reading a cla—or reading a book, and all the stuff to be good at, but you need all of them to have some successful outcomes. So, any one of them that you can make improvement on, I think is great. If you can figure out how to ask better questions, great. If you can figure out how to sort of set up the research so that it’s going after the information that you need.
Brian: How would I improperly set it up? Like, what would be an example of an improper setup? And just for context, let’s assume, you know, for I’d say maybe, you know, maybe half the audience, I really don’t know what the number is, but a lot of people listening are self-identifying in the product management space, probably coming out of data science. They might be in data science, and sort of becoming product managers. A good half of them might be working on internal data products.
So, they’re building machine-learning models, analytics dashboards, tooling for decision-making that might be used internally, not as a commercial product that’s being sold, but for the sales team, the marketing team, the finance team. So, they can find their people. They’re inside, they’re employees, they don’t need to pay them. You know, like [laugh] some of the operations pieces a little bit made easier. I would—well, you can—maybe that’s an assumption that’s incorrect, that just because they’re internal employees, it makes it easier.
But how can they get started? And I guess I want to figure out, how do we help them get started to take this imperfect action, to take that first step, even if it’s just to know what we don’t know about it, and to realize maybe we need to get help, or maybe there’s something we can do on our own to get better at this.
Steve: You know, I think the impulse to do research comes in a few different directions. One is presented that way: “Oh, we need to talk to some people. We need to talk to users, we need to talk to these internal, you know, consumers of these data products.” So, someone has, kind of, come up with the idea that here’s the approach we should take, and I think it’s worth asking in that situation, like, what is it that you want to know? You know, in the book, I talk about the business challenge and the research question as being kind of things that are paired.
And you know, and so people identify a business challenge. They’ll say, like, “Oh, you know, we’ve rolled this out, and people aren’t using it the way we expected them to. It has all these features, and somebody with influence or so—you know, our usage data shows that this feature that we put a lot of effort into is not getting any traction. But”—and this sometimes happens, right—“People literally asked us in some requirements-setting process, they asked us to put this feature in.” So there’s, you know, there’s a business question where, are we satisfying the internal customers we’re supposed to be satisfying? Like, why didn’t this feature get taken up?
And so, you know, the initiative there comes from some problem, something isn’t working as a business challenge. Sometimes these come in with a research question like, “Hey, we want to know X about this group of people Y. We want to know what features they need.” We want to know something. We want to know—in the example I’m giving here, like, we might want to know, why aren’t they using this thing which we gave them?
I’m not talking about the question I would ask somebody, I’m talking about the thing I want to understand here. That’s kind of the resea—research is meant to give us an answer to this question. So, it takes some time, I think, to figure out what is the challenge? What is the question? And you need both, right?
We don’t want to go do research if we don’t have a sense of, like, why answering this question is important to the business. So, I think a way to improve, no matter where you are, is just to be aware that those are separate things and to take time to, you know, excavate both of them and sort of understand how they connect to each other. If the thing you’re trying to answer with the research doesn’t support the action that you want to take, then you’re not really putting effort into something that’s going to be useful with your research, and vice versa.
Brian: Sure. Can I give you a role-play? Like an example of—
Steve: Let’s do it.
Brian: —a question that I think, like, some of our leaders might be getting. So, let’s say I, you know, I’m a VP or director of data science at a large enterprise company, and the head of sales just came to me and said, “Hey, Steve”—you’re the head of data; you’re now the VP of data science at X Corp, whatever that is; not the Twitter one, though [laugh]—“So Steve, we need—like, have you seen ChatGPT? We need, like, ChatGPT, but against our customer data. Like, my team wants to be able to ask ChatGPT anything about our customers, especially our leads. We’re trying to close more business, and frankly, like, the team doesn’t like to spend a lot of time looking at the dashboards. And I know you’ve built us a whole bunch of dashboards that have all the KPIs, and that’s great, and we do look at that stuff sometimes, but we really like ChatGPT because we can just ask questions. So, could you guys go off and build us, like—can’t we just, like, put our Salesforce data in ChatGPT? Just being able to ask it questions would be awesome.”
Steve: Yeah. Okay—oh, we’re role-playing. I was going to speak as Steve here.
Steve: Right. I feel like I should be wearing a special hat or something for that. Sorry, I’m, like, slow in the role-playing thing here.
Brian: That’s okay. You can react to it however you want. You don’t have to role-play.
Brian: But this idea of—I just fed you a scenario, very open-ended—and I can tell you that if you put your VP of data science hat back on, you’re immediately asking, like, “Well, what are you trying to answer?” Like, what questions do you want to ask this thing? Like, that is very open-ended. We’re making the assumption that you need an LLM to do the thing that you want, which may or may not be the right thing in terms of cost, time, and efficacy. Those are usually—I mean, my audience, they’re smart people. Like, they understand this stuff, but getting to what does this sales team really need us to go build for them—that is the challenge. And so often, when we take what they ask for, they don’t use it at the end of the day. “Oh, that’s not what I meant.” It’s like, “Yeah, thank—like, thank you.” And another priority has already kicked in. That’s old news [laugh]. By the time it comes off the shelf, it’s either too late or wrong, or they don’t even know how to express what’s wrong with it. How can research help us, like, unpack this more?
Steve: Yeah. And I think that scenario is familiar to all of us across a lot of roles. Like, the proposal is made to you that here’s the problem, and here’s the solution, right? And, you know, I think in research—and I think this is what you’re kind of saying your folks would ask as well, which is like, “Well, why,” like, “What’s the real problem?” And so, I think if that person approached me, I would have a lot of why questions, like, “Why do you believe that’s the problem? And why do you believe that’s the solution?”
And I guess in here, it’s sort of like, “Oh, and we want to use this technology to create the solution to this problem.” So, it is a bit of, potentially, a house-of-cards of things. And, like, I think, you know, for research or, like, this is great. I really love hearing the assumptions that people have. And so, I would ask that—yes, I want to ask that person why, like, what else have you seen, where’s this coming from, is there signals that a competitor is doing something or an analogy from another field or, you know, weak signals that you’re getting from, you know, the folks that you’re interacting with?
And I think I would want to talk to them. And we’re just kind of setting this up, we’re not doing any actual research here, but just unpacking a little bit. I want to query both the, you know, what’s the assumptions about the problem, and query about why this solution solves the problem? So, I think, you know, in research, yes, we want to go look at that problem, and it’s much better to go with a hypothesis. Well, I guess for data people, I should be really careful using the word hypothesis. I mean, that means something really specific. I guess for me—
Brian: What does it mean, in your—yeah, tell us what does it mean in your context?
Steve: Yeah, it’s not like H0, H1, you test, and you prove the null, like that, you know?
Steve: I guess, I mean, I like to have some… I like to have some assumptions about, let’s just say how the world works, how people think, what their mental model is, whatever sort of is behind this. I want to have that, and I want to hold on to it. And my research process is to, like, go to the people that we want to build for, and who would use this—in theory, these salespeople that would use this thing—and not go to them and say, for example—I mean, here’s where things sometimes people go wrong—“Hey, here’s a screengrab of, like, you know, a mock-up of this thing. Like, do you want it? How would you use it?”
Like, I really, that isn’t necessarily the first question I would want to ask. I might want to understand their workflow, their tools, where they’re seeing breakdowns, where they’re seeing inefficiencies. Having them explain themselves to me as opposed to, sort of, filtering my questions to them through the proposal to me is, like, a very limited thing: they have this problem; we’re going to make the solution. And so, I’m not trying to evaluate the solution; I’m trying to understand the problem, at least if we’re starting with something that’s new to us, like ChatGPT, chatbots, self-service. Querying is not new to the world, but it’s new to our organization, so I might want to understand their work.
So, figuring out—yes, like you said, we can get these folks, we know who they are, it’s part of the request. Talk to them about how they work, not proposing solutions to them, not asking them to thumbs up or thumbs down features, but just getting a sense of what they do, so that I, separate from talking to them across a group of people, can understand if the problem as it’s described to me, “Oh, our salespeople are—I don’t know, they have 12 windows open, and they can’t do this, and they’re using their hands and their feet, but not their voice. We want to give them something for their voice.” Like, there’s some logic behind that proposal, and I want to come back and say, it’s not exactly like we assumed. It’s different. It’s different for these reasons. If we provide solutions or interventions that demand this, it’s not going to succeed because they have these limitations. But if we do provide something over here—so some criteria that can be built on.
And so, the hypothesis is helpful because it tells me what to listen for, and not try and—and hope it doesn’t come off as—so I’m not trying to, you know, boil the ocean. I’m not trying to write, you know, a dissertation on the work of, you know, a salesperson who might make use of these tools. I’m trying to understand some aspect of their task, which this solution was kind of focusing us around. And so, I want to come back and sort of understand, you know, in the analysis and synthesis of this data, how are they solving this problem, what’s blocking them, what’s enabling them, and, you know, have some conclusions about where some levers are, where some places are to get traction or to make improvement. And then we can talk about solutions that might map to those criteria that might suit those.
So, this is kind of if we’re early. If we’re later, “Hey, we’ve already started building this thing up,” or, “We got an enterprise license for ChatGPT, and like, we’ve run some prompts and, like, we already think this is the solution, and we’re starting to build it; we just need you to integrate it.” At that point, then I might do something that looks a little more evaluative. So, I said I wasn’t going to show them something and say, “Hey, do you like this? Would you use it?” I think there is a point in the process, and you know, I think with expertise, you sort of try—you learn: am I trying to just understand their work, or am I trying to get reactions to an idea?
And I think just to be clear here, it’s not, build it and then show it to somebody and see if they like it or want it. It’s to make something low effort, low fidelity that you can show somebody to talk about a future where a capability that doesn’t exist today does exist. So, Version A is kind of, “Tell me about your work,” and then I’m going to interpret that and see where opportunities are. Version B is, “Tell me about your work, and here’s an idea for something that’s being explored.” You know, “What’s your reaction to it? Who would use this? How would it integrate into what you’re doing? What would you expect it to do? What are the risks here?”
You know, and it’s a low fidelity, it might be a pen and paper kind of mock-up, it might not be hooked into ChatGPT, to give a real answer. You know, this is prototyping, in the jargon of this. Sometimes people do prototypes where you don’t even see what the interaction looks like. They do, like, a scenario. “So, here’s a series of steps that someone goes through. If you were to do this work, does this set of steps make sense to you? How does it fit with how you work?”
So, you’re evaluating something, you’re still learning about how they work, but you’re provoking them, you’re prompting them with something to react to because the thing that you’re trying to build doesn’t exist yet, and maybe we can’t ask people to—we need that reaction to kind of give us a little more grit about, you know, assessing what the next steps for this team should be.
Brian: Yeah, that aligns with stuff I’ve talked about on this show, which is, you know, for me, the way I model and try to teach design, which includes research in it, is that it’s not a linear process. Like, so many things in the build side of tech do kind of tend to follow a fairly prescriptive flow before they go to production. Sometimes we have to design a little bit to figure out what should we be designing? So, you’re not really designing to get more and more refinement; it’s a provocation to get someone to react and say, “That’s not what I’m talking about. No, that’s not what I meant. I meant this.”
But until they saw something, sometimes it’s so abstract, it’s harder to get that feedback, but as soon as you start visualizing something, it’s really easy to get hot opinions. [laugh] I feel like it’s much easier sometimes to get that prompting going and getting people to open up. So, we just also have to realize that we got to let go of our attachment to that. And this is why the low fidelity thing, and I want you to disagree if you disagree, but the low fidelity is important so that we’re not—we don’t have the sunk cost bias, we haven’t put too much time into it, we’re not attached to it, we’re open to throwing that first thing away. Because it’s really just a provocation tool. I—disagree, agree?
Steve: Yeah, I don’t disagree. I just want to highlight something that you said that I think is important, the idea of the provocation. So, there’s low fidelity, that’s sort of how finished it looks. But then there’s also sort of the, I don’t know, the realism or the hyperrealism. Like, so what you’re saying about, you know, when we design something, we kind of refine and refine and refine, and I think the mistake I see teams making is taking those artifacts out of a design process, quote, “Into the field.” They get reactions to those things because they already have them.
And I think we have a lot of power—high fidelity or low fidelity, we have a lot of power and opportunity to create artifacts that are specifically those provocations. So, if we wanted to, you know, build a ChatGPT prompt—well, that word has another meaning—provocation, stimulus for some user research, we might—and this is a silly example, but we might, you know, have a screen with information on one side, on the other side, we have Bender from Futurama, you know, with little talk balloons. And that’s not the intent of what we—you know, you’re creating some… some unreality to it to kind of get somebody to comment on, I don’t know, whatever the—I don’t know if Bender is a good idea or not, but you’re making a thing to show that you would never make. Maybe it has more capability, more visual refinement, maybe it has more intelligence if you’re going to stimulate a dialog because you are trying to get those hot opinions, and so you want to highlight the aspect of it that you are curious about. You know, buttons are bigger, or they are—they have sort of loud names because you’re trying to get somebody to react to something that’s there.
You’re not evaluating your visual layout; you’re evaluating the idea of this being different in a way, so make it clear how it is different. So yeah, I just think that is just a big opportunity for people to kind of play with and explore the potential, the extreme potential, as a way to make sure they’re really getting those interesting reactions.
Brian: I want to step back a little bit to, like, before we even got a chance to talk to the end-user because stakeholder John thinks he represents all of his subordinates. So, in this case, I manage US sales. I’m coming back to you, Steve Portigal, VP of data science at company whatever. And so, you ask me about my—why do I want this ChatGPT thing?
“Well, we just want to be able to ask pretty much whatever we want out of Salesforce. It takes us forever to go into Salesforce and get the data, and I know we have the dashboards, but we have to, like, do that data connection thing that you guys showed us, and it takes time to refresh the data, and frankly, my team’s on the—I want them on the phone. I want them talking to leads and prospects and closing deals and not spending time. And if they could just ask, like, when they’re on the call, that would be better. And so, I don’t really think you need to go talk to all these people because I used to do this job. Like, I’m pretty sure I know what we need, and we need this—like, we really want to have this, like, chat interface. So, could you guys make that for us?”
Steve: Yeah, I’m—I guess I’m not role-playing back to you, but I like that.
Brian: [laugh] I’m trying to be a gatekeeper here, which is—“It takes a long time to go do all that stuff. I want my people on the floor calling, you know, on the phone, hitting the door—doorbells, whatever their method is, and I know what they need.” I’m leading you now [laugh], but I want to hear your take on the importance—or maybe it’s not—of talking to a stakeholder versus talking to an end-user. Like, how many hops away? And, like, tell me—
Steve: It’s super important. I mean, I think you’re exactly right to kind of put it in this process where you’re putting it. If, you know, stakeholder John thinks he knows the solution and thinks he knows the approach—let’s just say you decided to do research anyway. [unintelligible 00:26:07] stakeholder John’s not going to accept that. This needs to be a collaboration, you know.
And so, I think some things you can point to are that dashboard that you asked us to build based on these criteria that we built and people didn’t use, so that, you know, if we look at how we’ve worked together in the past, you know, we’ve started with some assumptions that have proven to be different, and that’s been costly. And so, right, I think there’s some politics there. I don’t know how we decided to make this dashboard and if that was John’s doing or my doing, right—
Steve: But, you know, I think that sort of the principle is here, like, you know, let’s look at what track record do we have of building based on an initial understanding. Have we succeeded in that or not? I don’t think that’s pointing at John. I think that’s us, right? That’s me as the VP of data products, like, building stuff without, sort of, a clear understanding.
I think, you know, that first conversation with John that represents a change in the approach is harder, you know, because it can be—we can definitely frame it as it was a little bit of conflict or different objectives. I think taking a longer-term approach where me, Steve as the VP of Data Products, you know, lets people know, the ways I work to ensure that, you know, we minimize risk and we maximize efficiency. And so, the conversation with John, if no one’s ever talked about this ever, in the organization, like, egh, this is, like, a new thing, then we definitely want to look at where have we made mistakes in the past, and how can we kind of take a different approach? I think it gets easier with time when you sort of set a standard, you set an expectation, and that this—hey, this is how we create successful outcomes. Having some successful outcomes, you know, also makes that easier.
And so, I think, you know, if you’re Steve, the VP of Data Science at Whatever Company, maybe John is not someone that you win over right now, and maybe John gets a, like, we’re going to ask John a lot of why questions, we’re going to see if we can talk to some of his lieutenants, and, you know, whatever, and can we get anything to kind of unpack this a little bit. But maybe stakeholder Martina in a different team is, like, “You know, I heard this great podcast. You know, they’re talking about research.” And you know, Steve, the VP of data science, says to Martina, “Yeah, like, this thing that you want to do, like, here’s the approach that we take,” right? We’re working with different stakeholders who have different levels of anxiety, comfort, and confidence—you know, earned or not.
So, I think you can’t win them all. You can look for small wins, and you take kind of a longer-term view of that. You know, I wouldn’t want to, for example, set up a gotcha with John, like, “Okay, we’ll do it your way, but here’s my concern.” And, you know, I think you could voice concerns, make some recommendations, have a postmortem, you know, build in some check-ins, you know, make it a little more explicit, and I think there’s still going to be some small wins if John sees the value in asking some of these questions, even if John was not willing to, kind of, invest the time.
Brian: Let me change the scenario a little bit because I might have painted John, this head of sales, as a little bit more of a brick wall than I wanted to. So, I’m now John; I’m putting that John hat back on.
Brian: “So Steve, look, frankly, I don’t give a [BLEEP] how you guys build this thing and what your process is. I just don’t care. What I don’t have is a ton of time, and the last time someone came and talked to our people, it’s like, I don’t know what they got out of it, but I just know that they weren’t on the phone, and it seemed like it took a long—everything I know about that re—it just takes a long time, and it’s like, I never really know what we’re getting out of that and stuff. So, I’m not saying no. I’m just, frankly, like, I feel like I’ve done this job a long time, and like, it shouldn’t—like, why do you guys really need to go talk to them if, you know, I’ve done this before? And look, you know, the C-team has told us, like, AI, like, we’re supposed to be using AI in all parts of the business, and you know, I’ve got to account, you know, to the CEO, or whoever my, you know, fill in the blank, whoever I report to. So like, again, I don’t really care how you guys do it.”
Brian: But so this person is open—so I’m taking the hat off now, and Steve and Brian are talking. This person is open. There’s a door there to go through, but how do we—if we get that foot in the door, we get some buy-in, like, how do you navigate that conversation? How do we maybe show some wins or something there that gets this head of sales—who will always be a customer of us; they’re not going away—but how do we show some progress there? Like, what kinds of things might you do maybe in the short term to show those wins, or navigate this conversation? Like, I want to kind of let you take it where you want to go there. But—
Steve: I mean, I think John identified a bunch of concerns there. So, you know, availability of his people, unclear what the result of doing research in the past is, his own expertise. And those are all right, right? Those are all good, and those are—doing this work well, I think, addresses all of those. So, I’d want to talk with, you know, whenever, we would put together a project plan. Let’s just say that’s the thing that we would do. Like, we would have to be clear how many people we want to talk to and how long we would want to spend with them.
So, it’s not just go off and do research. You know, we’re asking for access for people that should be on the phone, so let’s negotiate what that looks like. You know, it wouldn’t be my preference to talk to two salespeople at once, but maybe we do. So, there’s 12 salespeople, I want to meet them all. We’re going to do 6 one-hour sessions instead of 12 one-hour sessions. So, I don’t know, although now that I say that, we’re still getting those people off the floor for less time, so maybe that’s a false economy there. But whatever sort of way of doing.
Brian: “Hey, Steve, I can afford that. That’s fine. I’m fine with that. That’s not a big ask. I thought it was going to take a lot longer than that, so that—we can afford that.”
Steve: “Good. I think your experience and expertise is super valuable, and what I want to do is, with you, and if we can identify two or three other people that are industry long,”—now I’m back in the character—“We can identify two or three other people in our organization who are, you know, industry veterans like yourself, we want to start this off by, you know, doing, we call them stakeholder interviews. We’ll have maybe 30 minutes. We want to get your download of, like, what you’ve seen, what you expect, where the problem is, why you see this as an opportunity, and really formally kind of get that information. I think we should see some contrast between you and some other folks here.
“So, after we’ve got that additional context and expertise, and sort of understand more about how you’re thinking about this, and what you know, that’s going to set us up really well, I think, to have these conversations with your folks. And I think the concern you identified is a really good one: what are we getting from the research? And you know, we can involve you as much or as little as is available to you. I think it’s good to get a little bit of time to kind of check in and show you not everything that we’re hearing, but some of the patterns that we’re seeing. So, we want to come back to you right after we’ve talked to these folks, and just give you a top line saying, like, here are the initial patterns. And that’s a chance for you to, you know, confirm what we’re hearing. You might be surprised. I don’t know what’s going to happen there, but, you know, we can quickly kind of have that check-in, and we can make some of the stuff available as we’re hearing it.
“Because I think what often happens in these things is, you know, the download we get from you is right, but there are maybe some subtle differences, and I think those subtle differences are sometimes what block the success. We’re not using the same words to mean the same thing. And so, we want to loop back to you with some of the things that we’re learning along the way in, kind of, a low-demand-on-your-time sort of fashion, so you can kind of see what is happening, what we’re getting from this. And then I think as we, you know, get that input from you, and we can involve whoever we want to involve in that, then we’ll do a little bit of a deeper dive into, you know, where we started with this, what we think the solutions are, what we’re hearing from people, and try to come back with some conclusions and recommendations.
“So, that’s a little more of a formal reporting, and we want to build that summary, share it with you, and I think then together, we can make some decisions about, does this point us to what we’re going to build? Have we confirmed kind of where we started? What are the nuances? What are the priorities? Even if we agree with everything, the research kind of confirms everything, it’s still going to give us some additional directions on stuff that’s really going to help the designers, like, the priority or the sequence of operations, or stuff that this new capability, how it’s going to integrate. I think there’s going to be stuff that we’re going to get that’s going to be new.”
Brian: Well, Steve—just for listeners, if you go back and listen to this, one of the things I really liked about this was that there was a consultative approach, and Steve didn’t really get into the weeds about what he’s going to go do and how he’s going to go talk to people, but Steve talked to me, in my role-play as the head of sales, about what are the benefits to me? What are some of the signals we’re looking for? So, it sounds like, “Oh, we might validate what I thought we need to do, we might get surprised, but we’re getting some kind of confidence back about our direction.” Now, I’m starting to see why we might do this because if we build the wrong thing, we might end up with that dashboard that we got last year that really didn’t get used too much. So, I liked how you framed that as some benefits to me, Mr. Salesperson, who doesn’t know what research is, and why we should spend time going to do that versus just going and building stuff. And I don’t know if that was intentional, but I started to feel like I was in the mode of, like, oh okay, I’m starting to—I’m buying this. Like, I’m starting to get it [laugh].
Steve: Yeah. And, you know, if we’re going to postmortem our role-play, I think your character saying, “I don’t care about this stuff,” is actually—it’s almost like a relief to me because now I can focus on this other part. And in some ways, someone saying they don’t care is like a gift because you don’t have to sort of sell them on the mechanics. And, you know, maybe I’m just nerdy enough to think that that’s—I mean, I wrote a whole book about the mechanics, so that kind of overcame some of my own shortcomings, which would be to kind of go there. So, you know, a takeaway for me, I think, is just being more sensitive to what that person does care about and not trying to get them excited about the thing that I’m excited about, right?
Brian: Right, right.
Steve: We didn’t explain the mechanics of asking open-ended follow-up questions in this scene, which I could talk more than anyone on this podcast wants to hear me talk about.
Brian: [laugh] Yeah, the nitty-gritty of doing it, you wrote a whole book on that. We have a coupon code for that, which we’re going to share at the end, and I want you to say a little bit about the book, but I felt like that’s something people can go read about, and I really wanted to dig into some of these meta questions because a lot of our listeners are in leadership positions. They might not be the ones that are actually going out and doing this interview work. It might be their principals and their leads, et cetera, and frankly, they may need to pass the book along to them or give them some tips—like a data scientist—tips on how do you go and do this work. And so, they’re dabbling in this space. Maybe they don’t know if they need to hire someone to do this work or not, so I like that we’re framing it this way as benefits because it gets our foot in the door to start doing some of this work and taking that imperfect action, which, to me, is that kind of first step.
So, a question for you before we, you know, wrap up and talk about the book, and where they can get that and all of that stuff. Can you paint a picture of, like, what does a big benefit from research look like? Like, can you give an example from your work, or even if it’s not your work it’s some other story that you have about, we did this work. Maybe we had Skeptical Sam, the head of sales, and Skeptical Sam was not so into this idea, but we uncovered some really big insights, and it changed our direction. And maybe it wasn’t a surprise, but I kind of want to turn the lights on for our listeners about what are the kinds of things that we might get back from this that will really sink in, why we do this work. Why do researchers do this work, and why is it valid, and valued so much in tech companies?
Steve: Yeah, a few years ago, I worked on a consumer product—it was a number of years ago—that was very much the product of, uh—the result of, like, a strong engineering culture. Yeah, it was like a—this, this’ll—the era of this technology will really date it. It was a streaming box for your audio system. It was like, you know, in the era when AV stuff and computer stuff were really separate. This was like a bridging product.
So, you could put—you know, it was a computer company would make a box that you could put in your AV setup that would, you know—there was no Spotify. There wasn’t a lot of stuff, but there were mp3 collections that you had to listen to on your crappy laptop speakers, or you know, your Creative Zen Jukebox or whatever. Like, it was just a different era. And they were trying to move it from one place into the next. But it was really, like, the product of a lot of people hacking on stuff. So, it had all these features that were… let’s say you had to go into, like, some Unix config file to change. Like, it was very inaccessible.
Brian: Every audiophile’s dream to go into Unix and change your preference [laugh].
Steve: And yet this thing was, I think, functionally amazing. Like, ahead-of-its-time amazing, but also, like, really put together by some very talented people. And you know, the person that ran this division believed in it really, really strongly. And I think the way we ended up framing it was, like, this assumption that, you know, if people knew about this, they would use it. There are a lot of people who had, in this case, like, big mp3 collections and nowhere to listen to them.
So, they believed in the value of what they had made. And they didn’t—they asked a very open-ended question, which was, like, you know, “Why—what could we do to make this product actually succeed for people that aren’t config file editor types?” And, you know, so the research we did—and I know, you know, John doesn’t care about—or whoever it was now—doesn’t care, but we gave these boxes to people who should know about them, who should use them, but would never have come across that stuff, but who could reap the benefits. And we had them live with it, and then we had them—you know, we gave them workbooks, and had them trial. There’s, like, there’s a lot of sort of process and method there.
But I think, you know, what we learned from them was how they understood this thing. And in fact, how they didn’t. It just didn’t make sense—the sort of model of these pieces: components that had names, software that linked them together. You could just—you know, it was, in some ways, like a usability test, when you see somebody struggle with something, but this was like living with it and then explaining it back. And so, I think, you know, the biggest opportunity for this team became clear: they had to, I mean, re-architect all these pieces, relabel them—not just how they explained it, because you can’t document your way out of, like, a broken user model. And so, I mean, the team was really, really great because when they saw this from us, they, like, aligned on it.
So, to your point, to your question, like, what do people get—you know, I think the chance for a team to align around something that comes in from the outside. It’s not sitting there in a meeting and saying, like, “Oh, it’s this thing. It’s this thing. We think this,” or, “We tried that.” When you get this clearly organized, synthesized new knowledge presented back to you from an outside person—which I think helped in this case—and I think they had a great leader who really helped facilitate what I think the takeaways were, and she would kind of reiterate them back to people. And you could start to see, with every edit that we did, the biggest points came across.
And they started using the language consistently, identifying, like, the three areas to kind of improve. And so, their whole conception of their product, of the language around, of the architecture, of the labels, it changed the packaging, you know? They had a product that did a thing that no one got, and they had to change the product, but also change how they talked about it, change how they built it, and change how they packaged it. And that was, like, a really dramatic turnaround. And it came out of our research, but it came out because they really leaned into making use of this stuff.
Brian: I would attempt to clarify, and then I want you to play back if that’s what you meant. Like, we talked a little bit about, like, mental models. So like, would an example of this been—and I’m going to try to use a kind of data-y type example—but in this product that you guys built, they used the word customer—and I’m going back to my sales guy now—
Brian: And the sales team thinks that customer is a human being at a big company. It’s Rachel who works at Kellogg’s in the grain-buying department for cereals, West Coast. It’s not Kellogg’s. Kellogg’s is not the customer. Rachel at Kellogg’s is the customer. But in the data science world, a customer would be a large company that we have a business relationship with, which is Kellogg’s.
And so that’s, like, a mental model disagreement. There’s a label in there where our mental model of what a customer is wasn’t the same between the data team and the sales team. Is that kind of like—and I know that customer isn’t the right example in your case, probably—but is it stuff like that, that we’re finding out, like, we’re not even using the same words? Like, what we call a speaker isn’t a physical speaker. It’s a collection of playback devices in a room, or something? And it’s like, no, to a user, a speaker is a physical thing that has music coming out of it. Is that what you’re talking about, stuff like that?
Steve: That’s part of it. I would say that’s sort of, um, if the iceberg is two thirds below water, I think, you know, different labels meaning different things is the part that’s above the water. And thank you for calling out “mental model.” It is a piece of jargon that I threw in too loosely. Here, I think it might be—and I’m going to get out of my lane a little bit here—that Party A and Party B—so the sales team and the data team—have different ideas of what the sales funnel is because, I don’t know—because here, you know, at Whatever Corp, we use the modified such-and-such approach, and it actually prioritizes this activity, and, you know, the tool has a very generic way of thinking about the sales funnel, and so, you know, there’s a mismatch.
Or we’ve got the funnel upside down, or that we—it’s not a funnel at all. We just have a bunch of, you know, parallel things that are kind of stacked up in a—that are adjacencies. So, the model there is sort of a fundamental system of operating. You know, I think, with these folks, we talked about, like, there are some great sort of illustrations of, like, how people think a car works, right? And it’s like, it’s a steering wheel, and, you know, a thing that makes it go, and a thing that makes it stop. And it has four wheels. And that’s the mental model that a driver needs.
And there is, like, you know, that cutaway view of the car that, like, the engineer needs, that sort of shows all the pieces connected, and so on. These guys had, like, they were trying to get users to understand that model, and the model was broken. They were using the wrong kind of funnel. Now, I’m mixing every kind of metaphor here and examples, but—
Steve: You know, so you’ve got kind of a hierarchy. Okay, we’re calling it the wrong thing, it doesn’t work the way that we’re describing it, and we’re assuming that they share the same sort of fundamental primitives that we’re building this model from, and we don’t. So, it gets fairly deep, kind of the further you go down, you know, under the water there. And so yeah, if you want to, like, call customers, customers, change the label on your thing, like, that’s great, but it’s only going to get you so far if the thing it’s enabling them to do is not how they think about doing the thing that they’re trying to do. And I think, like, with all this stuff, there is—you know, it’s a win.
You asked before about, sort of, small wins. Like, if we can align our language, that’s a win. That’s a lower effort win. You might talk to a couple people and sort that out, and just go build on that, and know that there are, sort of, harder things that are lurking, you know, beneath the surface of the water there, that we need more buy-in, we need more commitment of time to get to, that, you know, there’s just a progression of a sort of depth of insight, I guess. And the shallow ones are great, you know? And you can go more when you have those opportunities.
Brian: Last question before we get into the book, where people can get it and all that. One of the things I kind of selfishly like about research, or I guess, the thing that kind of provokes me to want to spend the time to go do it, is the fun of learning the stuff that you didn’t go in to ask about in the first place. So, you have that agenda [laugh], and yet you’re getting these other signals coming back that you didn’t really prompt intentionally, but there’s, like, gold in there. There’s just so much information coming back. Or sometimes it’s just one giant thing, one comment, but it’s, like, really insightful, and it just, like, lights go on because you had no idea that people saw the world that way. I don’t know, do you have that feeling? Is that something that gets you going—kind of the not knowing part, like, about even what to ask about? I don’t know. For me, that’s… I find that really thrilling.
Steve: Yeah, and so tactically, right? We have a list of questions, but we don’t… it’s not a verbal survey. You don’t go, “Question A. Got it. Question B…” right? It’s exploratory, and I think it’s challenging and fun and scary to sort of think about, what are you going to ask next? Where’s this conversation going? And I think over time, you sort of—I think about them as trailheads. Like, that thing that’s gold doesn’t necessarily get offered up, but you get a little spidey sense kind of thing.
Somebody says something, or they laugh, or they say something that surprises you, they mention another tool or another product, and you have to kind of… you have to be present enough to say, like, “Oh. Okay, tell me about that.” And sometimes it goes somewhere, and sometimes it doesn’t, but it is really fun. And it’s like, if we knew all the questions to ask, we would just write a survey, right? It’s sort of a lower time commitment from the participant, say, to do that.
But we’re trying to get at what we don’t know that we don’t know. And for some of us, that’s fun. I hope it’s not intimidating to people, but yeah, those moments are great. And I think it’s not like someone says something in the middle of an interview and you have, like, a vision of what the new solution is going to be, but you go into this mode of—and you put it really well, right—like, people are different, they are surprising and unexpected, and it is really—it’s a joy and a privilege to do that kind of stuff. And then yes, we have to find ways to apply that and bring it back, but it is a way of being, I think, that makes these interviews different than, yeah, being a survey-taker. We’re not just collecting data; we are really trying to get to a point where we understand something differently, and we are changed by doing it. And that is extremely fun. And there’s a great word for it.
Brian: Steve, this has been so fun to talk to you. So, your v2, second edition book Interviewing Users is out.
Brian: Is portigal.com, is that the best place to get it? And that’s, like, ‘Portugal’ with an ‘I’ instead of a ‘U’? That’s your site. Is that the best place to get it, or where would you drive people to get it?
Steve: That’s the best place to find me, if people want to find me. The best place to get the book is from the publisher, which is Rosenfeld Media. So, rosenfeldmedia.com. The books are sold online at other big online booksellers, but if you want to support small business and support the author, it’s a little better for all the small players if you buy from the publisher directly.
Steve: And I think Rosenfeld does something that the other folks don’t do, which is if you buy a print copy, you get a digital copy as well.
Brian: Got it.
Steve: And we have a discount for people.
Brian: Yeah, DATA20, right? Is that the—
Steve: That’s right.
Brian: Yeah. So, D-A-T-A, number 20. That’s good through February 8th—if you’re listening to this in real time—February 8th, 2024. Is that correct?
Steve: Yeah. That sounds right.
Steve: At Rosenfeld Media. You can’t use that coupon anywhere else. But that code is good there.
Brian: Awesome. Cool. And just for some of our, you know, non-UX folks that are listening to the show—I know we have a lot of those—you know, Rosenfeld is a really established book publisher in the UX space, so Steve knows that stuff here. I definitely recommend you go out there, if you’re looking for, really, like, a step-by-step guide for your team on how to go out and start doing this kind of work, and to take that first step. I mean, my listeners know, I’m a big fan of, like, taking that imperfect action, and then learning kind of where your gaps are, and do you need outside help, or do you keep refining yourself on getting better at those skills? So, I assume you do this work, too, consulting as well, Steve, is that right? Like, tell us? What do you do besides write books? Like, where can people find you?
Steve: You know, find me on LinkedIn, and as we mentioned, my own site talks about some of the work that I do. And I mentioned it off the top that I teach research as part of, you know, as part of my work, so I do lots of training for organizations, and you know, that’s a fun way for me to learn as well. But all this expertise that I’m kind of pulling out comes from my experience doing research.
I’ve been doing research, not too long ago, looking at what it’s like to be a manager in a remote work environment, so we’re looking at the employee experience for this company that’s a remote-first company.
Brian: Oh cool.
Steve: But I’ve worked on payroll systems for international film productions. What does the software—what does the work look like, and how does that software have to kind of support some pretty, sort of, wild and crazy work that gets done?
Brian: Excellent. Steve, this has been such a great conversation. Thanks for sharing your insights with the Experiencing Data listenership; it’s been great having you.
Steve: Thanks so much. Great to chat with you.