088 – Doing UX Research for Data Products and The Magic of Qualitative User Feedback with Mike Oren, Head of Design Research at Klaviyo

Experiencing Data with Brian T. O'Neill

Mike Oren, Head of Design Research at Klaviyo, joins today’s episode to discuss how we do UX research for data products—and why qualitative research matters. Mike and I recently met in Lou Rosenfeld’s Quant vs. Qual group, which is for people interested in both qualitative and quantitative methods for conducting user research. Mike goes into the details on how Klaviyo and his teams are identifying what customers need through research, how they use data to get to that point, what data scientists and non-UX professionals need to know about conducting UX research, and some tips for getting started quickly. He also explains how Klaviyo’s data scientists—not just the UX team—are directly involved in talking to users to develop an understanding of their problem space.

Klaviyo is a communications platform that allows customers to personalize email and text messages powered by data. In this episode, Mike talks about how to ask research questions to get at what customers actually need. Mike also offers some excellent “getting started” techniques for conducting interviews (qualitative research), the kinds of things to be aware of and avoid when interviewing users, and some examples of the types of findings you might learn. He also gives us some examples of how these research insights become features or solutions in the product, and how they interpret whether their design choices are actually useful and usable once a customer interacts with them. I really enjoyed Mike’s take on designing data-driven solutions, his ideas on data literacy (for both designers and users), and hearing about the types of dinner conversations he has with his wife, who is an economist. 😉 Check out our conversation for Mike’s take on the relevance of research for data products and user experience.

In this episode, we cover:

  • Using “small data” such as qualitative user feedback to improve UX and data products—and the #1 way qualitative data beats quantitative data (01:45)
  • Mike explains what Klaviyo is, and gives an example of how they use qualitative information to inform the design of this communications product (03:38)
  • Mike discusses Klaviyo data scientists doing research and their methods for conducting research with their customers (09:45)
  • Mike’s tips on what to avoid when you’re conducting research so you get objective, useful feedback on your data product (12:45)
  • Why dashboards are Mike’s pet peeve (17:45)
  • Mike’s thoughts about data illiteracy, how much design needs to accommodate it, and how design can help with it (22:36)
  • How Mike conveys research findings to other teams in ways that help mitigate risk (32:00)
  • Life with an economist! (36:00)
  • What the UX and design community needs to know about data (38:30)

Quotes from Today’s Episode

  • “I actually tell my team never to do any qualitative research around preferences…Preferences are usually something that you’re not going to get a reliable enough sample from if you’re just getting it qualitatively, just because preferences do tend to vary a lot from individual to individual; there’s lots of other factors.” - Mike (@mikeoren) (03:05)
  • “[Discussing a product design choice influenced by research findings]: Three options gave [the customers a] feeling of more control. In terms of what actual options they wanted, two options was really the most practical, but the thing was that we weren’t really answering the main question that they had, which was what was going to happen with their data if they restarted the test with a new algorithm that was being used. That was something that we wouldn’t have been able to identify if we were only looking at the quantitative data, if we were only surveying them; we had to get them to voice their concerns about it.” - Mike (@mikeoren) (07:00)
  • “When people create dashboards, they stick everything on there. If a stakeholder within the organization asked for a piece of data, that goes on the dashboard. If one time a piece of information was needed with other pieces of information that are already on the dashboard, that now gets added to the dashboard. And so you end up with dashboards that just have all these different things on them…you no longer have a clear line of signal.” - Mike (@mikeoren) (17:50)
  • “Part of the experience we need to talk about when we talk about experiencing data is that the experience can happen in additional vehicles besides a dashboard: a text message, an email notification; there are other ways to experience the effects of good, intelligent data product work. Pushing the right information at the right time instead of all the information all the time.” - Brian (@rhythmspice) (20:00)
  • “[Data illiteracy is] everyone’s problem. Depending upon what type of data we’re talking about, and what that product is doing, if an organization is truly trying to make data-driven decisions, but then they haven’t trained their leaders to understand the data in the right way, then they’re not actually making data-driven decisions; they’re really making instinctual decisions, or they’re pretending that they’re using the data.” - Mike (@mikeoren) (23:50)
  • “Sometimes statistical significance doesn’t matter to your end-users. More often than not, organizations aren’t looking for 95% significance. Usually, 80% is actually good enough for most business decisions. Depending upon the cost of getting a high level of confidence, they might not even really value that additional 15% significance.” - Mike (@mikeoren) (31:06)
  • “In order to effectively make software easier for people to use, to make it useful to people, [designers have] to learn a minimum amount about that medium in order to start crafting those different pieces of the experience that we’re preparing to provide value to people. We’re running into the same thing with data applications where it’s not enough to just know that numbers exist and those are a thing, or to know some graphic primitives of line charts, bar charts, et cetera. As a designer, we have to understand that medium well enough that we can have a conversation with our partners on the data science team.” - Mike (@mikeoren) (39:30)

Transcript

Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill. Today I’ve got Mike Oren on the line here. Mike’s Head of Design Research at Klaviyo. And so part of the reason I have Mike on the show here—he doesn’t know this—is that I was talking to one of the past guests of the show and one of my seminar participants on the data science side of the house, and they were interested in hearing more from user experience professionals and design professionals about their perspectives about working with data products and all this, and so Mike and I had just met in Lou Rosenfeld’s Quant Versus Qual group, and so we just got to be talking and I thought it would be interesting to kind of talk about what does a whole research team do, let alone one person, at a software company, particularly in the UX space, how does that translate to data products, and get your perspective. So, welcome to the show.

Mike: Glad to be here. Thank you.

Brian: Yeah, yeah. First question I wanted to talk about—and I had a seminar student bring this up one time—which was, like, this fear that—you know, I’m a big fan of pushing for primarily qualitative research. Like, let’s get that going before we worry about doing quantitative stuff, especially if you’re building—for some of this audience that primarily builds tools for internal stakeholders, you don’t have thousands or millions of people such that you necessarily need large amounts of information, quantitative information to make good decisions. Can you talk to me about how you frame when do we need to worry about having lots of data? And how do you get someone over the hoop of feeling comfortable with small data, with qualitative information, about helping us to improve the usability and the utility of data products? How do you sell someone that I don’t need to run a survey; we don’t need to ask 1000 people to know that this is a direction to go in.

Mike: I will say on the positive side, I don’t have to deal with that question quite as much as I did in, kind of, the early days where a lot of teams were more engineering-driven. Now, more, kind of, product teams are out there, and then there’s more kind of education about kind of design thinking within business schools. But the biggest one is really trying to help them understand, kind of, when qualitative data is going to be better. So, [unintelligible 00:02:47] of anything where you’re trying to understand the ‘why,’ quant data doesn’t typically give you the why. It tells you the ‘what,’ the ‘how much,’ but the why is definitely lacking.

The other thing is, within the qualitative data that my team primarily does where we’re looking at behaviors, I actually tell my team never to do any qualitative research around preferences, even though I know that’s still done by some teams. Preferences are usually something that you’re not going to get a reliable enough sample from if you’re just getting it qualitatively, just because preferences do tend to vary a lot from individual to individual; there’s lots of other factors. We might be able to help people understand, like, why people are preferring one version over the other, but in terms of saying people prefer this option, we’d get biased data if we made the decision through qualitative research. And so, I mean, part of it, in terms of selling qualitative data, is really acknowledging the limitations of qualitative data in addition to talking about what it does do really well, which gives us that really deep understanding of where customers are, what are their motivations, what are their behaviors?

Brian: Got it. Can you give me a hard example of—and I’ve seen this, too, before—it’s really hard to say, well, we’re going to roll with option one when we just got all this nice verbal feedback about option two. And that’s what they asked for, and that’s what they said they liked, and Mike’s saying that we’re not supposed to do this, but like, these people loved it when we showed them that and they said they love it. Argue that back to me, the other side of that.

Mike: Yeah, I actually have a pretty recent example of this [where 00:04:21] actually tied to data products, too. We’re in the process of updating our A/B testing backend for a portion of the experience, and as part of that, we want to message to our customers what options they have in terms of whether it’s restarting their test, or ending the ones that are statistically significant, or just choosing a winner, whether it’s statistically significant or not.

Brian: I’m going to pause you, sorry to interrupt you; quick, give us 30 seconds about what Klaviyo does, just to give people the context, and then jump back in.

Mike: Not a problem. So, Klaviyo is a platform that helps our customers communicate better with their customers. A lot of it’s email and SMS communication, and so we allow our customers to do personalization of their messages as well as A/B test and track the data so that they know how well their communications are actually performing.

Brian: Got it. Okay, so we’re talking about a tool to send email; multiple versions were designed up; you’re getting feedback from these users on which ones they quote, “Like,” and that’s where we’re coming back to, is that right?

Mike: Exactly.

Brian: Okay. So, please continue.

Mike: Yeah. Originally, this was something that the product manager in that space and the designer were going and talking to customers and having internal conversations, but they kept hearing conflicting opinions. And this was something where we could have potentially rolled out a survey, but that survey would have told us that people want more options, [laugh] which was one of the questions, “Do you want two options or three options?” Everybody’s going to say they want more options. But instead, by going out and talking to people, showing them the different options, as well as showing them things in different ways. So, we had one where it was, without any of the experience design, just to kind of get what their base reaction is, their kind of gut, this is what I’m thinking.

Another one where we showed them the design. Oftentimes, when we design things, we try to nudge people into a decision that we think is going to be better for their business. So, we put a little recommended label on it. People chose a different option than when they were just presented the options by themselves; not super surprising, that’s kind of where psychology comes into play.

And then at the end, we give—we showed them all the options together as well as, kind of, do you want two options, three options, and had them talk through it. Again, not surprising, most people chose three options, but because we had the information from those two previous questions and we had them verbally talking about why they were selecting those different options, we were actually able to make the recommendation that while people feel like they want three options, it’s because there were issues with each of the options that were being selected. And three options gave them a feeling of more control. In terms of what actual options they wanted, two options was really the most practical, but the thing was that we weren’t really answering the main question that they had, which was what was going to happen with their data if they restarted the test with a new algorithm that was being used. And so that also was something that we wouldn’t have been able to identify if we were only looking at the quantitative data, if we were only surveying them; we had to get them to voice their concerns about it.

Brian: Got it. So, I know that the answer to this is not, “Never listen to anything that your stakeholders or customers want.” We know that’s not the answer. When should we pay attention to what people—what’s coming out of the mouth [laugh] versus what they’re doing? How do we—is that just [unintelligible 00:07:49] experience or are there rules of thumb that you can give about, when we’re in this context, I want you to focus on this because of this, and this is why we can’t take that as the word of God or whatever. [laugh].

Mike: Yeah, I mean, so some of it comes down to how you phrase the question. If you’re asking them if they want something and you’re saying what that thing is that you want them to want, then they’re naturally going to say yes because it’s social desirability bias. So, being aware of some of those, kind of, cognitive biases is one of the key things. The other thing that will help is asking the same question in different ways so that we can begin to understand the different ways people are thinking about it. So, in that particular case, we actually presented the same question, but with different things being shown to them so that way we could see what was the impact of basically the visual stimuli, whether it was just words within the survey question, or the actual visual design of the interface, or kind of seeing all the options. And then from that, we’re able to extract, kind of, what the actual truth is, basically do some triangulation. Which, sometimes you do that triangulation mixing the quant and qual. In this case, it was mixing all the qual data together in order to figure out what’s the real intent behind them.

Brian: Got it. Got it. So, how would you teach—I don’t know, if you have, you know, for example, data scientists working on your team or if you get them out into the field doing any type of ride alongs, or interviews, or anything like that—if you’re coming from kind of the data world, not from the user experience world, how do you get started with this? And the next question I know some people have is like, they’re worried about doing it wrong. So, how do you get started, and what are maybe some of the things that we don’t need to worry about getting wrong to get started and actually get some value out of this? Can you talk about that?

Mike: Yeah. So, at Klaviyo, actually, a decent number of the data scientists do go out and talk to customers, and we definitely encourage that. We also run monthly training sessions for them. The one that we just ran earlier this week was about best practices for interviewing. So, a couple of high-level ones were avoiding leading questions, asking one question at a time, making sure that you’re accepting intentional silence as well as—you know, when you ask a question, especially during a live interview—and sometimes we’ll do unmoderated qualitative research where we’re not actually present to ask a follow-up question, but if you are there, make sure you’re really listening to what the person is saying and following up based off of the things that participant found most interesting, that customer found most interesting, and not just focusing on what you want to hear, what you thought you wanted to learn about because sometimes those rabbit holes, for lack of a better phrase, will actually lead you to a better solution than anything that you would have thought of before.

Because we all kind of bring our own biases into conversations; we have solutions in mind, if we’re not keeping an open mind when we go have these conversations with our customers, there’s a high risk that we’ll end up just doing research that it’s not—it gets us to a local maximum because yeah, we get feedback to say, this is validated. But we don’t really understand what the problem was that people are having and how best to solve it because we’re not really working on trying to understand them as a human being, for lack of a better way to put it.

Brian: Yeah. I [laugh] I’m one hundred percent with you. I was—the way I see this, there’s gold, and then there are tangents. Those tangents are good. There’s so much stuff there, often.

Sometimes it’s just a tangent and I think part of the skill of a researcher is knowing how far am I going to let the line out of the reel before I start dialing it back in because I’m balancing I’ve got one hour with this person; it was really hard to get that time, do I keep letting them take line out or do I start reeling it back in? And I think that’s something that comes with practice. But you know, one of the things, you know, I talk to my participants in my seminar about with this is that, you know, part of the reason you do qualitative research is you’re going in to learn about the things that you never went in to ask about because you didn’t know they were important. So, the script is really just a guideline, just like right now I have a script for my interview with Mike, but it’s just really like if the conversation pauses, or we kind of like finish up a topic, I might grab one off that, but I didn’t plan to talk to you about best practices for your team about doing research. Great. Let’s jump into that.

And so I’m deliberately going here with you because I didn’t know to ask you about that and I didn’t know you were running training in-house. So, we’re kind of living that example right now. So, you talked about bias questions, which I think bias is something that our data people definitely understand. You talk about silence, I assume you’re talking about not jumping into the silence too quickly. Is that what you’re talking about?

Mike: Yeah, there’s a natural human tendency to fill any period of silence. I think it’s up to three seconds, people start feeling very uncomfortable. So, it’s important to, after you ask your question, count in your head, usually about seven to eight seconds—add Mississippi or whatever else you need to do so it’s not just a quick one through seven—but really give it time and kind of let things sink in. The other thing, especially if you’re really at the early stage of a project and you’re not really just trying to see if a solution is working, but you’re really trying to understand what the problem is that you’re trying to solve. Just ask them to tell you a story about a recent incident that’s related to it.

Because you get much richer descriptions from people if you ask them to tell you a story versus if you just ask them a straightforward question. I used this example yesterday, and I also teach a grad class, so I used it with them because there’s a tendency to jump straight to the solution space when sometimes you need to go into the problem space, where you’re trying to identify what it is that’s really most important to people and then follow through on that.

Brian: Got it. Got it. Is there a specific, like, example you can give of that?

Mike: Yeah. So, I mean, in the case of class yesterday—

Brian: Sorry, what’s the class, and who are the students, just for context?

Mike: So, I teach a class called Evidence-Based Design, and so it’s a class at IIT’s [unintelligible 00:14:16] of Design. Because they’re all Design Masters students. And so, two questions I asked—the first one was I had them—well, I asked them about their experience using an app to order food. So—and that was less story-based and more focused on the technology. And there I got, like, really, kind of the answers you’d expect, especially if you’re working for a company like GrubHub, where, you know, they use GrubHub, and, like, they talked about using the place that they normally ordered from, and kind of the typical feature sets that largely already exist.

Versus when I reframed the question to have her tell me a story about the last time she dined out with—or dined with someone else. It was actually related to the same story because the last time she ordered food was also the last time she dined with somebody. But in that case, she went into a lot more detail in terms of why she ended up deciding to order food: it had been a long day; she was really tired, and then her boyfriend was the person she was dining with, and he had different preferences. So, kind of that back and forth that happened between those two in order to decide what to eat, and kind of what to order. Which then opens up new opportunities for experiences within something like GrubHub.

Because GrubHub right now is really designed for a single user ordering food, but we know from that story, you know, it’s not just about her ordering food, it’s really about, well, what time of day is it, because if it’s late in the day and she hasn’t eaten yet, she’s going to be more likely to go order food. While it’s kind of creepy, we can technically get data from people’s devices about, like, how long your screen has been on; we can tell, like, hey, they probably haven’t eaten because they’ve been on their device, so now that’s a good time to prompt them to go order something. We may have a sense that, like, hey, this person, when they order, they typically order food for more than one person. So, maybe that’s where we then recommend that other person open up GrubHub on their phone, and they can actually co-order together, and then we don’t have to do that back and forth together. You can streamline that process a little bit. And those are things that you wouldn’t typically get just by asking questions about that specific experience, but really by having the person tell a story and having the story take you where it leads.

Brian: Yeah, I’m a big fan of the same thing. A lot of the—you know, what I try to get people going with this, particularly in the analytic space, you’re supposed to be helping people make decisions. Like, if you’re—especially—and this goes back to more the, you know, our enterprise data teams that are helping internal stakeholder groups make decisions with data, there’s a good chance they’ve made the decision in the past without the dashboard or the model or the thing that you’re making. So, it’s like, well, how did you do it last week when you made this decision—or last quarter—and having them replay that scenario, whether it’s using a competing tool, or a homemade Excel spreadsheet, or whatever, there can be a lot of gold in there to watch them walk through it and talk through that process. It sounds like you’re kind of saying the same thing: show me what you did, tell me a story about it. You know, the recall thing can be really interesting.

Mike: Absolutely. Another reason for trying to recruit people who have recently experienced it, instead of just contacting anybody from your list.

Brian: You sent over—I think when we first met, I forget how the context came up, but you said, “Dashboards are a real pet peeve for me.” [laugh]. What was that about?

Mike: Yeah, they are definitely a pet peeve of mine. What happens too often is when people create dashboards, they stick everything on there. If a stakeholder within the organization asked for a piece of data, that goes on the dashboard. If one time a piece of information was needed with other pieces of information that are already on the dashboard, that now gets added to the dashboard. And so you end up with dashboards that just have all these different things on them, and you lose the signal to noise, right? You no longer have a clear line of signal.

The other thing that tends to happen with a dashboard is people don’t typically go and look at it. I worked at an organization in the past where they created these dashboards, set up a whole room to monitor the dashboards, and then nobody ever went into the [laugh] room, unless, of course, something was going wrong. But the dashboard wasn’t being used to, kind of, alert them about when things were going wrong; it was just being set up because they felt like they had to have a dashboard. And if someone [had it 00:18:58], and this is the other [unintelligible 00:19:00] thing, too, you set up these dashboard rooms and you have people sitting in them, but because you’ll have the different data anomalies and you have all those things on the screen, nobody’s actually noticing when an anomaly occurs because there’s just so many different things already going on. And so instead of dashboards, like, can’t we make more intelligent systems that really show us what we need to know when we need to know it instead of trying to tell us everything at once?
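A rough, hypothetical sketch of the “show us what we need to know when we need to know it” idea Mike describes here, in Python: check a metric against its recent history and notify someone only when something is actually off, rather than rendering everything on a dashboard. The metric name, data, and threshold are invented for illustration and are not Klaviyo’s implementation.

    from statistics import mean, stdev

    def check_and_alert(metric_name, history, latest, z_threshold=3.0):
        """Return an alert message only if the latest value is an outlier vs. recent history."""
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return None
        z = (latest - mu) / sigma
        if abs(z) >= z_threshold:
            # In a real system this might be an email, SMS, or chat message.
            return f"ALERT: {metric_name} = {latest} (z = {z:.1f} vs. recent history)"
        return None  # stay quiet: no news is the default experience

    # Hypothetical usage: hourly unsubscribe counts, with a sudden spike.
    print(check_and_alert("unsubscribes_per_hour", [12, 9, 11, 10, 13, 12], 41))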

Brian: Sure. Yeah, you may—this is true, particularly for more operational things when you’re, you know, a system that’s observing anything, a group of objects, of scenarios, things like this, and watching the monitor—monitoring is probably the right verb for that, but I agree with you. And you know, part of the experience we need to talk about when we talk about experiencing data is that the experience can happen in additional vehicles besides a dashboard: a text message, an email notification; there are other ways to experience the effects of good, intelligent data product work. And like you said, pushing the right information at the right time instead of all the information all the time. [laugh].

Or worse, like, “That was really interesting. Oh, it’s gone.” It’s like, “Wait, did I need to know what that said, that red blip?” And there’s no way to get back to it. It’s like, “Well, is it still persisting? And was it a mistake?”

So, I’m with you on the same thing. I’ve seen the same thing where there’s this comfort in having a large flat-screen monitor up in the air that has some pie charts on it, and it makes us feel like we’re aware of what’s going on in the system, but if something were to go wrong, you probably wouldn’t be typing on a keyboard down here, craning your neck, looking up at the sky on something where you can’t read the text because it’s, you know, 12-point font, on a 30-inch monitor that’s eight feet away from you. It’s a very impractical way to actually make any decisions with the information. But you know, there—I’m not saying it’s always a wrong thing, but I think these are the kinds of things that when we separate the user experience out from the dashboard, this is the kind of stuff that can happen. We get so focused on the UI and the output and not about the experience of, “Well, what would you do if there was a crisis? Would you really be looking up in the air, and everyone’s going to look up at the air at this low-density display, that’s 12 feet away?” You know?

Mike: And depending upon who’s designed that dashboard, too, like, so many of them—like, data literacy in the US is really low. It’s like 7 to 11%, I think, at one point, so a lot of people don’t actually have the right level of data literacy to know what the right way to visualize the data is or kind of what are the right signals to even display given any particular instance of data. So, what you end up with is, sometimes you’ll just get designers choosing what they feel looks the nicest on the screen. So, you get, like, these really pretty dashboards, but then you have things that, like at times, time-series [unintelligible 00:21:50] they normally, thankfully, graph as a line, so that’s good. But you’ll sometimes get these donut charts that really don’t make sense as donut charts. Like, that’s not the most meaningful way to look at that data, to cut that data. And that’s an issue.

Brian: Yes. So, I’ve seen issues with this as well, where particularly people in visual design who have not had a lot of experience with user experience design, or even real interface design for data products where you do start to see fads and visual trends around dashboards and data visualization. You know, “Oh, we heard pie charts were bad, so that’s finally gotten out, so now we’ll make them donut charts and make a bar chart really hard to read, but it sure looks good when you show it to someone who’s not actually going to use it.” But it makes us feel confident that our product is shiny. So, you know, this is an issue, too, and there’s a literacy problem on the designers’ piece as well, and that’s something that I think needs to increase as well.

But talk to me a little bit about the data literacy thing here. I see two sides of this. I think most of the work, ultimately, yes, literacy, even within a company, may need to come up, but if we constantly take the position that, “Well, it’s not my fault that they don’t understand how to use it,” which I think is the default space that a lot of data teams are in, which is that’s a them problem, which is, if they don’t understand statistics, it’s like, sorry, like, you got to—that’s not my problem; on to the next thing. Where’s the line there? Do you think it’s design’s job to make that stuff—to meet the user where they’re at?

Any comments just about kind of this data literacy thing, particularly inside the business and not so much the general public? But whose problem is it when we don’t understand what the interface is telling us? Is that a design problem or a user problem? [laugh].

Mike: I think it’s—I mean—it’s everyone’s problem, right? So, depending upon what type of data we’re talking about, and, like, what that product is doing, if an organization is truly trying to make data-driven decisions, but then they haven’t trained their leaders to understand the data in the right way, then they’re not actually making data-driven decisions; they’re really making instinctual decisions, or they’re pretending that they’re using the data. [unintelligible 00:24:05] say like, “Oh, look, they’re saying [laugh] it did this thing.” And then that’ll end up actually making them use the data to make, sometimes, bad decisions because there wasn’t enough data to really trust that it was the right decision to make at that time. That said, there’s always going to be gaps in data; like, there’s no such thing as a perfect answer.

Like, either you weren’t able to capture the full context—which is where qualitative data can help kind of fill some of those contextual gaps in quantitative data—or sometimes things happen and you just missed a month of data, missed a—hopefully not a whole month but, like, an hour of data [unintelligible 00:24:44]; things happen. So, then it’s—what I’ve seen happen sometimes is because of low data literacy and because of sometimes data purism within a data analyst team, what you end up having is some data mistrust within the organization. So, knowing that data is never going to be perfect, some folks will still advocate that we have to have perfect data to make decisions; these people who aren’t data literate then kind of latch onto that, and then what you end up with is, while you have data and you could be using it to help inform decisions, the teams are really, again, going with their gut. Where design kind of comes in, especially when we’re talking about within a company and, kind of, products to help the organization make better decisions, or products that are being sold, kind of, B2B, is design can help focus on the signal and less on the noise. So, again, there’s a lot of different data out there; showing it to everybody is how you get into some of these issues where people who are data purists will kind of cling to, “We can’t make this decision because we don’t have enough data.”

Whereas the people who aren’t data literate will just feel overwhelmed by all the different things and just pick and choose the things that they think tell the story that they already want to tell. But if you have great designers who have at least a good understanding of the data—it’s rare that you get great designers who are also great at understanding the data; it does happen on occasion, but then you’re paying a lot [crosstalk 00:26:16]—

Brian: [laugh].

Mike: Good premium. [They can 00:26:18] hire you, Brian. [laugh]. But when you have that though, you can really help the organization focus on the most important pieces of data at that time. So, and that’s not to say that you necessarily start with that for your design.

I was working with a different startup at a point where they were making predictions on whether or not equipment was going to fail. And there, what we decided to do is actually create a—I’m blanking on the term right now, but basically, we’re reducing the interface over time. So, we would show more signals at the start because the initial user base was more data literate; it’s more of data analysts. But the intent of the system was to eventually go and be used by people on the shop floor because the people who were—the data analysts at these companies were actually on the verge of retirement and there was a need to still have this type of work done, but have it done, kind of, where the work was actually happening, which was on the shop floor. So, by learning from the data analysts, by tracking what they did in the system in terms of adding additional signals in or graphing the data in different ways, we were able to evolve it over time to start showing, kind of, what are the most important signals when different types of faults occurred.

In addition to that, tracking what recommended actions within different business contexts people were taking within the system. So, that way, it wasn’t just showing, “Here’s the thing, and here’s how you know this is the thing,” but, “Here’s the actual actions you should be taking off the data.” Which is really—the ultimate point of data is to help make better decisions, help companies take action.

Brian: So yeah, you had some prescriptions in there as to what to do in this scenario—

Mike: Yeah.

Brian: And it sounds like those were informed through research, going out and watching the people who would normally be tasked with what happened, what do we do about—is that correct?—and kind of following that and then modeling that into the design itself for someone else to use that is not an analyst. Is that basically correct?

Mike: That’s exactly right.

Brian: Yeah, yeah. I’ve had the same experience, working in a different domain context, but the same idea. And you know, we’re able to reduce giant tabular interfaces with zillions of columns of information and watching—learning how customers, like, “Well, what do you do when you log in?” “Well, I do this by looking for anything over 35 in column E.” Like, “Why don’t we just show them the stuff that’s over there?” [laugh].

And so, by mapping this whole process, and I like go, “What do you do if it’s over 35?” “Well, then I click on it and I drill down.” “Well, drill down into what?” “Well, I want to see whether there’s a correlation between A and B, and is that still going on now, or did it just blip? And how long is it going on for?” “Okay, so show me how you do that.” [laugh].

And all of a sudden, what seems like here’s all this data all plotted, you put it together with help from your parents, batteries not included. Now, it’s kind of like we can do a lot of this work for you, reduce the tooling effort, and push an experience on you that maps to, “The way people like us do stuff like this,” to borrow a Seth Godin line. That’s kind of what we’re trying to do. And it doesn’t mean that’s always the right way to do it, but the net effect there might be to overall reduce the tool tax and the time the user has to spend digging through stuff to see, is there anything here worth paying attention to at all?

So, I’m with you. It sounds like we’ve had similar [laugh] experiences there. And this is a plague of a lot of analytics tools that I see come across my desk: a lot of the time, they’re what I would call it—they have no opinions; it’s like opening Photoshop and there’s a blank template. And we want you to start with blank because God forbid that we were to decide what you should look at when you log into this tool. We’re just the raw data; whatever you want.

And the reality is, like, if this is a purpose-built tool with specific use cases in mind, which it probably should be if you want anyone to use it, you’re not really starting with zero. There’s no null state. No one is just deciding to walk into this tool out of nowhere with no context, no need, power it up, and just decide to plot stuff. That’s a non-existent use case. So, we have to do the work to go figure out well, why are you here? How can we help you? What’s this data going to do? How will you make decisions with it? And then we try to model the interface and experience around that behavior. So, I’m going off on my soapbox here, but I’ve had very similar—[laugh].
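As a rough sketch of the “anything over 35 in column E” workflow Brian describes above (doing the user’s scanning for them instead of handing over the full table), here is a minimal, hypothetical Python example using pandas; the column names, threshold, and data are made up for illustration.

    import pandas as pd

    # Hypothetical data standing in for the giant tabular interface.
    df = pd.DataFrame({
        "region":   ["north", "south", "east", "west"],
        "metric_e": [12.0, 41.5, 36.2, 29.8],  # the "column E" the user scans by hand
    })

    THRESHOLD = 35  # the rule of thumb the user applied manually

    # Do the scanning for them: surface only the rows they would have drilled into.
    needs_attention = df[df["metric_e"] > THRESHOLD].sort_values("metric_e", ascending=False)
    print(needs_attention)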

Mike: I would add one more thing to this because this is something I think doesn’t get talked about enough with data science teams, product analy—like, sometimes the statistical significance doesn’t matter to your end-users. Like, sometimes—actually, I should say, more often than not, [laugh] maybe too often, like, I mean, most organizations aren’t looking for 95% significance. Usually, 80% is actually good enough for most business decisions. Depending upon the cost of getting a high level of confidence, they might not even really value that additional 15% significance.

Brian: Right.

Mike: I know it doesn’t quite work in terms of—[laugh]. But the thing is, like, I know a lot of data science teams will, like, beat themselves up trying to get really high-confidence predictions, but if it’s not a life and death model, like, if it’s not something that’s related to healthcare or related to, you know, catastrophic system failure somewhere, most organizations are willing to accept lower confidence in order to just get to decisions faster, is really what it comes down to, and have enough confidence.
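To make the tradeoff Mike is pointing at concrete, here is a small, hypothetical worked example in Python: a standard two-proportion sample-size approximation for an A/B test, comparing an 80% confidence requirement to 95%. The baseline rate, expected lift, and 80% power are illustrative assumptions, not figures from the episode, and it assumes scipy is installed.

    from scipy.stats import norm

    def samples_per_variant(p1, p2, confidence=0.95, power=0.80):
        """Approximate sample size per variant for a two-proportion z-test."""
        z_alpha = norm.ppf(1 - (1 - confidence) / 2)  # two-sided critical value
        z_beta = norm.ppf(power)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

    # Hypothetical test: 3.0% baseline conversion vs. a hoped-for 3.5%.
    for conf in (0.95, 0.80):
        n = samples_per_variant(0.030, 0.035, confidence=conf)
        print(f"{conf:.0%} confidence: ~{n:,.0f} recipients per variant")

In this made-up example, relaxing the confidence requirement from 95% to 80% cuts the required sample per variant by roughly 40%, which is the kind of cost-of-certainty tradeoff Mike mentions.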

Brian: Yeah, I mean, one of the big learnings that I had in this space, in addition to talking to many guests on this show about model precision and model accuracy versus utility and how much do you need to make a decision, I think the thing for listeners, if you go back to Episode 80, with Doug Hubbard when he talks about How to Measure Anything—and I think that’s the name of his book—but the thing that really stuck with me was, we think of measurement as, like, getting the right answer. And it’s like, measurement is about reducing the risk, right? So, it’s like, we’re 60% confident this is right, we’ve reduced the risk from a wild-ass guess down to something that has some level of confidence that’s with it. And so, our job as data people—and I think, conceptually, data people do get this, but we get lost sometimes in accuracy—that we’re probably never going to give a perfect answer, and if we could absolutely predict the future, everyone else would be doing it and it wouldn’t be worth anything. So, by definition, we will never be fully accurate with this stuff.

So, we just need to figure out, well, what does accurate enough mean to our customer? And just remind them, when the data is missing or the model prediction is wrong, what is the downside of that? Do they understand what the risk is associated with making a decision based on this information? All of this is kind of part of that experience. And I think if we’re thinking—if we’re doing this with empathy and thinking about how the user makes decisions, it can really help us understand what the level, what the number needs to be in terms of confidence there that someone can actually proceed with the task at hand that they’re doing. So, I’m fully with you there.

I love this idea of we’re really just about reducing risk; it’s not actually to say the answer is it’s 422.2 feet long. “Is it over 200 feet?” “Yes.” “Is it below a 1000?” “Fine. That’s enough. Let’s go.”

That’s how a lot of CEOs think. It’s like, I got my gut and, like, “Is it a million?” “No, it’s around 400.” “Okay, fine. Then, like, I don’t care if it’s under ten though”—you know, that’s the level that some leaders are moving because they know that we can’t get—that getting to 99% is a waste of time.

It’s not going to change how I make a decision, so the more we can accept some of this stuff, it’s hard, I think when we—when you know the inner workings and you know just how the system’s not real-time, and there’s only samples every hour and not every minute, and so, you know all the flaws [laugh] and you know the other department calculates the metric differently than this department does. I think it’s really hard to put something out you feel confident in, and because you’d know too much about how dirty it is under the covers. You know?

Mike: Absolutely. I also think that goes well with your example of, like, the person who is only really looking for anything above a certain number, like, filter all that noise, then. Especially as you move up in an organization, you’re reporting out to the CEO, like, some CEOs definitely do get into the details, but at least the initial conversation should not have all those details. Like, give them the high level of what they need to know and then have the other pieces available.

Brian: Sure, sure. And remember, too, I think it’s—what you’re doing may be simply, can you help me eradicate any ridiculous choices that I might make? And can you accelerate the process of experimentation in a feedback loop? So, get me to the point where I can feel confident I’m not making a totally guess-based answer here and I’m not shooting myself in the foot, but it’s enough to go learn something. So, like, “Fine, let’s go that way for a while until we learn something else. Let’s not spend more time deciding which way to point the ship. It sounds like left-ish is the way to go, so let’s go left-ish until something tells us to turn right a little bit.”

And that might be the job of the interface, the dashboard, the tool, the report, the whatever you’re creating as an analytics person or giving a forecast that might be enough to create value for somebody. So, tell me—I’m just cur—your wife is an economist, is that right?

Mike: That’s right. Yeah.

Brian: Tell me about dinner-time arguments. Do you have really good stories about—I would love to hear how someone with a qual perspective talks to an econom—kind of a secret thing; I’m really fascinated by economics. Like, I like Freakonomics and Steve Levitt and other po—people I mostly admire. I love the shows; I’ve learned so much about, kind of, how the world works, at least when we model it in that way, but I also come from the UX background, and I’m—this is a fascinating thing for me. So, talk to me a little bit about what dinner is like, or breakfast or wherever these—[laugh].

Mike: Yeah. I mean, she actually talks to me about a decent number of, like, analyses that she’s trying to work on. I actually have enough statistical background that I can follow pieces of it. I will say that a lot of the models that economists use, from what I learned in statistics, are in some cases not as statistically sound. [laugh].

They do some things that—economists definitely agree—I mean, she’s got her PhD from U. Chicago, so it’s, like, a good solid economics PhD—but they definitely have all, within that field, agreed that this is how to do this analysis. It’s kind of like within user experience: when we evaluate a Likert scale question, we will take the mean of it, but you’re technically, from a statistical standpoint, not supposed to take the mean of ordinal data; you’re supposed to take the mode of something like that to get the average of a Likert scale because it’s the most frequently answered entry. But within the field, we’ve agreed that’s what constitutes knowledge. This is getting into all the philosophy of science, epistemology of, like, what do we say is fact. But it’s the same thing with statistics and economic models. So, that’s part of what we talk about, where she’ll explain something using the term that they’re using in economics, and I’m like, “Well, that sounds similar to this thing that I learned in this statistics class, structural equation modeling or something else,” and then I find out that it is, but they’re just calling it—or giving it a different name within economics.

And I’m like, “Okay. Wow.” [laugh]. Sort of worked through some of that. But I haven’t done any of the actual hard [writing 00:38:13] of stats since grad school; I just know enough of the, like, more of a statistical theory at this point than the actual how to go and do the different calculations. Which is part of what makes it easier to work with the data science teams, too.
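As a small, made-up illustration of the Likert point above, here is a Python snippet summarizing the same ordinal responses three ways; which of these a field treats as “the average” is exactly the kind of convention Mike is describing. The response data is invented.

    from statistics import mean, median, mode

    # Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree).
    responses = [5, 4, 4, 5, 2, 4, 3, 5, 4, 1]

    print("mean  :", mean(responses))    # treats the ordinal scale as if it were interval data
    print("median:", median(responses))  # another common summary that respects the ordering
    print("mode  :", mode(responses))    # most frequent answer, the summary Mike mentions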

Brian: Sure, sure. From your experience at Klaviyo, is there anything you think the user experience or the design community needs to know about data? Like, we talked about this from the other di—we talked a lot about what kinds of user experience knowledge could be beneficial in the data product space and for more of the technical crowd; what’s the reverse of that?

Mike: Yeah. So, I kind of like to think of it as everything old is new again. So, when user experience was getting started in the ’80s, again, had been—there were some human factors people, some psychologists, whatever else, but the important thing is, they didn’t know anything about how to program a computer; they didn’t know anything about computers in general; they just knew that humans were using those computers and they need to make that software easier for people to use. What they found, though, is that in order to effectively make that software easier for people to use, to make it useful to people, they had to learn a minimum amount about that medium in order to start crafting those different pieces of the experience that we’re preparing to provide value to people. We’re running into the same thing with data applications where it’s not enough to just know that numbers exist and those are a thing, or to know some graphic primitives of line charts, bar charts, et cetera.

As a designer, we have to understand that medium well enough that we can have not necessarily a deep conversation with our partners on the data science team—or analysis team—but at least enough of a conversation where we can find that shared language. Like, what I was talking about with having that conversation with my wife at dinner, we’re using different terms that mean the same thing, so finding that shared vocabulary, as well as understanding the limitations of that medium. I also like to think of, like, oil versus charcoal, for example. Like, very different mediums from a painting perspective. If you don’t understand, kind of, what the limitations are in, kind of, how blending different pigments together differs across those two mediums, you’re not going to have very good art. Same thing in terms of you’re not going to have a very usable system if you don’t understand enough about the data.

Brian: Mike, this has been really fun to chat with you. Thanks for coming on here. Any—two questions, or really one question? Either I wanted to open it to you for any, like, closing thoughts here, or is there a question that I should have asked that I didn’t?

Mike: Oh, yeah, I don’t have a great one for that one. In terms of a closing thought, I guess, the biggest one is all the [unintelligible 00:41:01] data science people [unintelligible 00:41:02]—like, as a data science person, it’s not enough to just look at, kind of, the analytics of how people are using the systems that you’re helping design or, kind of, whether or not they’re opening your dashboard; you really do have to observe what decisions they’re making and focus your conversations not around what data people need, but what decisions they need to make and how they need to make those decisions, really. Because some decisions are going to be more important to have that level of rigor, and other decisions are going to be—they just need some level of direction, and they just need just enough confidence. So, knowing the difference between those.

Brian: Great. Thank you so much, where can people follow your work? LinkedIn? Website? Tell us how to get in touch.

Mike: Yeah, I’ve been lazy with my website lately; it’s all [laugh] on LinkedIn right now. And it’s just mikeoren [unintelligible 00:41:56].

Brian: Mikeoren. Great. I will definitely [link that 00:41:58] up there. And Mike, thanks for coming on Experiencing Data.

Mike: Thank you very much.



