
071 – The ROI of UX Research and How It Applies to Data Products with Bill Albert

Experiencing Data with Brian T. O'Neill

Episode Description

There are many benefits in talking with end users and stakeholders about their needs and pain points before designing a data product. 

Just take it from Bill Albert, executive director of the Bentley University User Experience Center, author of Measuring the User Experience, and my guest for this week’s episode of Experiencing Data. With a career spanning more than 20 years in user experience research, design, and strategy, Bill has some great insights on how UX research is pivotal to designing a useful data product, the different types of customer research, and how many users you need to talk to to get useful info.

In our chat, we covered:

  • How UX research techniques can help increase adoption of data products. (1:12)
  • Conducting 'upfront research': Why talking to end users and stakeholders early on is crucial to designing a more valuable data product. (8:17)
  • 'A participatory design process': How data scientists should conduct research with stakeholders before and during the designing of a data product. (14:57)
  • How to determine sample sizes in user experience research -- and when to use qualitative vs. quantitative techniques. (17:52)
  • How end user research and design improvements helped Boston Children's Hospital drastically increase the number of recurring donations. (24:38)
  • How a person's worldview and experiences can shape how they interpret data. (32:38)
  • The value of collecting metrics that reflect the success and usage of a data product. (38:11)

Quotes from Today’s Episode

“Teams are constantly putting out dashboards and analytics applications — and now it’s machine learning and AI — and a whole lot of it never gets used because it hits all kinds of human walls in the deployment part.” - Brian (3:39)

 

“Dare to be simple. It’s important to understand giving [people exactly what they] want, and nothing more. That’s largely a reflection of organizational maturity; making those tough decisions and not throwing out every single possible feature [and] function that somebody might want at some point.” - Bill (7:50)

 

“As researchers, we need to more deeply understand the user needs and see what we’re not observing in the lab [and what] we can’t see through our analytics. There’s so much more out there that we can be doing to help move the experience forward and improve that in a substantial way.” - Bill (10:15)

 

“You need to do the upfront research; you need to talk to stakeholders and the end users as early as possible. And we’ve known about this for decades, that you will get way more value and come up with a better design, better product, the earlier you talk to people.” - Bill (13:25)

 

“Our research methods don’t change because what we’re trying to understand is technology-agnostic. It doesn’t matter whether it’s a toaster or a mobile phone — the questions that we’re trying to understand of how people are using this, how can we make this a better experience, those are constant.” - Bill (30:11)

 

“I think with what’s called model interpretability sometimes, or explainable AI, I am seeing a change in the market in terms of more focus on explainability, less on model accuracy at all costs, which often suggests using advanced techniques like deep learning, which are essentially black box techniques right now. And the cost associated with black box is, ‘I don’t know how you came up with this and I’m really leery to trust it.’” - Brian (31:56)


Transcript

Brian: Welcome back to Experiencing Data. This is Brian O’Neill, and I’m here with my friend and longtime… I guess we’re not colleagues anymore, but we spent time together for a couple years at Lycos, which was, like, Yahoo. Is that the right way to say it? I don’t know. It was something back in the early two—[laugh] the early 2000s.

 

Bill Albert, you’re the executive director of the Bentley University User Experience Center and I think of you as someone who really understands the place of research and creating better products and services for people. So, welcome to the show. I’m really happy to chat with you.

 

Bill: Yeah, it’s great to be here, Brian.

 

Brian: Yeah, yeah. So, as I mentioned, you’re an expert in user experience research, and you’ve written some books on this. And part of the reason I asked you is I remember early on in my career, when I first came to Lycos, which I think was around, I don’t know, 2001 or 2002, sometime around that, that was the first time that I had seen that design was not an entirely subjective activity, that there are ways to quantify what we’re doing; there are ways to validate choices, and I started to learn that I didn’t have to rely just on talent, practice, experience to make decisions, that there was a way to look at it more like, “Let’s try this and get some feedback on it, and then, if it’s right or not, continue or make a change, as the case may be.” And so it was the first time I’d seen a usability lab, both the protocol and literally the facilities and what it’s like to bring someone into a test environment, and all of a sudden, it became a scientific thing. Like, “Wow, there’s actually science behind all this.”

 

And I was so green [laugh] at the time, but it was just fascinating to me that this stuff—and you and Dave Hendry, I remember working—he was a really sharp guy—and the two of you, the work that you were doing there, it really changed my thinking about all of this stuff about how it’s not—I don’t have to defend things just because, “Well, I’m the designer and that’s how I think it should be.” Now, I can use some data or some information to back up my choices. And it actually became—you go full circle, I think, and—I don’t know about maybe the other designers that you work with, but I find it almost more enjoyable when I put something out there and it’s wrong because that’s when the learning happens. And you’re like, “Wow, I never would have thought that anyone would have a problem with this thing.” And it’s this, the learning is so fulfilling, that you actually get out of the need to feel right about it, and it’s more just, like, “This is good enough to put out there and get some validation on now.”

 

And it’s really rewarding to see what you don’t even know to ask about. It’s really fulfilling, so I just wanted to thank you, first of all, for those early experiences. And so today, I really want to talk about some of the practices of research. It’s something that I still find UX teams are fighting to do. I find it as something that I think data science and analytics in the enterprise, these groups really need some of these recipes that the practice of design and user experience gives us, because the evil enemy, to many in this group, is non-adoption: the low use or no use of solutions.

 

We’re constantly putting out dashboards and analytics applications—and now it’s machine learning and AI in enterprise—and a whole lot of it never gets used because what they call operationalization of the model, or self-service tools, et cetera, most of the stuff just hits all kinds of human walls in the deployment part. And so I want you to talk a little bit about where does research fit into this, and how do we prevent spending lots of time and money building outputs that don’t generate outcomes? Where does someone begin if they can—a leader feels that this is wrong; I’ve seen this happen; I’m tired of this; we’re not generating value; we’re spending all this time making stuff. What’s the antidote? Where do I start?

 

Bill: Yeah. Well, before we jump into that, Brian, I just want to acknowledge the intro. That was very nice of you to remember those early Lycos days and the work that we were doing back then. A lot of people think of user experience as this fairly new concept. We had a UX team in 1999 that we called ‘User Experience.’

 

And what we were really trying to do, I think that, from our backgrounds, was to understand or bring more of a rigor into design and user research. And we learned a lot; we had a lot of fun, and I think that’s really been the foundation of my work since then. But what you’re asking about is really interesting. It’s almost—the way I see it—first off, slowly over time, kind of the argument of why should we do research or why do we need metrics, that argument is going away. More and more organizations get that this is fundamental to not only to design but really to their whole business strategy.

 

So, to me, I find when we’re talking to clients, that that’s becoming less of an issue. And I think, really, to directly answer your question, there has to be some kind of almost like burden of proof or evidence that a business analyst or a product manager needs to make on why this is important, why this is—new feature, functionality, or product is critical to not only enhancing, improving the end user experience but also from a business perspective. It can’t be just, “Anecdotally, we feel like we need to do this.” And sometimes people will just say, “Oh, our competitors have these features or widgets, so we need to be able to offer the same thing, even though they may never be used or people don’t care about them.” I think that’s a very limited, short-sighted way of doing it because the fact is, the more stuff you throw on there, the potentially more confusing and, kind of… complicated the things that people actually care about become.

 

And just to tie two threads together, when I started Lycos in ’99, I think around 2000, we got wind of the new website that also did search, called Google. And we became very interested in Google, and we had this huge, big complicated portal page that had everything from dessert recipes to fly-fishing spots.

 

Brian: Auto, travel, finance, sports. You know—[laugh].

 

Bill: Everything under the sun. It was just all about eyeballs and—

 

Brian: Content, yeah.

 

Bill: —and generating pages; all this stuff. And Google had basically a search box. And it was so different. And not only were their algorithms better, but we used to have this saying of ‘dare to be simple,’ and they sort of took that to heart. And still do.

 

And we weren’t, kind of, following that. So, it’s really important to understand giving people exactly what they want, and nothing more. And being able to have those difficult conversations. And to me, that’s largely a reflection of organizational maturity: making those tough decisions and not throwing out every single possible feature or function that somebody might want at some point.

 

Brian: Sure, sure. That, I guess, part of what I’m thinking about here is, you know, some—the audience that I typically work with kind of splits at the top between product and business leaders at software companies, where they probably have a volume of customers or they’re hoping to have a volume of customers, and then you have the internal enterprise teams that are serving internal business stakeholders, so a lower quantity of literal humans and users that probably will interact with the solutions there. I think there’s a place for research in both of these because on the internal side, there’s lots of politics, a lot of times data science and analytics teams are being seen as new strategic areas for businesses that want to leverage AI, and machine learning, and all these technologies, but they’re also in the old IT camp and they’re seen as a service arm, you know like just, “You’re there to support the business; give them what they asked for.” Can you talk to me a little bit about the, “Just give us what we asked for,” versus the idea of being a problem-finder, and problem-space research? Because I think this is what designers and creators of solutions need to be just as much problem-finders as they are solvers.

 

And lots of people talk about how great—like, “We’re really good at solving complex problems.” And I’m always like, “How good are you at finding the unarticulated ones that no one wrote down in a Jira ticket or a requirements document?” Can you talk to me a little bit about what recipes does UX give us to uncover those things?

 

Bill: Yeah. I mean, I know I don’t want to dismiss that importance of fixing problems because we—

 

Brian: Sure.

 

Bill: —see that in the usability lab all the time.

 

Brian: Sure.

 

Bill: And what we’re doing is we’re trying to optimize a current design or current experience, right? So, we’re trying to take the edge off, reduce the friction. And that will help, and that’s good, but like you said, that’s not where research begins and ends. As user researchers, we really need to more deeply understand the user needs and the opportunities, and we need to see what we’re not observing in the lab or can’t see through our analytics. There’s so much more out there that we can be doing, that we can be contributing to, to help move up not just a design or a product but really the experience, moving the experience forward and improving that in a substantial way, instead of just… not putting lipstick on a pig necessarily, but making more superficial, tactical improvements. And I really think—what I would say to anyone involved in user research is to find those opportunities, do that work, and make known the value that you can contribute doing that, and then you will get noticed more, and then people will say, “Ah. Okay, this can actually drive our strategy, our whole product-business strategy,” instead of just seeing somebody who’s doing a new kind of paint job.

 

Brian: If we’re talking about, though, someone that’s not coming out of the user experience profession, so we’re talking about someone who’s really either in product management, or in data science, or analytics, and they’re in charge of some type of a digital offering: it’s going to be expressed as either an application, or different touchpoints; it could be some predictive analytics that are embedded in a CRM; it could take lots of different forms, but it’s a team that doesn’t normally have design or user experience on it, but they’re feeling the pain that comes with, you know, we go up to the plate, we swing the bat, and we strike out every single time. Like, we think we get it right and by the time we get up there, something’s off, and our customer says, “That’s not what I needed,” or, “I don’t know how to interpret the data, I don’t know what to do with this.” They’re not finding out that feedback until they’re already at the end. So, where do they start? Where would a non-designer or non-UX person start to leverage these techniques?

 

What’s it like—do you just—what do I say to my finance, my accounting stakeholder, the head of accounting at my company? Let’s say, we’re creating a predictive model to determine what prices salespeople should quote to our customers, right, and typically, the salespeople do this willy-nilly, you know, some gut checks and it’s more art than it is science, and we’re trying to bring some science into that. But when we build the thing, none of the salespeople ever actually write down the predicted prices on the quote sheets; they use their own numbers. Something is off in that experience. Where would someone start? Where would a data leader say, hey, this is the thing I need my team to go do? You need to go out and—what do I say to the sales? Like, what do I ask them? How do I start?

 

Bill: Yeah. I mean, to me, it’s both simple and hard. It is simple in that you need to do the upfront research; you need to talk to stakeholders and the end users as early as possible. And we’ve known about this for decades, right, that you will get way more value and come up with a better design, better product, the earlier you talk to people. So, you know you’re not building something and then this is your only time at-bat, and you’re swinging and missing.

 

This is, you’ve got four at-bats, and the first time you might miss, and the second time you learn about the pitcher, and maybe you hit a single, and then you’re going to do better. So, I think it’s really critical that research is done upfront to inform the design so you’re focusing on the things that matter, that people really care about. And there’s no workaround for that. You can’t go on anecdotal evidence, you can’t just go on a hunch; it is really, really risky to do that. In fact, this is probably the single biggest mistake that the companies we work with make: they come to us too late. You know, basically, the thing is already baked and they just want us to do the little tweaks here and there, and tell them the simple fixes—

 

Brian: Right.

 

Bill: —and the whole idea of the product doesn’t make sense to people, isn’t going to deliver value. You got to fight for that work and get a small budget to do it. I mean, that’s just the—

 

Brian: Yeah.

 

Bill: [crosstalk 00:14:57].

 

Brian: —so, but if they’re sol—let’s assume that, like, hey, I’m sold. We struck out enough last quarter or last year, I don’t want to—I have limited time to show that I’m a leader, I have a team; I have limited time to show my own worth in the company, and everyone’s looking at us; we’re supposed to be leveraging machine learning, and AI, and all these advanced things, but if no one uses this stuff, the buck [laugh] falls with me. And so I need to send my data scientists and my analysts out to do this stuff. We’re not going to hire—maybe we’re not ready to hire a designer user experience person, but what would I tell my team to go ask a sales team? What do my data scientists need to ask my salespeople?

 

Let’s say there’s a team of 60 salespeople, five VPs, and an SVP. How often are they talking to them? What kinds of questions would they ask in this kind of scenario? And you could give me a real-life anecdote if you have an example of this, but I’m just trying to help someone picture it in their head, like, literally what is happening, how often, when?

 

Bill: Right. So, the first thing is to understand or ask questions about, kind of, their current context. What are they doing right now? What’s working and what isn’t, and why? And then to start to probe and understand or identify opportunities.

 

“Okay, you know what? This I like, this feature I use all the time, but this one? No. I don’t like it because it’s way too clunky and time-consuming. But what I really would love is if these two things could kind of work together, that would really save me.”

 

And you’re like, “Aha. Okay, good to know.” And the second person says the same thing. So, you’re basically looking to understand that current experience and identify those pain points and opportunities; then you can go back and come up with some very simple sketches, just to illustrate conceptually how this might work. And you can go back to the same people or a different set of people, and it can be a small sample size, very lean, lightweight research, and say, “Listen, you told us this last time. Here are a few different ideas that we have. Which of these resonate with you, and which don’t? And why? And if we had to build one of these out, which one should we do? And why? Or would you like to see a combination?”

 

So, it’s very much an iterative research design process, very tightly interwoven, to understand, to make sure that what you’re doing is you’re solving some problem that people have, or addressing some opportunity that they’re telling you about in, really, this almost like a participatory design process.

 

Brian: Is there a guideline for how many people? Because I can already hear it now, and I’ve heard this before: how many people do I need to ask? There’s 60 people on the sa—and these guys are on the road. And the SVPs, there’s one in every continent because we’re a global company. How many people and how often do I need to do this before it’s time to start making stuff?

 

Bill: [laugh]. Yeah. So, sample sizes can be very, kind of, a complicated question.

 

Brian: You’re talking to a math audience, like—

 

Bill: Okay.

 

Brian: —all the statistics people, so let them have it. [laugh]. If there’s math behind the answer.

 

Bill: Yeah, no, there’s definitely math behind it. So, if you look at what the goals of the research are—so, for example, usability is based on small sample sizes, traditionally, because our outcome variable in a way is problem detection. And if we’re just trying to detect problems, we don’t need that many people to do it. So, 1 − (1 − p)^n is the formula for problem detection in terms of how many you need. And if p, the probability of identifying an issue, is 30%, you only need five people to capture 80% of the problems.
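A quick illustration of the sample-size math Bill cites: the probability of seeing a given problem at least once across n test participants is 1 − (1 − p)^n, where p is the chance that any single participant hits the problem. A minimal Python sketch (the function names are illustrative, not from the episode) reproduces the "five people for roughly 80% of the problems" figure:

    import math

    def detection_rate(p: float, n: int) -> float:
        """Probability that a problem affecting a fraction p of users
        is observed at least once across n test participants."""
        return 1 - (1 - p) ** n

    def participants_needed(p: float, target: float) -> int:
        """Smallest n such that 1 - (1 - p)^n >= target."""
        return math.ceil(math.log(1 - target) / math.log(1 - p))

    print(detection_rate(0.30, 5))          # ~0.83, i.e. about 83% of problems surfaced
    print(participants_needed(0.30, 0.80))  # 5 participants

With p = 0.30, five sessions surface roughly 83% of the problems, which is why small-sample usability testing works well for problem detection.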

 

Brian: Okay.

 

Bill: And that’s kind of very simple, and we’ve seen that a million times in usability testing; we test with five to ten people. Now, that works great for problem detection, but when we’re dealing with measuring preferences, things that are more subjective in nature, we require a much larger sample size to get something statistically reliable. So, in the work that we do, we aim for, typically, a margin of error of about plus or minus, let’s say, 4 to 5%, which would bring us to a sample size of, let’s say, 300 to 500. So typically, that type of research, we’re doing online with 300 to 500 people. Now, it gets a little tricky, and this kind of goes to your question, about—it’s really not just, like, everybody is the same.

 

You mentioned, like, different roles, different locations; we look at—or our goal is to have at least 100 people per distinct user group. So, if you told me that there are senior salespeople and junior salespeople and they use the product very differently, I’m going to want at least 100 in each of those groups. And if we have people in three different regions, now we’re talking about 200 times 3 regions; we’re up to 600 people if it’s preference-based. So, we look at basically the desired margin of error and the number of distinct groups and use a rough rule of thumb to come up with a desirable sample size. And it can get pretty unwieldy.

 

And the other thing I’ll mention, and then I’ll let you jump in, is that it’s really about cutting, slicing, and dicing the data. So, if we only have 300 people, but you want me to find all the salespeople in North Dakota who are left-handed, now we’re going to be down to only two people. So, that’s its own kind of issue: what are we going to do with the data? How are we going to look at it? Now, if you’re a data science, data analytics person, you’re probably used to much, much larger numbers than that, but that’s typically, from a research perspective, the numbers we’re talking about.
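For the preference-style studies Bill describes, the numbers come from the standard margin-of-error calculation for a proportion, n = z² · p(1 − p) / MOE². He doesn’t quote the formula in the episode, but with a 95% confidence level and the worst-case p = 0.5 it lands in roughly the same ballpark as the 300-to-500 range he mentions. A minimal sketch, with an illustrative function name:

    import math

    Z_95 = 1.96  # z-score for a 95% confidence level

    def survey_sample_size(margin_of_error: float, p: float = 0.5, z: float = Z_95) -> int:
        """Respondents needed to estimate a proportion within the given
        margin of error; p = 0.5 is the worst case (maximum variance)."""
        return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

    print(survey_sample_size(0.05))  # ~385 respondents for +/-5%
    print(survey_sample_size(0.04))  # ~601 respondents for +/-4%

Following Bill’s rule of thumb, you would then scale this by the number of distinct user groups you need to compare (for example, at least roughly 100 per group), since each subgroup you slice out has its own, smaller sample.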

 

Brian: What are you doing differently when you’re testing five to ten people, maybe developing a custom application or something like this and running a usability study for an hour with five different people, versus the thing you’re doing with 300 people? Obviously, there’s some scale and time issues here. Are there different techniques we’re using when we’re going for something that’s, you know, 300 people?

 

Bill: No, totally, yeah. It’s a totally different study. So, when we’re doing a small sample size, like a usability evaluation, our goal, our charge, is to observe behavior and to see where people trip up and understand why.

 

Brian: How do you do that?

 

Bill: Just by listening and watching. By giving people real-world tasks and having them perform those tasks, and seeing their behavior, and understanding or seeing what’s working and what isn’t.

 

Brian: Is this literally like, Bill takes 50, and Jane takes 50, and someone else takes 50, and you’re watching 50 one-hour sessions—

 

Bill: No.

 

Brian: —in real-time? Like, help someone understand what the—

 

Bill: Okay—

 

Brian: —format looks like.

 

Bill: What I’m describing is a small sample size. So usually, anywhere from, let’s say, 10 to 30 [unintelligible 00:22:51] in a research study. It’s about observing behavior, probing, asking very deep questions, understanding the why. And that works out really well for, kind of, traditional qualitative-based user research, and usability with the goal of problem detection.

 

Brian: Got it. And that’s probably one facilitator, maybe a note taker—

 

Bill: Yeah.

 

Brian: —and the participant, so we’re talking about three humans in a room together.

 

Bill: Yeah, one-on-one.

 

Brian: Repeat 10 to 30 times, something—

 

Bill: Yep.

 

Brian: —like that. Okay. And then what’s the one in the hundreds, what’s happening there?

 

Bill: Okay, that is almost always going to be online. So, it’s going to be in the form of an online survey—

 

Brian: Okay.

 

Bill: —using Qualtrics, or Survey Monkey, or Google Forms. There are other online tools that allow you to do that, the Optimal Suite to do, like, card sorting studies, if you’re interested in information architecture, more integrated solutions like UserZoom that, kind of, capture both behavior and allow you to embed survey questions. [crosstalk 00:24:00]—

 

Brian: Those are mostly self-reported survey—

 

Bill: Self-reported.

 

Brian: —was the main thing you were talking about.

 

Bill: Yes.

 

Brian: Okay. Got it.

 

Bill: Now, not to confuse matters, there are some hybrid techniques where you can get a lot of data very quickly in the one-on-one format, you know, setting up a bank of laptops and having a whole group of people observe people use something. Some organizations do that, but far more common is doing something online, because it doesn’t make sense, cost- or time-wise, to individually interact with 300 [laugh] to 500 people.

 

Brian: Right. Obviously.

 

Bill: So, yeah.

 

Brian: Can you give me an example of a before and after? Maybe a client that decided to invest in this on a project, and how it changed the trajectory, and what the business value there was. And then tell me if that’s a special case, or is that a typical kind of result? Like, if I invest in this, what’s that look like?

 

Bill: Yeah, I’ll give you a nice example; actually, I wrote about this in my book because it was one of the best examples that I’ve been a part of. So, Boston Children’s Hospital came to the UXC a number of years ago and said, “We need help with our donations page.” So, individuals who want to make a contribution to Boston Children’s are taken to a page, and it wasn’t really well designed. And they said, “We think it’s a little complicated, but what we really want to do is encourage people to make monthly recurring donations.” And so what we did was we looked at their page, and then we also looked at two other children’s hospitals.

 

Like I described before, we brought in individuals who would be prospective donors to Boston Children’s, had them go through the process, and looked at how they make a one-time versus a recurring donation. And the recurring donation process was a little bit different and very confusing. And we saw how it was working on another website, and we made a series of suggestions on how to improve it. And I think they took most of the suggestions we made. And about, I don’t know, maybe three, four months later, I get an email from the guy who was our main contact there.

 

And he’s like, “Bill, you won’t believe it. We have, like, a 600% increase in the amount of recurring donations.” And it was a big difference, with a lot of money involved. And what I love about that was, it was a very simple study; the issues we identified were very obvious, the solutions were also obvious. They implemented it, they were able to measure the impact, we could see their ROI, and it was for a great cause.

 

And it was nothing nefarious; we weren’t, like, doing some things that other organizations might do to solicit more donations. So anyway, it was just a such a happy, good story that had all the elements to it. And I think that’s really important is to hopefully be able to measure the ROI of UX. And this was, like, he gave us the metrics and I was floored. And I thought that they would get something, kind of a slight bump, but it was much bigger than any of us expected.

 

Brian: Can you tell us a little bit about—help us picture the before and after? There was some before state, you learned something, there was an after state. Broadly speaking, what changed?

 

Bill: Gosh, that’s a hard question because it was so long ago. [laugh].

 

Brian: Oh, okay. [laugh].

 

Bill: I don’t remember. I remember that there was one little interaction, and it was—a lot of times when people make a donation, they want to make it in honor or in memory of somebody. And it was—people really wanted to be able to do that, and before, it was very difficult to understand how to do that. And that was one small change we made to, kind of, make it more personal and more obvious; like, hey, I can give 20 bucks a month to Boston Children’s in memory of somebody or whatever. So, you know, that was just one small example.

 

But I don’t remember all the details… but I do remember they were fairly simple changes. And that’s the thing with user experience, you know: the devil is in the details. It can be just a few words, or a few pixels here and there, that make all the difference.

 

Brian: Yeah, yeah. And sometimes it can be a nasty, oh, man, the whole, [laugh] the whole engineering, the architecture is completely set up for a mental model that is not what’s actually in people’s heads. Just, you know, sometimes it is that, but you’re right: sometimes it really comes down to what is a “trivial,” quote, engineering fix, or a technology fix. It’s rather trivial but it could make a huge difference in someone’s adoption. I wanted to ask you a little bit about, I don’t know if you’re doing much work with machine learning and artificial intelligence, and whether or not the process and methods of doing research change when you’re doing that.

 

And if you’re not necessarily changing the kinds of research, I’m just curious: if you were going into a situation where you’re working with a technology solution that delivers a probability, a probabilistic outcome, a range of choices—maybe the choices can be overridden by the user; maybe they can’t. Maybe some of the choices are wrong some of the time. This is the nature of predictive models and stuff. Is there a way we—are there different questions we should be asking users in the research field when we’re dealing with a probabilistic technology, a solution that’s going to give ranges of answers and we don’t always know what it’s going to spit out because it’s a dynamic environment?

 

Bill: I think that fundamentally, the answer is no, our research methods don’t change because what we’re trying to understand is technology-agnostic. It doesn’t matter whether it’s a toaster or a mobile phone, and the questions that we’re trying to understand of how people are using this, how can we make this a better experience, those are constant.

 

Brian: Mm-hm.

 

Bill: What I would say is, what we want to understand, let’s say in terms of dealing with a probabilistic model is, do people understand what it’s conveying? Do people get it? Is it delivering value? So, the thing, the output from that model, we want to understand certain basic things about that. And it doesn’t matter whether it’s coming from machine learning AI, or it’s just a complicated piece of content.

 

And I don’t mean that to dismiss that, but it’s, from the end user’s experience, everything is about I have a task to do, or I want some experience. How is this helping me do that? Is this giving me something that’s really useful or not? Or is it confusing me, or what have you?

 

So, how we approach it is almost like other things. But it’s certainly a really interesting, kind of, fertile area of research now because a lot of people don’t fully understand the value of it, so it’s important to really look at that.

 

Brian: Yeah. With what’s called model interpretability sometimes, or explainable AI, I am seeing a change in the market in terms of more focus on explainability, less on model accuracy at all costs, which often suggests using advanced techniques like deep learning, which are essentially black box techniques right now. And the cost associated with black box being, I don’t know how you came up with this and I’m really leery to trust it because I don’t understand how it works. I’ll take the less accurate thing that’s clear about how it came up. We checked this, we checked that, we think this because of that.

 

That seems to generally be something that I’m hearing more. I did want to ask, how you deal with a user—and I don’t know if you’ve been in this situation before where… I’ve literally heard about this with marketing analytics, for example, we say that we want decision support, and analytics and predictive analytics are both methods for delivering decision support, but in reality, what’s going on sometimes is we want the data that’s going to validate decisions we already made. We’re less interested in hearing that it’s not right. And so, you see data teams that are all about the facts, like this is—we crunched the numbers and we looked at all these different facets, and the story is not great but here it is. And then it doesn’t get used because it doesn’t reinforce existing positions.

 

Is there something that this qualitative research can help us do here to maybe start changing the culture there, if this is like this, [laugh] where our internal business users, maybe, don’t want to hear bad news? They don’t really want the facts even though they say they do. I have heard this come up, and I’m just wondering your take on that.

 

Bill: That’s a really powerful aspect of being human, so it’s hard to go against that. But I think as researchers, what we need to do a good job of is explaining things in different ways and acknowledging people’s current perspectives or beliefs. And I’m now starting to do some work in data visualization around that, especially around data storytelling: how we take data and provide interpretations of it, explaining things in multiple ways and being both simple and as transparent as we can be to help break through some of the preconceived ideas or notions, and confirmation bias, and all that stuff. So, it’s really tricky, but I think at the end of the day, you know, you’re going to believe what you want to believe, and there’s probably only so much that we can do from a research or design perspective to truly shift somebody’s—

 

Brian: Yeah. Is there an example you can give me about how you might change the delivery of the story based on the audience? It sounds like you’re saying, the data storytelling—

 

Bill: Yeah.

 

Brian: —maybe there’s different ways to communicate the same thing differently? How do you frame that? How do you approach that?

 

Bill: It’s really understanding your—at the end of the day, understanding your audience in all facets. What are their motivations? Where are they coming from? Informing them of the data you may have that is not, kind of, congruent with what they’re thinking. So, an example of some research that we’ve just sent out for publication—so it hasn’t even come out yet—but it’s really along these lines: because COVID has been politicized for so long now, we looked at whether Republicans and Democrats perceive COVID data visualizations the same way.

 

So, we would show them COVID data visualizations, and flu or influenza data visualizations of the exact same data, and have them basically describe them and perceive things like slope increases or decreases and proportions of bars and things like that, which are much more perceptual as well as more subjective and interpretive. And it was very interesting research about, kind of, how we may see things. In this case, the findings were that Republicans and Democrats see the visualizations the same way perceptually, but describe them differently. So, if I showed you an increase in COVID cases and deaths from last fall, depending on your perspective of COVID, you might see that as sharply increasing, or only moderately increasing.

 

Brian: Okay. Based on your political leaning, your answer might have been ‘sharply’ or ‘moderately.’

 

Bill: Yes.

 

Brian: Can you tell us some of the early findings, or we have to wait and read the—we have to wait and read.

 

Bill: Yeah, I mean, the [unintelligible 00:37:06]—I mean, there’s a lot of nuance to that research, but the bottom line is that, in this case, just responding to looking at these visualizations, Democrats described COVID as more sharply increasing—because we did this study last year—and Republicans described it as more moderately increasing.

 

Brian: Got it.

 

Bill: Right? But when they looked at the flu data, they both described it the same way, even though they used the exact same—

 

Brian: Wow.

 

Bill: —data.

 

Brian: Interesting.

 

Bill: So, it’s to your point about how we look at or interpret the same data differently depending on our worldview.

 

Brian: Right. Yeah.

 

Bill: You know?

 

Brian: Yeah. I will definitely link that up; if you provide us a link, I’ll put it in the [show notes 00:37:55] where people can watch for that. That sounds like an interesting study to get into. One, kind of, closing thing, and then I want to give it to you to share any other information or insights you’d like. But talk to me about analytics on analytics.

 

So, I know that because of this beastly problem of low adoption that kind of plagues this space, I often ask data scientists, and data science teams, and leaders, “How do you know if your stuff’s being used? And what are the measurements?” And a lot of times, I don’t hear anything at all—there isn’t any tracking of what’s happened in the past; they’re just on to the [laugh] next project. But the ones that are tracking, it’s like, “Oh well, we installed an analytics package in Tableau,” or whatever. And they’re tracking usage just quantitatively, like page views within Tableau dashboards across the entire enterprise, and that way, we know if people are using it or not, so therefore it has value. What are we learning from analytics on analytics, versus qualitative stuff? Both of these have some value, right, but they’re not for the same purpose. Can you share your thoughts on that?

 

Bill: Yeah. So, [laugh] this has been going on for so long, like this problem, and what I would encourage the listeners to do is to put it into their process that before they even start the project, they’re going to have a way of collecting metrics that really reflect the success of the product, from both a business and a user experience perspective, and to make that mandatory, and to do the extra work. It’s so easy and so tempting to say, “Hey, we shipped it; it’s live,” or, “We’ve launched it,” whatever. “Let’s have the cake, pat ourselves on the back. Job well done. On to the next project.” There has to be something, some motivation, some senior leader has to say, “Hey, wait a minute. We have specific metrics and goals that we need to achieve, and one of them is around usage and the value of it.”

 

Brian: Yeah.

 

Bill: And—

 

Brian: “What did we get for that investment?” [laugh].

 

Bill: Exactly. Exactly. And it’s amazing that I’m even saying this, in a way, because to me, it’s just so… it’s so obvious, yet a lot of organizations don’t do that, or they assume that they have to do it and… people will find it eventually and use it and what have you. But I would just push people to—and from a budget standpoint, usually you’re talking about, “Hey, give me 1% of your development budget to do the research.” 1% to answer these fundamental questions of making it better and measuring how good it really is.

 

Brian: Right, right.

 

Bill: You know, that’s so critical to the success of the product. Why wouldn’t I have 1%—

 

Brian: Yeah, yeah.

 

Bill: —do that.

 

Brian: “Because it slows us down.” And it’s like, “Yeah, but your enterprise project’s been going on for 24 months.” I mean, I think that’s a lot of it is just simple sunk cost bias. “We’ve been working on this two years. No way we’re turning the ship around now.” So now, the only point is to ship something at the end, at all costs, because someone’s you-know-what is on the line, value be damn—it’s just like—

 

Bill: Yeah.

 

Brian: —you know what I—I think that’s not an uncommon thing, especially in really large-scale enterprise projects. It’s just there’s so much sunk cost there, but that’s a whole ‘nother, another [laugh] can of worms. But—

 

Bill: Well, when we were at Fidelity together, there was a case of that where there was an actual product that we had tested right before it was going to go live, and found that the current experience was way better than the new design. And they just stopped it.

 

Brian: Oh, they did stop it?

 

Bill: Yeah. They—

 

Brian: Easily, or was that a difficult conversation?

 

Bill: I wasn’t in the room for that decision, [laugh] but the decision was made where, why knowingly ship something that’s worse? I know there’s sunk costs; it was, like, a million bucks. It was a l—it was not a little bit of money.

 

Brian: Trivial. Yeah.

 

Bill: And so, they—we ended up making some changes and it tested much better, and then they launched it. So, it wasn’t the end of the world, but that, to me, was a great reflection of organizational maturity, where they could—

 

Brian: Yes.

 

Bill: Hit the pause button, say, “Let’s not be stupid here.”

 

Brian: Yeah, I forget, there’s some designer, some design thought leader, UX thought leader who has these kinds of organizational maturity models for design-centric culture, and one of the high ones is the willingness of the organization to actually not put something out if it’s not good enough. And it’s pretty rare that you find that—I don’t remember that particular one; I probably wasn’t working on that project, but—or maybe it was the thing I made; I probably made the one that—

 

Bill: No [laugh].

 

Brian: [laugh].

 

Bill: I would have [crosstalk 00:42:52].

 

Brian: No one told me, though. Why didn’t you tell me, Bill? Come on, I thought you were my friend. [laugh]. But that maturity level, to say, “It’s not good enough. We’re not going to jeopardize what we’ve done to date just to put this out. We’re going to hold it back.” Hopefully, we don’t wait that long to find out it’s that far off, but I think that is a sign of maturity, that we care enough.

 

And that’s probably more of a product company approach to things. But at any rate, this has been really fascinating, really great information here. Where can people follow your work? And anything else besides the political datavis study coming out? Are you working on any new texts or anything like that?

 

Bill: Mm-hm. Yeah, yeah, I always have a lot of things going on. So, third edition of Measuring the User Experience will be out later this year. If you’re interested in working with the Bentley UXC, go to www.bentley.edu/uxc for our professional services. And we also have a lot of education training around user experience, user research. Find me on LinkedIn; connect. Yeah.

 

Brian: Got it. And where to find out about your books? Do you have your own website or something? Where is all that good stuff?

 

Bill: I don’t know. We have an out-of-date website, I think called measuringux.com but it’s—

 

Brian: Oh, okay. LinkedIn is best way? Just find you on LinkedIn or something like that?

 

Bill: Yeah. Yeah. Just go to Amazon and search “Measuring the User Experience” and you’ll see.

 

Brian: And just to clarify for listeners, so Bentley University, it’s here in Massachusetts, and I used to think that you only ran the educational part, you know, training people in the human factors—I forget if it’s a human factors or HCI program that’s there, but Bentley actually has a commercial arm. You’re almost like a consulting firm that does usability testing and research for digital products and things like this. So, did I get all that basically correct?

 

Bill: You got it. And thank you for the plug. Yeah. [laugh].

 

Brian: Yeah, yeah. No, I—because I used to think it was just all educational in nature. So yeah, check that out and we’ll definitely put a link to it in the [show notes 00:44:54]. And, Bill Albert, it’s been really great to catch up with you. Thanks for coming on.

 

Bill: It was a lot of fun, Brian.

 

Brian: Good, good. Well, thanks again. Cheers.

