Next DPLC Live Webinar & Discussion @ 1pm ET on Feb 27, 2024
The Data Product Leadership Community is excited to host Karen Meppen of Hakkoda, a DPLC founding member, to discuss Immature Data and Immature Clients: Are Data Products the Right Approach? We'll hear from Karen and then participate in an open dialog. Like all DPLC live sessions, it will be fully recorded and transcribed, and you can keep the conversation going in our 24/7 Slack. For members only.
Apply to Join the DPLC Today

131 – 15 Ways to Increase User Adoption of Data Products (Without Handcuffs, Threats and Mandates) with Brian T. O’Neill

This week I’m covering Part 1 of the 15 Ways to Increase User Adoption of Data Products, based on an article I wrote for subscribers of my mailing list. Throughout this episode, I describe why focusing on empathy, outcomes, and user experience leads not only to better data products, but also to better business outcomes. My goal is to show that it’s entirely possible to take a human-centered approach to data product development without mandating behavioral change, and that this approach benefits not just end users, but also the businesses and employees creating these data products.

Highlights/Skip To:

  • Design behavior change into the data product. (05:34)
  • Establish a weekly habit of exposing technical and non-technical members of the data team directly to end users of solutions - no gatekeepers allowed. (08:12)
  • Change funding models to fund problems, not specific solutions, so that your data product teams are invested in solving real problems. (13:30)
  • Hold teams accountable for writing down and agreeing to the intended benefits and outcomes for both users and business stakeholders. Reject projects that have vague outcomes defined. (16:49)
  • Approach the creation of data products as “user experiences” instead of a “thing” that is being built that has different quality attributes. (20:16)
  • If the team is tasked with being “innovative,” leaders need to understand the Innoficiency Problem, shortened iterations, and the importance of generating a volume of ideas (bad and good) before committing to a final direction. (23:08)
  • Co-design solutions with [not for!] end users in low, throw-away fidelity, refining success criteria for usability and utility as the solution evolves. Embrace the idea that research/design/build/test is not a linear process. (28:13)
  • Test (validate) solutions with users early, before committing to releasing them, but with a pre-commitment to react to the insights you get back from the test. (31:50)

Quotes from Today’s Episode

  • “If you’re giving a ton of attention and time to change management, it may mean that the solution itself was not designed properly from a UX standpoint, and potentially even a business stakeholder standpoint.” Brian T. O’Neill (05:44) 
  • “You need that outside perspective, and that’s what you get when you go in, and you shadow or you observe people in the wild is, you get to see what their environment is like and what’s shaping their decisions. So, the more you can get [especially] the really technical people doing this kind of work, over time, you start to develop empathy, and empathy is really at the root of designing really good services.” Brian T. O’Neill (10:42) 
  • “Customers aren’t good at handing us well-defined problems; they tend to give us solutions, and we have to interpret the problem from the solutions.” Brian T. O’Neill (16:20) 
  • “When we don’t know what the benefits and the outcomes are, and we’ve only defined the technical parameters of the data products that we’re making, we’re likely to go down the wrong path of building an output instead of designing for an outcome.” Brian T. O’Neill (17:24) 
  • “A data product always sits on some kind of timeline because the person using it did something before they started using the data product, and they will do something after they’ve touched the data product. So really, there’s just jobs to be done, tasks and workflows. Your data product is always sitting inside one of these things that happens over time. So, there is no escape from a user experience. There’s always going to be an experience.” – Brian T. O’Neill (20:54) 
  • “You can’t put efficiency into the innovation process. That’s not how it works. What we can do is shorten the iterations of our work and get our work in front of people sooner, and work in lower fidelity so that we don’t run down the wrong path and overcommit both time, money, and sunk cost bias into the solutions that we’re making.” – Brian T. O’Neill (23:56) 
  • “We don’t want the big reveal where no one has any idea what we’ve been working on, and then bang, out of nowhere, we show them the deliverable. Most of the time, that’s not going to work because you’re making a lot of assumptions that you know exactly what the customer needs and wants and is willing to actually use. It’s a very risky way to go about building solutions.” – Brian T. O’Neill (31:05) 
  • “We don’t test people. We test solutions with people. So, you’re not testing the user, we’re testing the design of the data product or the dashboard, and we’re hoping that users can use it. If there’s a problem there, it’s probably a problem with the design, not with the person that’s using it.” – Brian T. O’Neill (32:12)

Links

Transcript

Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill. I’m going to be rolling solo today. I’m going to be talking about 15 Ways to Increase Adoption of Data Products. This is based on an article that I published to my mailing list. For those of you listening to the show, if you’re not on the list, you can get on that over at designingforanalytics.com. I publish weekly insights on Tuesdays, actually alternating: on the first Tuesday, it’s an article, and then the second Tuesday, it’s a podcast release, and so forth. So, I hope you’ll join there, if you’re not there already.

But this is going to be a two-part episode. Since there are 15 different ideas here, I think it’s too much for one. So, we’ll cut this into two parts here. So, I’m going to talk about the first seven or eight today, and then we’ll do the remainder in a future episode. So, let’s hop in with these right now. I’m actually going to tell you the first eight of these right up front, and then we’ll go into detail on them.

So, the first one is to design behavior change into the data product. So, that means instead of thinking about it as something you do after the fact, we design for the behavior change from the beginning. The second one, establishing a weekly habit of exposing technical and non-technical members of the data team directly to end-users of the solutions. This means getting gatekeepers out of the way, middle people out of the way, and really having tech people directly seeing and observing customers in the wild, using solutions or doing things, quote, “The old way.”

The third one, changing funding models to fund problems and not specific solutions. The idea here is that instead of funding a large project to, say, build a data platform, you fund a problem space and assign a team that takes ownership of, say, ‘reduce attrition by 5% in the next six months.’ And you fund that team, you fund that problem, and then they figure out what the right thing is to do. It’s a different way to think about it; we’ll go into more detail on that one in a second. Number four, hold teams accountable for writing down and agreeing to the intended benefits and outcomes for both users and business stakeholders, and reject projects that have vague outcomes defined. You’ve heard me talk a lot about this on the show. I think that one’s pretty straightforward. This has to do with really focusing on and making sure we have clarity on benefits, and not just clarity on the technical outputs we’re creating.

The fifth one, approach the creation of data products as user experiences instead of a thing that is being built that has different quality attributes. So, we can also measure the UX in terms of user experience outcomes. So, if we did a good job on this data product, how would it improve the lives of users? So, this is, again, thinking about a data product as an experience and not necessarily a noun or a technical artifact that comes off of an assembly line.

The sixth one, if the team is tasked with being innovative, leaders need to understand the Innoficiency Problem, shortened iterations, and the importance of generating a volume of ideas—both bad and good—before committing to a final direction. That one’s going to take some time to go into, but it has to do with understanding that innovation is not something that you can make efficient. It takes waste, it takes trial and error, and it doesn’t necessarily follow Agile software development processes. We have to understand and embrace that you can’t create new things without creating some bad new things first, and learn how to accept that if innovation is actually what’s being sought.

The seventh one here is to co-design solutions with—and not for—end-users, in a low, throwaway fidelity, refining success criteria as the solution evolves. So, this is to embrace the idea that the research, design, build, test process is actually not a linear process, and you may have to go quote, “Out of order,” and that’s actually normal. So, this is kind of antithetical to the way we build the technical side of, like, machine learning and a lot of analytics solutions. We talk about building data pipelines, and then you do the modeling, and then you put it into an interface, and then you ship it, et cetera, et cetera. Those tend to be more linear. This process is not, and that’s something that we have to also embrace.

And then the final one for this first part of the episode is to test—and I really mean validate—the solutions with users early on before committing to releasing them, but with a pre-commitment to actually react to the insights that you get back from the test. So, the important thing is not just to test the solution, but to decide that we’re willing to react to the feedback that we get if we do test it. Otherwise, the test was just a cost, right?

So, let’s jump back to these. I’m going to go into a little detail on each one, and then we’ll wrap up for this first part of this two-part series. So, number one, again, designing behavior change into the data product. I’m not a big fan of change management as a separate step in the process. I think if you’re giving a ton of attention and time to change management, it may mean that the solution itself was not designed properly from a user experience standpoint, and potentially even a business stakeholder standpoint.

So, the idea here is that we understand that behavior change is actually, quote, “A requirement” of the data product. It’s something that we need to actively address, which may not have anything to do with the modeling work, the data science work, the pipelining, the engineering. It probably doesn’t have a lot to do with that kind of work. It’s probably going to be taken care of more in the research phase and the design phase. And you do have a design phase whether you call it that or not because, as I always say, there are no null design choices, so every solution has a design.

Every solution has an experience, whether we put intention behind it or not, so we have to intentionally go out and plan for the behavior change. How are we going to incentivize people? Where’s the resistance right now? How might we need to message about this new solution that we’re putting out into the world? These are all factors that, as I think many listeners know, are some of the hardest things to get right. The building, the tech part, tends to be fairly easy, but getting people to care or to change the way they’ve always done things can be really difficult.

So, there are disciplines that do this work. My roots are in design, as many of you know, but user experience researchers, service designers, ethnographers, human factors specialists—a lot of these people are doing very similar work. There are different terms for these things from the academic side and from, kind of, the software space, but this is why UX teams exist in software companies: because we know that this stuff is difficult. And it’s a mixture of psychology and behavioral science and understanding how people use technology, and all these factors come into that, particularly if you’re building data products for non-data users, and that’s primarily what I’m thinking about here.

So, we’re not talking about building a tool for data scientists. But even then, data scientists have their way of working, too, and they may be resistant to doing it the new way until they can understand what’s in it for them. So, it doesn’t even really matter so much if the audience is super technical or not. There could be a behavior change challenge in front of you, regardless of the job titles of the people who are going to use the solution.

The second one, going back to this one, establish a weekly habit of exposing technical and non-technical members of the data team directly to end-users of solutions. No gatekeepers allowed. This is nuts-and-bolts stuff in the software industry when we talk about designing software products. You got to get the middle people out of the way. And this can be really hard in large enterprises, and I’ve been in situations where the sales team can be a real blocker to getting customer access because they’re worried about this kind of research work scaring off a renewal of, say, a contract and things like this. There can be all kinds of reasons. Or, “We know what’s best for them,” or, “That’s our department, and we just need you to give us this thing, and we’ll take care of getting it distributed to the staff or whatever.”

This stuff just doesn’t work. And I think Marty Cagan, I was listening to a video talk that he did on this, and this is what he would call a non-negotiable if you want to be a product-led organization. Maybe you don’t care about being a product-led organization, but I agree with him that if you really want to get good at building data products that matter and actually produce value and that are easy to use, if not delightful and empowering, you have to talk to the people that are going to use them. It’s just—it’s that simple. You need that exposure time to them.

So, exposure time. What am I talking about there? I’m talking about a routine habit of either interviewing or simply observing people doing their work. So, if it’s, you know, the accountants doing end-of-year work or whatever that looks like, and they need to run numbers and process taxes or whatever it may be, it’s actually shadowing—it’s spending some time, usually in a one-on-one or a two-on-one format, and going out and observing what it’s like to be an accountant at the company.

What’s it like to be in sales at the company? You know, if the sales team has a weekly meeting, and they pull up the dashboard, and they look at the stats, and then the boss yells, “Why is this number so low?”—sitting in on that meeting and getting permission to observe how they’re using that information there. This kind of stuff is invaluable in terms of surfacing unarticulated problems because that sales team, they’re inside of a jar, and they can’t read the label from inside the jar. What’s normal to them is normal to them, and they’re probably not going to think to tell you about it in a very omniscient, third-party kind of way. It’s very hard for people to do that.

You need that outside perspective, and that’s what you get when you go in and you shadow or observe people in the wild: you get to see what their environment is like and what’s shaping their decisions. So, the more you can get especially the really technical people doing this kind of work, over time you start to develop empathy, and empathy is really at the root of designing really good services. And that’s that ability to see things from the other person’s perspective. We’re not talking about sympathy, feeling bad for somebody; we’re talking about putting ourselves in the shoes of the person that we’re trying to serve. And again, this is a core tenet of doing great design work: trying to see the world through someone else’s eyes. And it’s really hard to do that if you’ve never actually seen the person’s eyes, and you’ve never done their job, you’ve never observed them doing their job, and there’s so much that you don’t know that you don’t know.

So, get rid of the middlemen or the blockers. Again, sometimes this takes executive leadership stepping in to kind of clear a path there. I mean, literally, I remember one client I was working with where I think we had to go to the CTO to literally, finally, get a name. It’s like, “Could we just get an Excel spreadsheet: name, email address, phone number, company name. We need 25 of these people. We’ve got to go out and get this work done, otherwise we’re going to go way off track here because we don’t know what we’re designing for.”

And it was difficult to get that, but eventually we did, and we were able to remove some of those blockers. So, it may take some work, it may take senior management stepping in, and they may have to do some work to clear out the fears and whatever those concerns are that the blockers have about a team going and talking to their team members or subordinates or whatever it may be. I would say that’s probably the most basic thing that you can do right now: start doing that work and put it on a regular cadence. I remember when I was an employee at J.P. Morgan, I worked a lot on financial services—you know, portfolio applications, websites for managing your stocks, your funds, all that kind of stuff, as well as active trader software, so all the charting and visualization and stuff in that space.

And the act of going to the trading desk, and putting on a set of headphones, and just listening to the traders call in, the ones that were placing trades via phone, and just getting an idea of how they talk. What are their concerns? What kind of questions were they asking the employee traders that were taking the calls? That was a required activity when we were there. And this isn’t unusual in most mature software organizations. They’re going to have routine activities like this to go out and get that exposure time. So again, to me, the most beneficial thing is getting those technical people to do that work.

Number three, changing funding models to fund problems, not specific solutions. This, of course, requires that the team doesn’t just want to make stuff without caring whether or not it gets used. If your team doesn’t really care about the value of the work, and they only care about the writing of the code or the modeling or whatever it may be, it’s not going to make any difference if you do this work. But the idea here is that if you have a group of smart people—and I think most data professionals kind of fall into that pretty smart category—what would happen if you just gave a small tiger team—and sure, there are going to be, you know, 15, 20, 100 other people outside of that—but you give this core team a problem and a budget instead of a solution and a budget. Because the problem is, sometimes the solutions, especially the really big ones, end up not solving the problem, because a lot of times the budget for the problem is actually a budget for a solution, not a problem space.

The customer handed you a problem, but it’s actually a solution, not a problem. So, this idea of, like, “Give me a dashboard.” Well, a dashboard is a solution to some problems. But a lack of a dashboard is not actually a problem. A problem is, “I don’t know where my sales team should be focusing our efforts.” That is a sales problem. “I don’t have a dashboard,” is a solution that sounds like a problem.

So again, what if you were to fund the problem of the sales team wasting too much time calling the wrong people? We want to see some metrics improve here—let’s say, of all the calls we make, we have an increase from 10% to 50% in terms of prospects feeling like we touched them at a warm point in their buying journey. So, the deal moved to the next stage, or there was a next action that was planned for, as opposed to a failed call—“I’m totally not interested, please don’t call me again, I don’t even want to talk to you”—that would be a fail. So, what if we could get a 5X increase in that? Here is a budget for the data team to help the sales team go and do that.

And what we might find is, it’s possible that the issue isn’t a lack of data. The issue is maybe the sales team is using car-salesman techniques. And maybe that’s not the data science team’s job, but maybe we find out the issue really isn’t that the dashboard is bad. It could be that the dashboard is good, and the way they’re handling the calls is the problem. They’re already calling the right people, but their approach is wrong.

And that’s an extreme example. It assumes that the data scientists are good salespeople and that they would be able to identify a selling problem on the phone, but the point there is that we’re not just going to fund this project that feels like it’s a problem and is actually a solution. Customers aren’t good at handing us well-defined problems; they tend to give us solutions, and we have to interpret the problem from the solutions. And this is tricky. It’s sometimes called ‘the presenting problem’: “I don’t have a dashboard, and so I don’t know who my sales team should be calling.” That is the presenting problem. There’s a little bit of solution embedded in it, and we need to work back to what the actual need is there. So, fund the problem space.

Number four, holding teams accountable for writing down and agreeing to the intended benefits and outcomes for both users and business stakeholders. Reject projects that have vague outcomes defined. You probably know, I think it’s Amazon that uses this idea of writing a press release at the beginning of a new initiative: what’s the story behind this new thing that we just launched? What are the benefits? Who’s it for? Who would care?

And this exercise gets us to think about the future end state of what it is that we’re going to make, and why someone should care. When we don’t know what the benefits and the outcomes are, and we’ve only defined the technical parameters of the data products that we’re making, we’re likely to go down the wrong path of building an output instead of designing for an outcome. Whether you call these OKRs, or success metrics, or whatever language we want to use, the goal is that you probably have some quantifiable metrics in there, and you might have some qualitative ones as well. And that can be things like, going back to our sales team: the sales team generally reports that they feel more confident when they’re on the phone.

And I know that sounds really squishy, but if you’re the head of sales, and you’ve got a bunch of stressed out staff, and projects aren’t closing, and buying cycles are increasing, but your data product is enabling them somehow to feel more confident and feel like things are getting better from a tone perspective, the confidence perspective, et cetera, that probably can be measured in some way, but even if it remains more of a qualitative goal, that can still be a really valid thing to work on if that is the thing that’s keeping the stakeholder up at night. The point is, have we actually uncovered that that’s what’s keeping the person up at night? Have we come up with some way to measure this? And have we all agreed about that?

It may be that you revisit some of these success metrics and progress metrics as the project continues because, as I’m going to talk about later, sometimes we design and build stuff in order to figure out what we actually need to design and build. We don’t necessarily pre-plan it, and then execute it, and then ship it. That’s often not how it works because we’re learning about what’s needed and how it’s going to be used while we’re doing it. And if the data product is quite innovative or different, we might really be in new territory where we start to create new problems, or we change where focus is needed, and all of a sudden there’s something someone’s never thought about before that has now become something they have to manage. And it may be worth their time to manage this new thing that never existed before because it gives a downstream benefit that’s worth it.

So, the point is, we got to accept that these things may change. I don’t think you just start building with no idea at all about what the benefits and the outcomes are going to be. There should be some stake put in the ground as a benchmark to get going, and then you revisit it as necessary. But the point is that the team has that agreement on it, and if someone asks, “Why are we working on this and what’s the value of this thing,” we can give that to them, and we can explain in a few bullets what the benefits are, and how we’re going to measure it, and what the intended success criteria looks like. And if you don’t have that, you say no.

Number five, approach the creation of data products as user experiences instead of a thing that is being built that has different technical quality attributes—like an SLA or data governance, or whatever the things may be—and measure the UX in terms of a UX outcome, which is: if we did a good job on this data product, how would it improve the lives of the users? This idea of a UX outcome comes from Jared Spool, who was a guest back on Episode 54 of Experiencing Data. He’s a very, very well-known UX thought leader. But the point here is I like to think of this as, like, a timeline and think horizontally. Like, even if you build a data product, a data product always sits on some kind of timeline because the person using it did something before they started using the data product, and they will do something after they’ve touched the data product.

So really, there are just jobs to be done, whatever language you want to use. There are tasks, there are workflows. Your thing is always sitting inside one of these things that happens over time. So, there is no “no user experience” possible. There’s always going to be an experience, and if we think about our task and our mission as building some kind of data product experience and not just a data product—which sounds like something that can sit on a shelf; maybe it’s a GitHub shelf, but it’s a shelf, nonetheless—whereas an experience is something that happens over time. It can’t be put on the shelf.

So, this changes our perspective because it really gets us to think, again, about where are they coming from, and where are they going? And then we have to think about—and this is, you know, product management’s job—where’s the boundary, right? We don’t want the scope to go crazy, so at some point, we have to say this is where the data product kind of ends. At that point, we’re really using some downstream application, or that’s really part of the front-end—you know, if you’re building a Lego brick, like, a platform thing or something like that, there have got to be some boundaries here. But the point is, you’re not doing this work in isolation, and we’re really focused on improving the lives of the person that’s going to use this thing that we’re creating.

This gets a little bit more abstract when your data product is, for example, optimizing a workflow that already exists, and you’re simply optimizing it by putting some data into the software or something that sits in the background. The customers aren’t actually interacting with the data directly. There’s no direct interface for the information. So, there are exceptions to all these things. In general, in this whole episode, I’m talking about data products that typically do have some kind of user interface to them, and they’re being delivered over a screen, most likely, of some kind. So, just for context, I wanted to mention that I am aware that there are more transparent forms of analytics tools that can, you know, improve efficiencies and things kind of quote, “Behind the scenes,” so to speak.

Number six, if the team is being tasked with being innovative, leaders need to understand the Innoficiency Problem, shortened iterations, and the importance of generating a volume of ideas, both bad and good ones, before committing to a final direction. So, Blair Enns, who runs a business called Win Without Pitching, a sales training organization for creative and marketing firms, independent design and creative consultants, people like myself, coined this phrase, which I love: the Innoficiency Problem, which is, quote, “To be ignorant of the principle and to think an organization, department, or individual can increase either innovation or efficiency without decreasing the other. It cannot. Efficient innovation is an oxymoron.” His point is, you can’t put efficiency into the innovation process. That’s not how it works.

What we can do is shorten the iterations of our work and get our work in front of people sooner, and work in lower fidelity so that we don’t run down the wrong path and overcommit time, money, and sunk cost bias into the solutions that we’re doing. But this Innoficiency Problem needs to be understood. And this is especially true for very risk-intolerant companies, or companies that effectively survive to protect the brand. The brand has a ton of value, and they really don’t want to take a lot of risk, but then they sometimes will spin up an innovation department, for example, and maybe things don’t go super well. And I think if you’ve put a lot of process and program management people in charge of running the company, this stuff is going to feel very… foreign and inefficient, and it’s going to just look like a waste, like a bunch of people are just screwing around, because they’re not going to understand that trial and error may be required to come up with something that’s truly new and that has an impact on the business. So, that’s literally just a pill that has to be swallowed by leadership.

On the other side of this, too, is this idea of generating a volume of ideas. And this comes from Episode 106 of the show with Jeremy Utley, who’s the Director of Executive Education at the Stanford d.school. And he wrote a book about this idea, called Ideaflow.

And the big takeaway that I took from his book, which I really liked, was that teams should generate a volume of ideas before committing to a particular direction. And the basic strategy here is that before we go off and run and build something, how many ideas have we actually explored, including bad ideas? So, instead of worrying about coming up with ten great ideas for something, Jeremy’s idea is, come up with 100 ideas, including bad ones. The game should be to generate a volume of ideas, and not to create a limited set of, quote, “Good ideas.” Because the volume of ideas—especially when we do this in a team format—gives others a chance to reflect on even the crazy, outside things that may come up, which may spin off yet a third idea.

So, it’s not that one of the bad ideas ends up being a great one; it’s that the bad ones can inspire new ideas to come out. And especially when we have a cross-functional team doing this kind of work together, this is where we can have those kinds of moments of inspiration, because we all see the world differently, and we have our different hats on. This is where a lot of innovation comes from: when we take things from one industry and put them into another. There’s that classic example, I think it was on that episode with Jeremy, where we talked about how they get the extra grease off the potato chips. And long story short, that was inspired, I think, by a violinist, someone who played music, and they were thinking about strings vibrating, and instead of trying to soak up all the extra grease on the potato chips, they actually shake it off by vibrating the food. And this idea would not have come about without thinking about music: in this particular example, the idea that a vibrating string might be a way to remove the grease.

And I may have gotten some of that story incorrect because I’m just pulling this from memory right now, but even if it isn’t exactly true, I think you can understand the idea: having a bunch of different people with these different perspectives is what might allow an idea like vibrating the chips to get the grease off to even have a chance of ever being considered by the business. So anyhow, just to summarize: again, we need to understand the Innoficiency Problem; we can shorten our iterations to accelerate those learning cycles so we’re not just wasting effort for no reason but are actually learning; and then there’s the importance of committing to generating volumes of ideas, which means we include the bad ones and the good ones before we commit to a final direction.

Number seven—we’ve got two more for this part one of this episode—co-design solutions with—and not for—end-users, in a low, throwaway fidelity, refining the success criteria as the solution evolves. This goes hand-in-hand with the idea that design is also not a linear process. So, this idea of: we do problem definition, and then we design the thing, and then we build it, and then we test it, and then we ship it. In that process, you may bounce between those different steps; you may have to do some design work in order to figure out what the problem is, at which point you then revisit the problem, and then you have to redesign the solution, then maybe you take a few more steps with building it, et cetera. The only way to know that you need to do that is by co-designing these solutions with users.

And again, this also goes back to number six, which is accelerating our iterations and our learning cycles. We design it with the customers sooner, in low fidelity, and we get feedback sooner so that we can then make those adjustments. So, we may have to do a couple of different starts. It may be that as we research and begin to design our initial version—maybe you’re sketching a dashboard, and you’re trying to understand the visualizations up front instead of at the end of the project—you might have to do three or four of those. And each time you’re showing them the new solution, you’re learning more information that they never exposed to you before, probably because they never thought about it—they had to see something in order to react to it. Until they had a solution to look at, it didn’t get them thinking about the problem that they have in the same way.

And this is why requirements are friendly lies: because so often, we start to learn about the requirements quote, “Too late” in the process. I want you to learn about them earlier in the process by co-designing with customers because if you’re getting that stuff in front of them, you’re going to start getting really useful information back sooner. So, we’ve got to understand, it’s not a linear process. It’s normal, it may look really inefficient, and it may look expensive somehow, but we need to compare this to the cost of simply running from A to Z and not stopping, and then you ship this thing, and then it doesn’t get used, which is so often what happens out there. What’s the opportunity cost of the team being committed for three to five months on this project that did nothing? All it did was spend money, and we actually lost three to five months of time we could have been working on something that actually would have gotten used.

So, there’s the staff cost, there are the technical costs, there are a lot of things to compare that to. So, we have to embrace that out-of-order, phased work, and we need to work in low fidelity so that we can get that feedback sooner, and we don’t wait so long before we show customers. We don’t want the big reveal where no one has any idea what we’ve been working on, and then bang, out of nowhere, we come out with this giant reveal. Most of the time, that’s not going to work because you’re making a lot of assumptions that you know exactly what the customer needs and wants and is willing to actually use. It’s a very risky way to go about building solutions.

I think you’ll know when you’re doing it right when you get to the end of the project, and nobody is surprised, and it’s kind of like, “When’s it going to be done? We’re ready to use this now.” They’re itching to use it, they already know what it’s going to be, they’ve seen it, they have an idea how it’s going to work, and they’re just waiting to get it. That’s a much better indicator that you’re on the right track.

Number eight—and this will be the last one in part one of this episode about 15 Ways to Increase Adoption of Data Products—test. Test, test, test. But not QA test. I’m talking about validating solutions with users early before committing to releasing them. But with a pre-commitment to react to the insights you get back from the test. And this is really important, this last part.

First of all, there are a couple of things here. We don’t test people. We test solutions with people. So, you’re not testing the user; we’re testing the design of the solution, we’re testing the dashboard, and we’re hoping that users can use it. If there’s a problem there, it’s probably a problem with the design, not a problem with the person that’s using the dashboard. So, that’s the first thing.

Secondly, you can come up with a test plan about all the things you want to learn, things you want to validate, like were they able to figure out, like, which targets to call using the dashboard or whatever—if we go back to our sales example there—but we need to be willing to take the learning and put that back into the solution, which is a core concept in product development, right? If we’re just testing, getting the information, and then we continue down the original path regardless of what we learned because we’re in denial, there’s really no reason to test. My advice to you would be, just skip the testing because you’re not going to do anything with the information you’re getting back, and if anything, you’re probably just going to frustrate some people who actually felt like, “Wait a second. Nobody understood what that metric was, that KPI that we’re showing them. Nobody tried to drill down. Nobody cared about all that stuff that we did. Wait a second, what are we doing?”

If you’re testing and getting that information, and then ignoring it, that can be really frustrating for some of the team that actually does want to create an outcome and have their work matter. So, if you’re in a leadership capacity, it’s on you to commit to reacting to the insights that you get back from the test. And maybe you have to do that over phases, but I think it’s really important to pre-commit to acting on that information. And this helps us remain objective about the things that we made. And it may mean that the thing we made isn’t as great as we thought it was.

Frankly, as a designer, I love testing stuff, especially the stuff I’ve made, whether I’m the one actually facilitating the test or not. And the reason why is because it’s so fascinating to learn what works and what doesn’t and why. And when you get feedback like, “Wow, no one saw this button here. No one knew to click on the date thing,” or, “Nobody understood the filtering or whatever the heck it was on this chart,” when we understand why that was, and the facilitator digs into it and we get behind it, you’ll never forget those insights. And you’ll think something like, “Wow, I’m never going to do that again.” Or, “Wow, if I do that again, I’ve got to remember to do this other thing, because in their world, this equals this. And in my world, no one cares about that.”

It’s fascinating. And it takes away some of the subjectivity of design. Maybe some of you, when you hear the word design, might be thinking about art, and that these interfaces are things only talented, creative people can make. This actually takes it down more into heuristics, and best practices, and objectively correct and incorrect decisions, at least in specific contexts. It says, this works and this doesn’t. Why? We’ve tested it. We have the information now to know that doesn’t work. And that can be really comforting if you’re trying to get better at design, when you realize that you can objectively make good decisions here.

It doesn’t mean there’s only one solution out there, it just means that the one that you’ve tested in this particular context with these particular users is or is not working this way. And when it’s not, the facts you get back about why it’s not working and getting the user to talk about that, it’s just really invaluable stuff. And I actually love that feeling of, kind of, being wrong, but having the knowledge to know what went wrong there. It’s just, I don’t know, it feels awesome personally, for me. Maybe it won’t feel awesome to you, but that’s how it feels to me.

So anyhow, I hope this is useful. I’m going to put a link to this article. If you want to go ahead and read the whole thing, I go into more detail there, and obviously, all 15 are in the article. But for today, I’m going to leave the episode here and hope that you’re able to go out and take some of these ideas and put them into use in your own work.

Again, if you have any questions, you can always head over to designingforanalytics.com/podcast. You can also leave a question for me—anonymously, if you want—right through the browser; you can record a little audio memo. I’m happy to try to answer that on the podcast. And until next time, see you later.

