You probably rock at building enterprise ML and analytics applications, software products, and dashboards—but if buyers, stakeholders, and users just aren’t seeing the value, your problem isn’t your tech. Similarly, the solution is not in your code, pipeline, model, or GitHub. However, there are tactics and strategies from UX design, product management, psychology, marketing and other fields that you can apply so your technical work has impact, creates delight, and generates value.
Are you responsible for ensuring that real customer and business benefits are derived from your machine learning and analytics applications—but adoption of your solutions is often low?
Are users not relying on your AI and analytics solutions—even though you gave them what they asked for?
Are your data products typically experienced in a dashboard, self-service software tool, or product?
If so, these 15 tactics and strategies for data and product leaders will help you close the gap between the technical work your team is doing and what users and stakeholders actually need from your data products.
Below are some concepts derived from 17 years of UI/UX design consulting, my Experiencing Data podcast guests, and some of the data product leaders in the Data Product Leadership Community (DPLC) I launched in 2023. My general feeling when I hear the data community say “generating value with data” is that, if the data product involves humans in the loop, and most do, you first have to solve the adoption problem before any value can possibly emerge. After all, businesses are just collections of humans working together. If they’re ignoring your fancy ML and analytics solutions, then you’re effectively just churning out outputs that have no economic value.
As with most things worth doing, if addressing low adoption were as easy as snapping your fingers, “everybody would be doing it” and it wouldn’t be a problem for so many. However, getting started is usually the hardest part—so focus on taking the first step and embracing “imperfect action,” as Harry Truman once said.
Note: before you jump in, you may want to familiarize yourself with my definition of data product so you have context.
1. Design behavior change into the data product
Change management means you’re aware that change is hard and may need to be managed as part of creating your new data product. A better way to think about this is to design the solution so that it minimizes or eliminates the need for change management as a separate, managed activity. The problem with change management is that it can sound like this: if you build a data product, then so long as there is a change management stage, the solution will get adopted, and getting folks to use it becomes the responsibility of the change management team. This is a gamble. If the food doesn’t taste good, you probably aren’t going to change-management your way into getting people to eat it. Designers, particularly UX and service designers, can help design the solution for adoption from the start—to reduce dependency on “after the fact” change management activities.
2. Establish a weekly habit of exposing technical and non-technical members of the data team directly to end users of solutions - no gatekeepers allowed.
It is especially important to have tech leads participate in these sessions and for management to “clear the path” between makers and users. Marty Cagan once talked about this as one of the “non-negotiables” if you want to be a product-led organization (check out this talk where he gives a candid and highly accurate take on Agile and how it is != “doing product”; I also interviewed him on Ep. 61 of my podcast). As a data leader, you may not share the goal to be “product-led,” but if your goal is adoption of ML and analytics—which is the same goal product teams have—then the advice is sound. What you’re doing long term here is building empathy for people like Marta in accounting, not the faceless “accounting team.” Most people probably don’t care about the accounting team much (sorry, accountants), but when highly technical people such as data scientists and engineers get to see Marta struggling to use their amazing ML model or tool, they start to see how their work is—or is not—having a real impact on others.
Additionally, leaders may already be aware that with the rise of remote work, and as I discussed in my episode with Kyle Winterbottom, the best talent usually wants their work to make a difference—which begins by getting it used. The old idea of “make technical/complex stuff because it’s cool” seems to be falling out of favor. You will know it’s working when your technical staff start pushing back on the requirements and requests of your users and business partners: “How will that help you make a decision about X, though? What decision are you trying to make with that info? Why do you think that feature will help you?” They’ll also begin to know what it’s like to be Marta, what her goals and aspirations are, and where data could help her be successful, and they’ll begin to bring Marta and her team unsolicited ideas. This can help transform the data team into one that is seen as a center of innovation vs. a “cost of doing business.”
3. Change funding models to fund problems, not specific solutions, so that your data product teams are invested in solving real problems.
An example of this might be helping the sales team know where to focus their outbound calling and contact efforts (aka reducing time wasted contacting the wrong prospects). Note that I did not say “build a propensity model and put it in a Salesforce dashboard.” The latter is an output, not an outcome, and it presumes that this particular design solution will lead to the behavior change in the sales team. It’s also a good example of what it means to be “project” and not “product” oriented. There may be multiple ways to solve the problem, from dashboards to models to automated solutions, and how improvement is measured could be quantitative but also qualitative. Funding the problem space also means the sales and data/tech teams are now aligned on the problem to be solved, and not on a specific data output. This may require staff to see their role as bigger than being “hired hands.” While it may say “data engineer” on the business card, if the incentives and goals are clear to this IC, they can begin to think more like the business and end users. Reward teams that can come up with multiple solution vectors, particularly if they can generate a volume of possible solutions, understanding some may be wrong, incomplete, impractical, or impossible. The other thing this model does is force collaboration between the stakeholders/users (like the sales team) and the designers (i.e., the creators of the data product). It’s easy to go off in isolation, build a model, and plot it on a dashboard; if the team’s goal is to own the problem of optimizing sales outreach, it necessitates more cross-functional interaction between teams. (That’s generally a good thing in product design.) I believe I first heard about this idea of changing funding models from Regions Bank CDAO Manav Misra on Ep. 97 of my podcast.
4. Hold teams accountable for writing down and agreeing to the intended benefits and outcomes for both users (see #5) and business stakeholders. Reject projects that only define vague outcomes.
There are many tactics for this, but almost surely, these should be written down and disseminated amongst the team. Some Amazon teams, I’ve been told, use the “write the press release” exercise at the beginning of a new initiative to focus on what the benefits of the new solution would be before any code is written or any UIs are designed. However you do it, the important thing is that the requisite “why conversations” have been had, and that quantifiable results are defined. You might call these OKRs. Whatever you call them, the key idea is that they are measurable, even if qualitatively. While quantifiable numeric metrics are always useful, a success metric could also be that “80% of the sales team, when interviewed 3 months after launch, generally feel like they’re wasting at least 20% less time.” If measuring is hard, the key thing to remember is that most people, especially technical people, conflate measurement with accuracy. They should read Doug Hubbard’s book How to Measure Anything, and listen to my interview with him (Ep. 80 of Experiencing Data).
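If it helps to make this concrete, here is a minimal sketch (the structure and field names are my own, not a prescribed format) of writing an agreed outcome down as a reviewable artifact:

```python
# A minimal sketch of writing an intended outcome down as a reviewable
# artifact; the dataclass and its fields are illustrative, not a standard.
from dataclasses import dataclass

@dataclass
class OutcomeMetric:
    audience: str     # who benefits
    benefit: str      # the change we expect in their world
    measurement: str  # how we'll check it, even if qualitatively
    target: str       # what "good" looks like, agreed up front

sales_outcome = OutcomeMetric(
    audience="outbound sales team",
    benefit="less time wasted contacting poor-fit prospects",
    measurement="interviews with reps, 3 months after launch",
    target="80% of reps report wasting at least 20% less time",
)
print(sales_outcome)  # disseminate amongst the team before any code is written
```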
5. Approach the creation of data products as “user experiences” instead of “things” being built that have various quality attributes. Measure the UX in terms of UX outcomes: “if we did a good job on this data product, how would it improve the users’ lives?”
This idea of “UX outcomes” I first heard from Jared Spool, who I interviewed on Ep. 54 of Experiencing Data. Even if you choose not to measure your UX outcomes, getting the team to think about their data deliverables as “experiences” will help them realize that their work usually sits inside a “job to be done,” and never in isolation. Almost always, the humans in the loop will be doing something before they interact with the data product, and something afterwards. The question is where the UX starts and ends. Understanding what a user is doing or thinking prior to using a data product (particularly a self-service tool) influences design choices. Similarly, for decision support applications and dashboards, it’s really key to understand what the user’s “next stop” is. Not only the stops, but how far away the decision/action is. After all, decision support dashboards that rarely drive actions are essentially vanity metrics and just add to your dashboard desert. An example of this could be an ML model expressed in a website that recommends health care plans to an insurance shopper. The work to be done is not “model development,” but rather something like “reducing the confusion around shopping for insurance, enabling faster decision making on plans with clarity and confidence.” This may necessitate that the data scientists work with the UI/UX designers, engineers, digital product managers/strategists, and others—not just a data team—because the data science work is essentially a “dependency” or capability that sits inside of an overall user experience. Having the data scientist participate in the overall problem/solution space (the end-to-end experience of shopping for plans) also gives the data professional a chance to suggest other avenues to consider, as well as practical considerations like, “that will take a long time to build because we don’t have the pipelines in place, but we could do X instead in a few months.”
6. If the team is tasked with being “innovative,” leaders need to understand the innoficiency problem, shortened iterations, and the importance of generating a volume of ideas (bad and good) before committing to a final direction.
Blair Enns of Win Without Pitching coined the phrase “innoficiency problem,” which is “to be ignorant of the principle and to think an organization, department or individual can increase either innovation or efficiency without decreasing the other. It cannot. ‘Efficient innovation’ is an oxymoron.” Blair comes at this mostly from the standpoint of creative firms being hired to do marketing, design, and other “creative” work, but in my opinion it can be applied to anyone doing “innovation” work—not just design. If it’s not obvious already, the idea here is that innovation by definition requires some waste. The question is what type of “waste” is occurring. For data products, we don’t want to find out at the end of the project or product that the solution was neither successful nor innovative. That is too late, and it leads to technically right, effectively wrong solutions. One of the challenges for enterprise data teams that belong to a cost center is that the appetite for innovation may be even lower than it is for product and marketing teams.

On the topic of shortened iterations, I’m not talking about Agile here. I’m talking about shortening the cycle time of designing, building, and shipping a “small thing” such that customer exposure happens as early as possible, while change is still possible. The goal here should be to maximize learning on a regular basis—not necessarily to ship consistently good iterations of work. The goal is agility, not Agile.
Finally, the other big idea in the innovation space that I think is relevant to the adoption of data products is the concept of “IdeaFlow.” This idea (and a book!) comes from Jeremy Utley of the Stanford d.school (Director of Executive Education), who I interviewed on Ep. 106 of Experiencing Data. The big idea I took away from Jeremy’s concept of IdeaFlow is getting teams to generate a volume of ideas before committing to any particular solution direction—without judging the ideas or requiring ideas to be “good.” By focusing on generating a volume of ideas (quantity, not quality), we open ourselves to new approaches, as well as the opportunity to let a “bad idea” trigger a neighboring idea that may be a “good one.” This is another place where having a cross-functional team can be valuable: users, stakeholders, data pros, designers, engineers, analysts, SMEs, and product people all see the world differently, and each group is likely to be biased by its own worldview. Involving a multidisciplinary team likely facilitates the emergence of a “volume of ideas.”
7. Co-design solutions with [not for!] end users in low, throw-away fidelity, refining success criteria for usability and utility as the solution evolves. Embrace the idea that research/design/build/test is not a linear process and you may revisit each phase “out of order.”
The most important idea here is that users are involved throughout the process of designing the solution, and that you notice the “with” and not “for” in brackets. This means no more throwing solutions over the wall to a build team that disappears and then comes back with a “reveal.” It means committing to smaller increments and iterations of work, and getting feedback earlier. If this sounds relevant only to software product design, keep in mind that this is where you may begin to learn, for example, that “model accuracy isn’t as important as general direction and explainability.” “How accurate does it need to be?” is probably impossible for the average user to answer without context. Design puts the predictive power of a model into a context they can relate to, such that you can then glean useful information from them to know, “is this helping?”
8. Test (validate) solutions with users early, before committing to releasing them, but with a pre-commitment to react to the insights you get back from the test.
Embrace a “test the solution, not the user” mentality whereby apps, dashboards, APIs, and UI/UXs in general have to pass a specific level of usability at an early stage, before they get greenlit for production. Additionally, the testing of your design solutions or prototypes should occur at a stage where you’re still willing to make changes based on what you observe. Between actual cost and sunk cost bias, the longer you wait to test a solution with users, the more resistant you will be to letting go of a solution that isn’t working. This is why project managers and program management shouldn’t be running product development. The goal is not to ship a project on time. The goal is to deliver a benefit/value (ideally, on time) to a customer/user, where the benefit was defined earlier as a success metric the team committed to. A technically right, effectively wrong solution, shipped on time, doesn’t actually serve anyone. It’s not good for you, and it’s not good for them (where “them” is just about everyone else in your org: customers, stakeholders, your team, etc.).
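As an illustration only (the 80% bar and the task framing are hypothetical, not a standard), a usability gate can be as simple as an agreed bar a prototype must clear before production work begins:

```python
# A minimal sketch of a usability "gate": an agreed task-completion bar a
# prototype must clear before it is greenlit. The 80% bar is hypothetical.
def passes_usability_gate(completed: int, attempted: int, bar: float = 0.8) -> bool:
    """True if observed task completion meets the pre-agreed bar."""
    return attempted > 0 and (completed / attempted) >= bar

# Five test users attempted the core task with an early prototype;
# four finished without help, so this solution can proceed to production.
print(passes_usability_gate(completed=4, attempted=5))  # True
```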
9. Visualizing solutions at the end of the bulk of the tech work is a recipe for failure.
This means “building the plots” and doing “data visualization” at the end of a data science project is too late. In general, the later you actually pay attention to UI/UX, the less you can control or change because of sunk costs and technology investments that creators don’t want to toss out. At the end, you’re usually able to change the surface ink only. “Look and feel” matters, but if the underlying data, UX, or insights are wrong, it doesn’t matter whether they were plotted with the right chart type or “polished.” Visuals are an excellent means of communicating “what we meant,” and they can also be low fidelity. While software solutions that have heavier data viz/charting requirements, explainable AI UIs, or built-in complexity may require the help of a professional designer, there is a lot that non-designers can do in the form of sketching and low-fidelity designing to communicate with stakeholders and users. The other benefit of early visuals is that they can actually serve to “surface requirements” in the product. In other words, while a technical team may think they have plotted out all the requirements of building an ML application, for example, what you may find during the early design phase is that the technical plan is incomplete. In order for users to get the intended value of the solution, additional technology, interfaces, or UXs need to be built; otherwise, it’s like building a luxury destination on an island that has no transportation options to get people to it.

Additionally, with ML/AI solutions, I routinely hear how model accuracy is not as important in many use cases as having a UI that is trustworthy, usable, and useful. Users want to know how the AI came up with its predictions and, sometimes, how it works—so they know what its capabilities are and how to use it to their advantage. If you’re routinely seeing low adoption rates, even when you’re aware that your models need to be explainable at the UI stage, it may be time to hire professional UI/UX design resources to help you.
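To make the explainability point a bit more concrete, here is a minimal sketch assuming a tree-based model (the technique and names are my choices for illustration; nothing above prescribes them). Exposing which inputs drive a model is a crude first step toward a UI that answers “why,” not just “what”:

```python
# A minimal sketch, assuming a tree-based model: surface which inputs drive
# the model so a UI can answer "why?", not just "what?". Illustrative only;
# real explainable-AI UIs usually need per-prediction explanations and design help.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Rank inputs by global importance; a UI might show the top three alongside
# each recommendation as a starting point for trust.
ranked = sorted(enumerate(model.feature_importances_), key=lambda t: -t[1])
for idx, weight in ranked[:3]:
    print(f"feature_{idx}: importance {weight:.2f}")
```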
10. Minimize, through the design itself, the perceived change in behavior you are asking of users whenever possible.
Note that the distinction here is not about the size of the data product in terms of technical complexity, and I’m not talking about Agile either. This is about designing a solution that minimizes the “ask” of the user to change their status quo. Even when your solution is “better for them” than their current way, there can be a significant switching cost involved for users to change their behavior. This goes beyond the literal work and can include people’s identities being strongly tied to their work. “Automating” what used to be a job that relied on experience and intuition may feel, and be, disempowering to the “user” of your solution. This is why it’s important not to design for the “business,” because the business is impersonal and faceless. Carl, on the other hand, is a real store manager in Boise, Idaho. Your design may need to ask only something small of Carl. (And as you know by now, involving Carl in the making also maximizes the chance of adoption!) A happy Carl also increases the chance that he’ll proselytize your work, so you don’t need to “convince” his colleagues.
11. Create a customer forum or chat room and leverage their knowledge throughout the design and deployment.
This comes from DPLC member Shane Roberts of Google. In the design world we sometimes talk about these as “design partners”: regular customers we form relationships with such that we have access to users for rapid feedback, particularly when it may be outside of a planned research initiative. Getting members who may appear resistant to change to participate in your chat room may provide invaluable feedback, so don’t just limit participation to “friendlies.” Just be aware of group dynamics (such as a manager and subordinates in the same group) and don’t let this be a replacement for 1x1 research. Of course, interactions don’t have to be limited to chat rooms, either!
12. Say no to bad-fit projects and customers/clients.
Several data product leaders in the DPLC have requirements that must be met before they’ll take on the work. Your terms could mandate, for example, direct access to end users. No routing of questions through management or some middle third party. (Execs: it’s your job to clear the blockers.) Saying “no” to the party requesting your help can be done in a way that is in service to them: “We’d love to help you, but we don’t want to waste your time and money building a solution that is not adopted by your users, and based on experience, this is what is likely to happen. If you can get us the access we need directly to your end users and ____, that would make this possible.”
13. Consider building champions among your investors, board, or executive team for key data product initiatives. Their enthusiasm may trickle down to “resistors” of the new technology or data products you’re trying to get adopted.
This idea comes from DPLC Founding Member Klara Lindner, a service designer at diconium data. Lightly edited for content, Klara shared this story in one of our Slack threads: “There was a moment, where we had our old dashboard and our new dashboard running in parallel, and most on-the-ground staff still used the old one, mainly because they were in firefighter mode and were constantly solving issues that had to be resolved ASAP. When our investors flew to Tanzania and got shown the new version, they got so excited that they wanted to have access to the new dashboard on their phones, even though they weren’t actually the target users. Staff took note of their excitement, gave the new system a chance, and started to invest time to understand how it was actually a better solution.”
14. “Make it usable and available first, before making the math better.”
This idea comes from DPLC Founding Member Marnix van de Stolpe who I also interviewed on Ep. 129 of Experiencing Data. In a Slack thread, Marnix shares the following:
- Make sure it is actually available to use if someone wants to use it (as opposed to creating, e.g., a model built on all the data available, some of which turns out not to be available at the moment you actually need it)
- Make it easy to use so someone will want to use it (as opposed to handing over a table name, or some set of data in, say, a spreadsheet that only makes sense to a data team: debug columns, columns in an illogical or wrong order, etc.)
- Make sure you evaluate the impact continuously so it is clear when your solution is performing better than the status quo (as opposed to building something like a model that is optimized to have a tiny error, but does little or nothing to help users achieve their actual goal)

And then:
- Improve the math, running it side by side [with the previous version] to check the impact, and then switch to the improved version so your impact goes up (all while your users don't notice anything changing because the output is identical)
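That last step is essentially a shadow-mode (champion/challenger) rollout. Here is a minimal sketch of the idea; the models and the error metric are placeholders, not Marnix’s implementation:

```python
# A minimal sketch of the "side by side" step: serve the current model's output
# unchanged while silently logging the improved model's output for comparison.
# The models and error metric are placeholders, not Marnix's implementation.
import numpy as np

def current_model(x: np.ndarray) -> np.ndarray:
    return x * 0.90  # stand-in for the model users rely on today

def improved_model(x: np.ndarray) -> np.ndarray:
    return x * 0.95  # stand-in for the "better math"

def serve(x: np.ndarray, actuals: np.ndarray) -> np.ndarray:
    live = current_model(x)     # users see only this output
    shadow = improved_model(x)  # computed and logged, never shown to users
    print("mean absolute error:",
          {"live": float(np.mean(np.abs(live - actuals))),
           "shadow": float(np.mean(np.abs(shadow - actuals)))})
    return live  # output is unchanged, so users notice nothing yet

serve(np.array([10.0, 20.0]), actuals=np.array([9.5, 19.5]))
```

Only once the shadow side consistently wins on the agreed impact metric does it become the live side.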
15. Leverage tactics from behavioral economics and buyer psychology to encourage behavior change…like marketers do.
Why We Buy is an example of a newsletter [for marketers] that puts out short tips and examples about what makes customers buy, rooted in behavioral economics. Your own solution may need to be marketed as well, even if internally, without any money being transacted. As I stated in my definition of “product,” to be a product, there has to be an exchange of value between two parties. In addition to a literal financial purchase of a data product, another form of this “purchase” is when a user lets go of their old way (i.e., they pay a switching cost) to embrace using a new data product. As such, understanding buyer psychology also applies to internal data products. Psychology is actually a foundational concept in UX design, and one that will help you build better decision support applications whether you’re a design or data professional. Keep in mind that insights from behavioral economics and buyer psychology can be used both in messaging about the data product and in how you write copy inside the user interface (UI). After all, a lot of UI design, outside of the data visualization side of things common to data products, is actually about copywriting!
In closing, I want to share one other big idea: user adoption is actually not the goal at all. At least, it should not be your long-term goal.
The reason is that, over time, increased usage of your data products may actually indicate a “tax” being paid instead of a value being received. The way I like to advise clients to think about this goes something like this:
If you are struggling with low to zero adoption of your services, that’s a pretty good sign that something is definitely wrong. Either you’re solving no problem, the wrong problem, or you understand the problem, but the solution is not useful, usable, or desirable.
Once you have “some adoption,” you’ve got progress. Hooray! Assuming you are tracking “usage” by counting something (e.g., page views or sessions against the solution, i.e., analytics on your analytics!), the next step is to understand: “are the usage numbers we are seeing reflecting users’ goal time…or is it mostly tool time?”
This framing, which I also credit to Jared Spool, refers to the idea that “goal time” is user time spent making progress on the job to be done. “Tool time” generally refers to work that is mostly about manipulating the data, the user interface, customizing the solution, importing/exporting, and other “verbs” that are all friction points. A great example of tool time is a data scientist who is really hired to do modeling, but spends 80% of their time “preparing data to be modeled.” For them, the tool time (tax) is data engineering and preparation. In general, well-designed solutions reduce tool time and increase goal time. So where does that leave us? Effectively, back at the beginning—where I said that ongoing direct access to users is a non-negotiable.
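If you do count things, a minimal sketch of “analytics on your analytics” might look like the following (the event names are hypothetical). Note how little the raw counts say on their own:

```python
# A minimal sketch of "analytics on your analytics": count usage events to see
# whether the product is touched at all. Event names are hypothetical, and
# counts alone cannot separate goal time from tool time.
from collections import Counter

events = [
    {"user": "marta", "event": "dashboard_view"},
    {"user": "marta", "event": "export_csv"},     # likely tool time
    {"user": "marta", "event": "filter_change"},  # likely tool time
    {"user": "carl",  "event": "dashboard_view"},
]

views = Counter(e["user"] for e in events if e["event"] == "dashboard_view")
print(views)  # says "used"; only qualitative research says "valued"
```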
Particularly when revenue or cost changes cannot be directly tied back to the data product, we need to observe users using the working data product to see if the time being spent is “mostly goal time” or “mostly tool time.” This can only be done qualitatively, because analytics on your analytics isn’t going to tell you “why did users on average spend 8 hours a week using the new decision support dashboard?” Analytics is also not going to tell you “is 8 hours of use perceived as good by users? A tax? Somewhere between? Could we reduce the actual or perceived time by making it easier to use?”
In short, we have to continuously understand the “whys” behind the adoption metrics to validate that users and customers are getting the value we intended. Of course, data product managers should also be tracking direct business impact from the data product using the metrics defined earlier. However, there are many times when the business value of decision support tools is hard to measure accurately. Similarly, it’s possible that a solution might be performing well against the stated business metrics while, behind the scenes, a small fire is burning: the people using it might hate the solution, or find it occupying far more of their time than they expected. If your “successful” solution is actually putting a giant tax on the end users, that’s potentially a risk in the long term, or you’ve just exchanged one problem for another.
So what’s next?
Get started. Take imperfect action. Embrace UX design as an intentional act when building data products (i.e., no more “byproduct” design!). Realize that “we might have to design some bad data products before we design some good ones.”
Finally, remember that the best time to start using these strategies was probably yesterday, but the next best time to start is now!