089 – Reader Questions Answered about Dashboard UX Design 

Experiencing Data with Brian T. O'Neill

Dashboards are at the forefront of today’s episode, and so I will be responding to some questions from readers who wrote in to one of my weekly mailing list missives about this topic. I’ve not talked much about dashboards despite their frequent appearance in data product UIs, and in this episode, I’ll explain why. Here are some of the key points and the original questions asked in this episode:

  • My introduction to dashboards (00:00)
  • Some overall thoughts on dashboards (02:50)
  • What the risk is to the user if the insights are wrong or misinterpreted (04:56)
  • Your data outputs create an experience, whether intentional or not (07:13)
  • John asks:
    How do we figure out exactly what the jobs are that the dashboard user is trying to do? Are they building next year's budget or looking for broken widgets?  What does this user value today? Is a low resource utilization percentage something to be celebrated or avoided for this dashboard user today?  (13:05)
  • Value is not intrinsically in the dashboard (18:47)
  • Mareike asks:
    How do we provide Information in a way that people are able to act upon the presented Information?  How do we translate the presented Information into action? What can we learn about user expectation management when designing dashboard/analytics solutions? (22:00)
  • The change towards predictive and prescriptive analytics (24:30)
  • The upfront work that needs to get done before the technology is in front of the user (30:20)
  • James asks:
    How can we get people to focus less on the assumption-laden and often restrictive term "dashboard", and instead worry about designing solutions focused on outcomes for particular personas and workflows that happen to have some or all of the typical ingredients associated with the catch-all term "dashboards?” (33:30)
  • Stop measuring the creation of outputs and focus on the user workflows and the jobs to be done (37:00)
  • The data product manager shouldn’t just be focused on deliverables (42:28)

Quotes from Today’s Episode

  • “The term ‘dashboards’ is almost meaningless today; it seems to mean almost any home default screen in a data product. It also can just mean a report. For others, it means an entire monitoring tool; for some, it means the summary of a bunch of data that lives in some other reports. The terms are all over the place.” - Brian (@rhythmspice) (01:36)
  • “The big idea here that I really want leaders to be thinking about here is you need to get your teams focused on workflows—sometimes called jobs to be done—and the downstream decisions that users want to make with machine-learning or analytical insights. ” - Brian (@rhythmspice) (06:12)
  • “This idea of human-centered design and user experience is really about trying to fit the technology into their world, from their perspective as opposed to building something in isolation where we then try to get them to adopt our thing.  This may be out of phase with the way people like to do their work and may lead to a much higher barrier to adoption.” - Brian (@rhythmspice) (14:30)
  • “Leaders who want their data science and analytics efforts to show value really need to understand that value is not intrinsically in the dashboard or the model or the engineering or the analysis.” - Brian (@rhythmspice) (18:45)
  • “There's a whole bunch of plumbing that needs to be done, and it’s really difficult. The tool that we end up generating in those situations tends to be a tool that’s modeled around the data and not modeled around [the customer’s] mental model of this space, the customer purchase space, the marketing spend space, the sales conversion, or propensity-to-buy space.” - Brian (@rhythmspice) (27:48)
  • “Data product managers should be these problem owners, if there has to be a single entity for this. When we’re talking about different initiatives in the enterprise or for a commercial software company, it really sits at this product management function.” - Brian (@rhythmspice) (34:42)
  • “It’s really important that [data product managers] are not just focused on deliverables; they need to really be the ones that summarize the problem space for the entire team, and help define a strategy with the entire team that clarifies the direction the team is going in. They are not a project manager; they are someone responsible for delivering value.” - Brian (@rhythmspice) (42:23)

Transcript

Brian: Hi, this is Brian T. O’Neill. Welcome back to Experiencing Data. Today, I wanted to chat with you solo again. This time, we’re going to jump into the meaty topic of dashboards.

I haven’t talked too much about dashboards alone because to me, dashboards are just one design medium for expressing a user experience to a user. It’s kind of like just talking about text messaging or just talking about email notifications or something like that. However, they appear all the time in analytics work, obviously, and so I had thrown out an email to my mailing list, the Insights mailing list—where I occasionally ask questions of you, some of whom are readers and listeners, I assume—about whether or not I and we need to talk more about dashboards, and it seems like several of you thought we did. And I’d asked what some of your challenges were, so I want to answer some of those reader questions today, in this episode, and also just give some of my overall thoughts about how I think about dashboards in the context of data product design in general, and user experience.

So first, I think the term ‘dashboards’ is almost meaningless today; it seems to mean almost any home default screen in a data product. It also can just mean a report. For others, it means an entire monitoring tool; for some, it means the summary of a bunch of data that lives in some other reports. The terms are all over the place. I recently talked with a startup founder in the health insurance space—they’re building a SaaS—and he said, you know, “I really need to get—our dashboard needs to be redesigned.”

And so, you know, he sent me a video demo to take a look at this tool, and it was an entire application. But the words he used to describe this were, “Our dashboard.” But in reality, they required an entire application to deliver the value to the customer, which is not necessarily a bad thing. So again, my point there is that these terms could mean really different things. But I’m going to kind of glide past that now, and give you some of my overall thoughts.

By the way, if you ever want to respond, or—[first of all 00:02:36]—get some of the insights that I like to share out of my mind and into my writing, feel free to join that list over at designingforanalytics.com/list; that’s where the questions you’ll hear today were sourced from.

So, before we jump into those questions, here are some overall thoughts. First of all, I’d like you to reframe the dashboard definition in terms of users of dashboards, right? So, instead of talking about the dashboard as the entity—the only thing we need to get right—there are better questions to ask, such as, “What does the user”—and I’m going to use the word ‘I’ here—“What do I need to pay attention to right now? How easily is that information being pushed to me? What surprise insights might be useful to me now, even if I’m not looking for them necessarily when I log into this tool?”

And I will say for the context of this entire episode, I’m talking about dashboards as in software that is delivered to a user on a screen, not a PDF, a paper report, a static document; we’re talking about some type of interactive tool or product or application, so just wanted to put that tangent there. Other questions, “Is this the right information for me at this time in my workflow? Does this information help me understand if more of my time is even warranted here or not?” In other words, “Is there anything to see here or should I go home?” We want to be answering that question as well.

“Who is this dashboard designed for and who is it not designed for?” Does the team understand that? Is it clear the persona that we’re targeting with this work? And it doesn’t mean only one person can use it, but the general rule of thumb that I have here is that you really need to design it for somebody because if you try to make it for everybody, you’re probably going to average a B- or a C most of the time, and a lot of that stuff just ends up not getting used. So, I’d rather you build something that’s really kick-ass for a very specific persona, and then if it happens to help some other people or they have to struggle with it, that’s okay. But we really want to target somebody so that the work actually does create some value.

And then another question here is, “What is the risk to the user if the insights are wrong or they’re misinterpreted?” And part of this has to do with fighting the status quo. So, if they weren’t to use this tool at all, what’s the old way that this user is going to make decisions? If it’s not with data, or if it’s with some other method of accessing data, or maybe just a subset of that data because they can’t possibly analyze on their own the work that the tool would do for them, we need to understand what that status quo is here and what we’re ‘fighting’ against. And ‘fighting’ is kind of in quotes here.

But the user may see a risk here. It could just be on their time, it could be on their reputation, it could be fear of their management. There’s all kinds of different risks here, but we want to understand what some of those are so that we can make the user feel empowered when they’re using this tool. Because if they feel good about it and if they feel like you’re on their side—you being the data product team that’s building the dashboard or the application or whatever it is—if they feel like you’re an advocate for their work, they’re more likely to trust and use that and also maybe spread the word to others if you’re trying to change the status quo about how things are done, particularly around decision-making. So, let’s worry less about what’s called a dashboard and what isn’t.

So, the big idea here that I really want leaders to be thinking about is that you need to get your teams focused on workflows—sometimes called jobs to be done—and the downstream decisions that users want to make with machine-learning or analytical insights. This is what matters more than the dashboard; the dashboard is just a stepping stone on the way to some decision that’s probably not actually made within the tool. In some tools, you do make decisions, right—the insights are there, and the next action you take can be within the same application or whatever. It doesn’t really matter; it’s just important to note that the decision may be made downstream and, quote, outside of your area of control, but we need to be aware of that so that we’re connecting the data product work to the point at which value is created or not, which usually is some type of decision that is made outside of the context of your thing. We want to inform those decisions.

So, another idea here to think about: know that your data outputs create an experience, whether that is an intentional experience or not. So, outputs can be things we all know: PDF reports, push notifications, email notifications, a single dashboard screen, little bits of insight or blurbs that sit in another application such as a CRM or a sales tool or whatever it may be, or it could be an entire application with multiple templates and screens. And now I’m talking to you out there who build, you know, Software as a Service applications and commercial data products, right? All of this stuff, though—these are just mediums for expressing the design. So, the choice of these mediums and the design of them, and the interactions that I have with the mediums and between the mediums, this determines the overall experience that someone is going to have.

And the user experience, the UX, is what may have a lot to do with whether anybody can and will use the insights to make informed decisions. So, as a leader, I want you to realize that it’s really hard to have business value if there’s no use. So, you need to design data products with usability and utility in mind—and this includes dashboards and all those other stepping stones we talked about, whatever the right medium is—if you ever hope to create business value. You can’t jump to business value if nobody cares, nobody likes it, nobody wants it, or nobody can use it because it’s too hard. This stuff matters, assuming we’re not building totally automated systems and that you have humans in the loop that matter, that are empowered to make their own decisions regardless of the data, et cetera, et cetera.

So, no use equals no business value, therefore you need to get the usage part down right, first. Another concept here that’s really important with the dashboards—and again, this includes these other mediums we talked about—we don’t really directly design the user experience. We design these artifacts, these mediums, and a UX emerges from those, and hopefully, it’s the one that we intended. We can have an intention about what it should be, but that’s not always the one that emerges. And that’s why part of design is about testing this experience with users.

Generally speaking, early is better and lo-fi is better so that we don’t over-commit to the wrong thing too early. We don’t want to build a bunch of plumbing, data engineering, all kinds of back-end stuff, models, et cetera, without really knowing what users are going to be willing to use and find valuable later. So, we can try to influence the user experience, but just because we designed, quote, “An intentional UX,” doesn’t mean that’s the UX that’s actually being experienced. So, we need to measure that, we need to look at it, and we need to do that before we ship it and it’s, quote, “Too late.” The worst thing you can do is measure it at the end, find out the insights, but there’s no interest or willingness by anyone—or a budget or time—to make any of the changes.

That is a waste of time, so if you’re going to test this experience, you also probably want to know, what are we testing, and are we willing to make changes based on the findings there? Because if we’re not, you’re just wasting time, right? And it’s saying, “Well, we shipped it, so we’re on to the next thing.” Fine. That’s not really a product way of approaching things.

That’s not a customer-centered way of approaching things; that’s about shipping outputs and assuming that the score customers are keeping is based on how many outputs you shipped and when you shipped them, and that’s it—it doesn’t matter if anyone gets any value from them. That’s not the game I like to play, and I don’t think it’s the game most of you that are listening to the show are playing either. So anyhow, moving forward.

Last kind of overall thought here before we jump into questions is that dashboard doesn’t mean data vis. So text, words, sentences, phrases at the right time and place may be all you need to deliver really valuable insights. So, a very simple example of this is, you know, if you have a monitoring application that’s looking at utilization, or it’s watching for errors—think of, like, an IoT system that’s collecting data from all these different objects—tell me when there’s something of importance that I need to go look at, because I assume the status quo is that systems are running operationally normal and there’s nothing; I don’t need to just go in there and look at it unless I want some extra comfort. So, the point is a simple message that something needs your attention may be a great user experience, even if it doesn’t begin with a chart or some type of data plot that’s showing some evidence. And the worst thing you can do is begin with the evidence and make them come up with the conclusion from it.

And so the visual design and the data vis, all that stuff really does matter when we’re talking about presenting evidence, but evidence is not conclusions. I already have a whole episode on my CED Framework, which stands for Conclusions, Evidence, Data. This is the UX framework for designing analytics applications; I think it’s episode 86. If you go to my website, designingforanalytics.com/ced, you can read about it there, and there’s a link to the podcast episode as well.

So, I went into detail on that; the data vis does matter, but we need to realize that data vis and dashboards are not synonymous, even if a bunch of the BI tools and software applications you use make it seem like they are. I look at the plots and the evidence as just part of the toolbox of different mediums that we might use to help the user get their work done and provide insights. Okay? So, let’s jump into some of the questions here.

So, John wrote in, and he said—and I think the question I had asked the audience was, you know, what are your challenges right now with dashboards? What’s urgent or what’s really, you know, difficult for your team? That’s kind of the context here. And so John writes in and says, “What exactly is the job the dashboard user is trying to do? Are they building next year’s budget or are they looking for broken widgets?” And then he actually had a second question, he said, “What does this user value today? Example, is a low resource utilization percentage something to be celebrated, or something to be avoided for this dashboard user today?”

And so I think he’s saying that figuring out the answers to these questions is difficult. My response to this is that, first of all, research—and specifically user experience research—is key here. And this applies whether you’re building a commercial software product or whether you’re an internal enterprise data team that builds internal applications, decision support tools, things like this: you need to be doing research, and it needs to be an ongoing thing that you don’t just do project to project. I’d say project to project is better than doing nothing at all, but what we’re trying to do here is get into the heads of our users and our customers—or, quote, “Internal customers,” if you’re on the enterprise side—because we can’t begin to anticipate their needs if we don’t really understand what it’s like to be them. And so this idea of human-centered design and user experience is really about trying to fit the technology into their world from their perspective, as opposed to building something in isolation and then trying to get them to adopt our thing, which may be out of phase with the way people like to do their work today.

This is a much higher barrier to adoption. It’s much easier to try to fit it into the way they like to work now. So, the only way to reliably do that that I know of is to be spending time with these users, to be listening, to be observing them, and to get into their minds, to develop the empathy there so we know what it’s like to be them and we know what it means to provide value to them, because we have this empathy that’s been created. So, the other idea I would say here is that we shouldn’t really be at the building and implementing stages—building pipelines, Tableau, application UI elements, all this kind of stuff—if we haven’t done the work yet to understand the problem space enough. This doesn’t mean we need to go back to waterfall and that no code and no technical work can begin until we’ve done all this upfront design work.

That’s not what I’m saying. But we do need to have some idea of the problem space, because it’s real easy to convince ourselves that if we just adopt Snowflake, or we just build out this pipeline, or if we only had real-time data instead of static data—that’s what they said they wanted, so let’s just give it to them and that will solve everything—there’s a lot of risk associated with those kinds of choices. And when they’re hundreds-of-thousands or millions-of-dollars types of choices, it’s really hard to re-steer the ship. And usually what happens in big companies is that the game everyone’s playing now is, are we shipping the new thing on time and on budget? That’s what’s now defining success for the project: are we shipping code every two weeks?

Because apparently shipping code means we did some good work. Not really, but that becomes the game that the company is playing. So, if you’re a leader, you want to reframe that discussion around the problem space and what value is and make sure that the team is playing the right game, which is centered around the customers and the users. So sometimes, we need to design a little bit to define the problem space better because even just talking to users and stuff, things change when users get to see stuff and play with it. We want to design low-fidelity work when possible and increase the frequency of these feedback cycles to help de-risk and clarify what people actually need.

Because sometimes when they start seeing artifacts—and I’m sure many of you experience this—you feel like you get all this wonderful insight, but it’s coming nine months too late. If you only knew that nine months earlier, you could have done something about it. So, we want to avoid over-committing to the wrong things upfront. So, this also can be really bad for team morale, and this is what makes valuable employees sometimes just put their head in the sand because they’re tired of working on stuff that doesn’t ever ship or doesn’t get used because we’re always finding out too late what people really wanted. You don’t have to do it that way.

It’s probably going to feel slower to do all this kind of research and design stuff upfront. And the reality is when you’re in the middle of doing that work, sure, it might feel slower because you don’t see the engineering and modeling gears cranking and you don’t see stuff spitting out soon that’s showing proof of concepts and all of this, but if you step back or step up to the thousand or ten-thousand-foot level, you’re de-risking all of that work and you’re raising the chances that all that great engineering and technical data science work and analytics work that your team is probably really good at is actually going to matter. And the more that stuff matters and gets used, the more people want to stay there and stay on your team and keep working with you because they can start to feel the impact that they’re having. So anyhow, it’s really important to try to accelerate the learning and accelerate the feedback loops as much as possible, working low fidelity, getting stuff in front of users, getting feedback early and often.

So, the last thing here, leaders who want their data science and analytics efforts to show value really need to understand that value is not intrinsically in the dashboard or the model or the engineering or the analysis. This is the labor theory of value and all this kind of stuff, that because we spent a bunch of time on it and it was really hard, or because it’s AI or it’s a machine learning model, that therefore there’s some inherent value that’s attached to that thing. Value is totally subjective in the eye of the user and the stakeholder. So, we need to be doing the research to understand how do they define value. And there’s usually some type of success metrics that go with that, but they’re not on the surface a lot of the time.

A lot of times we need to extract those things, and research helps us do that. And if we can extract those things and develop a shared definition of success that the makers—the team working on the data products—and the stakeholders agree to, it’s a lot easier to play the same game together and to score that game together and to say, “We got a B- this time, but we know what an A looks like because we’ve agreed on the scoring system here and we know the direction that we’re going.” So, we really need to properly diagnose what value means—the subjective thing—in the eyes of the people that matter. And that should be the customer first and foremost, not the stakeholder. I’m kind of talking to the enterprise world here.

We do talk about internal customers, and this is a whole ’nother discussion about whether or not we should even be using the word ‘internal customer.’ The point here, though, is that the user of the system is the closest thing to determining whether value exists, not the stakeholder. The stakeholder may have some wants, but playing this scenario out such that we’ve aligned what the users need and want, the customer’s voice if they’re not the user of the system, and the stakeholders’ needs—this can be tough. I’m not saying this stuff is really easy, but we need to get the alignment there, where everybody can agree on how we’re measuring success, and it’s not just “give me what I asked for,” because that’s often not what they need.

And I’m sure many of you have seen that they asked for this thing, and then you gave it to them, and then they didn’t use it. The diagnostic part was not done properly because there was no research done. This is better for everybody in the long run. It can be hard to do this stuff and I’m not going to l—[laugh] I’m not going to lie about it. It’s a lot easier to sit in your corner of the world and just respond to tickets and requests to build dashboards and outputs.

Some of your team may really like doing that work, too. I want to talk to the people out there who want to have impact in their work, who want to make sure the work—these dashboards, these applications—gets used, who want to feel like you’re creating value and you’re helping people move from shooting from the hip and the gut to actually having some real evidence to inform decision-making. So, you’ve got to go do that work. It’s hard, but it will make a difference, and they will trust you more. And the wheel continues to turn. The snowball builds, and you get that momentum going for the team, which can be really valuable.

So, the next question here comes from [Mareike 00:22:01]. And she says, “I think a big challenge with dashboard design is to provide information in a way that people are able to act upon the presented information. Many times the ‘so what?’ seems to remain. Oftentimes, several people will work with the dashboard, but they often do not know how to translate the presented information into action. It is often expected that the dashboard should prescribe a particular action in contrast to just informing and improving their own decision-making. So therefore, it would be interesting to learn more about user expectation management when designing dashboard and analytics solutions, or distinctions of designing exploratory versus predictive analytics solutions. It seems users want prescriptions rather than an exploratory tool.”

Okay. So Mareike, thank you for the question. Again, here, I would totally agree with you. I think that a lot of analytics tools are taxes on the user because they require a level of effort to go in and figure out, “A, is there anything here interesting at all, and B, can I actually use that information to make a better decision?” So, my general feeling is that while there are different technologies here, descriptive analytics versus more predictive and prescriptive analytics that are informing next best actions, et cetera, there are ways to design experiences that allow us to work with traditional descriptive analytics that still can inform decision-making.

This gets back to the research stuff I was talking about earlier. We need to understand how they make decisions today. What data, if any, goes into those decisions now? If we’re introducing new data, then I would want to be getting those discussions and these questions going with them early to understand what their concerns are, what their belief systems are, and what the risks are associated with using new data they perhaps haven’t thought of as being relevant to their work. I’d want to get them involved early.

And again, this can be through showing them—either through participatory design activities or by showing them low-fidelity mock-ups and ideas about how you think your team can solve the problem for the user—but we’re not waiting until we get a final, polished solution in front of them to figure out what it is that they’re actually going to be willing to use. Now, I do need to say that things are changing more towards the predictive and prescriptive. We all know that machine learning is here to stay. We have all the ingredients to do this type of work these days. I mean, there’s still plenty of difficulties involved in that, but the general track here is that we want to build out better data products that have the insights built right into them, where people are self-enabled to perhaps override a machine-driven piece of intelligence, but they are informed by it, they can look into where it came from, and we’re using models that are interpretable, where we can understand why the model is telling us what to do.

The point here, though, is that even if you use a predictive or prescriptive solution, in some cases the user is still going to want to do some of the work to understand how the machine came up with this recommendation. And that kind of human algorithm—the workflow that they would go through to do that—might be really similar to the type of work they would do if they were doing it the old-fashioned way, just looking at historical data and then having to come to their own conclusions about a future decision. So, they’re not necessarily totally different, and the practice of going out and [figuring 00:25:43] what those things are can be done through user experience research, through designing in low fidelity, through getting those feedback loops increased and more frequently occurring, and making sure that we’re reacting to that data and we’re not just doubling down on yesterday’s technology decision. The more we have empathy for them, the more we understand the constraints that they have, the fears that they may have, what it means to improve their life in a small way by improving the decisions that they make all day every day—or maybe it’s once a quarter or once a year, I don’t know what it is—the more we’re connected to that, the more we can provide a solution that matters. And that might mean a solution that’s good enough using, quote, old-fashioned analytics or BI technology—just historical trend data that’s nicely correlated, that helps them compare the evidence they care about this time right now, to have confidence, all those kinds of things. The more we can do to increase the time that we spend with customers—and again, I’m using the word customer here, but the end user is really what I’m talking about—the more we’re doing that, the higher your chances are that you’re going to build something that works.

The exploratory tool feels really good to the technology team because it feels like, “I won’t have to answer more questions like this if I just give them a tool to do it themselves.” Totally true, totally fair point in some cases. The issue is that a lot of times, the tools that we provide to users are built really fast and they’re usually built around what was the easiest thing to push out quickly. And so, you know, the way customers think about that, you’ve seen these jokes before, it’s like, “Well, just show me all the customers who bought this thing, and then followed up a month later and bought this other thing.” And it’s like these users or stakeholders think it’s like, select all records from table one, join on table two, and you’re done. And in reality, this data is, like, all over the place, there’s a whole bunch of plumbing that needs to be done, and it’s really difficult, right?

So, the tool that we end up generating in those situations tends to be a tool that’s modeled around the data and not modeled around their mental model of this space—the customer purchase space, the marketing spend space, the sales conversion or propensity-to-buy space, like, who should I call next on my sales list or whatever. Data models and mental models are not the same thing, and it’s usually a bad sign, when I’m auditing a tool or something like this, to see the information architecture for a tool directly modeled off the object model of the database or the data store or whatever it may be. That’s usually a sign that something is wrong. Not always, but oftentimes, it is a sign that something is wrong.

So, if you’re going to give people an exploratory tool, you still need to do the homework of understanding what they want to explore because really, they don’t want to explore anything. Unless you’re building a tool for analysts, the exploration part is actually probably a tax—if it’s not their job to sit around and explore data all day, it’s probably more of a tax than a benefit. The reward is at the end of the rainbow, so we need to be informed about what the rewards look like. What does an insight look like? Is it clear when they get to an insight with a tool like this? And then how do we design an exploratory tool that helps people get to these different pots of gold that sit at these different rainbows, where the rainbows are each different types of workflows and activities that they may want to perform using your tool?

Instead, a lot of times I just find it’s like, here’s a tool. You can change out the different chart plots. Maybe you could change the period or whatever. And yeah, some of that might get you going a little bit and you might show some short-term value. The problem is when we keep iterating on top of a proof of concept or something that was easy to build based on the way the data was stored and how our BI tool can access it, or our shiny app, or whatever the heck it is.

This is where we get into trouble, because the next thing you know, we’ve built this giant application over all this time, and we’ve learned a ton over the last year as we’ve watched people try to use it, but now it’s really expensive and hard to go and change it, and no one really wants to swallow the pain of that. So, again, I can’t hammer this home enough: there’s upfront work that needs to be done well before there’s technology sitting in front of the user, and if you can get the technology implementation work going a little slower, or perhaps a little bit further behind the design and exploration work, you’re going to improve the value and the quality of the implementation work. You’re not going to have to rebuild stuff as much because you’re going to be catching the flaws in the experience and the issues with the exploration tool early enough that you can make a difference, and you don’t end up making decisions now that are going to have a big price to pay later on. Here’s a simple example: everyone’s talking about, “I need to see sales data from the last quarter,” or something like this, and then you find out nine months later that the only reason they need to see that right now is that they want to be able to compare it to the previous quarter so that they can do a readout to some executive. And the way they’re tracking that is, you know, Q4 last year to Q4 this year.

But instead, your data store only samples the data at this frequency for the current year, and for every year past that it’s a different frequency, so you can no longer actually do this comparison that they wanted. All because what we heard on the surface was, “I need to see last quarter’s sales. Can you show that to me?” But what they were really trying to say is, “I need to show the boss whether or not sales were up against the thing that they care about, which was the previous quarter.” Now, we’re finding out too late; our whole architecture is based on, you know, this year’s data or streaming data for this year, but it doesn’t—you know, we’re not paying the license to get streaming data or, you know, whatever, to access all the historical data that we need.

Again, I’m just riffing and trying to give you an example here. And it may be a bad one; I’m not an engineer. I don’t get paid for the [laugh] implementation part. But I have seen this problem before where the early, large architectural decisions are made that have a definite impact on the user experience that we’re able to enable, and it’s because the front end was seen as this kind of low-hanging fruit. It’s like, “We need to have this giant architecture in place and dashboards are just these little things that sit there like faucets at the end of the plumbing system, and those don’t really matter. You can do whatever you want once we get the architecture right.”

Uh-uh-uh-uh. To me, it’s like, you need to have some idea of what you want to come out of those faucets in order to inform what the plumbing should be. It’s a last-mile first type of approach. It’s a different approach. So Mareike, I hope that helps you on the designing exploratory tools versus predictive tools.

I’m not pooh-poohing all exploratory tools; I think they can be really useful, especially if you understand classes of problems. It’s really hard to understand classes of problems, though, in someone else’s domain that you don’t live and work in all day long, unless you’re going out and interfacing with the customers and the users to understand it. Okay? So, that’s kind of my riff on that.

And then, lastly here, James writes in. Just for context, I think James is a designer, but he says, my number one challenge is to get people—I assume his team—to focus less on the assumption-laden and often restrictive term ‘dashboard,’ and instead worry about designing solutions born from fresh user and customer research, focused on outcomes for particular personas and workflows that happen to have some or all of the typical ingredients associated with the catch-all term “dashboards.” First of all, I like the question, James. It sounds like we share some ideas here. I like that your team seems to be trying to focus on outcomes over outputs, but in reality, it sounds like what’s going on is that the team is focused on outputs, not outcomes.

So again, this gets—what do I mean there? I’m talking about completing the sprint on time, project management success criteria, shipping code, or counting the number of check-ins that we had, measuring project management, engineering velocity—these are all progress metrics that might give us a rough idea of whether we’re sort of on the right track, but they’re not really success metrics, right? So, I guess my question for you would be, who owns the problem? And again, I think data product managers kind of should be these problem owners if there has to be a single entity for this. When we’re talking about different initiatives in the enterprise or for a commercial software company, it really sits at this product management function.

But really, it’s not just that one person. I think part of the role there is that the product owner—this product manager—needs to get the team to feel a sense of ownership over the problem space, not just a single person. So, one thought here is, how well is the team shining a light on the enemy? And are we all in agreement on how we will measure that success? So, what do I mean when I say ‘enemy?’

I’m not talking about a person; I’m talking about non-usage of the tool. I’m talking about bad decisions based on bad data. I’m talking about tool time—it takes two hours to do something that we think could be done in two minutes if the experience was done properly. So, we need to visualize the struggle, or this enemy, wherever we can.

What do I mean by that? I mean that sometimes seeing one user frustrated with a dashboard or tool can change hearts and minds much more than log stats—looking at how many people logged into the tool and did something, or whatever. So, we want to shine a light on what the enemy is there, and it’s probably something to do with a poor design choice, a poor user experience choice. It could be that the data in the system is bad, or it’s incomplete or has errors, or whatever—I don’t know what it is. But we need to shine a light on that in order for us to be able to see what a better future looks like.

So, do we—or does your team—have a shared vision of this enemy? And does it have a clear strategy for going forward with this particular initiative, or product, or whatever the solution is that you’re working on? A good way of knowing whether you have that or not is to check whether all the team talks about is the list of outputs and use cases they need to design for. If I went around and asked the individual contributors on the team, “How will you know if you did a good job? Besides shipping the features at the right time, how will you measure whether you did a good job in this particular initiative?”—if they can’t answer that, it suggests they don’t understand what the strategy is, and a lot of times that’s kind of opaque, to be honest.

So, the number one tip I guess I would have for people in this data product management function, whether you have that title or not, is to stop measuring the creation of outputs and instead focus on the user workflows and the jobs to be done. Make sure the enemy is clearly visible there. Make sure the success metrics are really clear so that the team knows how do you keep score in this game? Are we counting touchdowns? Or are we counting home runs?

Like, we want to make sure that we all understand what does it mean to do a good job in this space. And that definition of good job is something in the eyes of the customer or the user of these things. And we don’t want to wait until we ship to find out what that is.

So, how do you do that? I just did a LinkedIn Live—as of this recording, it’s April 2022, and I think it was last week—a session about conducting usability studies, which are a way to objectively measure the design of something and the user experience of something.

The whole team can then have a shared definition of how we know if this design is working, especially early enough that we can do something about it. And if you like data, this is a great way to get some data to quantify something that might seem very qualitative or subjective—design can have very actionable, quantitative definitions of success. We can measure it. So, if you’re interested in learning how to measure it, feel free to go check out that LinkedIn Live where I go in-depth on that particular module of my training program.

But anyhow, usability testing is one way to do this, and part of the reason I mention that to you—and again, this is in the LinkedIn Live—is that the act of coming up with a usability test requires that we have a script and a plan for what we want to test. We don’t just sit somebody down with the tool and let them cruise around; that’s a different type of research. In a usability study, we’re giving them tasks to perform with pass-fail criteria. And in order to come up with tasks that are worth testing, we need to know something about the users, which probably came through research, right?

So, the benefit that comes here is not just that we have a script to go run a test with; it’s that we’re envisioning what success looks like before we even have that design done and ready to test. We’re defining what success means through the different tasks that we’re going to give users. Can you or can you not decide how much you should spend on marketing next quarter in your Google AdWords campaign, or whatever the business question is that we want to answer? There can be a pass-fail for that; there can be a way to objectively measure that in the design, early enough that we can begin to make changes to our data product if it’s not going to do a good job of that. So, to me, if the team has alignment on the success metrics here, it’s easier to get aligned on outcomes.

Otherwise, it’s just, you get back into that thing where everyone is responsible for their little slice of the pie. “I’m a data scientist; my job is to do the modeling. I don’t know how that model gets used in production. Not my role,” et cetera, et cetera. Good luck with that approach.

I think that’s a high-risk approach if people are checked out like that. I think when people understand how their slice of the universe fits into the big picture, the work is probably more rewarding for the makers, and it’s also better for the customers and the users. So, one final comment on this thing is that designers and researchers can really help inform this problem space—what to test, how to test it, what the metrics should be—because researchers know how to talk to users and, more importantly, they know how to listen and interpret the words, the feedback, the emotions, the stuff coming out of their mouths. You don’t need to have professional user experience people—designers, researchers, et cetera—on your team to get started doing this work. The benefit of professionals is, of course, that the payoff is sooner, you do it faster, you do it right, and the ideas tend to spread if you have the right UX professionals on your team, because they know that design is a team sport and part of the work is proselytizing this approach to data product design and development—getting the rest of the team on board with it. It’s getting the data scientists out there participating in the research, the analysts as well, the engineers as well, because the more we can collectively get everybody focused on the users, the better it is for the company, for the team, for the value that we create for the users, and for the impact that we want to have. Okay?

So, that said, again, you don’t have to hire professionals; you can learn how to do this. There are steps that an interested team that’s, quote, “design curious” or “UX curious” can take to get going. You don’t need to have a degree to get going, you don’t have to be great at it, you just need to have a willingness to do the work, to be curious. And so anyhow, there’s plenty of assets on my website if you want to learn how to do that. I do have training, et cetera—you can check out my self-guided course or the seminar if you really want a prescriptive, you know, step-by-step guide to doing that type of work. But get going; don’t worry about doing it right. Just get started with building a habit there, and you can always get better at doing it. Okay?

I also wanted to say a couple things about James’ question here, talking about this product manager or data product manager’s role. It’s really important that this person is not just focused on deliverables; they need to really be the one who summarizes the problem space for the entire team and helps define a strategy with the entire team that really clarifies the direction the team is going in. They are not a project manager; they are someone responsible for delivering value. And again, this person may have a different title at your company. I’m using this word because product management, in the software world, is a very defined and mature role.

I think that role is critical in a software company, and it also would be critical in many enterprise data teams that I’ve seen—data science and analytics teams. It’s usually missing. Sometimes it’s called an analytics translator; I’ve seen all kinds of other terms used for this, but the behaviors and the responsibilities of this person are what I care about, whatever you want to call the role. Someone needs to own this stuff. If you want to routinely produce value with your data products, somebody needs to manage that product.

And I’m not talking about managing the people either; I’m talking about the data product, the solution space—whether it’s a platform or an application or a tool, whatever that thing is, they need to be creating that shared vision. That’s, I guess, the best way I can say it. Part of the way you do that is making sure you have the right blend of minds at the table from the outset. And so for me, the small inner-ring team that I talk about in my training is, usually, someone responsible for the design and the experience part that the user is going to see in the last mile; that product person who understands the business value we’re trying to create, the overall constraints on the entire initiative, and what else is going on in the organization that may be related to this project; and then you also need a technical lead who knows what’s possible, what shape the data’s in, how much plumbing work may be required, what dependencies we have, all those kinds of things.

Sometimes there might be someone from engineering on that team and someone with a more data-specific role—a data scientist or something like that—but it’s at least probably three bodies. It may not be one body for each role, but I like to have those lenses on, or those different hats, so you need to have at least three of those hats there. And then optionally, maybe a fourth. Again, a domain expert might be your fourth or fifth person there; if the team does not have the in-depth domain experience that’s been built up through research, you might need to have a domain expert who’s involved along the entire way. Okay? So, this diversity of perspectives—from the business, from the user side, from the technology side—this is what helps us push out data products that actually get used and produce value.

And then the final thing that this person—this data product leader—should be doing is making sure this is the right problem for our team to work on right now. There’s always a choice about what we’re not going to do as well, and there’s also the choice of, yes, this is a big request, but is there another opportunity where perhaps we could ship an equal amount of value in half the time with half the resources if we put our priority over here? And that’s a constant juggling act, and that’s why this person needs to be talking to all the different facets of the business and kind of balancing what’s going on in engineering and the implementation side with what the business needs and all that. And this is product management 101, I’m sure, for many of you, but knowing whether or not it’s the right thing right now and challenging those kinds of decisions, I think, is part of the role there. And when you have a good, strong product-minded person there, it takes you out of the drive-thru data factory where you can order whatever you want at the window and pick it up on your way out, and it’s on to the next project. That’s a very project-oriented approach. The product approach isn’t like that; it’s about building a solution that’s never really done but instead sits on a continuum from bad to great [laugh], and we’re somewhere along that quality spectrum.

That’s the product kind of approach to thinking about things: it’s about making stuff that matters, about creating a better future for the customers and the users of these tools. Very different than just giving people what they asked for because they submitted a ticket or they placed an order at the drive-thru window. Okay? So, that’s my kind of spiel on data product management. James, I hope that gave you at least some of my opinions, if not an [laugh] answer to your thoughts and questions.

So, if you like these kinds of insights, feel free to join my mailing list at designingforanalytics.com/list. That’s where I publish and sometimes ask you all questions. And if you need some help designing or redesigning a mission-critical data tool or data product that leverages analytics and machine learning, feel free to check out my website designingforanalytics.com/services.

I have a bunch of different ways you can work with me there as well as a bunch of free content. Obviously, you know about the podcast and the mailing list. There’s also lots of different articles, a self-assessment guide for analytics products there, so a bunch of free stuff there to help you out as well. Okay, thanks. Until next time, hang in there and appreciate the support.

Feel free to share this episode with a friend if you found it valuable, and always appreciate reviews and the little messages you all send me sometimes on email and LinkedIn that you’re listening. It’s really great to get that feedback and I appreciate the support. Okay, thanks.
