086 – CED: My UX Framework for Designing Analytics Tools That Drive Decision Making

Experiencing Data with Brian T. O'Neill

Today, I’m flying solo in order to introduce you to CED: my three-part UX framework for designing your ML / predictive / prescriptive analytics UI around trust, engagement, and indispensability. Why this, why now? I have had several people tell me that this has been incredibly helpful to them in designing useful, usable analytics tools and decision support applications.

I have written about the CED framework before at the following link:

https://designingforanalytics.com/ced

There you will find an example of the framework put into a real-world context. In this episode, I wanted to add some extra color to what is discussed in the article. If you’re an individual contributor, the best part is that you don’t have to be a professional designer to begin applying this to your own data products. And for leaders of teams, you can use the ideas in CED as a “checklist” when trying to audit your team’s solutions in the design phase—before it’s too late or expensive to make meaningful changes to the solutions. 

CED is definitely easier to implement if you understand the basics of human-centered design, including research, problem finding and definition, journey mapping, consulting, facilitation, etc. If you need a step-by-step method to develop these foundational skills, my training program, Designing Human-Centered Data Products, might help. It comes in two formats: a Self-Guided Video Course and a bi-annual Instructor-Led Seminar.

The next public seminar begins October 16, 2023, with registration beginning October 5, 2023.

Quotes from Today’s Episode

  • “‘How do we visualize the data?’ is the wrong starting question for designing a useful decision support application. That makes all kinds of assumptions that we have the right information, that we know what the users' goals and downstream decisions are, and we know how our solution will make a positive change in the customer or users’ life.” - Brian (@rhythmspice) (02:07)
  • “The CED is a UX framework for designing analytics tools that drive decision-making. Three letters, three parts: Conclusions (C), Evidence (E), and Data (D). The tough pill for some technical leaders to swallow is that the application, tool, or product they are making may need to present what I call a ‘conclusion’—or if you prefer, an ‘opinion.’ Why? Because many users do not want an ‘exploratory’ tool—even when they say they do. They often need an insight to start with, before exploration time becomes valuable.” - Brian (@rhythmspice) (04:00)
  • “CED requires you to do customer and user research to understand what the meaningful changes, insights, and things that people want or need actually are. Well-designed ‘Conclusions’—when experienced in an analytics tool using the CED framework—often manifest themselves as insights such as unexpected changes, confirmation of expected changes, meaningful change versus meaningful benchmarks, scoring how KPIs track to predefined and meaningful ranges, actionable recommendations, and next best actions. Sometimes these Conclusions are best experienced as charts and visualizations, but not always—and this is why visualizing the data rarely is the right place to begin designing the UX.” - Brian (@rhythmspice) (08:54)
  • “If I see another analytics tool that promises ‘actionable insights’ but is primarily experienced as a collection of gigantic data tables with 10, 20, or 30+ columns of data to parse, your design is almost certainly going to frustrate, if not alienate, your users. Not because all table UIs are bad, but because you’ve put a gigantic tool-time tax on the user, forcing them to derive what the meaningful conclusions should be.” - Brian (@rhythmspice) (20:20)

Transcript

Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill, and no guest; no guests today. You’re stuck with me. I’m going to roll another solo episode. Some of you said that you found some of my previous solo episodes helpful, so I might be dropping these in a little bit more often.

Today, I want to talk about the CED UX framework—that’s three letters, C-E-D. This comes from an article I wrote on my website a couple of years ago that outlines the framework. If you want to go see the written version of that right now, the link is designingforanalytics.com/ced—just three letters—and I’ll give that again at the end of the episode here.

But originally, I put this together with design leaders and product leaders in mind. Typically, this is probably going to be more for someone who’s building a SaaS analytics platform, a decision-support application, monitoring applications, things of this sort. There’s no reason this couldn’t be used for internal enterprise analytics tools as well, so I kind of want to just break this down and give you an idea of how to use the framework, what the point of it is, and all that good stuff. So again, if you’re using Tableau or Power BI, or whatever your tool stack is, it doesn’t really matter; it’s tool-agnostic. I would say that this is primarily not for ad hoc, one-off dashboards or things like this. This is really for someone who’s building a decision-support application that’s going to be used on a routine basis. Again, whether that’s for paying customers or for internal use, the assumption here is that this is going to be an application of some sort that’s relied upon fairly regularly.

So, with that in mind, what is the CED framework? Well, first, I want to tell you why I even came up with this framework. I think a lot of times what happens is, if you’re a designer working in this space, by the time you get involved with the project, the question is already, “How do we visualize the data?” And I think this is the wrong starting question. It makes all kinds of assumptions: that we have the right information, and that we know what the users' goals and downstream decisions are. By then, it’s too late to be thinking about design.

And so it’s not to say data visualization doesn’t matter. It just means that there is other work that needs to happen prior to that. And if you’ve taken my seminar or seen any of that, you’ve come to learn that design isn’t just about the ink part. We tend to associate analytics with data visualization a lot of the time, but when we think about it from a UX standpoint, understanding unspoken and unarticulated needs, we need to step back a little bit.

So, this UX approach here is going to heavily consider stakeholders. These are people that may not be users of the tool, but they have some direct interest in the outcomes of what the tool is supposed to provide. Obviously, with end-users, we’re deeply trying to understand that stuff before we jump in so that the output itself is actually going to be useful. So, with that UX framework, that UX mindset here, the question is: how are we going to improve people’s lives if we get this tool right? That’s kind of the difference here. It’s not how do we visualize the data, but how are we making the world better for this set of customers or users who are actually the ones touching the tool at the end of the day? And if we can measure that and anchor quality to that, we’ll be on the right track.

So, what is it, CED? The CED is a UX framework for designing analytics tools that drive decision-making. Three letters, three parts: Conclusions (C), Evidence (E), and Data (D). There are some data pyramids, and there are some other mental models for this kind of thing. I like three; there’s a reason I picked three. Let’s jump into each of these here.

And I did want to say, too, that you should assume here that this solution, this tool that we’re building, has an intention to be explanatory or declarative in nature. So, what do I mean? I mean, we’re not giving somebody a toolbox and saying, “Please go ahead and build yourself a new bathroom, and here are all the tools and raw materials. You figure out how to put it together yourself.” That’s not what this is for.

I suppose maybe you could try to use it for that, but it was never [imagined 00:04:43] for that. It was built with the intention that there are insights in our data, that we know there are probably repeated types of insights that people are going to need, and that the goal is for our users not to have to spend a ton of effort and time extracting those from this information. The goal is not to go in and explore data. The goal is to inform some decision that’s downstream. So, explanations, being declarative, conclusions: these are the ideas that we’re trying to drive to the forefront here.

So, I’m typically thinking, “How do I reduce the amount of time someone needs to actually spend in the tool at all?” A lot of the consulting work that I do is going to focus on that. And so low usage doesn’t always mean there’s a problem. Low usage, if you just look at quantitative metrics, may actually be a sign that you’re doing something really well. So again, let’s dig into this.

So, conclusions, the first part of this. Another way to think about this—this came from, actually, Gadi Oren, who was a guest on this show. I forget which episode, but he liked to think of these kinds of systems that are supposed to look for insights as generating opinions as well. So, you could replace the conclusions part with an opinion: the system has an opinion about what’s going on now or in the future.

It doesn’t really matter, but the point here is that the system, the software, is doing some work to generate some insight for us; we’re not just relying on human interaction to do that. And so the conclusion is the thing that we want to surface in the experience as early as possible. This is the most important part of the solution: the idea that when I log into this dashboard, or this tool, or whatever, or maybe I didn’t even log in because it pushed the conclusions or the opinions to me at the right time, the insights are being pushed at me in the form of software-driven conclusions. So, they’re trying to facilitate a decision. It means that we know how we’re measuring KPIs such that we can show interesting changes and contrasts. A simple example of this would be that we wouldn’t just show 87 on the dashboard; we would show 87 as compared to some boundary that’s important, some benchmark, some qualitative range, something like this. Why? Because in order to make a decision, we probably need to compare that 87 to something else that matters.

And so all of this UX thinking that goes into this, the design work here, that’s the stuff, that’s the meat and potatoes of figuring out, well, how do we help the system generate an interesting insight, a conclusion, or an opinion? Well, it usually means that we have to have an opinion about something, we have to put a stake in the ground, such that the software can say, “Hey, this is weird. This is interesting. This is an insight. This is something that you should be paying attention to.”
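
To picture that in code, here is a minimal sketch of a software-generated conclusion that puts a stake in the ground: a KPI compared against a predefined, meaningful range rather than shown as a bare number. It’s written in Python purely for illustration, and the metric name, the 80-to-120 range, and the wording are hypothetical placeholders for whatever your research actually surfaces.

```python
# A minimal sketch of a software-generated "conclusion": compare a KPI against a
# predefined, meaningful range instead of just printing the raw number.
# The metric name, range, and wording below are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class KpiRange:
    lower: float   # lowest acceptable value for this KPI
    upper: float   # highest acceptable value for this KPI
    label: str     # what the KPI means to the user


def conclude(value: float, kpi: KpiRange) -> str:
    """Turn a raw number into an opinionated, decision-oriented statement."""
    if value < kpi.lower:
        return f"{kpi.label} is {value:.0f}, below the acceptable floor of {kpi.lower:.0f}. Needs attention."
    if value > kpi.upper:
        return f"{kpi.label} is {value:.0f}, above the expected ceiling of {kpi.upper:.0f}. Worth a look."
    return f"{kpi.label} is {value:.0f}, within the expected range of {kpi.lower:.0f} to {kpi.upper:.0f}."


# The "87 versus a benchmark" idea from the episode, with made-up bounds:
print(conclude(87, KpiRange(lower=80, upper=120, label="Weekly operations score")))
# e.g., "Weekly operations score is 87, within the expected range of 80 to 120."
```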

So, another thing about this is these conclusions are probably being surfaced early in the experience, but this doesn’t mean only on the dashboard. For example, a conclusion could again be something that’s pushed to you in the form of a notification. It could be simple words and text, it doesn’t necessarily mean there’s going to be data graphics. In fact, I think most of the data graphics fall under the E part of the framework, which we’re going to get to next.

If you’re trying to imagine what conclusions could be like, I tend to look for things like unexpected but interesting changes in the system. Or expected changes, especially when looking at, like, periodic data where there might be expectations or ranges, and maybe people are trying to make sure that nothing is out of whack, so the tool is telling people, “Nothing’s out of whack.” And then the evidence, being the E part which we’re going to get to, is what backs that up. But the point here is we’re aligning these conclusions back to what someone’s goals were, which we learned about during the research process. And that’s really getting into how you structure your daily activities and your, quote, “design and research work” to do CED.

Well, it does require you to do customer research, user research, to understand what are these meaningful changes, insights, these things that people want. So again, unexpected changes, expected changes, change versus benchmarks, qualitative ranges, understanding maybe what other people have done, like if you’re working in a team environment there may be a social aspect to this, recommendations, next best actions, all of these things are system-generated conclusions or opinions. And they may not always be predictions because they could be in the past, so I don’t want you to think that conclusion means oh, it has to be ML or AI or something like that. It doesn’t.
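
To sketch just one of those conclusion types, here’s a rough, hypothetical example of flagging an unexpected change: compare the newest value of a metric against its recent history and only speak up when it drifts well outside its normal spread. The two-standard-deviation threshold and the sample numbers are assumptions for illustration, not part of the framework itself.

```python
# A small, hypothetical sketch of an "unexpected change" conclusion: compare the
# newest value against the historical mean and spread, and only raise a flag when
# it falls well outside the norm. The 2-standard-deviation cutoff is an assumption.

from statistics import mean, stdev


def flag_unexpected_change(history: list[float], latest: float, metric: str) -> str:
    baseline = mean(history)
    spread = stdev(history)
    if spread and abs(latest - baseline) > 2 * spread:
        direction = "above" if latest > baseline else "below"
        return (f"Unexpected change: {metric} is {latest:.0f}, well {direction} "
                f"its typical level of about {baseline:.0f}.")
    return f"{metric} is {latest:.0f}, in line with its recent history. Nothing is out of whack."


# Made-up history for illustration:
print(flag_unexpected_change([102, 98, 105, 99, 101, 97], latest=131, metric="Daily signups"))
```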

And the final thing I want to say about the conclusions piece is if you get this stuff right, it’s possible that from a UX standpoint, the conclusions are the only thing that customers really need to work with. The software, the application, the data product is doing its job in part because I don’t have to go any further than what the conclusions tell me. So, this tip of the iceberg, if you think about it, kind of like an iceberg sitting in the water, the part that’s sticking above the water is this conclusion, and underneath it, there’s all kinds of number crunching and evidence and charts and stuff that you can look at to understand how did it come up with that. But over time you may learn that some customers don’t need all that stuff, or at least they don’t need all of it all the time. Some of the time, they may need it, but what they really came there for was that insight, and sometimes the end of the UX can simply be digesting the conclusion.

So again, if you think about, like, delivering an insight via notification at the right time, they may not need to go in and understand how did you come up with that, Computer? Tell me how you came up with that. That evidence, the interpretability of what’s going on in the system may not always be required. Okay. So, that’s the Conclusion, part C in the CED framework.

Second part: Evidence, E. This is probably what I see most of the time, you know, when I, for example, conduct one of my UI/UX audits, which is typically a service that’s more for software companies that have a tool that’s, you know, hard to sell, or the user adoption is low, or people are complaining about usability, et cetera. A lot of times the culprit here is that they’re shoveling tons of evidence at the customer, but there’s no insight in there. They’re expecting the user to go do the job because the tool is, quote, “exploratory in nature.” And yes, there is a time for that, but I think unless your customers are analysts, most of the time, exploration may fall anywhere on the spectrum from necessity to curiosity to tax.

So, there are times when it may be necessary, there are times when it might be a curiosity, and there can be a lot of times when it’s simply a tax that you’re putting on the customer: an unwanted use of their time spent digging through the evidence, trying to figure out what the conclusion is that’s supposed to be supported by all this evidence that you shoveled at them. And so one of the reasons—and I forgot to mention this in the conclusions part—if you’re familiar with Kahneman’s book, Thinking, Fast and Slow, and System 1 and System 2 thinking: System 1 is kind of this gut, visceral reaction that we have to things, and then when we sit and think about the conclusion, when we start to process and think about the data, we move into that System 2 thinking. These kinds of ideas sort of align with that. The conclusion, the insight, is that System 1 thing, which may be delivered with words or a really powerful visualization that tells you immediately what’s going on. And then System 2 may be this more evidentiary part where you’re actually showing all the workings, showing all the background information, and letting people slice and dice data and change charts and all this kind of stuff. We don’t want to leave the System 1 stuff out, because people are going to form that conclusion quickly and then try to support it with data. I want you to try to push more of the insight to the surface of the experience when it’s possible and when it’s appropriate to do so.

So, exploration—we talked about this—not all exploration is bad. A good example of this would be Fidelity, which has, I think, a fairly good retirement planning tool. This tool uses simulations to predict how much money you’ll need at retirement, and it runs lots of different scenarios—I think they’re called Monte Carlo simulations—to give you an idea of how much money you might have based on these savings rates, these investments, these one-time assets, you know, bequeathed or something like this. And you can pin some of these things, and you can say, “Well, assume this is growing,” or, “Assume this amount of money is not going to grow any further; it’s a one-time thing.” That’s allowing you to play with the parameters of this predictive model, and that can be a good thing, and this is the time to do that exploratory stuff.

But the conclusion here, the opinion, maybe some initial score, some idea that says, “Hey, you’re 82% on track for retirement under these goals and these boundaries that you’ve put in place at some earlier time,” or, “Our business KPIs say that we should be between 80 and 120 at all times, so if you’re inside the bounds, that’s a green light.” So, the point here is not all evidence discovery and playing around with the tool and exploration is necessarily bad, but most of the time what I see is those interactions are not well designed, and we’ve pretty much put all of the interaction requirement on the customer, and we’re leaving them with a whole lot of what we call tool time, right? There’s that tool time and gold time concept—which I believe came from Jared Spool, who’s also been a guest on this show—we want to reduce tool time, wherever possible in the experience so that people are focused on their goals, their workflows, the downstream decision, the outcome that they want, and not focusing on tooling activities.
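
Just to make that kind of “percent on track” conclusion concrete, here’s a rough, hypothetical Monte Carlo sketch: simulate a bunch of market scenarios and report the share of runs in which the savings goal is met. The return, volatility, contribution, and goal numbers are invented for illustration and have nothing to do with Fidelity’s actual model.

```python
# A rough, hypothetical Monte Carlo sketch: run many simulated market scenarios
# and report the share of them in which the savings goal is met. All parameters
# below are made-up assumptions for illustration.

import random


def percent_on_track(balance: float, annual_contribution: float, years: int,
                     goal: float, mean_return: float = 0.06,
                     volatility: float = 0.12, runs: int = 10_000) -> float:
    successes = 0
    for _ in range(runs):
        value = balance
        for _ in range(years):
            # Draw one year's return from a normal distribution (a simplified model).
            value = value * (1 + random.gauss(mean_return, volatility)) + annual_contribution
        if value >= goal:
            successes += 1
    return 100 * successes / runs


score = percent_on_track(balance=150_000, annual_contribution=15_000, years=25, goal=1_500_000)
print(f"You are {score:.0f}% on track for retirement under these assumptions.")
```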

So again, a big challenge I see with a lot of the tools and data products and analytics tools that I look at that need help—and this goes for internal ones as well; I’ve seen these in my private seminars when we look at Spotfire dashboard collections and things like this—is that we’re sometimes dumping tons of evidence. And I think some of this gets into how we build prototypes: you know, the first thing you may do is query all the data, try to pick a chart to plop it into, and then you have this giant mound of data and you have the entire world in front of you. And then you have to figure out, okay, what can we take away, because obviously no one can make sense of a million records of stuff all plotted on top of each other. Even though someone kind of said, like, “Well, I’d like to look at how our entire set of projects is doing and which things are on track and which things aren’t,” you quickly find out that no one can really process all that stuff, and so then we start to adjust the queries and all this. And that’s a very different kind of approach to building it: we’re usually starting with way too much stuff, there are no conclusions in that stuff, and we’re now trying to hack it back from some dump of evidence.

And to me, that’s a much harder way to work. I think it can work if you have a really good design sense and you’ve had some experience designing things; there are ways to build good data products working backwards from that kind of model of real-time prototyping or real-data prototyping. And I’m a big fan of using realistic data in our design prototypes, but you can get very locked into your early decisions, your early code, your early efforts, and not want to go back to the drawing board working this way. And that’s the issue here: you get sunk cost bias, where it’s going to be harder and harder to turn the ship around if it’s headed in the wrong direction. So, this evidence thing, we really want to tone that down, at least in terms of where it falls in the experience. It doesn’t mean that stuff’s not needed; it just means it shouldn’t be the first part of the experience, it should really be the second part. It should be the thing that supports those conclusions, right?

So, another thing to think about is that this is probably the place where you’re going to have lots of heavy charting and table grids and drill-downs and model features and explanations and all that kind of stuff. So, if you’re landing people—and I’m making a broad generalization here—if you’re just dumping people right into that kind of stuff immediately, and there’s no part of the experience where they kind of get a high-level view of what’s going on or can monitor the stuff they care about, and you’re not pushing that signal, those conclusions, to them early in the experience, something is probably off. An easy visual way to think about whether it’s on track: how much data and visualizations and infographics and plots are you pushing at them right away in that experience? Is that evidence, or is that conclusions? Try to step back, look at it, and ask yourself, “Am I really showing them an insight here, or am I giving them the evidence for them to go and explore something and figure it out themselves?”

And I’ll tell you, if you’re going to do the exploration thing, that experience itself probably needs to be designed as well. The interaction design of doing the exploration also needs to be designed. It’s not just: dump a raw tool on them, some plugin, pop it in a chart, here are some filters, go to town on it. Yeah, maybe some of the time that works, and it’s not always going to make sense on some projects to build a custom solution, and I totally understand that, but I think you have to be careful with this feeling of, “Hey, we gave them this exploratory tool, and they can pin different parts of the features and rerun the model against it, and blah, blah, blah.” It can feel like, “Here’s this cool tool that we built; look at all the things it can do.” And now we’re back to focusing on outputs instead of outcomes.

What’s the change in this person’s life, the outcomes, the better future that we want to have? It’s not the tool. It’s something that happens as a result of what the tool is doing for the customer. Change the focus to the outcomes, then take another look at what you’ve got and ask yourself: are we facilitating outcomes, or are we just pushing outputs at this person?

Right, so we’ve done Conclusions (C) and Evidence (E); the last part is Data. What do I mean here? I think of this as kind of the bottom of the totem pole. This is where the raw data is. This is where data extract or export is. This is where export to Excel is.

This is where things like checking all my connectors live: what are the data sources behind this? How are the statistics computed? This is all that other, kind of, ancillary stuff that’s probably low-frequency use. It might be first-use kinds of things, like, “How did you come up with that KPI?” and, “What does customer attrition really mean, and how are we measuring that?” That’s where all this stuff goes.

This is where diagnostics go. This is maybe the place where, if you really needed to get into the weeds of how the model was working, you have some, quote, “advanced view”—which is usually a moniker for, like, we just kind of dump everything that comes out of the API into this page and we call it the advanced view. But the point is, there’s this third tier, which probably isn’t evidence; it’s probably all this other kind of stuff, tooling integrations, whatever. You don’t always need this tier, but I think most of the time, there’s probably going to be someplace for this kind of stuff.

If it’s small, you might be able to kind of work it into the evidence part, but I kind of like trying to think about the project as: you’ve got a pie, you’ve got a hundred percent. How are you going to slice it up? Most of the time, I’m hoping you’re spending, you know, 45% on Conclusions, 45% on Evidence, and maybe 10% on the Data part. Something like that is kind of where the design and user experience effort should be put into this tool. So again: exports, freshness of the information, a place to run diagnostics, a place to understand how things were computed. That’s that low-level data stuff, right? This is a place where I’m not going to expect to get a lot of insight.

Sidebar here. If I see another table in a BI tool that’s supposed to visualize stuff, and I see 40 columns of information with 50 rows per page paginating out to infinity, I’m telling you, nobody can make sense of that kind of stuff. So folks, let’s get it together with these giant data tables that we’re putting in tools. If you want that stuff, fine, give someone the Excel spreadsheet. That’s usually a crutch. That is usually a cry for something else, a need that has not been properly surfaced, if we’re expecting a human being to make sense of giant tabular sets of data like that. So anyhow, I’m going to come back off my rant.

This might be the place though, where you would see an interface like that if you needed to spot-check something or look up some information quickly in the table, maybe to verify some weird outlier that you didn’t expect or something like that. That’s the place for this whole data, this third tier, low-level, you’re on the bench—you know, if you’re playing soccer, this would be the players on the bench. They’re only getting called in, you know, the last five minutes [laugh] of the game. So, that’s the CED with Conclusions, Evidence, Data. I want you to spend a lot of time and effort on the Conclusions part.

You might have been surprised that I said 45, 45, 10 on the distribution of effort, and that might be wrong. It’s probably more like 60, 30, 10 in terms of the effort, even though, in terms of the GUI and the outputs, there might be a lot more user interface design work at the Evidence part of the user experience, because that’s where a lot more of the interaction design, charting, infographics, all that kind of stuff, probably sits. So, you might think of it that way: more UI work on the Evidence part, less UX; more UX on the Conclusions part, maybe a little bit less on the UI. You might not have any really fancy UI there at all, because it might just be, you know, simple sparklines, simple bar charts with text, text that’s telling you what’s going on in plain English. Let’s not be afraid of tools that actually generate text. Sometimes text responses or a simple explanation of what’s going on can be really powerful.
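
If “tools that generate text” sounds abstract, here’s a small, hypothetical sketch of the idea: compare this period to the prior period and emit a one-line, plain-English statement that could sit above a sparkline or a chart. The metric, thresholds, and phrasing are illustrative assumptions, not a prescription.

```python
# A minimal sketch of "text as the conclusion": compare this period to the prior
# period and produce a one-line, plain-English statement that can sit above the
# chart. The flat-change threshold and wording are illustrative assumptions.

def describe_change(metric: str, current: float, previous: float) -> str:
    if previous == 0:
        return f"{metric} is {current:,.0f}; no prior-period baseline to compare against."
    change = (current - previous) / previous
    if abs(change) < 0.02:
        return f"{metric} is roughly flat versus last period ({current:,.0f} vs. {previous:,.0f})."
    direction = "up" if change > 0 else "down"
    return f"{metric} is {direction} {abs(change):.0%} versus last period ({current:,.0f} vs. {previous:,.0f})."


print(describe_change("Quarterly revenue", current=1_240_000, previous=610_000))
# -> "Quarterly revenue is up 103% versus last period (1,240,000 vs. 610,000)."
```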

And think of it just the way you would if you were doing an ad hoc presentation—I don’t talk about that a lot because it’s not my area of expertise: you know, data storytelling, where you’re actually having human beings narrate what’s going on and it’s a performance, right? It’s a show. You’re playing these songs, and the song is called, you know, “Quarterly Sales Report,” and you’ve got your slide deck there. That’s a very different type of experience because it’s narrated. But the point here is, the nice thing about those is you can go in and annotate the slides really well and you can put a conclusion right above the chart. And you see this all the time with static presentations—or not all the time; in fact, you don’t see it enough of the time—but good presentations will have a conclusion that goes along with each of the plots.

And it might say, “Sales were up twice as much as expected, especially versus last year.” Boom, conclusion. Then you have the chart showing the information. You don’t say “sales stats,” show the chart, and then in tiny text at the bottom write out the conclusion about what happened. It’s conclusion first—what does it mean? Why do I care?—and then the evidence part, the charting, and all of that.

So, we’re kind of… in a way, we’re taking that model and trying to put it into software. And that can be much harder to do elegantly: you have less visual design control, you have way more scenarios to deal with, you can’t perfectly plot the data every time, and there are going to be times when the charts probably look wonky because some data set went out of the expected bounds, and so it looks funny. It’s a much harder game to play in a lot of ways. I think it’s more fun, though, because the challenges are just different. And you’re building a self-service kind of tool, so you really have to get to know the customer to do that well.

So, CED kind of gave me the sense of how I distribute the time there. If you want to learn more about this, again, I recommend heading over to the article. There’s actually an example of an imaginary advertising tool that I’m using to model this out. So, you can go read about that in the article, which, again, is designingforanalytics.com/ced, three letters. Feel free to share it if you like it.

And if you really want to learn the steps, the actual design activities, and the things that inform the CED framework so you can do this: how do you do the research and gather what the needs are? How do you define the problems and the KPIs? How do you get into the heads of the stakeholders and the users? How do you do all this stuff? There are two ways that I help with that: I have a training program called “Designing Human-Centered Data Products,” which you’ve probably heard me mention on the show before.

Two formats for that. There’s the self-guided course format of that, which is videos along with a new ebook that I’ve just put together, and then there’s the instructor-led seminar, which uses the exact same curriculum, same videos, same ebook, but it also comes with me. So, we have live office hours calls every single week. There’s now a practicum, some exercises that I’ve been developing, along with input that I’ve been getting from customers like you over the years since I’ve been doing the seminar, and then obviously, you have the cohort of other members there. So, the seminar publicly runs twice a year, and you can join us as an individual, or I can work with your team privately if you prefer.

And the links to both of those: for the self-guided video course, the link is designingforanalytics.com/thecourse, and the seminar is the same domain slash theseminar. So, check those out, and if you would like to get this kind of content when it comes out—the articles, when I produce this kind of stuff—you can join my Insights mailing list. It’s totally free; just head over to designingforanalytics.com/podcast and you’ll see a form there, so just pop your email in and you will get articles as I write them, and my latest thoughts, as well as transcripts and stuff about the show. So, until then, I hope the data is behaving, and remember: if you think good design is expensive, try bad design. All right, take care.

