CED: A UX Framework for Designing Analytics Tools That Drive Decision Making

Prefer to 🎧 listen rather than read?

I cover the CED framework in episode 86 of my podcast.

A three-part UX framework for designing your ML / predictive / prescriptive analytics UI around trust, engagement, and indispensability.

As you continue to design interfaces and experiences for analytics tools that rely more and more on machine-based analysis and prediction, the design challenge starts to change. Whereas before we might have been dealing with information architecture and organizing all the metrics and stats into [hopefully] the right locations and UIs for eyeball analysis, with predictive and prescriptive analytics your UX goals start to shift.

Most of you are probably dealing with analytics services and decision support tools / products that are not entirely black box. In fact, some of you may be dealing with the opposite problem, such as explainability of the black box, or getting people to trust “a new way” of getting decision support information. I hope this framework answers the question, “When I need to present predictive/prescriptive analytics to users in a useful, usable, trustworthy, and valuable way within my software/app, how should I approach it?”

The answer is a three-part design framework called CED that I have seen work well to help make declarative analytics useful and usable, without overloading the user, and without entirely relying on a black box that the user cannot see/trust/understand. I specifically mention “declarative” because this framework is intended for situations where your technology is providing some explicit declarations, predictions, or opinions. You probably have a mix of both declarative and exploratory analytics in your product/service, and that’s entirely normal.

What’s the CED UX Framework?

CED stands for Conclusion, Evidence, Data. It is similar to the data [DIKW] pyramid you may know about.

It’s a design / UX framework intended to build customer trust around advanced analytics, and it provides a base “recipe” you may find useful when trying to figure out how to communicate the value and presentation of your software’s conclusions. I also find that this can be useful in establishing a UX/design vision for your decision support tool: while it may be “out of reach” technically, it may inspire how your team iterates towards a more human-centered analytics experience.

Before I show you how you might implement this in your product, let’s recognize that your analytics probably exist to provide evidence-based support for human-derived decisions. However, as logical as it may seem, you may know that humans don’t necessarily make decisions this way. Many of you have probably read Daniel Kahneman’s book, Thinking Fast and Slow, where he talks about System 1 (Fast) and System 2 (Slow) thinking. Your customers are usually making decisions using the former, until your analytics provide them with a surprise; System 2 is the world of analytical thinking. The framework here has both of these concepts in mind, providing a way for users to experience your service with System 1 thinking first, and System 2 thinking, if necessary.

What Might CED Look Like Implemented in Your Product?

CED, when implemented, may be spread over multiple steps/screens/time, or could literally be “one screen.” This detail, however, doesn’t really matter at this stage. The goal here is to express the value of your analytics in the tool/product in a way that naturally follows what your users need/want to do when they log in to your tool–regardless of the number of “screens” that may be required in the process. So, what you’re trying to do is progressively disclose more information as necessary, without necessarily imposing that load (or the entire experience) on the user.

You do this by trying to “lead” with Conclusions first, support with Evidence, and if necessary, provide access to the Data. This probably means there is a small/short/brief experience/UI when delivering the Conclusion to the user. The evidence lurks “around the corner” and is your presentation of some of the justification for the analytics as well as the detailed prescription, if relevant. This may be a “drill-down” experience (“click for details…”), but in short, it is the next layer of content/data/words/text that explains what factors were included–and NOT included–in the Conclusion, to help the customer understand the confidence and ingredients that went into the conclusion. You might also provide a feedback mechanism in this stage. Finally, the weeds: the data. Beneath the Evidence is raw or visualized data that may or may not be part of your software application’s UI/UX. This could take many formats, and how this is expressed in a UI is highly dependent on your particular product/service/use cases.

Let’s look at each phase separately, within the context of an example.

CED Implemented into an Example Scenario:

A campaign manager at a marketing company wants to spend ad budgets more intelligently. You’re designing a feature/product/service to help them do this with less manual tool-time and eyeball analysis.

Assumptions: The campaign manager (our user) is running multiple ad campaigns for their client on Facebook, Google, and other publishing platforms. They have 100s of different creatives (discrete advertisement designs). The campaign manager’s job is to fine-tune the client’s budget and spend it wisely: they need to decide which creatives to leave active, which to edit, which to deactivate, which locations/audiences should get more/less budget, etc. Now, putting aside the fact that there are cloud/tech players in many of these spaces already with varying tools to automate some of these steps, let’s assume that right now, this user simply has to look at the campaign stats to make these decisions. You’re coming up with a new product or analytics service to help them spend the client’s budget more efficiently with declarative/prescriptive solutions instead of forcing them to interpret charts, tables, and metrics on their own. Your data/tech/model is supposed to help them decide which ads to leave up, take down, modify, or retarget. For now, you’re starting with the Facebook Ads platform (MVP mindset, right?) and you want to show some early value with your technology investment to date. So, in this first rev, you’re only able to offer analytics/insights on the creative, but not on pricing.

For now, it doesn’t matter whether it’s AI, a machine learning model, linear regressions, or some other tech powering these analytics. Take your implementation hat off and put your design cap on. Your goal is to think about the experience you want this campaign manager to have.

So, let’s put it into the CED framework.

(C)onclusion – The Beginning of the UX

One of the exercises I like to do with clients early on in the design phase is to ask them to present the possible value of prescriptive analytics to me in sentence form. If the product/app could be so highly tuned that it simply delivered a text message with the right information at the right time, what insights could it generate? This is the “conclusion” drawn by the software / algorithm, perhaps accompanied with a prescription for what to do next.

Characteristics of a Well-Designed Conclusion:

  • It may not necessarily require any data visualization at all. It might be best expressed as simple text and words.
  • It could be experienced as a message (push, email, SMS, alert, etc.), possibly sent at a specific, informed, meaningful time. When is that? You have to study the user to know.
  • It provides enough content to potentially inform or inspire action–even if the customer isn’t immediately going to take any action.
  • If the system can categorize the content as an anomaly / unusual / unexpected, the message content may alter itself to provide slightly more reinforcement and acknowledge the fact that a human may find it “hard to believe” (remember System 1 and System 2 thinking?)
  • If necessary, it considers time: is there a window of relevance? Expiration? Deadline?
  • It might be the place / opportunity to add some fun, humor, or other “delighter” into your service.
  • It may require or benefit from having a dedicated permalink/address so it can be shared or referred to later by colleagues, or monitored for status/validity.
  • It may have a link/affordance to take action (possibly in another tool).

So, in our marketing example, we might generate some ideas for Conclusions that look like these. These could be banner messages in your UI, or they could be delivered as emails:

  • Most of your ads (31/54) in Campaign XYZ should probably be deactivated. They are projected to continue underperforming the remaining ads by 2-3x while costing the same to display. (Turning off those 31 might save the client $23k in ineffectual spend, which isn’t chump change). [Edit ads] [Full Analysis]
  • The verb “buy” in Campaign XYZ is outperforming other verbs on clicks in your creative headlines, regardless of the picture used in the creative–by 1.5x. Changing this could reduce CPA* from $3.45 to $1.32 and result in 412 more clicks if the full budget is spent. You could be a hero! [Edit ads] [Full Analysis]
  • [Appname] predicts that creatives with this image [IMAGE] will outperform all other creatives in the campaign and has paused all ads not using this image. 10 new creatives have been created using this image for your review/approval. [Review Suggestions] [Full Analysis]

(*CPA = Cost Per Acquisition)
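
To make this a little more concrete, here is one minimal sketch of how a Conclusion like the examples above might be modeled behind the scenes. This is purely illustrative: the type and field names (ConclusionMessage, headline, isAnomaly, expiresAt, permalink, actions, and so on) and the URLs are assumptions of mine, not part of the framework or any particular product.

```typescript
// A hypothetical shape for a Conclusion message, covering the characteristics
// listed earlier: plain-language text, an anomaly flag, a window of relevance,
// a shareable permalink, and affordances to act or drill into the Evidence.
interface ConclusionAction {
  label: string; // e.g. "Edit ads"
  url: string;   // deep link into this tool, or into another tool entirely
}

interface ConclusionMessage {
  id: string;                  // stable id so the message can be referenced later
  headline: string;            // the plain-text conclusion itself
  detail?: string;             // optional supporting sentence (projected savings, etc.)
  isAnomaly: boolean;          // lets the copy acknowledge "hard to believe" results
  expiresAt?: Date;            // window of relevance / deadline, if any
  permalink: string;           // shareable address for colleagues
  actions: ConclusionAction[]; // e.g. [Edit ads] [Full Analysis]
  evidenceUrl: string;         // link into the (E)vidence layer
}

// The first marketing Conclusion above, expressed in this (hypothetical) shape.
const deactivateUnderperformers: ConclusionMessage = {
  id: "campaign-xyz-deactivate",
  headline: "Most of your ads (31/54) in Campaign XYZ should probably be deactivated.",
  detail: "Turning off those 31 might save the client $23k in ineffectual spend.",
  isAnomaly: false,
  permalink: "https://app.example.com/conclusions/campaign-xyz-deactivate",
  actions: [
    { label: "Edit ads", url: "https://ads.example.com/campaigns/xyz" },
    { label: "Full Analysis", url: "https://app.example.com/evidence/campaign-xyz-deactivate" },
  ],
  evidenceUrl: "https://app.example.com/evidence/campaign-xyz-deactivate",
};
```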

Can you see how these messages are designed with the features of the Conclusion mentioned earlier? They provide little “evidence,” but they provide a course of action, and in the last case, the software actually took some action already (creating new creatives for user approval). Each message also provides a quick means to dive into the full analysis or take a logical action (usually editing the adverts, in this example). Why do we do it this way? It’s possible that users may eventually trust the message enough to take immediate action, without needing to validate the analysis each time. They may also not take action immediately, but may revisit the alert/message and act at a future time, without having to log in to your service just to get to the “action” link. Granted, many users are initially going to want to explore the Evidence (the Full Analysis link), so let’s take a look at that next:

(E)vidence – Providing Confidence and Trust 

(E)vidence is the second part of the experience, accessed from the (C)onclusion that was drawn earlier. The line between this and the third stage, Data, is a little gray, but most of design operates “in the gray areas.” However, there are some key characteristics here as well:

  • A page/screen or section of UIs providing a selection of data and visual support for the Conclusion. It explains why the software generated the conclusion and how it arrived at it. It visually expresses the correlations found by the software.
  • Probably exists at a permanent or semi-permanent URL/location (sharable, but may have expiration built in)
  • If you’re using an “explainable AI” framework, this is probably where you’d show some of that content
  • May be interactive or report-ish in nature (non-interactive)
  • Provides a method for humans to give feedback to either the system (e.g. training data), or to humans in the loop (you!), or both. (Soliciting feedback is particularly important early on and during the honeymoon phase).
  • May state known factors/features/variables that haven’t been accounted for that are potentially relevant to the user. (Acknowledge faults/gaps to build trust).

In our marketing campaign management scenario, this might mean the (E)vidence portion of the UX includes:

  • Visual evidence that certain ads/creatives are not performing well (performance metrics projected out for winners vs. losers, perhaps as small sparklines laid over the creative samples)
  • A list of factors not included in the analysis (e.g. “didn’t look at ads created in the last 48 hours”; “did not factor in creatives served on mobile devices (32% of all impressions)”; etc.)
  • An affordance to “accept” the recommendation or take the next step (link to the Ad UI, auto-create the recommended creatives, pause/stop the poor performers, etc.)
  • A list of features that were included in the model and confidence levels (“explainable AI”) expressed in plain language
  • Possibly a method to run the simulation with certain model features turned off (e.g. “don’t factor in campaigns we ran on the Google platform; LinkedIn and Facebook only”)
  • Links to deep-dive into the detailed data (stage 3) – which is covered next.
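
As a rough, hedged sketch of the kind of payload that might sit behind an Evidence screen like this, consider the shape below. All type names, fields, and the feedback endpoint are hypothetical assumptions for this example, not a prescribed API.

```typescript
// A hypothetical payload backing an Evidence screen: why the conclusion was drawn,
// what was (and was not) considered, how confident the model is, and where the
// underlying (D)ata lives.
interface EvidenceFactor {
  name: string;             // plain-language feature name, e.g. "headline verb"
  includedInModel: boolean; // explicitly listing excluded factors helps build trust
  note?: string;            // e.g. "did not factor in mobile impressions (32%)"
}

interface EvidencePayload {
  conclusionId: string;     // ties back to the Conclusion it supports
  summary: string;          // why the software drew this conclusion, in plain language
  confidence: number;       // 0..1, translated into words for the user, not decimals
  factors: EvidenceFactor[];
  chartUrls: string[];      // supporting visuals (sparklines over creatives, projections)
  dataUrl: string;          // deep link into the (D)ata stage
}

// A simple feedback hook so humans can confirm or dispute the analysis, feeding
// either the system (training data) or the humans in the loop. The endpoint is
// a placeholder.
function submitFeedback(conclusionId: string, helpful: boolean, comment?: string): Promise<Response> {
  return fetch(`/api/conclusions/${conclusionId}/feedback`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ helpful, comment }),
  });
}
```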

(D)ata – the last stage of the UX (hopefully optional!)

We covered (C)onclusion, which then leads to (E)vidence. Most of your customer UX design effort should be spent providing good value and UX in these first two categories, such that customers don’t need to go into the weeds of the third stage: directly accessing the (D)ata that was used to generate the information presented in the first two stages.

If customers are routinely going deep into your product’s third stage–(D)ata–then it could signal that you have improvements to make in the prior two stages. Within this framework, the goal isn’t to “hide” the underlying (D)ata from your customers. Instead, we want to put more of the heavy analysis on the software instead of on the customer. If you see customers regularly going deep, past the (E)vidence stage, do you know why? Is it because:

  • They don’t trust the analytics (maybe they were burned in the past or the product hasn’t earned their trust yet)?
  • They didn’t understand the information or find it valuable and assumed they’d have to piece it together themselves?
  • The (C)onclusion or (E)vidence was perceived as being wrong, incomplete, unbelievable, or too high risk to believe on face value?
  • They need proof for somebody else (stakeholder, boss, etc.)?

Understanding these “whys” can help you figure out how to design the experience around accessing the underlying data. This means deciding things like whether the UI requires any visualization, CSV exports, visible tables, sorting, filtering, etc. In general, I would not invest a ton of time and resources designing this part of your UI. As such, you probably want to focus more on allowing the user to export the data to explore in their own tools, such as Excel.
In terms of the web UI, this is a good place to utilize stock UI frameworks that may provide you with components such as data tables with sorting and filtering functions built in.

One important thing for making exports and this experience as effective as possible: consider including the information from the prior stages as well (the software’s prior Conclusions and Evidence). Customers may learn how your software arrived at these conclusions, and this in turn may inform their trust in and understanding of your product in the future.

It is also worth noting that the risk level here may be low enough that our customer—the campaign manager—simply isn’t going to take the time to do any eyeball analysis and may have already “moved on” to other job responsibilities. Manually analyzing very complex data may simply not be worth the effort for them, and besides: this is why they bought your service, right?!

Anyhow, without specific domain knowledge and without talking to the customer, it is difficult to propose specific UI features for our campaign management use case, as we don’t know what the user is going to do with this information outside of your application. However, we can brainstorm some ideas that may be useful or may stir your own imagination:

  • Download the raw data by time period, by object (specific ad creatives), by campaign, by provider/source.
  • Provide metadata about the data (timestamps, when it was collected, the status of the data sources, e.g. whether they are up to date)
  • Provide some additional computations on the data that you know they may want to do (e.g. additional worksheets or downloads with sums, averages, min/max, deviations, group totals, etc.) – effectively saving them some steps in Excel
  • If relevant and not proprietary, provide information on the models used in the software analysis. Keep in mind: this may be more about building trust through transparency than about helping them understand or utilize the models on their own outside of your ecosystem.
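
To illustrate the “saving them some steps in Excel” idea above, here is a minimal sketch of pre-computing group summaries for an export. The row shape and field names (CreativeRow, spend, clicks, and so on) are assumptions for this example, not a prescribed schema.

```typescript
// A hypothetical per-creative row in the raw export.
interface CreativeRow {
  creativeId: string;
  campaign: string;
  impressions: number;
  clicks: number;
  spend: number; // in dollars
}

// Pre-compute the summary figures users would otherwise build themselves in Excel:
// group totals, averages, min/max spend, and cost per click for each campaign.
function summarizeByCampaign(rows: CreativeRow[]) {
  const groups = new Map<string, CreativeRow[]>();
  for (const row of rows) {
    const group = groups.get(row.campaign) ?? [];
    group.push(row);
    groups.set(row.campaign, group);
  }

  return Array.from(groups.entries()).map(([campaign, items]) => {
    const spends = items.map((r) => r.spend);
    const totalSpend = spends.reduce((sum, s) => sum + s, 0);
    const totalClicks = items.reduce((sum, r) => sum + r.clicks, 0);
    return {
      campaign,
      creatives: items.length,
      totalSpend,
      avgSpend: totalSpend / items.length,
      minSpend: Math.min(...spends),
      maxSpend: Math.max(...spends),
      costPerClick: totalClicks > 0 ? totalSpend / totalClicks : null,
    };
  });
}
```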

CED UX Framework Wrap-Up

There’s no magic design bullet for every analytics product that involves providing decision support to users, but when you are introducing declarative (explanatory) analytics, there are ways to design your interfaces around an experience that progressively provides more information, only if required. Overall, remember that the users’ goal may be to spend as little time in your tool as possible (they may want high value in the least time possible). As such, we want to spend more time getting the user a believable, accurate, and usable (C)onclusion, and making sure the design of the (E)vidence reinforces those (C)onclusions as much as possible. Stage three of the experience, manual (D)ata exploration, is the place where you probably want to spend the fewest resources, but we shouldn’t totally forget about it, as there may be times where this transparency is important. Let your customer research sessions and testing inform how and where to spend more time getting the UX right. The more you understand the customer problem space, the easier it will be to understand how to present the information in your UI such that you’re delivering your customers an outstanding analytics UX. Good luck!

Photo courtesy of Filip Patock/Flickr.
