The difference between design and Design

I am guessing that if you're reading this, it's because there's room for your analytics service or data product to get better, and maybe you know that simply adding more data sources, algorithms, bug fixes, or [insert today's hype cycle tech] isn't necessarily all there is to making it better.

Like some of my clients, you may not know exactly what is wrong or could be improved in your service, but you know there's room to improve, and maybe you're concerned your service is getting too complicated or complex. Maybe you're seeing low engagement, growing attrition, sales that are harder to close, or workflows that are getting more complex. Maybe it's just "ugly" and doesn't seem elegant/valuable.

A few months ago via my social feeds, I saw a design thought leader mention something along the lines of: most bad design out there isn't really "bad design," but rather the result of "no intentional design." I think that tends to be true. In fact, it's even harder for most people to identify bad design now because so many plugins, repos, and libraries have made it easier to put reasonably good-looking surface layers on top of any data product or analytics service. From charting packages and CSS grids to internal design systems/components and third-party tools, it's so much easier today to get a decent-looking something out the door.

What I hope you'll remember today is that design isn't just about pretty UIs that look polished. While the paint and visuals do matter (despite what the usability police will sometimes tell you), they can also hide a multitude of UX problems that ultimately may lead to, or may already be causing business problems.

Good Design, what I sometimes call "Capital D Design," has the power to make your data sing, delight customers/users, bring new/better ROI to your organization, provide inspiration to teams, reduce complexity, reduce engineering cost, save time for users, and expose new value in your existing service. However, the big gains usually don't come from focusing on the surface level alone. Better data visualization cannot fix every data product and analytics problem.

Good Design starts with a deep understanding of your customers' and users' behaviors and needs (not just desires), plus a clear definition from leadership of what a successful business outcome looks like, both of which are then encapsulated in a clear product strategy by the product owner. This probably sounds all handwavy, but you'd be surprised how many times my clients and prospects cannot quickly and clearly state these things to me (or more importantly, state them to the team doing the execution). When they can't, it's usually a sign that there is more design than Design going on, and that there's a lot of room to improve the service.

What kind of [d]esign is your organization doing?

Dashboard Design: Is Maximum Simplicity the Goal?

Should maximum simplicity dictate success?

We all love usability these days right? "User experience is important."

Of course it is!

But, it doesn't mean that every user you show the design to is going to, or should immediately be able to, understand fully what you're showing them.

Why?

Most valuable things in life take a little investment/effort. Your dashboard can probably push boundaries and comfort zones a little bit as well, especially if it's a service that will have somewhat repetitive use by your users.

Before I continue, I will caveat that this advice is largely for operational dashboards that are used somewhat regularly, by people looking to take action on insights.

I'm not advocating for sloppy work, or making your dashboard needlessly complex. However, don't necessarily design with a goal of making the first impression as easy as possible. There is only "1" first-time UX, but there will be many more "Nth"-time uses of your service, right? As such, if you're designing primarily with the "Nth" visit in mind, don't necessarily expect that users will understand everything on the first viewing. It is possible to design a high-quality dashboard that doesn't necessarily get great user feedback on the first viewing.

Also, if the user hasn't put in a little effort to understand the value of the data/information you're showing, and you've done a good job presenting it as elegantly as possible, then don't necessarily water the design down to accommodate the newbies. It may be ok as-is, and you can set up your test plan/protocol to validate whether you've pushed it too far, or done a good job of providing rich information density that is relevant.

For many of my clients, their assumed path to dashboard simplification is "remove stuff" and use "drill down" to provide detail. Don't get me wrong; I love using subtractive techniques over additive ones when it's the best choice, but there are times when you need to add data to increase the value and insights. The drill down may not have the "summary" information that is helpful in knowing whether you should even *bother* to drill down.

As a product or analytics leader, your job is to find the balance between too much/too complex, and too simple or too low in information density. There is no single clear cut definition of what too much / not enough looks like, but don't just focus on concern around the design being "too complex." You might be making it "too simple." It's possible to design a product/service that is super easy to use, but with very low value or utility.

Maximum value, on the other hand, is obtained when an analytics solution has the trifecta of utility, usability, and beauty/desirability. It is hard though, and usually quite expensive, to get a service to that level without intentionally designing it to be that way.

How can you possibly design your service effectively without clear use cases and goals?

I'm working with a large, household-name technology company right now on a large project, and they struggle with one of the same things so many of my clients struggle with, which is today's topic: articulating use cases and goals in an effective manner that allows your design and development to proceed with clarity and accountability.

If your service's strategy, use cases, and goals are not defined clearly (or at all) and you're doing "feature-driven" development, you have a lot less chance of succeeding, and a lot greater chance of building stuff that has low utility to your customers. You also take on the code and design debt that comes with building junk that has to be refactored (or worse, a small handful of noisy customers likes what you did, and now you have to justify pulling the plug on their value so you can focus on the majority of customers who will sustain your service).

Your goal as a product owner/manager in the data/analytics space is not to "display the data we're collecting." The job is to figure out how the data can be turned into a user experience that provides customers with value (as perceived by them). A big red flag for me on a consulting engagement is when the stakeholders I'm talking to cannot articulate the top 5-10 goals and use cases the service is supposed to facilitate. I call these the benchmark use cases; think something like, "a regional manager can identify which locations need attention this week within two minutes of logging in." If you can't state yours, and you're the business stakeholder, how can your team possibly be successful?

How can you know if your service's design is right?

How can you even measure the design/service for effectiveness when you don't know what a pass/fail UX looks like?

You can't. 

If your team doesn't know where the goalpost is, or the difference between good and bad UX, you're not likely to succeed. Your product/business/UX team needs to be able to clearly state these benchmark use cases if you want to have a design that is obtainable, useful, usable, and measurable. If you skip this and just start making stuff, you'll just pay for the mistakes on the backend in the form of rewriting code, dealing with customer complaints, or in many cases: SILENCE. It costs more, takes more time, and is more frustrating for everyone.

Spend the time to make sure the top of your product development process begins with clear benchmark use cases, and your engineers and designers will have a much better chance of delighting your customers.

Reader questions answered: “what are your top concerns designing for analytics?”

Today I want to respond to a reader who answered a previous email I sent you all about your top concerns designing for analytics.

Here's Évans' email:

+++++
In analytics, it’s not like a CRUD [Create-Read-Update-Delete] with a simple wizard-like workflow (Input - Validate - Save). It’s kinda hard to keep the user focused when there are so many things to see at the same time, from different angles, to make a decision.

So, for me, the #1 concern is: how can you keep the user focused on what he has to do.

When working with web applications, we can’t show all of the data, we need to limit the results. Otherwise, timeouts will occur or worse, it will use all of the client mobile data 🙂 So, a lot of this data is divided into different “pages”. Sure, we could add “sugar” on the dashboard and bring different “pastry-charts”, but it is not always desired by the client. When they know what data they have, they will prefer to "cross join" the full data. Maybe we should think outside-the-box? One of my colleagues brought the idea of a “shopping cart” to pick specific data from these different “pages” to work with them after… we will try it eventually 🙂

Hope it could help!
Évans

+++++

So, I will tackle this the best I can, not knowing anything about the product/solution he is working on, the team, or their success metrics. I will make some assumptions as we work through the separate parts of this email that I can understand:

----

Évans: It’s kinda hard to keep the user focused when there are so many things to see at the same time, from different angles, to make a decision. So, for me, the #1 concern is: how can you keep the user focused on what he has to do.

Brian: So, why are there so many things to see at the same time? What informed that original design choice? How does the team know that the user is having trouble focusing? Did they evaluate customers performing certain tasks/activities? Or is this a guess? If I take this statement at face value, it sounds like there are assumptions being made, which may or may not be true.

The thing is, we can get from a broad subjective opinion about the design to a specific, objective measurement of it. In plain English: if we know what the product is supposed to do for the customers, and we have measured customers performing those tasks, we can now objectively rate whether there is actually "distraction" in the UI and can then inform our future decisions.

If the situation is that there are "lots of use cases," then this comes back to understanding your customer, and making hard choices about what is most important. This means using the eraser as much as the pencil, and understanding that the product is not going to support every use case equally well. Design is about making decisions, and the team needs to decide which needs the solution is going to best satisfy, and understand that it may mean making other use cases/features more difficult for customers to ensure that the core values are not compromised. There is no magic solution to "doing it all well" with large analytics solutions and data products. I typically advise my clients to get agreement on a few core use cases/product values, and focus on making those great, before worrying about all the other things the product does.

----

Évans: When working with web applications, we can’t show all of the data, we need to limit the results. Otherwise, timeouts will occur or worse, it will use all of the client mobile data 🙂  ... One of my colleagues brought the idea of a “shopping cart” to pick specific data from these different “pages” to work with them after… we will try it eventually 

Brian: So, while it is great that there is some concern being given to practical things such as mobile-data use/cost, remember this: I don't know of any analytics solution where the goal is to "show all of the data." Regardless of technical issues around timeouts or delivering too much data to the client/browser, most users don't need or want this. (Of course, if you *ask* customers if they want this, almost everyone will probably tell you they do want "all the data," because they won't be convinced that you can possibly design for their needs and loss aversion kicks in. This is a great example of why you have to be careful asking customers what they *want*.)

It's the product team's job to figure out the latent needs of users, and then to design solutions that satisfy those needs. Most customers don't know what they need until they see it.

What it sounds like overall is that the team is "guessing" that it is hard to focus, and they need to chop up data/results into sections that are more consumable. I don't know what that guess is based on, but it sounds like an assumption. Before writing any more code or designing any more UIs, I would first want to validate that this is actually a problem by doing some end-customer research to see where "too much unrelated data" got in the way of users successfully completing specific use cases/tasks that we gave them. Once that is done, the team can then evaluate specific improvement tactics such as a design using the "shopping-cart" idea.

Let me comment briefly on the shopping cart as an aside. Generally speaking, the cart idea sounds potentially like "offloading choices onto users" and a crutch for not making good default design decisions. I see this a lot. That said, with the little information we have, the tactic cannot be fairly judged. My general rule around customization is that it can be great, but it should only come after you have designed some great defaults for users. More often than not, customization comes up because teams do not want to spend the time to determine what the right default design should be, and they assume that a customizable solution will solve everyone's problems. Remember: customers usually don't want to spend time tooling around in your product. Your goal is to decrease customer tool time, and increase customer goal time.

----

Évans: "...we will try it [the cart, I assume?] eventually "

Brian: So, a decision to "try the cart eventually" brings up the concept of risk/reward and the role design can play in decreasing your product/solution's risk to your business (or to customers).

"Trying it" sounds quite a bit like guessing.  Instead of guessing, their team can reduce risk and inform their current state by having success criteria established up front, and measuring their current state of quality. This means running users through some benchmark tasks/use cases, and having some sort of basic "score." From there, they now have a baseline by which to later evaluate whether the new design with the "cart idea" improved the baseline or not. They can design the shopping cart idea out, and then run the same test/tasks against it with users to see if the cart idea is having the impact they want. For example, they might want to reduce task completion time by X% for a specific, high-frequency use case. They can time this task right now, and then time users doing the same test with mockups to see if the cart idea has merit and is decreasing the customer's time-to-completion.  The point here is that "improvement" is subjective...until your team makes it objective. 

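To make that concrete, here's a minimal sketch (my own illustration, not something from Évans' team) of what turning "improvement" into an objective measure might look like. The task times, participant count, and 20% target are all hypothetical:

```python
# Hypothetical example: comparing baseline task-completion times against
# times measured with a mockup of the new "cart" design.
baseline_seconds = [148, 132, 171, 155, 160]    # current design, 5 test users
new_design_seconds = [110, 125, 102, 131, 118]  # same task, cart mockup

def mean(values):
    return sum(values) / len(values)

baseline = mean(baseline_seconds)
new_design = mean(new_design_seconds)
improvement = (baseline - new_design) / baseline  # fractional time reduction

TARGET = 0.20  # "reduce task completion time by X%" -- assuming X = 20 here
print(f"Baseline: {baseline:.0f}s  New design: {new_design:.0f}s  "
      f"Improvement: {improvement:.0%} (target: {TARGET:.0%})")
print("Meets target" if improvement >= TARGET else "Does not meet target")
```

The specific math isn't the point; the point is that once a baseline and a target exist, the team is comparing numbers instead of opinions.
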
Note that I also said they need to "design the [cart] idea out" and not "build the idea." Better design decreases risk. It may also save you time and money in the long run. You spend less time coding the wrong stuff, building the wrong architecture, and shipping out solutions that do not work well for users. These days, you can sometimes deploy code without much up-front design and "try it," but in my experience, it is very rare to remove features until a bunch of them constitute a large UX/UI mess. Additionally, many teams simply do not evaluate the impact of their latest features such that they can fairly define what "well" means. They just move on to the next feature because chances are, they already spent way more time just getting the first version of the last feature out, and the product owner is already concerned about the next thing in the backlog.

For larger projects/features like this cart idea (which I perceive to not be a trivial engineering effort), I would recommend de-risking it by at least doing some customer interviews. Doing 1x1 interviews will give some broad input that might inform the "cart" idea. It's not as good as having benchmarks and success criteria established, but that does take more effort to set up, and the team may not be ready to do that anyway. If they have never engaged with customers 1x1 before, I would suggest they take a smaller baby step and just start having conversations. Here's a recipe to follow:

RECIPE:
Contact 5–8 end users, set up some 1x1, 60-to-90-minute video/screen-sharing conversations, and start asking questions. If you aren't sure where to start with questions, spend 25% of the time asking the users to define their job/role/responsibility and how it relates to the product. Then spend the remaining time asking these questions:

  1. "What was the last thing you did in the software? Can you replay that for me?"  (You should welcome tangents here, and ask the user to speak aloud as they replay their steps)
  2. "Can you recall a time where the product really helped you make a decision?"
  3. "What about a time where the data failed to help you make a decision you thought it would assist with?"
  4. Because he mentioned mobile, I would ask the users to talk about "when" they are using the product. Try not to lead with "do you use the product on mobile or desktop;" you want to infer as much as possible from the actual experiences they describe.

Facilitation Tips:

  • Avoid the temptation to ask what people want, or to put users in the position of being the designer. While there are some ways a skilled researcher can use these to the benefit of the study, for now, I would focus on getting customers to talk about specific scenarios they can replay to you on a screen share.
  • Do ask "why did you do that?" as much as you can during the study. It's also a great way to keep a quiet participant engaged.
  • Understand that the interview should not be a survey and your "protocol" (question list) is just there to keep the conversation going. You are here to learn about each customer, individually. Keep the customer talking about their specific experiences, and be open to surprises. One of the best things about this type of qualitative research is learning about things you never knew to ask about. 
  • ​If you get pushback from stakeholders about the things you learned and people don't believe you because "you only talked to 5–8 people," then ask them "how many people would we have to talk to, to convince you about our findings?"  Any conversation is better than none, and there is no magic number of "right people."  You can learn a TON from a few interviews, and for high-risk businesses with thousands or millions of customers, you can also use the findings of the small study to run a large-scale quantitative survey. But, that's a whole other topic 😉

Make interviews with customers a routine habit for your team and get your whole product team (managers, stakeholders, UX, and engineers) involved. If you aren't talking to end users at least monthly, your team is probably out of touch and you're mostly designing and building on assumption. That method of developing products and solutions is higher risk for your business and your customers. 

Now, go forth, interview some customers, and start learning!

Video Sample: O’Reilly Strata Conf

This is a recording of my presentation at the O'Reilly Strata Data Conference in New York City in 2017.

Do you spend a lot of time explaining your data analytics product to your customers? Is your UI/UX or navigation overly complex? Are sales suffering due to complexity, or worse, are customers not using your product? Your design may be the problem.

My little secret? You don't have to be a trained designer to recognize design and UX problems in your data product or analytics service, and start correcting them today.

Want the Slides?

Download the free self-assessment guide that takes my slide deck principles and puts them into an actionable set of tips you can begin applying today.

UI Review: Next Big Sound (Music Analytics) – Part 1

Today I got an interesting anomaly email from a service I use called Next Big Sound. Actually, I don't use the service too much, but it crosses two of my interests: music and analytics.

Next Big Sound aggregates music playback data from various music providers (Spotify, Pandora, etc.) and also, apparently, tries to correlate changes in music plays with social media events happening in the real world (probably so you can see if a given Event X generates a change in Plays). In addition to design consulting, in my "other life," I lead and perform in a dual-ensemble called Mr. Ho's Orchestrotica, which has a few albums out that are available on Pandora.

Pandora is one of the available data sources that Next Big Sound monitors. Next Big Sound apparently detected an abnormal increase in plays on Pandora and alerted me via email. Check it out below:

Image

On first glance, I like the supporting evidence here (chart), and the lead-in text tells me they are tracking what "normal" plays are such that they can alert me on abnormal increases. At first, I wondered why they were showing me some of my own social media posts, but then I realized they were trying to help me correlate whether any social media activity may have corresponded with the increase in plays. This is a great example of where they know their software probably cannot draw a literal causation relationship, but they can help users correlate and potentially find a causation.

Incidentally, I actually don't care much about how many playbacks the Orchestrotica gets on streaming services, as it's not a KPI for my group, but I found this a nice way to help artists and labels (especially artists working more in the pop/entertainment world) understand what is going on with fans of their music, what is getting traction, etc. In this case, there was no correlation; the social posts from my personal/artist social media accounts had nothing to do with Orchestrotica activities for the most part, but I still liked the UX.
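As an aside, Next Big Sound hasn't said how they define an "abnormal increase," but conceptually, the simplest version of this kind of alerting compares the latest value to a recent baseline. Here's a rough, purely illustrative sketch (the play counts and threshold are made up, not their method):

```python
import statistics

# Made-up daily play counts; the last value is the spike to be detected.
daily_plays = [42, 38, 45, 40, 44, 39, 41, 43, 37, 40, 118]

baseline = daily_plays[:-1]
latest = daily_plays[-1]
normal = statistics.mean(baseline)
spread = statistics.stdev(baseline)

THRESHOLD = 3  # alert when the latest value is > 3 standard deviations above normal
if latest > normal + THRESHOLD * spread:
    print(f"Abnormal increase detected: {latest} plays vs. a normal of ~{normal:.0f}")
```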

So, what do you think "Create a Graph" does?

I wondered too. Stay tuned for Part 2 to find out.

Does your app/product provide any anomaly notifications like this? I would love to hear about it. Can you send me an example? Email it to brian@orchestrotica.com

“Post-truth,” data analytics, and omissions–are these design considerations?

Post-truth. The 2016 word of the year.

Yikes for some of us.

This got me thinking about UX around data, analytics, and information, and what it means when we present conclusions or advice based on quantitative data.

Are those "facts"?

If your product generates actionable information for customers, then during your design phase, your team should be asking some important questions to get the UX right:

  • What risk is there to our customer if the data is wrong or could [easily] be interpreted incorrectly?
  • What information might we want to include to help customers judge the quality of the information the product generates?
  • If our technology analyzes raw data to provide actionable information, are there relevant analyses that the product did not run that the customer might need to contextualize the conclusions drawn?
  • Is our product (and company) being genuine, honest, and transparent when appropriate?
    (That's how I roll at least, and few scenarios suggest this ever is bad advice.)
  • Is the display of supporting data considered and as unbiased as possible?
    (Notably: did you design the presentation of the information before coding it, or did you just dump it into a charting tool?)

Part of getting a product's design and UX right is knowing what questions to ask.

Let's take a quick example many of us without pensions can relate to: retirement savings.

Let's say you work at a financial institution and you're supposed to design an online tool that can help customers understand how much income they need for retirement, and specifically, what monthly savings target they should have in mind to reach that goal. A wise design-thinking product owner will be considering issues beyond how the UI works, the sliders and input fields on the forms, and the way the output charts look.

If we're talking about "truth" in the context of design, I'd hope the product team considered:

  • How confident are the displayed estimates?
  • Since this is a predictive tool, did the app run more than one type of simulation before generating advice? (See the short sketch after this list.)
  • Did the app factor in unique characteristics of the user, such as their own behavior to date (if known)?
  • Does the design clearly mention relevant variables that the tool cannot control for, and also how much those variables might affect the predictions that are shown?
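
Here is the promised sketch: a rough, hypothetical Monte Carlo-style projection (my own illustration, not any real institution's method) showing why simulation matters for the first two bullets. Instead of displaying a single predicted balance, the tool can report a range of outcomes and be honest about its confidence. All figures below are made up:

```python
import random

def simulate_balance(monthly_saving, years=30, mean_return=0.06, volatility=0.12):
    """Project one randomized market scenario for a retirement balance."""
    balance = 0.0
    for _ in range(years):
        annual_return = random.gauss(mean_return, volatility)
        balance = balance * (1 + annual_return) + monthly_saving * 12
    return balance

# Run many scenarios and report a range rather than a single "truth."
runs = sorted(simulate_balance(monthly_saving=500) for _ in range(5000))
p10, p50, p90 = runs[len(runs) // 10], runs[len(runs) // 2], runs[9 * len(runs) // 10]
print(f"Pessimistic (10th percentile): ${p10:,.0f}")
print(f"Median outcome:                ${p50:,.0f}")
print(f"Optimistic (90th percentile):  ${p90:,.0f}")
```

A design that surfaces only the median hides the spread; whether, and how loudly, to show the other numbers is exactly the kind of question this list is getting at.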

How much, and how loudly, a tool answers these questions depends on the content, risk, customer problems, and needs.

Sometimes these things don't need to be "answered" literally in ink because not all customers will care, or they might just assume that your calculator already does all this magic.  And, there are times when all the ink might just be noise (e.g. weather forecasts).

All that said, I am not sure "post-truth" fits in anywhere with good product design.

How can better design save your business $10,000 this week?

You can save $10,000/week pretty easily through better design.

One of the values that clients don't always understand is that good design not only improves customer experience and drives revenue, but it also reduces time wasted on engineering the wrong product/feature/design. My solution to this is to get design involved ahead of engineering as much as possible, to inform what tech actually needs to be built. However, if you don't have design resources, you can do it yourself.

Low-fidelity sketching is the fastest and easiest form of design to get going with.  Find some paper, or my preferred venue (a room with a whiteboard), and start! Jam the ideas out using "free" ink. Don't worry about doing it "right." Try several solutions before committing to anything. Work with a partner. But, don't start with code:

Image

One of my clients, DellEMC, does a lot with data presentation in their products, and I did about 10 layout sketches before I committed myself to a design direction (pictured above). One of my product manager stakeholders and I designed some basic data visuals we liked (right), but the layout wasn't feeling quite kosher, so we sketched out several different combinations before committing to a direction. In the end, my design went even further in a new direction, but it was informed by the direction we chose through exploring together on the whiteboard.

Doing this type of work with live UI code is nuts and a large waste of time and money. 

A developer making a $120,000 annual salary incurs roughly a $2,300 weekly cost in your business on salary alone. If you have 4 developers involved, that's roughly a $10,000/week burn rate once you factor in benefits and overhead. So, for every week that they don't spend coding the wrong design, you're pocketing $10k and increasing the velocity of your product development.
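
If you want to sanity-check that figure yourself, here's the back-of-the-envelope math (salary only; all numbers illustrative):

```python
salary = 120_000           # annual salary per developer
weekly_cost = salary / 52  # roughly $2,300 per developer per week
developers = 4
# Salary-only burn is a bit over $9,000/week; benefits and overhead push it past $10k.
print(f"Weekly burn (salary only): ${weekly_cost * developers:,.0f}")
```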

So, go forth, and sketch!

Designing an Effective Honeymoon Phase in Your Product (Part 1)

Let's talk about honeymoons for a second. I don't mean that trip to Cancun, but instead, those first couple weeks of product use where people are evaluating your service, and hoping to realize some of the values that drove them to become your customer in the first place.

Some people call this "onboarding," but I like to think of it as something that happens for a longer period than the customer's first or second session using your product.

Consumer products usually have a much easier time designing successful user onboarding processes, as there are typically a lot fewer considerations, dependencies, or gaps that must be filled before the product's value is obvious. This is often not the case with B2B and enterprise software, especially in the data and analytics space. In order to delight, you need to look at the entire time period (could be days to weeks) between the setup/install/registration and what I call the "Nth visit" (the point at which your service's value is obvious to customers). More in Part 2...

Here are (5) dashboard design suggestions you can start to apply today

If you're in the analytics space, then you almost certainly have at least one "dashboard" customers use.  I generally define dashboards as the landing page for your product when people log in, and so keep that in mind as you browse today's design suggestions:

  1. Your dashboard is probably too low on information density. 
    Dashboards with just a few large graphics/charts/KPIs are often a sign of low information density. Most of the time, a design looks misleadingly clean and simple because we're evaluating it on the surface. A huge donut chart with 3-5 values, no calls to action, and no comparisons may not be helping your customers understand the value or insight behind the analytic. Chances are, your design needs additional comparison data in order for the primary analytics to be meaningful. One notable exception here is something like an "always-on" dashboard; the kind of UI that is projected up on a monitor in a room for all to see (in theory). (Although...when was the last time you saw anyone actually look at one of those omnipresent dashboards that's always up on the monitor?) These types of dashboards are best left for another lesson.
  2. Most charts and data graphics can be shrunk and retain the same information density. 
    As a general rule, you can, and probably should, shrink your charts and data graphics down to the point where they are still legible. This allows more space for comparison information to be added, or for neighboring components to be visible within the user's viewport (the visible area of a given page or screen, as constrained by the device's resolution/size).
  3. A good dashboard usually will promote information that requires user attention, sparks relevant curiosity or insight, and facilitates completion of frequently repeated tasks.
    Instead of thinking about the dashboard as a dumping ground to display the latest values for each of your widgets, consider modeling the design around "What would drive somebody to come back here? What new information can we surface here that can help them? Did we provide links/buttons/affordances to drive people to the things they come to the app/site/product to do on a regular basis?"
  4. Consider usage frequency within your design.  
    One of the easiest things you can do to determine how dense, insightful, and rich your dashboard can be (without overwhelming the customer) is to understand the dashboard's usage frequency. In general, the more routinely your dashboard is used, the greater the information density you can (and probably should) provide. Alternatively, if users are only peeking at it monthly/quarterly, you probably need to balance both information density and how much you can assume about the users' "given knowledge" when they're taking in the latest data. If you have particularly technical information, a dashboard that supports infrequent visits may need reminders about terminology, accepted/normal ranges of important KPIs, etc. A simple example of this would be something like a credit score: if you can count on the customers knowing what a credit score is, and they understand the qualitative ranges, then you might be able to get away with just showing the current score number (e.g. "790"). However, if you know the design will be used infrequently, then displaying qualitative ranges next to the quantitative values helps make the design more useful (i.e. "790 - Very Good"). There's a short sketch of this idea after this list.
  5. Consider emailing your dashboard.
    ...but heed this advice carefully!  First, if you already have a killer dashboard, then it may make sense to routinely email some or all of the dashboard information to users instead of asking them to log in. Turn that dashboard into a report card your customers can rely on. Additionally, if you have actionable insights in the email, then let users link directly to the place that needs their attention, even if that means deep-linking into your product and bypassing the current dashboard screen displayed in the browser/software by default. On the other hand, if your dashboard design is lacking or immature, then don't waste the time and resources building an email version of it hoping to see better results. Designing and coding rich format emails that render your design intent properly is still very taxing, and it is even more complicated when it comes to UIs that involve charts and data graphics. Wait until you have a great design/UX before investing in email delivery, or consider altering the design you'll deliver via email. Sending out a low-value dashboard every month just reminds customers to question why they're in business with you in the first place.
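
And here is the short sketch promised in tip #4: a minimal illustration (my own, with approximate band boundaries, not anyone's production code) of pairing a raw score with a qualitative label so infrequent visitors don't have to remember what counts as good:

```python
# Approximate, illustrative score bands (highest floor first).
SCORE_BANDS = [
    (800, "Exceptional"),
    (740, "Very Good"),
    (670, "Good"),
    (580, "Fair"),
    (300, "Poor"),
]

def label_score(score: int) -> str:
    """Return the score plus a qualitative band, e.g. '790 - Very Good'."""
    for floor, label in SCORE_BANDS:
        if score >= floor:
            return f"{score} - {label}"
    return str(score)

print(label_score(790))  # 790 - Very Good
```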