From a data scientist on Medium: “It’s easy to not understand your customer’s needs.”

So a guy walks into a bar and starts talking about unsupervised learning...

Ok, not quite. Well actually, where I live in Cambridge, MA, that's not really so improbable 😉

I found this article on Medium interesting, written by a data scientist talking in part about why data science projects may not be working; one of the reasons he cited was "solving the wrong problem." He had come up with a new tool/method to solve an old problem, it didn't go over well with the users, and so he concluded:

...It’s easy to not understand your customer’s needs. Pick your favorite large company: you can find a project they spent hundreds of millions of dollars on only to find out nobody wanted it. Flops happen and they are totally normal. Flops will happen to you and it’s okay! You can’t avoid them, so accept them and let them happen early and often. The more quickly you can pivot from a flop, the less of a problem they’ll be...."

Dr. Jonathan Nolis, Data Scientist

I'm not writing this insight today to pick on Jonathan; after all, it looks like maybe he's based in AZ (my birth state) and hey: we're a friendly bunch! However, that first sentence (my bolding) isn't really accurate, and this is almost certainly part of what contributes to the 85% failure rate on data/analytics projects: no clear idea what the users need/want, and/or vague business objectives. That said, as stated in my comment on Medium to Jonathan, I would agree that this is not the primary responsibility of the data scientist.

If you're a biz stakeholder, product manager, or analytics leader, then you should be giving your product development team much clearer objectives. It is not difficult to understand customers' needs; you just need to regularly go out and talk to them. While there is definitely skill involved in conducting great research and extracting business objectives from stakeholders, you don't need heavy training to get started. You may not discover every latent need or problem to solve, but it's definitely better than not talking to them at all, or taking wild guesses. After all, at some point, too many "flops" start to add up financially and otherwise.

I understand that with certain types of machine learning, there is inherent ambiguity in what might come out the tailpipe. However, you can spend a little more time, and probably a lot less money, doing a little research before committing resources to implementing something that may have zero value to anyone.

Here's to a few fewer flops, even if we can't stop them all!


How to solicit *real* needs from users via UX research interviews

Readers of DFA know that I'm big on not immediately giving customers what they asked for, and instead asking the question "why" to learn what the real latent customer needs are. And for you internal analytics folks, remember your employees, vendors, etc. are your "customers" whether you think of them that way or not! Anyhow, some of you may be wondering why engagement is low, or you're not getting the results you hoped for. If you're not sure where to start, here is a super easy script:

  1. Recruit your customers for a 1-on-1, 30-60min screen-sharing meeting, or in-person meeting (even better). Tell them you're doing some customer research to learn about what is and isn't working with the tools/solutions you manage and work on. You can also share that no advance preparation is needed and let them know their feedback would be extremely useful in making your service more useful, usable, and productive in their work. Note: scheduling can take some time, and you can even outsource this effort. One other thing we sometimes do here with research is to avoid sharing the specific thing we're going to discuss, to avoid users doing "homework" ahead of time to familiarize themselves. This may not always be possible, but if you can obfuscate it a little, that is sometimes a good thing. Your customer is likely to feel like they are being "tested" during all of this, so your job is to help them learn that you're there to evaluate the service, and not them. Avoid using the word "test" and use the word "study."
  2. Open the session by asking them to tell you their background/bio. If possible, get permission to record audio/screen capture and mention it is only for internal review purposes. Ask the person, if you don't already know, what their overall job responsibilities are, and then ask how your service fits into that. At this point, the customer is self-reporting, so take this with a grain of salt. If they immediately start showing you interactions with your service, that is GREAT. Let them run wild, and keep asking questions that encourage them to demo the product back to you as they use it. Encourage them to "speak out loud" and give praise for their feedback. I usually end up repeating the phrase, "thanks, this is awesome feedback" 20 times in a session. Note that you aren't praising their specific actions: you need to say this whether they do the right or wrong thing with the service, because the feedback itself is what is being praised. Anyhow, chances are, after the short "bio" chat about their job responsibilities, they probably won't open up any tools as they will be expecting you to lead. As such, you now want them to open the tool and proceed further.
  3. Ask the customer to open up the service/tool you plan to discuss. Note: the study has already started at this very moment. Pay special attention to what you see them DOING, much more than what they are SAYING. Take note of things like:
    1. Was the service bookmarked in their browser or easily accessible?
    2. Did it look like they were fumbling around just to launch it (e.g. they haven't used it in a while, but don't want to admit it)?
    3. Was the login/password forgotten or not immediately accessible? These are all good signs that customers aren't utilizing the service.
    4. If they need help after a bit, help them, and state, "If you haven't used this in a while, it's not a problem. I can help you get access to the tool." Note that this is called an "assist," and you want to do this only after it is rather obvious the customer can't even get past the login. Typically, in research, your job is to avoid assisting.
      • Remember too, that this is NOT a training session but a discovery session to learn about what is happening in the wild when you aren't around.
      • Additionally, your goal isn't to scold them for not using your service, but to try to solicit useful information and honest facts from them. This simple act of opening/accessing the service is a great example of where Actions speak louder than Words. Your customer might have told you they "use it all the time," but in reality, if you see them fumbling to simply open your service, you can see that what they're saying may not be quite as true as what they are doing. Keep this concept of "doing" over "saying" in mind, as self-reported behavior is often very misleading. This is one of the core things that I see my clients/stakeholders getting wrong. You cannot necessarily believe customers/users' needs as verbally stated. They do not always know what they need, and their reporting of past behavior is often flawed. Which leads me to the next step: the recent-history question.
  4. "When did you last use the [service] if you can recall? Can you show me what you did specifically, speaking aloud as you go through it?" This question is specifically worded in such a way that you're not asking them in general how they use the tool, but instead, you are asking them to demonstrate a SPECIFIC use case they worked through to get some useful insights. This is much better as it forces them to use the service and show you their UX. You are likely to learn a ton here, and one of the best things you will learn is stuff you never even knew to ask about! You might see strange behavior patterns, ping-ponging between screens, opening up of external tools/spreadsheets, etc. This is all very good feedback.
    1. If the user fumbles quite a bit with your request and it's obvious they don't know how to use the service, it's ok to just tell them, "if you haven't been in here in a while, that's ok. Can you tell me what you think this service might be useful for? What might you be able to use this for?" At this point, you're now observing their clicks, and encouraging them to "keep thinking out loud." Note that this is unscripted intentionally, so you want to let them take tangents and follow their instincts. Your job is simply to collect information and not judge their skill with the service.
  5. At the end of the session:
    1. Invite them to ask you any questions they may have.
      1. If your service DOES have a way to solve the question they have, don't tell them this and instead ask, "Do you think there is a way to [do that task]?" Invite them to "try" themselves. If they get entirely lost, but your service does have this feature/need met, provide an assist, and then ask them to continue. Remember to encourage them to think aloud the entire time, and tell them, "we're here to evaluate the design, not you." Most customers feel like they are "dumb" when they fumble for too long (we all know that feeling when we can't open a simple bottle, or figure out how to open a door, or some other poorly designed system that seems like it should be easy!).
      2. If your service does NOT have a way to answer their need/question, encourage them to explain to you what end goal they have and what would make the service awesome. It can be pie in the sky; that's ok. What you want to avoid is encouraging them to start designing the system in that moment; instead, focus on what they, personally, would think is valuable. Users also have a tendency to want to speak for others and to think they are unique, so watch out for, "I think most people would X, but I probably wouldn't." You want to learn about what THEY would do, not what they think others would do, so keep coming back to them with things like, "that's great feedback, thanks. Can you tell me what YOU would need/do/want that's different from what you think everyone else needs? I am really curious about your own particular needs and it sounds like you think they might be unique."
    2. Thank them and ask them if you can be in touch with them again in the future as you integrate their feedback. Your job is to develop a long term relationship and let them know that you need continuous user feedback to make the service better, and that their feedback contributes to a better user experience. Most customers love helping out.

Need help? Set up a free micro-consultation call with me on my contact page.


Here are (5) dashboard design suggestions you can start to apply today

If you're in the analytics space, then you almost certainly have at least one "dashboard" customers use. I generally define a dashboard as the landing page for your product when people log in, so keep that in mind as you browse today's design suggestions:

  1. Your dashboard is probably too low on information density. 
    Dashboards with just a few large graphics/charts/KPIs are often a sign of low information density. Most of the time, a design looks misleadingly clean and simple because we're evaluating it on the surface. A huge donut chart with 3-5 values, no calls to action, and no comparisons may not be helping your customers understand the value or insight behind the analytic. Chances are, your design needs additional comparison data in order for the primary analytics to be meaningful. One notable exception here is something like an "always-on" dashboard; the kind of UI that is projected up on a monitor in a room for all to see (in theory). (Although...when was the last time you saw anyone actually look at one of those omnipresent dashboards that's always up on the monitor?) These types of dashboards are best left for another lesson.
  2. Most charts and data graphics can be shrunk and retain the same information density. 
    As a general rule, you can, and probably should, shrink your charts and data graphics down to the point where they are still legible. This allows more space for comparison information to be added, or for neighboring components to be visible within the user's viewport (the visible area of a given page or screen, as constrained by the device's resolution/size).
  3. A good dashboard will usually promote information that requires user attention, is relevant or insightful, and facilitates completion of frequently repeated tasks.
    Instead of thinking about the dashboard as a dumping ground to display the latest values for each of your widgets, consider modeling the design around "What would drive somebody to come back here? What new information can we surface here that can help them? Did we provide links/buttons/affordances to drive people to the things they come to the app/site/product to do on a regular basis?"
  4. Consider usage frequency within your design.  
    One of the easiest things you can do to determine how dense, insightful, and rich your dashboard can be (without overwhelming the customer) is to understand the dashboard's usage frequency. In general, the more routinely your dashboard is used, the greater the information density you can (and probably should) provide. Alternatively, if users are only peeking at it monthly/quarterly, you probably need to balance both information density and how much you can assume about the users' "given knowledge" when they're taking in the latest data. If you have particularly technical information, a dashboard that supports infrequent visits may need reminders about terminology, accepted/normal ranges of important KPIs, etc. A simple example of this would be something like a credit score: if you can count on the customers knowing what a credit score is, and they understand the qualitative ranges, then you might be able to get away with just showing the current score number (e.g. "790"). However, if you know the design will be used infrequently, then displaying qualitative ranges next to the quantitative values helps make the design more useful (e.g. "790 - Very Good"). See the short sketch after this list for one way to express that.
  5. Consider emailing your dashboard.
    ...but heed this advice carefully!  First, if you already have a killer dashboard, then it may make sense to routinely email some or all of the dashboard information to users instead of asking them to log in. Turn that dashboard into a report card your customers can rely on. Additionally, if you have actionable insights in the email, then let users link directly to the place that needs their attention, even if that means deep-linking into your product and bypassing the current dashboard screen displayed in the browser/software by default. On the other hand, if your dashboard design is lacking or immature, then don't waste the time and resources building an email version of it hoping to see better results. Designing and coding rich format emails that render your design intent properly is still very taxing, and it is even more complicated when it comes to UIs that involve charts and data graphics. Wait until you have a great design/UX before investing in email delivery, or consider altering the design you'll deliver via email. Sending out a low-value dashboard every month just reminds customers to question why they're in business with you in the first place.
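
To make the credit score example in suggestion 4 concrete, here is a minimal sketch of pairing a quantitative value with a qualitative label for infrequent visitors. The ranges, labels, and function names below are illustrative assumptions made up for this sketch, not an official scoring standard.

    # Minimal sketch: pair a raw score with a qualitative label so that
    # infrequent visitors don't have to remember what "good" looks like.
    # The ranges/labels are illustrative assumptions, not an official standard.
    QUALITATIVE_RANGES = [
        (800, "Exceptional"),
        (740, "Very Good"),
        (670, "Good"),
        (580, "Fair"),
        (300, "Poor"),
    ]

    def label_for_score(score: int) -> str:
        """Return the qualitative label for a numeric score."""
        for floor, label in QUALITATIVE_RANGES:
            if score >= floor:
                return label
        return "Unknown"

    def display_score(score: int, frequent_user: bool) -> str:
        """Frequent users may only need the number; infrequent users get context."""
        return str(score) if frequent_user else f"{score} - {label_for_score(score)}"

    print(display_score(790, frequent_user=True))   # -> 790
    print(display_score(790, frequent_user=False))  # -> 790 - Very Good

The same pattern works for any KPI where sensible ranges exist: show just the number to daily users, and the number plus its qualitative context to everyone else.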


Here’s a fast way to evaluate the utility of your dashboard design

Here's a super easy thing you can do today to evaluate your data product's dashboard. If you're displaying quantitative data of any sort, especially trends, then this will probably help you come up with opportunities to improve your design. Of course, testing the design with real users is always the best way to evaluate your product! With that, let's jump in.

It's quite easy, and I borrowed the technique from Tufte:

For any and all of your key UI widgets that summarize data or conclusions, read the data aloud and then ask yourself, "...as compared to what?"

If you cannot answer that question easily without leaving the dashboard or report, then you know you probably have room for improvement.

If you're going to tell the customer in a donut chart that the distribution of (3) values over some time period was "3, 17, and 80," then the question is, "as compared to what?"

Keep digging further:

  • Do I need to know what the previous values were? Over what period?
  • How likely is the customer to know these values as given knowledge? (e.g. I bet you know what your typical home temperature is, but do you know what the barometric pressure at home typically is? Don't assume one design pattern always works for all the data points.)
  • Is the absolute value of the data interesting, or is the change (delta) in these values what is interesting?
  • Could the data be presented in a qualitative way (e.g. "3 = great, 17 = so-so")?
  • Do I have to read or view a lot of ink to figure out, "hey, there's really not much new here to look at from last month/week/etc."?

If you're still stuck once you've asked this question against your data, here are some ideas you can use to inspire your design. Try comparing your metrics to:

  • My average, min, or max
  • Team/group/industry/competitor average/min/max/movement
  • My change since last period
  • My typical deviation / pattern
  • My business's cycles
  • A unique benchmark in your company, product, or the industry
  • An index you created
  • A SMALL, relatable unit people can grasp. In other words, showing $26,981,230.12 might be the real number, but printing $26.98M is easier to read.
  • Even better, showing that $27M as something like "2,000x the average value and 45x the #2 earner" puts that huge number into a relatable context (see the short sketch after this list).
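
Here is a minimal sketch of those last two ideas: shrinking a raw dollar figure into a small, relatable unit and expressing it as a multiple of a benchmark. The helper names and the benchmark value are assumptions made up for illustration; they simply reproduce the $26.98M and "2,000x the average value" examples above.

    # Sketch: compact formatting plus a relative comparison. Assumes you already
    # have the raw value and a benchmark (e.g., the average) to compare against.
    def compact_dollars(value: float) -> str:
        """Format a dollar amount into a small, readable unit (e.g., $26.98M)."""
        for threshold, suffix in ((1e9, "B"), (1e6, "M"), (1e3, "K")):
            if abs(value) >= threshold:
                return f"${value / threshold:.2f}{suffix}"
        return f"${value:,.2f}"

    def relative_to(value: float, benchmark: float, benchmark_name: str) -> str:
        """Express a value as a multiple of a benchmark."""
        return f"{value / benchmark:,.0f}x the {benchmark_name}"

    raw_value = 26_981_230.12
    assumed_average = 13_490.62  # illustrative benchmark, not real data

    print(compact_dollars(raw_value))                                # -> $26.98M
    print(relative_to(raw_value, assumed_average, "average value"))  # -> 2,000x the average value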

Now your turn: what comparison did I leave out that you've found useful to customers?


What can McDonald’s teach you about prototyping?

As both a musician and a product designer, I loved a scene in the recent movie, The Founder. This film discusses the rise of McDonald's restaurants, and how the company focused on its design and operations to enable speedy service to customers.

In this scene, the restaurateurs chalked (designed) a speed-optimized layout for a kitchen on a tennis court, with real employees literally walking through order/cook/delivery scenarios to help inform the ideal locations for food stations, fryers, refrigerators, cash registers, sinks, etc. It's design prototyping in action! They interacted (actively) with real kitchen workers, and they measured the design against success criteria (primarily speed, in their case). The owners didn't waste money and time building a new kitchen on day 1, only to learn the sink or fryer was in the wrong place.

You can design and prototype like McDonald's did with your data or analytics product too. Or, on just a single feature.

Before you invest tons of money into engineering, data science, architecture, data collectors, and all the other plumbing, get your design in front of some people, test it with some pass/fail criteria, and reduce the risk of launching a poor first iteration. The MVP mindset doesn't mean you have to, or should, take a WAG (wild-ass guess) and pray for results. That's just a waste of money and time. While I value the "just ship" mentality and listening for customer feedback after launch, you have to remember that not all of your customers are going to give you useful feedback, or feedback at all. It usually requires interpretation. When they do give you input, they're likely to share symptoms with you that do not directly identify the real problem you may have. They might say, "it's crowded by the fryer." You typically have to actively observe users to get to the real problems that need design changes ("the refrigerator location and door swing are causing personnel backups near the fryer"). Customers typically aren't going to offer the latter, nor are they trained to do so.

Watch the tennis court scene here: https://www.nytimes.com/2017/01/19/movies/john-lee-hancock-narrates-a-scene-from-the-founder.html?_r=0


How can good design help you avoid product bloat and drift?

Let's talk about your product's drift and keeping it in check.

As your product evolves, it will likely grow in size and get more complex over time. You'll listen to customers, adapt to their needs, and begin to encounter situations where different segments' needs are in conflict: you either dissatisfy one group of customers, or you attempt to satisfy everyone in your target market. More needs and audience types to satisfy means more UI. More UI usually means a more complicated UX (all you have to do to make it more complex is add more choices). More complex UX can lead to costly redesigns/rebuilds and dissatisfied customers. But there's a way to grow carefully without a lot of extra effort.

I like to use benchmark user stories to keep the design of the product in check.

Benchmark user stories are a set of 3-7 usage scenarios that–no matter what else happens–should be well served by your product's design. They represent those use cases that should not be complicated or impeded as the product evolves and grows. Until your whole team can agree on this set of sacrosanct stories, you're guessing, increasing technical and UI debt, and are more susceptible to chasing fads and squeaky customers whose needs may be quietly derailing you from your product's overall mission.

Before we dive in deep, note that you can use this benchmarking concept with a legacy product or a new product/project, so don't feel like you've missed the boat if you didn't start your project out this way.

Here's an example of a fictitious onboarding scenario for a data product. This product collects stats/metrics from a customer's data and tries to provide insights on that data, and so there is a fair amount of setup and configuration required before the analytics can kick in and yield value:

As a ____[one of your customer personas]____, once I've signed up for [your product], I imagine there is going to be some tricky setup to get all the data collectors in place prior to me getting much value out of the tool. I need good guidance on the order of those steps, when I should do them, and how to understand if the infrastructure is set up correctly. Once I know that, it would be great to understand how long it will be before the analytics kick in and I can start to realize ____[some value prop promised to the customer]_____. Since I know it could take days or weeks for enough historical data to be analyzed by your system, ideally the app would notify me when enough data has been analyzed and the first useful conclusions are available. And man, I would be stoked if there was a cool dashboard or report I could show my boss that begins to show the value of this tool immediately, given I had to really sell my boss on the purchase of this service. 

How can you use benchmark stories like this to your advantage?

  • Perform an audit of your product to uncover where your biggest gaps or friction exist. Use that as a baseline from which to improve.
  • Invite some real customers to go through the scenarios. Use your user study findings as a baseline from which to improve. Early on, this is also a great way to validate that you've actually chosen the right benchmarks. Ask the customers if these scenarios represent their reality.
  • Inform what work gets prioritized in your sprints (if you know the benchmark scenarios are performing poorly, should you really be adding a 7th new feature to benefit one squeaky customer?)
  • Prevent feature creep and drift within your product. Anyone in the org can pull the benchmarks out as a defensive weapon. "This proposed feature will make scenario #3 a lot harder. Are we sure we want to go there?"
  • When a new feature or idea will introduce a significant change or impact on your benchmarks, but the stakeholders believe it's the right thing to do, then you can use the benchmarks to establish that your product is making a conscious and deliberate pivot, and not just "drifting" into new territory to the confusion of the rest of your organization. If the benchmarks need to change, great, but your product team should be consciously making these decisions. "Yes, the dashboard is going to change to X and it will mean a serious change to the UX for Persona B who is (today) using it for Y purpose. We understand that, and need to rewrite our second benchmark story as it's no longer relevant to our new design." Remember to re-validate your benchmarks with the techniques above if you're going to change them significantly.

How I generate benchmark stories with my clients:

  1. Gather the product team (product owner/management (PDM), principal designers, UX researchers, possibly an engineering lead, and any additional stakeholders) in a working session with a whiteboard. It could take 4-8 hours spread over a few sessions to get final agreement. That's ok. I will also add that with data and analytics products, I almost always like to include an engineering representative as they typically have very valuable input. That said, keep the session focused on defining the scenarios and goals, and not the tactics and implementations that may exist (or need to exist).
  2. Choose a facilitator to lead the working sessions from the whiteboard. Ideally, everyone is at the board writing and participating, but sometimes it's easier with one facilitator.
  3. Propose, debate, and begin to draft the key scenarios together–everyone participates. You can start with shorthand such as "setup and onboarding" before going into the full detail shown in the above example. I like getting a shorthand list first, before fleshing out the details. Typically, PDM and UXD members will be (and should be) proposing the scenarios. If you're concerned about the session not being productive, consider starting with a smaller team (UXD+PDM) and then repeating the process with the larger group. I would stay at the "shorthand" level if you plan to do a pre-work session.
  4. How do you resolve prioritization debates during these sessions?
    You may find (especially within existing products) that product management, sales, support, engineering, and UXD have widely differing opinions on what constitutes the small list of stories worthy of benchmark status. Ultimately, I believe that PDM has the final responsibility to own and define the goals that will be labeled "benchmarks"; however, design and UX personnel should be "keeping their peers in check." Because PDM personnel can be strongly sales, marketing, or engineering-biased, UXDs can provide a great balance. If you haven't figured it out yet, I strongly believe that product managers and designers should be in lockstep and form one of the tightest duo relationships that exist within your organization. (For startups, I usually recommend turning that duo into a power trio that includes a design-thinking engineering/technical rep).
  5. Once you have the shorthand list, draft them into full prose format and detail–together. 
    I suggest remaining at the whiteboard for this process (not a computer). Your final deliverable will look something like a collection of paragraphs similar to the example above. Most stories should be 2-5 sentences in length, and you may want to title each of them so they're easy to refer to later. Don't spend the time typing them up electronically during this working session. Date the whiteboard, and take a photo of the stories so somebody can type them up cleanly later. You shouldn't need a follow-up session if you do it this way, as you already worked out the details in the initial working session.
  6. Post them in a public place and proselytize them to the rest of the organization members who contribute to the product.
    Oftentimes, "public" means a Wiki page or something along those lines. Wherever you post them, you want to be sure that individual contributors and managers are aware of these goals and are cognizant of the fact that their work should improve, and never impede, the benchmarks. If you have the resources, you might even consider producing some cartooning or visual aids to accompany the stories (and make them more memorable to the rest of your team).


What’s the #1 way you can simplify your service?

Ok, you probably know this one, but let's dig in a little farther.

I recently started to explore using the TORBrowser when surfing on public wi-fi for more security (later finding out that using a VPN, not TOR, is what enables safer surfing). However, the process of downloading and trying out the TORBrowser provided me with a golden example of what you should not do in your product.

The very first screen I saw when I launched TOR was this:

[Image: the TORBrowser's first-launch network settings screen]

So, what's the big deal here? First, I will share the answer to today's subject line with you:

Remove everything that is not necessary.  

Yeah, yeah, you probably have heard that before. Famously (and perhaps apocryphally), the pope asked Michelangelo how he knew what to carve while creating the statue of David, and his response was along the lines of, "I removed everything that wasn't David." Nice.

Are you removing the cruft and noise from your product?

If we take this thinking further, I would say that today's core takeaway for you is to "remove choices by making good assumptions in your product, wherever possible." You might be wrong sometimes, but you'll be right a lot of the time.

Jumping back to the TORBrowser UI example above, there is more you can learn from their design choices:

  1. This UI says, "This [option] will work in most situations." Well then, why isn't this automatically selected as a default choice?
    Does this screen now seem necessary to you? Why doesn't the product just "try" that setting by default, and present the other option as a fallback if the default fails? Nobody downloaded the TORBrowser with a goal of "setting it up" with the right networking settings. This entire step is unnecessary; the UI literally says the default will work "in most situations."
  2. Right information...at the wrong time. 
    I haven't needed to use this pane yet as the default setting worked (surprise!), but it's an example of the developers trying to present helpful information. That's good. The design problem is that it's appearing at the wrong time in my experience. I don't need this right now, and I don't even want to think about how the networking is configured. It's completely irrelevant. Are you presenting choices at the right time in your product?
  3. Most users don't care how your software works; don't expose the plumbing. 
    There are sometimes exceptions to this for certain technical products, but even when there are, once most users have "learned" what they need to learn about the plumbing, it quickly becomes irrelevant. The value has to shine, or people stop paying for the service. That includes products built for technical audiences.
  4. This UI and UX is not fun at all...especially as a first impression.
    It's a needless distraction, it's not fun, and it's got me focused on, "how hard will it be to get this app working?"
  5. The visual design attention (or lack thereof) is undermining the mission of the product. 
    This is the hardest one to teach, but a combination of graphic design choices (probably unconscious ones) here contributes to this UI not feeling particularly safe, secure, and careful. The goal of TORBrowser is to "protect" the user. If you think of words like protection, precision, stability, and safety, then the visual design should reinforce these ideas. The topic of graphic design is hardly something to be captured in an email, but I can leave you with a few suggestions and considerations. Refer to the diagram for a detailed analysis:

    [Image: annotated analysis of the TORBrowser UI's graphic design choices]

    1. What could be removed from the TORBrowser UI sample?
    2. Are the invisible things (like padding/margin whitespace choices) consistent, meaningful, and deliberate?
    3. While a single graphic design choice sometimes has the power to impact usability or even the financial bottom line, typically it is the sum of numerous small design choices that accounts for the overall perception of your product's quality and aesthetic.
    4. It's possible to follow "all the rules" and still not have a great product aesthetic or utility. (That's why we have designers.)


Beware the dreaded “reporting section” in your analytics service

"That stuff probably belongs in the reporting section."

I've heard that one before.

There's probably a better approach.

Remember: it's not really about "analytics" -- it's about providing information to help your customers make better decisions.

Shoveling your analytics into the "reporting section" can easily turn it into a dumping ground for "stuff that might be useful" and "stuff people look at once in a while." The fact that your "reporting area" might just contain longer-term historical data, or more facets and data points, doesn't mean relegating content to this dumping ground is solving anything for your customers. I often find little attention is given to the content that lives in there, and oftentimes the content really shouldn't live there. Engineering decisions shouldn't be driving where content fits in your information architecture; nor should the quantity of historical data or which data source feeds that section of the product.

You still need to figure out what the use cases are. Are they really going to "run a report" and be done? What earlier tasks had to happen before that reporting would be helpful? What tasks/activities are they going to do after they run the report? Are those tasks part of your product/service or do they take place elsewhere? Will users be coming and leaving repeatedly over the course of performing one action?

Your design needs to account for users' needs and the tasks and activities that support those needs.


What can my teapot teach you about designing for analytics?

My teapot, or rather the water heater, helps me make great tea, based on the type of tea I want to drink.

It also was a reminder for me about how good design means translating quantitative values into qualitative values people can relate to:

[Image: the water heater's control panel, with temperature presets labeled by tea type]

As a tea drinker, my goal isn't to heat the water to 175 degrees.

It's to enjoy the best green tea I can make.

The lesson here can be translated to charts, analytics, and messaging that your product's customers interact with.

Are you just dumping numbers on them, and triggering actions or messages based on threshold values etc? Or, do you translate the quantitative into useful, qualitative labels like this water heater?

Granted, some complex software systems and analytics cannot be watered down to this level of simplicity, but the interface does a good job of helping me to achieve my actual goal. 

In the next edition, we will talk about a small design tweak that could make the interface for this water heater even better. Can you guess what it is?

It comes down to "loudness."

I often talk to my clients about "loudness" on the screen (boldest, biggest, etc.). What data is screaming for attention? What is subservient? Are things balanced properly?

In this case, the proper green tea water temperature is 175 degrees. However, the 175 label is "louder" than the "green" label. This suggests that I should care more about the quantitative value (175) than the fact that this setting is what makes good green tea.

Making "green" the louder label, and the "175" a bit quieter, would improve this interface for the average tea drinker. It's a small tweak, but when you have a rich UI with a lot of data, all of these minor UI details add up and either compliment or hinder your product.

If you need help designing your own water heater interface, or your software product, you can schedule a free 30-minute consultation with me.


Here’s one of the simplest ways to simplify complex analytics

This design "no-no" appears almost every time a new client [with a product that displays analytics] asks me to review their UI/UX.

More often than not, I'm not provided with any relevant user tasks/usage contexts by which I can do my evaluation, but clients still want my opinion on what could be better, or what they're doing wrong.

Inevitably, the UI will have some KPIs (key performance indicators) presented on the screen, perhaps as big numbers or histograms. The data might be I/O operations per second. Or, number of new subscribers. Or, how much energy was saved last month. However, even if I don't fully understand the domain or KPIs, there is often a very good chance that something is missing in the UI that could make the UX better for the customers intended to benefit from this information:

Providing useful, neighboring comparisons.

Did you know our US National Debt as of this writing is $19.4 trillion?

usdebtclock.org is a fairly ugly but interesting website that gets a few things right, and a few things wrong. Here's a grab from one corner of the page that we can break down for a moment. For now, I'm going to ignore the interaction design, improper display of numerical data, and aesthetic issues with the site, and talk for a moment about how they did and didn't use comparisons effectively:

[Image: screenshot of one corner of usdebtclock.org]

  • The national debt was broken down into "per citizen" and "per taxpayer." $19T is hard to fathom, but $59k is relatable: it's about a decent, middle-class annual salary in many parts of the USA. That's a useful comparison that just took some division (see the short sketch after this list).
  • They could also have used a proxy for $59k, perhaps substituting a piece of merchandise such as "a new, mid-range BMW."  How might the information feel more relatable if somebody told you the debt was like "every citizen in the US having a 100% unpaid loan on a new BMW they can't afford, and every taxpayer having 2.5 BMWs"?  Are there places in your UI where a proxy might help people understand the information better?
  • Remember how I mentioned "neighboring comparisons" above? There's something very fascinating about the national debt being 6x the federal tax revenue. However, this analysis is lost because the revenue and debt numbers were neither placed near each other, nor were they visualized (designed) in such a way that one appeared 6x larger than the other without doing mental math. This is a fantastic example of the difference between data and information. You have to apply design to turn data into useful information.
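
To underline how little math these comparisons require, here is a minimal sketch of deriving the "per citizen" and "vs. revenue" figures by simple division so they can be displayed right next to the headline number. The population and revenue inputs are rough, illustrative assumptions (chosen to be consistent with the ~$59k-per-citizen and "6x revenue" observations above), not authoritative data.

    # Sketch: derive neighboring comparisons by division so they can sit
    # right next to the headline number. Inputs marked below are assumptions.
    NATIONAL_DEBT = 19.4e12        # headline figure quoted above
    US_POPULATION = 325e6          # rough assumption for illustration
    FEDERAL_TAX_REVENUE = 3.2e12   # rough assumption for illustration

    per_citizen = NATIONAL_DEBT / US_POPULATION
    debt_to_revenue = NATIONAL_DEBT / FEDERAL_TAX_REVENUE

    print(f"National debt: ${NATIONAL_DEBT / 1e12:.1f} trillion")
    print(f"...about ${per_citizen:,.0f} per citizen")                   # ~$59,692
    print(f"...and roughly {debt_to_revenue:.0f}x federal tax revenue")  # ~6x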

You can learn more about using useful comparisons in my free self-assessment guide for products using analytics.
