Reader questions answered: "what are your top concerns designing for analytics?"

Today I want to respond to a reader who replied to a previous email I sent you all asking about your top concerns when designing for analytics.

Here's Évans' email:

In analytics, it’s not like a CRUD [Create-Read-Update-Delete] with a simple wizard-like workflow (Input - Validate - Save). It’s kinda hard to keep the user focused when there are so many things to see at the same time, from different angles, to make a decision.

So, for me, the #1 concern is: how can you keep the user focused on what he has to do.

When working with web applications, we can’t show all of the data, we need to limit the results. Otherwise, timeouts will occur or worse, it will use all of the client mobile data 🙂 So, a lot of this data is divided in different “pages”. Sure, we could add “sugar” on the dashboard and bring different “pastry-charts”, but it is not always desired by the client. When they know what data they have, they will prefer to "cross join" the full data. Maybe we should think outside-the-box ? One of my colleagues brought the idea of a “shopping cart” to pick specific data from these different “pages” to work with them after… we will try it eventually 🙂

Hope it could help !


So, I will tackle this the best I can, not knowing anything about the product/solution he is working on, the team, or their success metrics. I will make some assumptions as we work through the separate parts of this email that I can address:


Évans: It’s kinda hard to keep the user focused when there are so many things to see at the same time, from different angles, to make a decision. So, for me, the #1 concern is: how can you keep the user focused on what he has to do.

Brian: So, why are there so many things to see at the same time? What informed that original design choice? How does the team know that the user is having trouble focusing? Did they evaluate customers performing certain tasks/activities? Or is this a guess? If I take this statement at face value, it sounds like there are assumptions being made, which may or may not be true.

The thing is, we can get from a broad subjective opinion about the design to a specific, objective measurement of it. In plain English: if we know what the product is supposed to do for the customers, and we have measured customers performing those tasks, we can objectively rate whether there is actually "distraction" in the UI, and that can inform our future decisions.

If the situation is that there are "lots of use cases," then this comes back to understanding your customer, and making hard choices about what is most important. This means using the eraser as much as the pencil, and understanding that the product is not going to support every use case equally well. Design is about making decisions, and the team needs to decide which needs the solution is going to best satisfy, understanding that it may mean making other use cases/features more difficult for customers so that the core values are not compromised. There is no magic solution for "doing it all well" with large analytics solutions and data products. I typically advise my clients to get agreement on a few core use cases/product values, and focus on making those great, before worrying about all the other things the product does.


Évans: When working with web applications, we can’t show all of the data, we need to limit the results. Otherwise, timeouts will occur or worse, it will use all of the client mobile data 🙂  ... One of my colleagues brought the idea of a “shopping cart” to pick specific data from these different “pages” to work with them after… we will try it eventually

Brian: So, while it is great that some concern is being given to practical things such as mobile-data use/cost, remember this: I don't know any analytics solution where the goal is to "show all of the data." Regardless of technical issues around timeouts or delivering too much data to the client/browser, most users don't need or want this. (Of course, if you *ask* customers if they want this, almost everyone will probably tell you they do want "all the data," because they won't be convinced that you can possibly design for their needs, and loss aversion kicks in. This is a great example of why you have to be careful asking customers what they *want*.)

It's the product team's job to figure out the latent needs of users, and then to design solutions that satisfy those needs. Most customers don't know what they need until they see it.

What it sounds like overall is that the team is "guessing" that it is hard to focus, and they need to chop up data/results into sections that are more consumable. I don't know what that guess is based on, but it sounds like an assumption. Before writing any more code or designing any more UIs, I would first want to validate that this is actually a problem by doing some end-customer research to see where "too much unrelated data" got in the way of users successfully completing specific use cases/tasks that we gave them. Once that is done, the team can then evaluate specific improvement tactics such as a design using the "shopping-cart" idea.

Let me comment briefly on the shopping cart as an aside. Generally speaking, the cart idea sounds potentially like "offloading choices onto users" and a crutch for not making good default design decisions. I see this a lot. That said, with the little information we have, the tactic cannot be fairly judged. My general rule around customization is that it can be great, but it should only come after you have designed great defaults for users. More often than not, customization comes up because teams do not want to spend the time to determine what the right default design should be, and they assume that a customizable solution will solve everyone's problems. Remember: customers usually don't want to spend time tooling around in your product. Your goal is to decrease customer tool time, and increase customer goal time.


Évans: "...we will try it [the cart, I assume?] eventually "

Brian: So, a decision to "try the cart eventually" brings up the concept of risk/reward and the role design can play in decreasing your product/solution's risk to your business (or to customers).  

"Trying it" sounds quite a bit like guessing. Instead of guessing, the team can reduce risk by establishing success criteria up front and measuring their current state of quality. This means running users through some benchmark tasks/use cases, and recording some sort of basic "score." From there, they have a baseline by which to later evaluate whether the new design with the "cart idea" improved things or not. They can design the shopping cart idea out, and then run the same tasks against it with users to see if the cart idea is having the impact they want. For example, they might want to reduce task completion time by X% for a specific, high-frequency use case. They can time this task right now, and then time users doing the same task with mockups to see if the cart idea has merit and is decreasing the customer's time-to-completion. The point here is that "improvement" is subjective...until your team makes it objective.
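To make the baseline comparison concrete, here is a minimal sketch of the arithmetic involved. The timings are entirely hypothetical; the idea is simply that once you have timed a handful of users on the current UI and on the new mockup, the "improvement" becomes a number you can compare against your X% target:

```python
from statistics import median

def percent_improvement(baseline_times, new_times):
    """Percent reduction in median task-completion time (positive = faster)."""
    base = median(baseline_times)
    new = median(new_times)
    return (base - new) / base * 100

# Hypothetical timings (seconds) for one high-frequency task:
# five users on the current UI vs. five users on the cart mockup.
baseline = [210, 185, 240, 200, 225]
with_cart = [150, 170, 140, 165, 155]

print(f"{percent_improvement(baseline, with_cart):.1f}% faster")
```

Medians are used here (rather than means) so one unusually slow participant doesn't skew the comparison; with only 5–8 users per round, that robustness matters.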

Note that I also said they need to "design the [cart] idea out" and not "build the idea." Better design decreases risk. It may also save you time and money in the long run: you spend less time coding the wrong stuff, building the wrong architecture, and shipping solutions that do not work well for users. These days, you can sometimes deploy code without much up-front design and "try it," but in my experience, it is very rare for teams to remove features before a bunch of them accumulate into a large UX/UI mess. Additionally, many teams simply do not evaluate the impact of their latest features, so they cannot fairly define what "well" means. They just move on to the next feature because, chances are, they already spent way more time than planned getting the first version of the last feature out, and the product owner is already concerned about the next thing in the backlog.

For larger projects/features like this cart idea, which I perceive to be a non-trivial engineering effort, I would recommend de-risking it by at least doing some customer interviews. Doing 1x1 interviews will give some broad input that might inform the "cart" idea. It's not as good as having benchmarks and success criteria established, but those take more effort to set up, and it may be that the team is not ready to do this anyway. If they have never engaged with customers 1x1 before, I would suggest they take a smaller baby step and just start having conversations. Here's a recipe to follow:

Contact 5–8 end users, set up some 1x1, 60- to 90-minute video/screen-sharing conversations, and start to ask questions. If you aren't sure where to start with questions, spend 25% of the time asking the users to define their job/role/responsibilities and how they relate to the product. Then spend the remaining time asking these questions:
  1. "What was the last thing you did in the software? Can you replay that for me?"  (You should welcome tangents here, and ask the user to speak aloud as they replay their steps)
  2. "Can you recall a time where the product really helped you make a decision?"
  3. "What about a time where the data failed to help you make a decision you thought it would assist with?"
  4. Because he mentioned mobile, I would ask the users to talk about *when* they are using the product. Try not to lead with "do you use the product on mobile or desktop?"; you want to infer as much as possible from the actual experiences they describe.
Facilitation Tips:
  • Avoid the temptation to ask what people want or putting users in the situation of being the designer. While there are some ways a skilled researcher can use these to the benefit of the study, for now, I would focus on getting customers to talk about specific scenarios they can replay to you on a screen share.
  • Do ask "why did you do that?" as much as you can during the study. It's also a great way to keep a quiet participant engaged.
  • Understand that the interview should not be a survey and your "protocol" (question list) is just there to keep the conversation going. You are here to learn about each customer, individually. Keep the customer talking about their specific experiences, and be open to surprises. One of the best things about this type of qualitative research is learning about things you never knew to ask about. 
  • If you get pushback from stakeholders about the things you learned, and people don't believe you because "you only talked to 5–8 people," then ask them "how many people would we have to talk to, to convince you about our findings?" Any conversation is better than none, and there is no magic number of "right people." You can learn a TON from a few interviews, and for high-risk businesses with thousands or millions of customers, you can also use the findings of the small study to run a large-scale quantitative survey. But, that's a whole other topic 😉
Make interviews with customers a routine habit for your team, and get your whole product team (managers, stakeholders, UX, and engineers) involved. If you aren't talking to end users at least monthly, your team is probably out of touch and you're mostly designing and building on assumptions. That method of developing products and solutions is higher risk for your business and your customers.

Now, go forth, interview some customers, and start learning!

Need to improve the design of your analytics or data product?