What was wrong with this founder's SaaS analytics UI?

"Can you take a look at my UI and provide feedback?" It's a common question I get. How do we judge the UI/UX design of a SaaS analytics tool? By understanding what the customer is trying to do, what a meaningful outcome looks like, and what signs tell us we're on track. Learn more in this story from my time advising MIT Sandbox Venture Fund founders.


I advise founders at the MIT Sandbox Venture Fund on product/design for ML/analytics/data. Often, when the startup founders in this program book one of my monthly office-hour slots, the question on Zoom begins like this:

Them: "I just wanted to show you our UI and get some feedback."
Me: "That's nice, but that's a very broad, open-ended question, and you may get equally broad feedback that isn't helpful. What specific type of feedback are you looking for?"
Them: "Oh you know, just whether the UI is good or not."
Me: "Well, have you shown this to any users yet? What did they do or say? They may have an idea of what is good for them."

Eventually, we get to "what problem or need brought the customer to log into this app in the first place?" And finally, once that is uncovered, we begin to surface their actual needs -- which in turn tells me something about "how good the UI may be."

See, for me to judge a data product's design as "good" beyond basic principles of UX/UI design, first we have to develop empathy for the user/customer.

We have to know what it's like to be them, to do their job, to have their role, to know what worries them, and the like.

When we frame things this way, "what's wrong with the design" often becomes self-explanatory to a degree. This is when learning can happen.

Last week, I was presented with the home dashboard of a SaaS analytics tool. In this session, it was an app that helps banks see not just the vendors and categories of transactions placed on the credit cards they issue, but also aggregate totals of which specific PRODUCTS were bought across all cardholders.

The thing is, it was unclear what questions all the donut and bar charts were supposed to answer.

In the end, it turned out that on a totally different screen, there was an advertising campaign-planner tool. "Yeah, our product is meant to help a (some persona) at the bank run product-level promotions such as cash-back-on-an-iPhone15-purchase." However, the "evidence" -- all the historical sales data down at the product level -- that was intended to inform the creation of future promotions lived in an entirely different part of the application.

In other words, the decision-making aid the app wanted to supply was entirely decoupled from the process of planning a new promotion and knowing how, where, and to whom to target it.

They had decoupled the act of deciding "to whom/where/what should we promote?" from the act of "plan a new promotion for [a product] so that we can ___[objective]___."

While I don't know this specific audience, I told him that the issue with his design probably isn't the data viz choices, but rather the fact that the tool did not seem to support the logical workflow a promotions person would likely have. Additionally, it did not provide any recommendations or suggestions to the promotions person based on the data, beyond adjusting the total audience size as the filters/dropdowns were selected. The solution had been decoupled from the user's "job to be done." On the surface, at least, that is my hypothesis as to whether the "design was good or not." We can't even judge the data viz choices yet, either -- because we don't know how and when they were supposed to be used.

Some of you may understand this mapping of "jobs to be done" over to UI/UX. However, for data products, there are two key takeaways I hope you'll walk away with:

  1. Even if you "saw this one coming" as I was writing, I hadn't actually watched a customer/user do "promotion creating" work. I don't know how a promotions person does this, when they do it, how often they do it, whether they have full autonomy, whether they love it, whether they hate it, what incentives exist, etc. This is why UX research, SME knowledge, and customer exposure time are critical -- to fill in the macro-level info with the hundreds of details that would allow us to turn an "acceptable" solution into an indispensable one. Getting the big picture right is great for an MVP, but it may not get you to the land of "so good, they would pay you for it or exchange something of value."
  2. Knowing the KPIs the data product needs to provide is not sufficient to facilitate a UX so good that somebody would pay to use it. The thing is, it's possible this founder's dashboard had all the right metrics and info on it -- but, on a heuristic evaluation, to me, they were most likely not in the right place at the right time. Understanding "right place and right time" is critical to connecting the data back to a user in a way that makes it actionable and useful -- yet it may not change the calculations, model predictions, or other plumbing details behind the scenes at all. If your team sees most or all of the work -- the game -- as doing the latter, there's a good chance you'll end up with "technically right, effectively wrong."

If you're wondering how to get better at UI/UX for your own data product -- so it gets used, creates value, and actually delights your users -- the two formats of my Designing Human-Centered Data Products training might help: the self-guided course and the instructor-led seminar.


Photo by Katie McNabb on Unsplash

