8 invisible design problems that are business problems

Today's insight was originally inspired by a newsletter I read from Stephen Anderson on designing for comprehension, and I felt it could be expanded on for analytics practitioners and people working on data products.

One of the recurring themes I hear from my clients is low engagement (or none at all) by the end users, employees, and stakeholders who are supposed to be benefiting from the insights of SaaS data products or internal analytics solutions. There are many possible reasons why your engagement may be low, but there's a good chance design is one of them. Unfortunately, not all design issues are immediately visible or visual in nature, but you can learn the skills to begin identifying them.

So, why are they business problems?

For internal analytics practitioners, if your customers/employees/users are "guessing" instead of using the tools you're providing them, then ultimately, you're not improving their productivity or professional growth, and the company's investment in analytics isn't returning a positive result overall.

On the other hand, if you've got a revenue-generating SaaS product, lack of engagement has a direct bottom-line impact: renewals. How long until somebody of importance notices they're paying for a service they never use? Do you really want to bank your business success on auto-renewal alone? The long-term value play is creating an indispensable service.

Here are some problems I frequently see when designing for analytics that go beyond standard data visualization issues. You should be examining and resolving these proactively, on an ongoing basis. (If you're sitting around waiting for passive feedback, you're unlikely to ever "see" many of these issues.) Most of these are not "once and done" problems with simple tactical fixes. Discovering these strategic issues requires ongoing behaviors your organization will need to develop if you want to consistently deliver meaningful value to your customers:

  1. Usability issues: getting value from the service is too difficult, takes too long, or isn't worth the effort. The only way to spot this and really understand how to fix the underlying issues is one-on-one task testing with customers. There are plenty of tutorials on how to facilitate usability studies, and you can outsource the testing as well.
  2. Utility issues: while the user can "operate" the design properly, it delivers little value. This can be a result of vanity analytics, or of displaying the evidence before displaying the practical value stemming from that evidence. This sometimes presents, in customer-speak, as "I get what it's showing me, but why would I want this?"
  3. Timing or context issues: your analytics, while useful and usable, are not coming at the right time in the user's lifecycle of use.
    1. For example, you may be presenting information that is only useful at year-end, yet your tool doesn't know this and keeps surfacing the information in the UI as if it were meaningful signal mid-year. Right info, wrong time. Perhaps your tool should adapt to business cycles and anticipate time-sensitive needs.
    2. Another example: a customer needs a cyclical (e.g. monthly) readout, but your tool requires them to log in and fetch the data instead of simply notifying them when the information is needed. This doesn't mean you need to run out and build a scheduler for every aspect of your solution; on the contrary, that can lead to other issues.
    3. A third example goes like this. Ever heard from a customer, "this is great stuff, but I'm [in my truck] by that time and don't have my computer with me. So I don't use your tool very much"? In this case, perhaps a mobile experience would have led to more engagement by the driver of the truck, and therefore more value for them and for the company. When was the last time you did a ride-along with your drivers? Did you even know you had drivers? The point is, the context of use [while-driving-a-truck] was not considered at the time the design was provided [a desktop solution].
  4. Domain knowledge issues: the information presented contains technical jargon or advanced domain knowledge that customers do not have yet. You can't reliably know this without talking to customers directly, and you'll need to hone your interview facilitation skills to acquire this type of information, in part because it can be embarrassing, or feel risky, for customers/end users to admit they don't know what certain things mean. Your job is to help them realize that you're testing your design, and it is the design that failed, not them.
  5. Ambiguous correlation/causation relationships: is your design declarative or exploratory? If it's declarative, did you provide the right evidence for this? If you're trying to show correlation, is it clear to the user what relationships you're delineating?
  6. You're building a framework instead of a solution. I see this one a lot. Every UI view on every page shares the same "features," and over time, the design becomes dictated by the backing APIs or the reusable code snippets engineering doesn't want to rework on a case-by-case basis. The reality is that you shouldn't be forcing patterns on people too early, and if you're not rigorously validating your designs with customers, you have no idea which aspects of the design should really be "stock" UI features. A simple example is table sorting/filtering: your default control/view for this, while seemingly "uber flexible," may actually cause UX problems because the customer cannot understand "why would I ever want to sort this table by X? Why would I want to filter this?" In your attempt to provide flexibility by automatically allowing every table view to be filtered and sorted, you actually just increased the complexity of the tool for no good reason (see the sketch after this list). You might have shipped more code faster, but you didn't provide more value.
  7. "We're using agile." Agile is not the same thing as agility, and while this could be an entire post on its own, using agile doesn't guarantee successful deployments of value to users. A lot of the time, agile is a buzzword for doing incremental (not iterative) development, and more often than not in my experience, there is little, if any customer design validation (usability testing or discovery work) being done. The other thing with popular Agile methods (e.g. modified scrum) is that there is no formal design phase, and the assumption is that all design and coding can always be done simultaneously. This is not always true, and it's even less true unless you have a seasoned design practice within your organization that has properly integrated itself. It's also *definitely* not true if you're conceiving a brand new service or product. 
  8. Knowledge gaps or distributed cognition issues: the best way I can think to explain this is with an example. Let's pretend we have an analytics service that allows employees to make projections/predictions about things such as bulk purchasing decisions of some good for the next fiscal year. In reality, the person who is going to make the final business decision using your analytics doesn't rely solely on the information in your tool. Through observation of their use of your service (not just asking them!), you might find that your customer is accessing 2 or 3 different systems before making the purchasing decision, none of which share data with each other. In short, your analytics solution is really just "part" of their overall workflow/process, and you haven't mapped the way they actually make a purchasing decision to your software solution.
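
To make the "framework instead of a solution" trap in #6 a bit more concrete, here is a minimal, hypothetical TypeScript sketch (all component names and props are invented for illustration, not taken from any real product): the first shape bakes sorting and filtering into every table because the shared code supports it, while the second exposes only the one interaction that customer research for that specific view might actually justify.

```typescript
interface Column {
  key: string;
  label: string;
}

type Row = Record<string, unknown>;

// The "framework" approach: every table view inherits generic sort/filter
// controls because the shared component (and its backing API) supports them,
// not because any customer asked for them on this particular view.
interface GenericDataTableProps {
  columns: Column[];
  rows: Row[];
  sortable?: boolean;   // effectively "on" everywhere by default
  filterable?: boolean; // effectively "on" everywhere by default
}

// The "solution" approach: this specific view exposes only the affordance
// that usability testing suggested buyers actually need, keeping the screen
// focused on the decision rather than on generic table mechanics.
interface PurchaseSummaryTableProps {
  columns: Column[];
  rows: Row[];
  highlightTopSpend?: boolean; // a single, validated interaction
}
```

The point isn't that sorting and filtering are inherently bad; it's that each view's affordances should be earned through validation with customers rather than inherited from reusable code.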

Remember: you cannot just "look" at your tool and consistently identify these design issues. Even with tons of design training, an expert cannot just "see" all of these issues either. You have to go into the field, observe users, and run structured usability studies. Asking customers what they want or think is also unreliable, because end users are not always aware of their behaviors and actions, and you're likely to get an incomplete (or inaccurate) depiction as they try to answer your questions "intelligently."

Focusing on what people are doing is much more truthful and enlightening for making good design decisions.

Good luck!
