8 invisible design problems that are business problems

Today's insight was originally inspired by a newsletter I read from Stephen Anderson on designing for comprehension, and I felt like this could be expanded on for analytics practitioners and people working on data products.

One of the recurring themes I hear from my clients is general engagement (or lack thereof) by the end users/employees/stakeholders who are supposed to be benefitting from the insights of SAAS data products or internal analytics solutions. There are a lot of possible reasons why engagement may be low, but there's a good chance that design is one of them. Unfortunately, not all design issues are immediately visible or visual in nature, but you can learn the skills to begin identifying them.

So, why are they business problems?

For internal analytics practitioners, if your customers/employees/users are "guessing" instead of using the tools you're providing them, then ultimately, you're not improving their productivity or professional growth, and the company's investment in analytics is not returning a positive result overall.

On the other hand, if you've got a revenue-generating SAAS product, lack of engagement has a direct bottom-line impact: renewals. How long until somebody of importance notices they're paying for a service they never use? Do you really want to bank your business success on auto-renewal alone? The long-term value play is creating an indispensable service.

Here are some problems I frequently see when designing for analytics that go beyond standard data visualization issues. You should be examining and resolving these proactively, on an ongoing basis. (If you're sitting around waiting for passive feedback, you're unlikely to ever "see" many of these issues.) Most of these are not "once and done" problems with simple tactical fixes. Discovering these strategic issues requires ongoing behaviors your organization needs to develop if you want to consistently deliver meaningful value to your customers:

  1. Usability issues: getting the value from the service is too difficult, takes too long, or isn't worth the effort. The only way to spot this and really understand how to fix the real issues is via 1x1 testing of tasks with customers. There are tons of tutorials on how to facilitate usability studies, and you can outsource the testing as well.
  2. Utility issues: while the user can "operate" the design properly, there is low value. This can be a result of vanity analytics, or displaying the evidence before displaying the practical value stemming from the evidence. This sometimes presents, in customer-speak, as "I get what it is showing me, but why would I want this?"
  3. Timing or context issues: your analytics, while useful and usable, are not coming at the right time in the user's lifecycle of use.
    1. For example, you may be presenting information that is only useful at year-end, yet your tool doesn't know this and continues to persist the information in the UI as if it is a meaningful signal mid-year. Right info, wrong time. Perhaps your tool should adapt to business cycles and anticipate time-sensitive needs.
    2. Another example: a customer needs a cyclical (e.g. monthly) readout, but your tool requires them to log in and fetch the data instead of simply notifying them of the information at the time it is needed. This doesn't mean you need to run out and create a scheduler for every aspect of your solution; on the contrary, that can lead to other issues.
    3. A third example goes like this. Ever heard from a customer, "this is great stuff, but I'm [in my truck] by that time and don't have my computer with me. So I don't use your tool very much"? In this case, perhaps a mobile experience would have led to more engagement by the driver of the truck, and therefore more value for him and for the company. When was the last time you did a ride-along with your drivers? Did you even know you had drivers? The point is, the context of use [while driving a truck] was not considered at the time the design was provided [a desktop solution].
  4. Domain knowledge issues: the information presented contains technical jargon or advanced domain knowledge that customers do not have yet. You can't reliably know this without talking to customers directly, and you'll need to hone your interview facilitation skills to acquire this type of information. This is in part because it can be embarrassing, or perceived as a risk, for customers/end users to admit they don't know what certain things mean. Your job is to help them realize that you're testing your design, and it is the design that failed, not them.
  5. Ambiguous Correlation/Causation relationships: is your design declarative or exploratory? If it's declarative, did you provide the right evidence for this? If you're trying to show correlation, is it clear to the user what relationships you're delineating?
  6. You're building a framework instead of a solution. I see this one a lot. Every UI view on every page shares the same "features," and over time, the design becomes dictated by the backing APIs or the reusable code snippets engineering doesn't want to rework on a case-by-case basis. The reality is that you shouldn't be forcing patterns on people too early, and if you're not rigorously validating your designs with customers, you have no idea which aspects of the design should really be "stock" UI features. A simple example is table sorting/filtering: your default control/view for this, while seeming "uber flexible," may actually cause UX problems because the customer cannot understand "why would I ever want to sort this table by X? Why would I want to filter this?" In your attempt to provide flexibility by automatically allowing every table view to be filtered and sorted, you actually just increased the complexity of the tool for no good reason. You might have shipped more code faster, but you didn't provide more value.
  7. "We're using agile." Agile is not the same thing as agility, and while this could be an entire post on its own, using agile doesn't guarantee successful deployments of value to users. A lot of the time, agile is a buzzword for doing incremental (not iterative) development, and more often than not in my experience, there is little, if any customer design validation (usability testing or discovery work) being done. The other thing with popular Agile methods (e.g. modified scrum) is that there is no formal design phase, and the assumption is that all design and coding can always be done simultaneously. This is not always true, and it's even less true unless you have a seasoned design practice within your organization that has properly integrated itself. It's also *definitely* not true if you're conceiving a brand new service or product. 
  8. Knowledge gaps or distributed cognition issues: The best way I can think to explain this is with an example. Let's pretend we have an analytics service that allows employees to make projections/predictions about things such as bulk purchasing decisions of some good for the next fiscal year. In reality, the person who is going to make a final business decision using your analytics doesn't rely solely on the information in your tool. Through observation of their use of your service (not just asking them!), you might find that your customer is accessing 2 or 3 different systems before making the purchasing decision, none of which share data with each other. In short, your analytics solution is really just "part" of their overall workflow/process, and you haven't mapped the way they actually make a purchasing decision to your software solution.

Remember: you cannot just "look" at your tool and consistently identify these design issues. Even with tons of design training, an expert cannot just "see" all of these issues either. You have to go into the field, observe users, and run structured usability studies. Asking customers what they want or think is also unreliable, because end users are not always aware of their behaviors and actions, and you're likely to get an incomplete (or inaccurate) depiction as they try to answer your questions "intelligently."

Focusing on what people are doing is much more truthful and enlightening for making good design decisions.

Good luck!


What internal analytics practitioners can learn from analytics “products” (like SAAS)

When I work on products that primarily exist to display analytics information, I find most of them fall into roughly four different levels of design maturity:

  1. The best analytics-driven products give actionable recommendations or predictions written in prose telling a user what to do based on data.  They are careful about the quantity and design of the supporting data that drove the insights and recommendations being displayed, and they elegantly balance information density, usability, and UX.
  2. Products in the next tier are separated from the top tier by the fact that they limit their focus to historical data and trends. They do not predict anything; however, they do try to provide logical affordances at the right time, and they do not just focus on "data visualization."
  3. Farther down the continuum are products that have made progress with visualizing their data, but haven't given UX as much attention. It's possible for your product to have a *great* UI and a terrible UX. If customers cannot figure out "why do I need this?," "where do I go from here?," "is this good/bad?," or "what action should I take based on this information?," then the elegant data viz or UI you invested in may not be providing much value to your users.
  4. At the lowest end of the design maturity scale for analytics products are basic data-query tools that provide raw data exports, or minimally-designed table-style UIs. These tools require a lot of manual input and cognitive effort from the user to know how to properly request the right data and format (design!) it in some way that makes it insightful and actionable. If you're an engineer or you work in a technical domain, the tendency with these UIs is to want to provide customers with "maximum flexibility in exploring the data." However, with that flexibility often comes a more confusing and laborious UI that few users will understand or tolerate. Removing choices is one of the easiest ways to simplify a design. One of my past clients used to call these products "metrics toilets," and I think that's a good name! Hopefully, you don't have a metrics toilet. *...Flush...*

What level is your product at right now?


Failure rates for analytics, BI, and big data projects = 75% – yikes!

Not to be the bearer of bad news, but I recently found out just how many analytics, IOT, big data, and BI projects fail. And the numbers are staggering. Here's a list of articles and primary sources. What's interesting to me about many of these is the common issue around "technology solutions in search of a problem." Companies cannot define precisely what the analysis or data or IOT is supposed to do for the end users, or for the business.

And, it hasn't changed in almost a decade according to Gartner:

  • Nov. 2017: Gartner says 60% of #bigdata projects fail to move past preliminary stages. Oops, they meant 85% actually. 
  • Nov. 2017: CIO.com lists 7 sure-fire ways to fail at analytics. “The biggest problem in the analysis process is having no idea what you are looking for in the data,” says Tom Davenport, a senior advisor at Deloitte Analytics (source)
  • May 2017: Cisco reports only 26% of survey respondents are successful with IOT initiatives (74% failure rate) (source)
  • Mar 2015: Analytics expert Bernard Marr on Where Big Data Projects Fail (source)
  • Oct 2008: A DECADE AGO - Gartner's #1 flaw for BI services: "Believing 'If you build it, they will come...'" (source)

There are more failure-rate articles out there.

Couple these stats with failure rates for startup companies and...well, isn't it amazing how much time and money is spent building solutions that are underdelivering so significantly? It doesn't have to be like this.

Go out and talk to your customers 1 on 1. Find a REAL problem to solve for them. Get leadership agreement on what success means before you start coding and designing. There's no reason to start writing code and deploying "product" when there is no idea of what success looks like for both the customers and the business.

Skip the design strategy part, and you'll just become another one of the statistics above.

Does your company have an interesting win or failure story you can share? Email me and tell me about it.


My reactions to the Chief Data Officer, Fall 2017 conference summary

I ran into an article about the Chief Data & Analytics Officer, Fall conference that summarized some of the key takeaways from the previous year's conference. One paragraph in the article stuck out to me:

...
The Great Dilemma – Product vs Project vs Capability Analytics Approaches
Although not one of these approaches will provide a universal solution, organisations must be clear on which avenue they'd like to take when employing enterprise analytics. Many speakers discussed the notion of analytics as a product/service, and the importance in marketing that product/service to maximise buy-in and adoption. However, analytics executives may look to take a capability-based approach, but one cannot simply build an arsenal of analytics capabilities without a clearly defined purpose and value generated for the business...

(Bolding added by me)

For companies pursuing internal analytics solutions, or creating externally-facing data products or solutions, the situation is basically the same: you cannot start with a bunch of data and metrics, visualize it, and then hope that you have a product/solution somebody cares about. The data isn't what is interesting: it is the actions and strategic planning the data enables that hold the value. You have to design the data into information in order to get it to the point where customers can grok this value.

I often find that engineering-led organizations tend to operate in "build first, find the problem second" mode, looking at design as something you bring in at the end to "make it look all pretty and nice." A good UX strategy is a good product strategy is a good analytics strategy: by spending time up front to understand the latent needs people have for your analytics/data, you're much more likely to generate a solution that solves a real need on the other side.


How can you possibly design your service effectively without these?

I'm working with a large, household-name technology company right now on a big project, and they struggle with the same thing so many of my clients struggle with. Today's topic is articulating use cases and goals in a way that allows your design and development to proceed with clarity and accountability.

If your service's strategy, use cases, and goals are not articulated clearly (or at all) and you're doing "feature-driven" development, you have a lot less chance of succeeding, and a lot greater chance of building stuff that has low utility to your customers. You also take on the code and design debt that comes with building junk that has to be refactored (or worse, a small handful of noisy customers likes what you did, and now you have to justify pulling the plug on their value so you can focus on the majority of customers who will sustain your service).

Your goal as a product owner/manager in the data/analytics space is not to "display the data we're collecting." The job is to figure out how the data can be turned into a user experience that provides customers with value (as perceived by them). A big red flag for me on a consulting engagement is when the stakeholders I'm talking to cannot articulate the top 5-10 goals and use cases the service is supposed to facilitate. I call these the benchmark use cases, and if you can't state yours, and you're the business stakeholder, how can your team possibly be successful?

How can you know if your service's design is right?

How can you even measure the design / service for effectiveness when you don't know what a pass/fail UX looks like?

You can't. 

If your team doesn't know where the goalpost is, or the difference between good and bad UX, you're not likely to succeed. Your product/business/UX team needs to be able to clearly state these benchmark use cases if you want a design that is attainable, useful, usable, and measurable. If you skip this and just start making stuff, you'll pay for the mistakes on the back end in the form of rewriting code, dealing with customer complaints, or in many cases: SILENCE. It costs more, takes more time, and is more frustrating for everyone.
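If you've never written one of these down, here is a minimal sketch of the kind of artifact I mean. Everything in it is hypothetical (the persona, the task, and the numbers are invented for illustration); the point is that each benchmark use case pairs a real task with an observable, measurable definition of pass/fail that you can test a design against.

```python
# Hypothetical example of a written-down "benchmark use case."
# All names and numbers below are invented for illustration only.
from dataclasses import dataclass

@dataclass
class BenchmarkUseCase:
    persona: str           # who performs the task
    task: str              # what they are trying to accomplish, in their words
    success_criteria: str  # the observable, measurable definition of "pass"
    current_baseline: str  # how the task gets done (or doesn't) today

fuel_overage_check = BenchmarkUseCase(
    persona="Regional fleet manager",
    task="Find out which vehicles exceeded their fuel budget last month, and why",
    success_criteria="Identifies the top 3 overage vehicles, unaided, in under 2 minutes",
    current_baseline="Exports raw data to a spreadsheet; takes roughly 30 minutes per region",
)
```

Five to ten of these, agreed on by the business, UX, and engineering, give everyone the same goalpost to design and measure against.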

Spend the time to make sure the top of your product development process begins with clear benchmark use cases and your engineers and designers will have a much better chance of delighting your customers.


Reader questions answered: “what are your top concerns designing for analytics?”

Today I want to respond to a reader who answered a previous email I sent you all asking about your top concerns when designing for analytics.

Here's Évans' email:

+++++
In analytics, it’s not like a CRUD [Create-Read-Update-Delete] with a simple wizard-like workflow (Input - Validate - Save). It’s kinda hard to keep the user focused when there are so many things to see at the same time on different angle to make a decision. 

So, for me, the #1 concern is : how can you keep the the user focused on what he has to do.

When working with web applications, we can’t show all of the data, we need to limit the results. Otherwise, timeouts will occur or worse, it will use all of the client mobile data 🙂 So, a lot of this data is divided in different “pages”. Sure, we could add “sugar” on the dashboard and bring different “pastry-charts”, but it is not always desired by the client. When they know what data they have, they will prefer to "cross join" the full data. Maybe we should think outside-the-box ? One of my colleagues brought the idea of a “shopping cart” to pick specific data from these different “pages” to work with them after… we will try it eventually 🙂

Hope it could help !
Évans

+++++

So, I will tackle this the best I can, not knowing anything about the product/solution he is working on, the team, or their success metrics. I will make some assumptions as we work through the separate parts of this email that I can understand:

----

Évans: It’s kinda hard to keep the user focused when there are so many things to see at the same time on different angle to make a decision.  So, for me, the #1 concern is : how can you keep the the user focused on what he has to do.

Brian: So, why are there so many things to see at the same time? What informed that original design choice? How does the team know that the user is having trouble focusing? Did they evaluate customers performing certain tasks/activities? Or is this a guess? If I take this statement at face value, it sounds like there are assumptions being made, which may or may not be true. The thing is, we can get from a broad subjective opinion about the design to a specific, objective measurement of it. In plain English: if we know what the product is supposed to do for the customers, and we have measured customers performing those tasks, we can objectively rate whether there is actually "distraction" in the UI and then inform our future decisions.

If the situation is that there are "lots of use cases," then this comes back to understanding your customer, and making hard choices about what is most important. This means using the eraser as much as the pencil, and understanding that the product is not going to support every use case equally well. Design is about making decisions, and the team needs to decide which needs the solution is going to best satisfy, and understand that it may mean making other use cases/features more difficult for customers to ensure that the core values are not compromised. There is no magic solution to "doing it all well" with large analytics solutions and data products. I typically advise my clients to get agreement on a few core use cases/product values, and focus on making those great, before worrying about all the other things the product does.

----

Évans: When working with web applications, we can’t show all of the data, we need to limit the results. Otherwise, timeouts will occur or worse, it will use all of the client mobile data 🙂  ... One of my colleagues brought the idea of a “shopping cart” to pick specific data from these different “pages” to work with them after… we will try it eventually 

Brian: So, while it is great that some concern is being given to practical things such as mobile-data use/cost, remember this: I don't know of any analytics solution where the goal is to "show all of the data." Regardless of technical issues around timeouts or delivering too much data to the client/browser, most users don't need or want this. (Of course, if you *ask* customers if they want this, almost everyone will probably tell you they do want "all the data," because they won't be convinced that you can possibly design for their needs, and loss aversion kicks in. This is a great example of why you have to be careful asking customers what they *want*.)

It's the product team's job to figure out the latent needs of users, and then to design solutions that satisfy those needs. Most customers don't know what they need until they see it.

What it sounds like overall is that the team is "guessing" that it is hard to focus, and they need to chop up data/results into sections that are more consumable. I don't know what that guess is based on, but it sounds like an assumption. Before writing any more code or designing any more UIs, I would first want to validate that this is actually a problem by doing some end-customer research to see where "too much unrelated data" got in the way of users successfully completing specific use cases/tasks that we gave them. Once that is done, the team can then evaluate specific improvement tactics such as a design using the "shopping-cart" idea.

Let me comment briefly on the shopping cart as an aside. Generally speaking, the cart idea sounds like it could be "offloading choices onto users" and a crutch for not making good default design decisions. I see this a lot. That said, with the little information we have, the tactic cannot be fairly judged. My general rule around customization is that it can be great, but it should only come after you have designed some great defaults for users. More often than not, customization comes up because teams do not want to spend the time to determine what the right default design should be, and they assume that a customizable solution will solve everyone's problems. Remember: customers usually don't want to spend time tooling around in your product. Your goal is to decrease customer tool time, and increase customer goal time.

----

Évans: "...we will try it [the cart, I assume?] eventually "

Brian: So, a decision to "try the cart eventually" brings up the concept of risk/reward and the role design can play in decreasing your product/solution's risk to your business (or to customers).

"Trying it" sounds quite a bit like guessing.  Instead of guessing, their team can reduce risk and inform their current state by having success criteria established up front, and measuring their current state of quality. This means running users through some benchmark tasks/use cases, and having some sort of basic "score." From there, they now have a baseline by which to later evaluate whether the new design with the "cart idea" improved the baseline or not. They can design the shopping cart idea out, and then run the same test/tasks against it with users to see if the cart idea is having the impact they want. For example, they might want to reduce task completion time by X% for a specific, high-frequency use case. They can time this task right now, and then time users doing the same test with mockups to see if the cart idea has merit and is decreasing the customer's time-to-completion.  The point here is that "improvement" is subjective...until your team makes it objective. 

Note that I also said they need to "design the [cart] idea out" and not "build the idea." Better design decreases risk. It may also save you time and money in the long run. You spend less time coding the wrong stuff, building the wrong architecture, and shipping out solutions that do not work well for users. These days, you can sometimes deploy code without much up-front design and "try it," but in my experience, it is very rare to remove features until a bunch of them constitute a large UX/UI mess. Additionally, many teams simply do not evaluate the impact of their latest features such that they can fairly define what "well" means. They just move on to the next feature because chances are, they already spent way more time just getting the first version of the last feature out, and the product owner is already concerned about the next thing in the backlog.

For larger projects/features like this cart idea (which I perceive to be a non-trivial engineering effort), I would recommend de-risking it by at least doing some customer interviews. Doing 1x1 interviews will give some broad input that might inform the "cart" idea. It's not as good as having benchmarks and success criteria established, but that takes more effort to set up, and it may be that the team is not ready to do this anyway. If they have never engaged with customers 1x1 before, I would suggest they take a smaller baby step and just start having conversations. Here's a recipe to follow:

RECIPE:
Contact 5–8 end users, set up some 1x1, 60-to-90-minute video/screen-sharing conversations, and start asking questions. If you aren't sure where to start with questions, spend 25% of the time asking the users to define their job/role/responsibility and how it relates to the product. Then spend the remaining time asking these questions:

  1. "What was the last thing you did in the software? Can you replay that for me?"  (You should welcome tangents here, and ask the user to speak aloud as they replay their steps)
  2. "Can you recall a time where the product really helped you make a decision?"
  3. "What about a time where the data failed to help you make a decision you thought it would assist with?"
  4. Because he mentioned mobile, I would ask the users to talk about "when" they are using the product. Try not to lead with "do you use the product on mobile or desktop?"; you want to infer as much as possible from the actual experiences they describe.

Facilitation Tips:

  • Avoid the temptation to ask what people want or to put users in the position of being the designer. While there are some ways a skilled researcher can use these to the benefit of the study, for now, I would focus on getting customers to talk about specific scenarios they can replay to you on a screen share.
  • Do ask "why did you do that?" as much as you can during the study. It's also a great way to keep a quiet participant engaged.
  • Understand that the interview should not be a survey and your "protocol" (question list) is just there to keep the conversation going. You are here to learn about each customer, individually. Keep the customer talking about their specific experiences, and be open to surprises. One of the best things about this type of qualitative research is learning about things you never knew to ask about. 
  • If you get pushback from stakeholders about the things you learned and people don't believe you because "you only talked to 5–8 people," then ask them "how many people would we have to talk to, to convince you about our findings?" Any conversation is better than none, and there is no magic number of "right people." You can learn a TON from a few interviews, and for high-risk businesses with thousands or millions of customers, you can also use the findings of the small study to run a large-scale quantitative survey. But, that's a whole other topic 😉

Make interviews with customers a routine habit for your team and get your whole product team (managers, stakeholders, UX, and engineers) involved. If you aren't talking to end users at least monthly, your team is probably out of touch and you're mostly designing and building on assumption. That method of developing products and solutions is higher risk for your business and your customers. 

Now, go forth, interview some customers, and start learning!


Video Sample: O’Reilly Strata Conf

This is a recording of my presentation at the O'Reilly Strata Data Conference in New York City in 2017.

Do you spend a lot of time explaining your data analytics product to your customers? Is your UI/UX or navigation overly complex? Are sales suffering due to complexity, or worse, are customers not using your product? Your design may be the problem.

My little secret? You don't have to be a trained designer to recognize design and UX problems in your data product or analytics service, and start correcting them today.

Want the Slides?

Download the free self-assessment guide that takes my slide deck principles and puts them into an actionable set of tips you can begin applying today.


UI Review: Next Big Sound (Music Analytics) – Part 1

Today I got an interesting anomaly email from a service I use called Next Big Sound. Actually, I don't use the service too much, but it crosses two of my interests: music and analytics.

Next Big Sound aggregates music playback data from various music providers (Spotify, Pandora, etc.) and also, apparently, tries to correlate changes in music plays with social media events happening in the real world (probably so you can see if a given Event X generates a change in plays). In addition to design consulting, in my "other life," I lead and perform in a dual ensemble called Mr. Ho's Orchestrotica, which has a few albums out that are available on Pandora.

Pandora is one of the data sources that Next Big Sound monitors. Next Big Sound apparently detected an abnormal increase in plays on Pandora and alerted me via the email below. Check it out:

[Image: Next Big Sound anomaly alert email]

At first glance, I like the supporting evidence here (the chart), and the lead-in text tells me they are tracking what "normal" plays look like such that they can alert me on abnormal increases. At first, I wondered why they were showing me some of my own social media posts, but then I realized they were trying to help me correlate whether any social media activity may have corresponded with the increase in plays. This is a great example of a case where they know their software probably cannot draw a literal causation relationship, but they can help users correlate and potentially find a cause.

Incidentally, I don't actually care much about how many playbacks the Orchestrotica gets on streaming services, as it's not a KPI for my group, but I found this a nice way to help artists and labels (especially artists working more in the pop/entertainment world) understand what is going on with fans of their music, what is getting traction, etc. In this case, there was no correlation; the posts from my personal/artist social media accounts had nothing to do with Orchestrotica activities for the most part, but I still liked the UX.
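As an aside, Next Big Sound doesn't say how it decides what counts as "abnormal," but the general pattern behind this kind of alert is simple enough to sketch. The rolling window, the three-standard-deviation threshold, and the made-up play counts below are my own assumptions for illustration, not their actual method.

```python
# A minimal sketch of "alert on abnormal increases": flag a day whose play count
# sits well above the recent baseline. Window size and threshold are arbitrary choices.
from statistics import mean, stdev

def is_abnormal(history, today, window=28, k=3.0):
    """Return True if today's plays exceed the recent mean by k standard deviations."""
    recent = history[-window:]
    if len(recent) < 2:
        return False
    mu, sigma = mean(recent), stdev(recent)
    return today > mu + k * max(sigma, 1.0)  # floor sigma so flat history doesn't over-alert

daily_plays = [42, 38, 51, 47, 40, 44, 39, 45, 43, 41]  # made-up history
print(is_abnormal(daily_plays, today=180))  # True: worth an email
print(is_abnormal(daily_plays, today=50))   # False: within normal variation
```

The real value in the email, though, isn't the detection math; it's the design choice to pair the alert with possible explanations (my own posts) instead of just reporting a number.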

So, what do you think "Create a Graph" does?

I wondered too. Stay tuned for Part 2 to find out.

Does your app/product provide any anomaly notifications like this? I would love to hear about it. Can you send me an example? Email it to brian@orchestrotica.com


Getting confidence in the value of your data

(As shown to customers in your UI)

I'm talking to a prospective SAAS client right now, and they're trying to expose some analytics on their customers' data so that the customers can derive ROI from the SAAS on their own. The intent is that the data can also be useful to the SAAS sales team, as a tool to help prospects understand what the possible ROI might be.

I had a question for Dave around whether the project would be successful if we talked to the users, designed a bunch of interfaces, solicited feedback on the design outputs, and found out that the data, while interesting, didn't really help the customers derive ROI. Would the design engagement still be productive and a success in the end? Ultimately, I didn't want to take on a project if we had hunches that the data we had, while being the best available data and elegantly presented, might not help the end user or buyer calculate ROI.

Here's what Dave told me:

Yes, the design engagement would still be a success. It provides us a punchlist of what else we need to do, which in and of itself is useful; and presumably defines what the analysis/reporting needs would be once we get that data. Less of a success, or more of a delayed-gratification one, but still useful.

I thought this was interesting to share, and I hoped Dave would say this, because it shows that sometimes you have to do some design to figure out what the final design needs to be. You can't always plan ahead what the right solution is, and moving from designing on assumption to designing on fact gives you powerful information to inform your product.

Conversely, you can also spec out the entire project, including all the data/queries that customers said would be useful, write it into a spec or backlog, code it up, skip design, and then still have it not be successful because customers couldn't actually experience the ROI that your data was supposed to convey. A product backlog does not = a viable product. It's just a bunch of user stories or features. The glue holding them together, and what helps customers realize the ROI, is design.


Tips to help focus your analytics design/engineering efforts on results

If you are starting out on a new feature design, or analytics effort, can you clearly state what the value will be in quantifiable terms at the end of the sprint?

Are you building an "exploratory" UI, or one that is supposed to drive home conclusions for the customer?

When clients come to me about product design engagements, I spend a lot of time trying to understand, at the end of the project, how success will be measured. Frequently, my clients haven't thought this out very much. I think that's natural; when you're close to your data, you can probably see a lot of ways it could be "visualized" or that value could be pulled out. But, when it's time to get specific, are you and your team able to clearly share a vision of what the desired end state is?

Here are some examples of goals and success criteria I've seen with my design clients. Having these types of success metrics makes it much easier for everyone involved on the project to know if they're on track to deliver value:

  • SAAS Example: Make it easier for the sales team to sell our product by surfacing interesting analytics that help customers see the value of the product. Ideally, a 30-day closing period for a sale drops to a 1-week closing period.
  • IT Software Example: Remove unnecessary troubleshooting time from the customer's plate by running analytics to either surface a problem, or eliminate what isn't the problem. This is a reduction in customer tool-time effort. If we can drop troubleshooting time by 50%, that is worth $X per incident (the business impact time + the manpower/labor time saved).
  • Generic example: Help the customer understand what the interesting outliers in the data are so they can take action. There are opportunities to exploit if the outliers are interesting. Our analytics should help surface these outliers, and qualify them as well. If we can save 10 hours a week of "exploration" time the customer has to do by surfacing this data early in the UX, that is a substantial labor savings of $X (a rough version of this calculation follows this list), as well as a gain in overall product quality/delight, since the users can now focus on what's really important in their job (not the "hunt").
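To show how the "$X" in that last bullet might get filled in, here is a rough back-of-the-envelope calculation. Every number in it is hypothetical; plug in your own rates and headcounts.

```python
# Back-of-the-envelope labor savings for the "outliers" example above.
# All figures are hypothetical placeholders.
hours_saved_per_week = 10     # "exploration"/hunt time eliminated per analyst
loaded_hourly_rate = 75       # fully loaded cost of an analyst, in dollars
analysts_affected = 20        # users who do this hunt every week
working_weeks_per_year = 48

annual_savings = (hours_saved_per_week * loaded_hourly_rate
                  * analysts_affected * working_weeks_per_year)
print(f"Estimated annual labor savings: ${annual_savings:,}")  # $720,000
```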

This is the start of making any design engagement successful.

Got some of your own goals/metrics to share? Hit reply; I would love to hear them. If you're embarking on a design project and need help getting these defined, you can schedule a free micro-consult with me below.
