Ah, fluff. It's a great word, and it was actually born in its marshmallow form just a mile away in Somerville, MA. There's even a fluff festival.
A few weeks ago, we talked about the various levels of maturity that an analytics-driven product can go through, with the Holy Grail being one that delivers actionable, useful insights at the right time without a lot of tool effort or human analysis. The question today is, what if your software can only deliver a very rudimentary recommendation because the underlying algorithm is in its infancy? Is offering a rudimentary software-generated insight better than just giving end users access to the underlying data to analyze themselves?
Today we'll talk about this, and I'll leave you with some new self-assessment questions you can use to explore providing insights vs. raw analytics.
Products w/ Any Automated Insights = A Good Start
First off, ballpark insights generated by your product are probably better than making your customer come up with their own via extensive tool time and analysis. If you're just displaying charts and metrics, then take a hard look at your product and ask where the software can take on more of the analysis effort you're currently imposing on your customers. Users probably didn't buy your product to look at charts and tables; they want actionable insights.
However, if you're like some of my past clients, you may be intimately familiar with the weaknesses in the product's ability to generate quality insights, if not embarrassed by them. "Our algorithm doesn't factor in inflation." "It doesn't know the last time a user logged in to the other site when it generates estimates." "The underlying data is only based on 1-week sample intervals, not daily." You know all the ways the product can generate incorrect insights for your customers and how it may not generate any insights at all when it could have, were it not so rudimentary or restricted.
So, should your product be generating insights driven by an immature algorithm...or fluff? Or are you better off just giving users the underlying data analytics (evidence)?
Answer: You probably need to show both, but the relationship is often an inverse one: the less mature (or the riskier) your insights are, the more supporting evidence users will need in order to trust them.
How Users Interpret Your Insights and Ambiguity
There is an inherent ambiguity that comes with designing analytics products, especially those that are generating insights. Welcome to the challenge! Nobody said making these products would be easy.
On the plus side, in my experience talking to users of these types of products, they're not dumb. They know that your software isn't a magic genie, and that they have to decide how much to trust the insights your software may have derived. In particular, if your software generates insights that, if incorrect, could impose high risk on the customer, then it's even more important that the users can validate/invalidate the findings your product arrived at, via their analysis of the evidence.
So, lead with the insights, and support them with beautiful evidence (see Edward Tufte's book Beautiful Evidence, too!).
Retire in Monte Carlo?
Not quite. But, have you ever used one of those retirement calculators that tell you how much you need to save for retirement? There are a tremendous number of variables that could come into play when estimating a number like this over many years (Monte Carlo simulation, anyone?). The ingredients that go into predictive recommendations and insights can be really complicated. However, even the retirement calculators that use these Monte Carlo simulations are not perfect, and yet people still use them all the time. They understand there is no way to predict the future or analyze a past incident perfectly. With a retirement planning tool like this, most users probably never bother to look at the evidence that supports "the final number" they get from the calculator because the risk is fairly low, and it's easy to get a second opinion elsewhere. However, if your product provides hard-to-find/proprietary data or insights, or insights that could have a significant negative impact if incorrect, then the importance of providing good supporting evidence increases.
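If you're curious what "Monte Carlo" means in practice here, below is a minimal sketch of the idea: simulate many possible futures with randomized annual returns and see how often a savings target is hit. The contribution, return, and target numbers are made-up assumptions for illustration, not any real calculator's actual method.

```python
import random

def simulate_balance(years=30, annual_contribution=10_000,
                     mean_return=0.06, return_stddev=0.12,
                     starting_balance=50_000):
    """One possible future: compound a balance with a random return each year."""
    balance = starting_balance
    for _ in range(years):
        annual_return = random.gauss(mean_return, return_stddev)  # a randomly good or bad market year
        balance = balance * (1 + annual_return) + annual_contribution
    return balance

def monte_carlo(trials=10_000, target=1_000_000):
    """Run many simulated futures and summarize how they turned out."""
    outcomes = sorted(simulate_balance() for _ in range(trials))
    hit_rate = sum(1 for b in outcomes if b >= target) / trials
    median = outcomes[trials // 2]
    return hit_rate, median

if __name__ == "__main__":
    hit_rate, median = monte_carlo()
    print(f"Chance of reaching $1,000,000: {hit_rate:.0%}")
    print(f"Median ending balance: ${median:,.0f}")
```

The point isn't the specific numbers; it's that "the final number" your user sees sits on top of a pile of assumptions like these, and users have to decide how much to trust it.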
Anyhow, I know that today's topic was a bit more heady than normal, so if you're wondering how you can apply some of today's topic to your product, then read on:
Measuring the Quality or Potential of Your Product's Automated Insights
- What risk and reward are associated with your tool providing customers with a poor recommendation?
This includes the risk to the customer's business, and the risk to your product/company. It's easy to think about all the ways your product could introduce business risk if it doesn't generate accurate insights. On the reward side, remember that users understand there is ambiguity associated with software-generated insights, and that your product's value is not limited to generating a perfect insight every time. If you help save a customer time, or show them how your software arrived at the [wrong] insight, you may have still taught them something about how they can improve their own analysis of the problem you're both trying to solve. Your ballpark insight, even if occasionally incorrect, might save them a *ton* of time. And if your insights are accurate often enough, their willingness to forgive the occasional "wrong answer" increases as well.
- Is "playing it safe" and not providing automated insights truly less risky for your product and business?
Not necessarily. Avoid the attitude of, "we can't generate a good recommendation, so we better not generate any at all: just show them the data and let them decide on their own." This is not necessarily a safer or better UX. All you've done is shift the risk and effort to the customer, and you've probably diminished the value of your product too. You're assuming users know how to analyze all the data to arrive at a useful insight, or will spend the requisite time to do so. So, while you may be legally safer doing this, your product may end up being of little to no value, especially when your competitor comes out with a product that does provide insights. Lawyers 1, Product 0.
- When the product puts out poor insights, can the product/algorithm learn from those incidents?
This could be through automated or manual means such as:
- Allowing users to contribute data back into your system to improve its insights
- Monitoring customer feedback via your helpdesk
- Doing ongoing customer research
There may be variables your team never considered; some of that data may be qualitative, and some quantitative.
- Have you clearly explained the limitations of your insights such that customers can qualify the advice accordingly?
Hint: you can't answer this without observing users directly and interviewing them. Even if you tell them via your UI that "recommendations are only based on x, y, and z...," don't assume they understand this or read it. Go talk to them. Free UI tip: consider providing information about what your software did not take into account when providing its automated insights. What the product did not do, consider, or factor in is also informative.
- Are your automated insights unique?
Ever seen Kayak's flight price forecaster? If there were no competing forecasters on the market, how much more do you think users would trust Kayak's forecasts? Until you have competitors outwardly differentiating themselves, it is probably hard for users to evaluate the quality of one price forecaster over another. Take advantage of being first to market. Do you really want to be prettying up all your charts and graphs while your competitor is working on building a forecaster that just estimates the answer?