If you’re concerned about low engagement with your enterprise data product, analytics service, or decision support tool, then you might be focusing on the wrong problem.
What you need to do is design an engaging experience, instead of focusing on the quantity of engagement.
Gartner posted new numbers in early 2019: once again, 80% of analytics insights won’t deliver business outcomes, and the same percentage of AI projects will effectively remain “alchemy.”
Are you sure that “more engagement” is what’s needed to change these numbers? I doubt it. Your goal should be to design a better experience that is useful, usable, and (gasp) perhaps delightful. If you get that stuff right, the value will emerge.
So, for starters, change your success criteria from “more engagement” to “meaningful engagement.” Now, at this point, some of you may be asking, “what is meaningful engagement?” How will you define that, and then measure it?
Human-Centered Design Provides a Process to Define What Meaningful Engagement Means
Human-centered design—and its process of building empathy for people, defining problems, and crafting solutions that get tested—can help us define meaningful engagement so we can then test our data products and analytics applications to see if they’re actually facilitating the desired engagement. Now, some of this may sound similar to the popular activities encompassed by the methodology called “design thinking.” The reality is that design is a lot more about “doing” than “thinking,” and there’s no reason to call it “design doing.” It’s just design. But, that's for another article. 😉
Meaningful user engagement in the context of data products is something you should be able to express prior to building solutions, so that the team has the product’s mission in mind as they create. If your team can’t define “meaningful” for the user in the context of your analytics service or data product, it’s time to take an educated guess, and then get some data to back up your assumptions. How? Get out there, start interviewing your customers 1x1, and get some data to validate and inform your understanding of “meaningful engagement.” You do like data, right? This should be fun! Yet 40% of public companies are still not talking to customers before engaging in product development. Don’t be one of them. Your business can’t reliably and repeatedly design data products for meaningful engagement until you know what meaningful means to your customers.
Now, if you’re still stuck taking an initial crack at defining “meaningful engagement” for your service, I’ve compiled a list below of “starter” ideas to feed your hypothesis. From IoT monitoring tools to forecasting and SaaS analytics products, there are many possible definitions out there:
- Reduced actual time to complete a frequent task (you do have a list of tasks to test, right?)
- Reduced perceived time or complexity to complete a frequent task (a design improvement may not result in an actual clock-time improvement, but it can subjectively appear to be better)
- Reduced friction, context switching, or tool time when performing the tasks in a given scenario
- Desirable business outcomes were achieved, stemming from insights gleaned in the software (an outcome may be a decision to “do nothing” as well)
- Subjective feedback from users that they like/enjoy using the tool (meaning, their perception of your application is good, regardless of actual tool utility and usability). Watch out for this one: “likes” can change, especially over time. What was fun the first time may not be so the 100th time the task is performed, and this is partly why research and developing customer empathy is important: it encourages you to focus on what’s needed and not what is “liked.”
- Less required ping-ponging between other applications, paper, screens, interfaces, or tools (this is not always a bad thing, especially if applications are designed to work well together such that the context switching feels invisible)
- The system invites the user in at the right time, preventing the need to “just check in.” Example: in a monitoring system, an interesting outlier, signal, or event is sent to the user as a notification (and the notification itself may be sufficient)
- The customer is able to input corrections, improvements, or acknowledgements to improve their experience in the present or future. At the most basic level, this may be as simple as being able to report a simple bug/problem/complaint in the UI (please: put a chat widget, email address, or contact form on your app so users know how to get feedback to the right people). At a more advanced level, you might be accepting human feedback on the data in the UI that gets fed back into a predictive model to improve it in the future.
- If the tool provides “declarative” analytics, it prioritizes giving the user recommendations for action or insightful conclusions (often in plain text format) first before presenting evidence (analytics) or raw data.
- When seated in front of the data product or decision support software, a prospect (a new, but qualified user with the proper domain knowledge etc.) could explain back to the product/analytics team, in simple terms, what the purpose and value of the tool is, without significant effort.
- The user feels like they accomplished something meaningful (e.g. they prevented a problem, predicted an outcome, made a meaningful discovery)
- The user feels like they helped somebody else (another user, another department, the business, etc.) accomplish something meaningful
- The user realizes that, through the use of your tools, the “new way” is better than the “old way”; they can feel/understand the bigger picture around how the data is helping them, their customer, or their business.
Notice that this isn’t a list of features. These bullets are about the engagement itself, the outcome that occurs, and even the feelings the user might have.
Once you have your own list of definitions of meaningful engagement, you can now conduct 1x1 user studies to see if the design is facilitating the desired engagement. In your initial “benchmark” study, you’re looking for some very particular data:
- Re-validate that your definitions of “meaningful engagement” are actually accurate when users are sitting down and using your data product/application to complete specific tasks.
- Establish a baseline score of how well your service is designed such that when you make improvements and re-test, the size and impact of the changes are easy to see. Don’t underestimate this: even if your software has a lot of usability, utility, or value problems right now, that’s ok. What’s more important is that you understand where you are now, and when you’re testing iterations, what it means to be “better.” You can now isolate specific issues, test design changes, and measure the improvements against a reference point (a rough sketch of this kind of comparison follows below).
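To make the “baseline” idea concrete, here’s a minimal sketch of how a benchmark score might be computed and compared against a later design iteration. The task names, timings, and median-based scoring are illustrative assumptions on my part, not a prescribed methodology; use whatever tasks and measures match your own definition of meaningful engagement.

```python
from statistics import median

# Hypothetical task timings (in seconds) captured during 1x1 benchmark sessions.
# Each list holds one observation per participant for the same task.
baseline = {
    "find_last_week_outliers": [312, 275, 401, 298],
    "export_forecast_report":  [184, 222, 197, 240],
}

redesign = {
    "find_last_week_outliers": [201, 188, 262, 214],
    "export_forecast_report":  [176, 190, 182, 205],
}

def task_medians(results):
    """Summarize each task by its median completion time."""
    return {task: median(times) for task, times in results.items()}

def improvement(before, after):
    """Percent reduction in median task time between two studies."""
    return {
        task: round(100 * (before[task] - after[task]) / before[task], 1)
        for task in before
    }

before = task_medians(baseline)
after = task_medians(redesign)

for task, pct in improvement(before, after).items():
    print(f"{task}: {before[task]}s -> {after[task]}s ({pct}% faster)")
```

Numbers like these only become meaningful alongside the qualitative signals in the list above (perceived effort, subjective satisfaction, outcomes achieved), but they give you the reference point that makes “better” measurable when you re-test.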
Once you get past the early phases of hypothesis, validation, evaluation of the hypothesis, and testing the efficacy of the design, you’re going to have a much clearer idea of what “meaningful” really means. This clarity in turn makes future design iterations easier as well: by knowing what the customer really needs, the barriers, gaps, or friction in your data product will start to stick out more clearly. And, the clearer the problem is, the easier it is to design better solutions.
So, the next time you hear somebody complaining about “low engagement” with a data product or analytics service, you can remind them that “low engagement” may be just fine. What really matters is whether the design of the tool is affording meaningful engagement.