Designing MVPs for Data Products and Decision Support Tools

Author's Note: This article was originally published to my mailing list, hence the reference to previous emails and published podcast episodes.  

Before I jump into this week’s article on MVPs for custom data products, I just wanted to address one listener’s response to the new podcast in case others had the same experience. Regarding the audio quality in the first couple of episodes: my apologies for any issues you may have had with my guests’ sound. A small setting in our recording software (Zoom) was disabled by default, which limited what my team could clean up during editing. We’ve since enabled multi-track recording and now have a lot more audio editing control for remote guests who do not have professional audio equipment.

More importantly, though, this feedback inspired me to change the focus of today’s article to designing MVPs, or minimum viable products, in the context of decision support tools and analytics services.

First, here’s this listener’s feedback, unedited:

Hi Brian,

Some feedback on Episode #2 with Julie Yoo. It appeared she called in from a mobile phone, and the listening quality was unpleasant. As a result, I deleted my subscription. Please don’t include bad sound quality interviews in future episodes. As a result of this email, I’m giving you another chance, because I really am interested in the topic.

--L

So, what does this have to do with you designing good enterprise data products and decision support software?

In episode #000 of Experiencing Data, I mentioned that this show is an “MVP.” It’s a starting point, not an end point, and at some point, just like an app or product, you have to stop designing and coding it, and ship it so you can start learning. Once you’ve learned something, then you can meaningfully improve it, and re-evaluate. Now, on a personal level, L’s feedback stung a little at first, especially since I care a lot about audio quality when producing anything, whether it’s this podcast or my band’s records. 😉 But, it’s not about me. It’s about the listeners. In this case, you.

Now, let’s switch places.

If you were in my shoes (my lucky pair of Esquivel wingtips at the moment), and this podcast was your product/application, and you shipped it and solicited this feedback, I’d say you were doing a good job.

Why?

You didn’t wait for perfect. 

You made something “good enough,” you shipped it, and you got feedback early enough to make changes. You learned something early enough to make changes. That’s the point of MVPs and Agile.

If we break this down more, you actually learned from L that:

  1. L is interested in what you’re creating. In fact, “really” interested. So, if you weren’t sure whether your product/app was going to be useful, you now have some data about that. In this case, after just two episodes’ worth of iterations. What’s your analytics deployment or custom data product’s equivalent of a 2-episode release? Did you wait to release a “full service” with 10 episodes before you got feedback?
  2. You have some rather low-hanging technical issues to address that may be high value if more people feel the way L does. As a product owner, you have to decide whether “more research” is warranted, whether to ignore L, or whether to just fix the issue and move on. The point here is that what may have seemed like a tactical/minor engineering problem might matter more to your users than your team ever thought.
  3. This technical problem, while perhaps trivial from a level-of-effort (LOE) standpoint, was big enough for one customer to consider leaving the “service.” This point is actually really interesting and relevant to enterprise data products. An ongoing theme I hear is that data science and analytics teams are often bogged down in improving model quality, prediction accuracy, and other technical “stuff” that doesn’t always address the most pressing business concerns. Concerns like “low engagement” with the software, which is often a proxy for “are customers finding our tools/apps to be a source of indispensable decision support?” Fixing small stuff sometimes isn’t sexy, but you might find through your MVP testing that fixing some of these trivial things is a big win from a customer standpoint.
  4. You are now aware of another aspect of your service to “keep an eye on” that you might not have given much thought to previously. It may turn out to be a trivial signal in the long run, but the smaller iteration allows you to identify it early. Furthermore, the MVP approach also means you have invested less time, money, and code to date, making it culturally and financially easier to adapt when customer needs dictate it.

So, if you believe in the value of MVPs to maximize your learning early, let’s look at some tactical tips for designing an effective prototype of a data product or decision support tool.

UX Design for Data Products and MVPs: What Prototype Fidelity Maximizes Learning?
Abandoning the podcast analogy for a second, let’s talk about design vs. engineering prototypes and how these relate to MVPs and learning quickly about what is/isn’t working well with your service.

If the point of the MVP is really to maximize the amount of validated learning with the least effort, then ask yourself whether you could be learning earlier from a no-code prototype such as a design mockup or sketch. While a high-fidelity prototype with real data that closely mimics the final product/app is more likely to provide you with better feedback, there is also a cost in terms of the effort and expense required to design and build it. You also have to watch out for “going native” the more you invest in a heavy prototype. When there has already been a heavy technology investment up front, you are more likely to resist changing it and to start blaming customers when your design doesn’t serve them the way you or they hoped. Working in lower fidelity reduces the chances of you falling in love with your creation and opens you up to change earlier in the product/tool development process.

Use Realistic Data When Designing Data Products
Whatever format you choose, when designing a custom enterprise data product or any type of decision support tool that leverages analytics, one of the most important things you can do, whether the prototype medium is code or pixels, is to use realistic data. Note that I didn’t say real data. You don’t necessarily need working code to learn: you can often use low-fidelity formats such as sketches, paper prototypes, or pixels (rendered design mockups), so long as your prototype has realistic data in it that won’t take customers out of context or disrupt the important learning you set out to do. A coded prototype with tons of functionality and unrealistic seed data may be just as bad as a static prototype with unrealistic data.

Here’s an example. Many years ago, I helped redesign the core Portfolio Summary area of Fidelity Investments’ retail website. While testing our portfolio user interfaces and UX with customers (asking them to pretend to “own” the stocks/positions and complete certain tasks we gave them), we found out how important it was to make sure that equity pricing was “realistic.” Putting this in today’s context, had we shown an AAPL (Apple) stock position trading at $12.34 in the Positions table (roughly 1/12 of its current price), no matter what we had set out to learn about our designs, the customer would usually start panicking that they had just lost a ton of money, and their entire focus would shift to reviewing “the market” to understand “what the heck just happened to AAPL?” The unrealistic data was a distraction. Even though the product team’s goal was to learn, “are these the right default columns/metrics for a Portfolio Summary UI?” our prototype’s lack of realistic data took the customer out of context and inhibited our ability to learn about the critical part of the design we needed to validate (or invalidate).

In the end, all we needed to do was update the prototype’s static data with something more realistic. Changing that $12.34 to $152.34 amounts to adding a single character to a mockup, which is definitely cheaper and faster than wiring up a whole working prototype with real quote data. These days, there are even tools that let you populate pixel-based prototypes with data from spreadsheets or JSON files, making it even easier to correct these types of problems.
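To make that concrete, here is a minimal sketch of the idea: keep the prototype’s seed data in a small, realistic (but fabricated) fixture file that designers and developers can share. The file name, fields, and prices below are illustrative assumptions, not from any real project.

```python
# A sketch only: a tiny JSON fixture of realistic (but fabricated) positions
# that a design mockup or coded prototype can share. File name and fields
# are hypothetical.
import json

positions = [
    {"symbol": "AAPL", "quantity": 50, "last_price": 152.34},
    {"symbol": "MSFT", "quantity": 20, "last_price": 104.50},
    {"symbol": "VTI", "quantity": 110, "last_price": 131.76},
]

# Write the fixture once...
with open("positions.json", "w") as f:
    json.dump(positions, f, indent=2)

# ...and let the prototype (or a design tool that accepts JSON) read the same
# file, so mockups and coded prototypes stay in sync on believable numbers.
with open("positions.json") as f:
    seed = json.load(f)

for p in seed:
    print(f'{p["symbol"]}: {p["quantity"]} shares @ ${p["last_price"]:.2f}')
```

The point isn’t the code; it’s that the believable numbers live in one place you can fix in seconds when a $12.34 slips through.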

The takeaway here? If you’re testing static prototypes and not an iteration of actual software/product, then use realistic data to avoid creating unnecessary distractions. If you’re testing a “shipped” version, then of course the data had better be realistic, and you should be open to watching (for a short time) how that customer reacts to AAPL losing 80%+ of its value, even if that’s not the goal of the test 😉 You might learn something invaluable about your service.

Data Products with More Complex Visualizations Need Higher Visual Fidelity to Validate Learning
As a general rule, the more complex the visualization, the more visual design fidelity matters. When you start presenting large amounts of information, color, size, spacing, and the visualization itself become more and more important. I think it’s hard to validate/invalidate a “wireframe” of a complex visualization because the visualization itself is what may determine the utility and usability of the tool. When you’re testing workflows, text-based information, navigation, and these types of aspects, it’s generally easier to get away with testing your service with lower-fidelity designs. Note that in this context, I am talking about visual fidelity.

In order to test the application’s interface design before committing to a large technology investment, you need to get the design rendered well enough to communicate the intent and to remove poor visualization choices which, like unrealistic data, become distractions. You might find that it’s possible to load some static, realistic data into a working visualization library that generates a “working prototype.” That’s great, so long as the tool defaults are good, or you have the resources to correct all the various visual design and other problems that create distractions. You have to be careful here: the next thing you know, your team is “doing real engineering,” spending a lot of time and money, and building a prototype that you may become resistant to changing, even if it’s underperforming. Remember: this wasn’t supposed to be an engineering project; you were looking to design just enough of a prototype to validate its utility and learn how to make it better, from the perspective of the user. Fortunately, visualization libraries and tools have gotten better and better, including their default rendering choices, such that prototyping quickly with code is not only easier today, but less…ugly.
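As an illustration of that “static data into a visualization library” approach, here is a minimal sketch. The library choice (pandas plus Plotly Express), the column names, and the fabricated portfolio values are all assumptions for the example; the point is that good library defaults can carry a disposable prototype.

```python
# A sketch only: turn static, realistic (but fabricated) data into a quick
# "working prototype" chart using a library with sensible visual defaults.
import numpy as np
import pandas as pd
import plotly.express as px

# 90 days of a believable portfolio-value series; no data pipeline required.
rng = np.random.default_rng(7)
frame = pd.DataFrame({
    "date": pd.date_range("2019-01-02", periods=90, freq="D"),
    "portfolio_value": 250_000 + np.cumsum(rng.normal(150, 900, size=90)),
})

# The library's defaults (axes, hover, number formatting) do most of the
# visual design work for a first pass.
fig = px.line(frame, x="date", y="portfolio_value",
              title="Portfolio value (static prototype data)")
fig.show()
```

If even that feels like too much engineering for the question at hand, a screenshot of the rendered chart dropped into a pixel-based mockup often serves the same learning goal.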

Decide What You Want to Learn from the Prototype in Advance
While I love “discovery” when doing research and design testing, you are more likely to come away with actionable findings or concrete validation of the design if you have some idea of what you want to learn before you put a user/test participant in front of your prototype. This should not be difficult if you established goals for the product/app prior to designing anything. If you revisit the original success criteria for this software iteration, you can use them to establish the questions you will explore with customers when they are faced with your prototype or working MVP. Your goals might be something like, “Do they have enough information to make an informed decision? Did the user believe the recommendations the technology made, and if not, what did they try to do to validate the recommendations, if anything?” By removing distractions in the design and having a focused plan of evaluation, you’ll improve the quality of your learning. Additionally, it will reduce the effort required to design the minimum prototype you need. Sometimes you just need to know they’d click on “Sort,” even if the prototype doesn’t actually sort.
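One lightweight way to do this, sketched below with entirely hypothetical criteria and wording, is to write the test plan down as a small structure that ties each session question back to a success criterion, so every session produces evidence for or against a specific goal.

```python
# A sketch only: a hypothetical test plan that maps each session question back
# to a previously agreed success criterion for the prototype.
test_plan = [
    {
        "success_criterion": "Users can make an informed decision from the summary view",
        "question": "Do they have enough information to make an informed decision?",
        "evidence_to_capture": "What they say is missing before they would commit",
    },
    {
        "success_criterion": "Users trust the tool's recommendations",
        "question": "Did they believe the recommendation? If not, how did they try to validate it?",
        "evidence_to_capture": "Any workaround or outside source they reach for",
    },
]

# Print a simple facilitator script for the session.
for item in test_plan:
    print(f"Goal: {item['success_criterion']}")
    print(f"  Ask/observe: {item['question']}")
    print(f"  Capture: {item['evidence_to_capture']}\n")
```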

Don’t Confuse An Engineering Prototype with a Design or MVP
Using an engineering prototype whose goal was to get the data sources displayed on the screen is not a reliable way to regularly drive your application towards the “V” in MVP (viable, or valuable). “Look, we now have customer data and prediction scores pulling in from all 12 sources into this table. Let’s test that UI with users.” To be clear, I am 100% a fan of engineering and design activities running in tandem, especially when the company culture understands that the technology should serve the people, and that design is the bridge between the tech and the desired business and user outcomes. However, if this engineering prototype is going to be presented to a user as a possible “MVP,” and it is not held accountable for its ability to generate the customer and business outcomes you set forth at the beginning of the project, then you’re much more likely to build the wrong thing and accrue technical debt that may slow down important future iteration.

“Working software” is fun to see internally, and your business may view this as a measure of “progress.” Ultimately, though, progress is only made if you can demonstrate traction against a clearly stated set of desired business and customer outcomes. Engineering progress does not necessarily reflect progress in product/tool efficacy. Now, sometimes an engineering-driven design ends up working well. This is probably a result of one of three things: 1) the required UX/UI is so small/minimal it would be hard to mess up, 2) the fruit is so low-hanging that anything you code up is likely to provide some value, at least for a while, or 3) you just got lucky.

For those of you designing more complex decision support tools, can you rely on luck in each iteration? If you’re always working from a data and code perspective first, is that a reliable way to build consistently good iterations that are governed by a clear problem set and held accountable to real outcomes? If you get in this habit, are you going to be willing to significantly change the code when the no-design prototype doesn’t facilitate the outcomes you established up front? How invested are you in your current technology path as a result of letting the technology dictate the product capabilities?

Sooner or later, it’s too late to turn the ship. And guess what? It’s a big ship now, and it doesn’t turn with quite as much agility as it used to.

Next stop: the ports of Redesign and Refactor.

