Let's talk about your product's drift and keeping it in check.
As your product evolves, it will likely grow in size and complexity. You'll listen to customers, adapt to their needs, and over time begin to encounter situations where different segments' needs conflict: you either dissatisfy one group of customers, or you attempt to satisfy everyone in your target market. More needs and audience types to satisfy means more UI. More UI usually means a more complicated UX (all you have to do to make it more complex is add more choices). More complex UX can lead to costly redesigns/rebuilds and dissatisfied customers. But there's a way to grow carefully without a lot of extra effort.
I like to use benchmark user stories to keep the design of the product in check.
Benchmark user stories are a set of 3-7 usage scenarios that, no matter what else happens, should be well served by your product's design. They represent the use cases that should not be complicated or impeded as the product evolves and grows. Until your whole team can agree on this set of sacrosanct stories, you're guessing, increasing technical and UI debt, and becoming more susceptible to chasing fads and squeaky customers whose needs may be quietly derailing you from your product's overall mission.
Before we dive in deep, know that you can use this benchmarking concept with a legacy product or a new product/project, so don't feel like you've missed the boat if you didn't start your project out this way.
Here's an example of a fictitious onboarding scenario for a data product. This product collects stats/metrics from a customer's data and tries to provide insights on that data, and so there is a fair amount of setup and configuration required before the analytics can kick in and yield value:
As a ____[one of your customer personas]____, once I've signed up for [your product], I imagine there is going to be some tricky setup to get all the data collectors in place prior to me getting much value out of the tool. I need good guidance on the order of those steps, when I should do them, and how to understand if the infrastructure is set up correctly. Once I know that, it would be great to understand how long it will be before the analytics kick in and I can start to realize ____[some value prop promised to the customer]_____. Since I know it could take days or weeks for enough historical data to be analyzed by your system, ideally the app would notify me when enough data has been analyzed and the first useful conclusions are available. And man, I would be stoked if there was a cool dashboard or report I could show my boss that begins to show the value of this tool immediately, given I had to really sell my boss on the purchase of this service.
How can you use benchmark stories like this to your advantage?
- Perform an audit of your product to uncover where your biggest gaps or friction exist. Use that as a baseline from which to improve.
- Invite some real customers to go through the scenarios. Use your user study findings as a baseline from which to improve. Early on, this is also a great way to validate that you've actually chosen the right benchmarks. Ask the customers if these scenarios represent their reality.
- Inform what work gets prioritized in your sprints (if you know the benchmark scenarios are performing poorly, should you really be adding a 7th new feature to benefit one squeaky customer?)
- Prevent feature creep and drift within your product. Anyone in the org can pull the benchmarks out as a defensive weapon. "This proposed feature will make scenario #3 a lot harder. Are we sure we want to go there?"
- When a new feature or idea will have a significant impact on your benchmarks, but the stakeholders believe it's the right thing to do, then you can use the benchmarks to establish that your product is making a conscious and deliberate pivot, and not just "drifting" into new territory to the confusion of the rest of your organization. If the benchmarks need to change, great, but your product team should be making these decisions consciously. "Yes, the dashboard is going to change to X and it will mean a serious change to the UX for Persona B who is (today) using it for Y purpose. We understand that, and need to rewrite our second benchmark story as it's no longer relevant to our new design." Remember to re-validate your benchmarks with the techniques above if you're going to change them significantly.
How I generate benchmark stories with my clients:
- Gather the product team (product owner/management (PDM), principal designers, UX researchers, possibly an engineering lead, and any additional stakeholders) in a working session with a whiteboard. It could take 4-8 hours spread over a few sessions to get final agreement. That's ok. I will also add that with data and analytics products, I almost always like to include an engineering representative as they typically have very valuable input. That said, keep the session focused on defining the scenarios and goals, and not the tactics and implementations that may exist (or need to exist).
- Choose a facilitator to lead the working sessions from the whiteboard. Ideally, everyone is at the board writing and participating, but sometimes it's easier with one facilitator.
- Propose, debate, and begin to draft the key scenarios together; everyone participates. You can start with shorthand such as "setup and onboarding" before going into the full detail shown in the above example. I like getting a shorthand list first, before fleshing out the details. Typically, PDM and UXD members will be (and should be) proposing the scenarios. If you're concerned about the session not being productive, consider starting with a smaller team (UXD+PDM) and then repeating the process with the larger group. I would stay at the "shorthand" level if you plan to do a pre-work session.
- How do you resolve prioritization debates during these sessions?
You may find (especially within existing products) that product management, sales, support, engineering, and UXD have widely differing opinions on what constitutes the small list of stories worthy of benchmark status. Ultimately, I believe that PDM has the final responsibility to own and define the goals that will be labeled "benchmarks"; however, design and UX personnel should be "keeping their peers in check." Because PDM personnel can be strongly sales-, marketing-, or engineering-biased, UXDs can provide a great balance. If you haven't figured it out yet, I strongly believe that product managers and designers should be in lockstep, forming one of the tightest duo relationships in your organization. (For startups, I usually recommend turning that duo into a power trio that includes a design-thinking engineering/technical rep.)
- Once you have the shorthand list, draft them into full prose format and detail, together.
I suggest remaining at the whiteboard for this process (not a computer). Your final deliverable will look something like a collection of paragraphs similar to the example above. Most stories should be 2-5 sentences in length, and you may want to title each of them so they're easy to refer to later. Don't spend the time typing them up electronically during this working session. Date the whiteboard, and take a photo of the stories so somebody can type them up cleanly later. You shouldn't need a follow-up session if you do it this way, as you already worked out the details in the initial working session.
- Post them in a public place and proselytize them to everyone else in the organization who contributes to the product.
Oftentimes, "public" means a Wiki page or something along those lines. Wherever you post them, you want to be sure that individual contributors and managers are aware of these goals and cognizant that their work should improve the benchmarks, never impede them. If you have the resources, you might even consider producing some cartoons or visual aids to accompany the stories (and make them more memorable to the rest of your team).