How to solicit *real* needs from users via UX research interviews

Readers of DFA know that I'm big on not immediately giving customers what they asked for, and instead asking the question "why" to learn what the real latent customer needs are. And for you internal analytics folks, remember your employees, vendors, etc. are your "customers" whether you think of them that way or not! Anyhow, some of you may be wondering why engagement is low, or you're not getting the results you hoped for. If you're not sure where to start, here is a super easy script:

  1. Recruit your customers for a 1-on-1, 30-60 minute screen-sharing meeting, or an in-person meeting (even better). Tell them you're doing some customer research to learn what is and isn't working about the tools/solutions you manage and work on. You can also share that no advance preparation is needed, and let them know their feedback would be extremely useful in making your service more useful, usable, and productive in their work. Note: scheduling can take some time, and you can even outsource this effort. One other thing we sometimes do with research is to avoid sharing the specific thing we're going to discuss, so users don't go and do "homework" ahead of time to familiarize themselves. This isn't always possible, but if you can obfuscate it a little, that is sometimes a good thing. Your customer is likely to feel like they are being "tested" during all of this, so your job is to help them learn that you're there to evaluate the service, and not them. Avoid using the word "test" and use the word "study."
  2. Open the session by asking them to tell you their background/bio. If possible, get permission to record audio/screen capture, and mention it is only for internal review purposes. At the session, ask the person, if you don't already know, what their overall job responsibilities are, and then ask how your service fits into that. At this point, the customer is self-reporting, so take this with a grain of salt. If they immediately start showing you interactions with your service, that is GREAT. Let them run wild, and keep asking questions that encourage them to demo the product back to you as they use it. Encourage them to "speak out loud," and praise them for giving feedback. I usually end up repeating the phrase, "thanks, this is awesome feedback" 20 times in a session. Note that you aren't praising their specific actions: you need to say this whether they do the right or wrong thing with the service, because the feedback itself is what is being praised. Anyhow, chances are that after the short "bio" chat about their job responsibilities, they won't open up any tools, as they will be expecting you to lead. As such, you now want them to open the tool and proceed further.
  3. Ask the customer to open up the service/tool you plan to discuss. Note: the study has already started at this very moment. Pay special attention to what you see them DOING, much more than what they are SAYING. Take note of things like:
    1. Was the service bookmarked in their browser or easily accessible?
    2. Did it look like they were fumbling around just to launch it (e.g., they haven't used it in a while but don't want to admit it)?
    3. Was the login/password forgotten or not immediately accessible? All of these are good signs customers aren't utilizing the service.
    4. If they need help after a bit, help them, and state, "If you haven't used this in a while, it's not a problem. I can help you get access to the tool." Note that this is called an "assist," and you want to do this only after you have concluded that it is obvious the customer can't even get past the login. Typically, in research, your job is to avoid assisting for as long as possible.
      • Remember too, that this is NOT a training session but a discovery session to learn about what is happening in the wild when you aren't around.
      • Additionally, your goal isn't to scold them for not using your service, but to try to solicit useful information and honest facts from them. During this simple act of opening/accessing the service, this is a great example of where Actions speak louder than Words. Your customer might have told you they "use it all the time," but if you see them fumbling to simply open your service, you can see that what they're saying may not be quite as true as what they are doing. Keep this concept of "doing" over "saying" in mind, as self-reported behavior is often very misleading (a simple note-taking sketch for tracking this follows the list). This is one of the core things that I see my clients/stakeholders getting wrong. You cannot necessarily believe customers/users' needs as verbally stated. They do not always know what they need, and their reporting of past behavior is often flawed. Which leads me to my next step: the recent history question.
  4. "When did you last use the [service] if you can recall? Can you show me what you did specifically, speaking aloud as you go through it?" This question is specifically worded in such a way that you're not asking them in general how they use the tool, but instead, you are asking them to demonstrate a SPECIFIC use case they worked through to get some useful insights. This is much better as it forces them to use the service and show you their UX. You are likely to learn a ton here, and one of the best things you will learn is stuff you never even knew to ask about! You might see strange behavior patterns, ping-ponging between screens, opening up of external tools/spreadsheets, etc. This is all very good feedback.
    1. If the user fumbles quite a bit with your request and it's obvious they don't know how to use the service, it's ok to just tell them, "if you haven't been in here in a while, that's ok. Can you tell me what you think this service might be useful for? What might you be able to use this for?" At this point, you're now observing their clicks and encouraging them to "keep thinking out loud." Note that this is unscripted intentionally, so you want to let them take tangents and follow their instincts. Your job is simply to collect information and not judge their skill with the service.
  5. At the end of the session:
    1. Invite them to ask you any questions they may have.
      1. If your service DOES have a way to solve the question they have, don't tell them this and instead ask, "Do you think there is a way to [do that task]?" Invite them to "try" themselves. If they get entirely lost, but your service does have this feature/need met, provide an assist to them, and then ask them to continue. Remember to encourage them to think aloud the entire time, and tell them, "we're here to evaluate the design, not you." Most customers feel like they are "dumb" when they fumble for too long (we all know that feeling when we can't open a simple bottle, or figure out how to open a door, or some other poorly designed system that seems like it should be easy!).
      2. If your service does NOT have a way to answer their need/question, encourage them to explain to you what end goal they have and what would make the service awesome. It can be pie in the sky; that's ok. What you want to avoid is encouraging them to start designing out the system in that moment; instead, focus on what they, personally, would find valuable. Users also have a tendency to want to talk for others and to think they are unique, so watch out for, "I think most people would X, but I probably wouldn't." In this case, you want to learn about what THEY would do, not X, so keep coming back to them with things like, "that's great feedback, thanks. Can you tell me what YOU would need/do/want that's different from what you think everyone else needs? I am really curious about your own particular needs, and it sounds like you think they might be unique."
    2. Thank them and ask them if you can be in touch with them again in the future as you integrate their feedback. Your job is to develop a long term relationship and let them know that you need continuous user feedback to make the service better, and that their feedback contributes to a better user experience. Most customers love helping out.
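As promised, here's a lightweight way to keep "doing" separate from "saying" in your session notes. This is a minimal sketch of one possible structure in Python; the field names are my own invention, not any standard, so adapt it to however your team actually captures notes:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Observation:
    """One moment in the session, tagged by the kind of evidence it is."""
    timestamp: str          # e.g., "00:12:30" into the recording
    kind: str               # "said" (self-reported) or "did" (observed)
    note: str               # what happened, in plain language
    contradicts: str = ""   # filled in when words and actions diverge

@dataclass
class Session:
    participant: str
    service: str
    observations: List[Observation] = field(default_factory=list)

    def saying_vs_doing(self) -> List[Observation]:
        """The spots where words and actions diverge, which are usually
        the most valuable findings in the whole study."""
        return [o for o in self.observations if o.contradicts]

# Usage: log as you go, review the contradictions afterward.
s = Session(participant="P1", service="internal sales dashboard")
s.observations.append(Observation("00:04:10", "said", "Uses the tool 'all the time'"))
s.observations.append(Observation("00:06:45", "did",
                                  "Could not find the bookmark; needed a password reset",
                                  contradicts="claimed frequent use at 00:04:10"))
for o in s.saying_vs_doing():
    print(f"{o.timestamp}: {o.note} (contradicts: {o.contradicts})")
```

The `contradicts` field forces you to write down the gap between words and actions while it's fresh; those gaps are usually your best material for follow-up questions.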

Need help? Set up a free micro-consultation call with me on my contact page.

The Easiest Way to Simplify Your Product or Solution’s Design

Ok, you probably know this one, but let's dig in a little further.

I recently started to explore using the TORBrowser when surfing on public wi-fi for more security (later finding out that using a VPN, and not TOR, is what will enable safer surfing). However, in the process of downloading and trying the TORBrowser out, it provided me with a golden example of what you should not do in your product.

The very first screen I saw when I launched TOR was this:

[Image: TORBrowser's first-launch network settings screen]

So, what's the big deal here? First, I will share the answer to today's subject line with you:

Remove everything that is not necessary.  

Yeah, yeah, you probably have heard that before. Famously, the pope asked Michelangelo how he knew what to carve while creating the statue of David, and his response was along the lines of, "I removed everything that wasn't David." Nice.

Are you removing the cruft and noise from your product?

If we take this thinking further, I would say that today's core takeaway for you is to "remove choices by making good assumptions in your product, wherever possible." You might be wrong sometimes, but you'll be right a lot of the time.

Jumping back to the TORBrowser UI example above, there is more you can learn from their design choices:

  1. This UI says, "This [option] will work in most situations." Well then, why isn't this automatically selected as a default choice?
    Does this screen still seem necessary? Why doesn't the product just "try" that setting by default, and present the other option as a fallback if the default fails? Nobody downloaded TORBrowser with a goal to "set it up" with the right networking settings. This entire step is unnecessary, and the screen itself admits as much: the default "will work in most situations." (A small sketch of this default-with-fallback pattern follows this list.)
  2. Right information...at the wrong time. 
    I haven't needed to use this pane yet as the default setting worked (surprise!), but it's an example of the developers trying to present helpful information. That's good. The design problem is that it's appearing at the wrong time in my experience. I don't need this right now, and I don't even want to think about how the networking is configured. It's completely irrelevant. Are you presenting choices at the right time in your product?
  3. Most users don't care how your software works; don't expose the plumbing. 
    There are sometimes exceptions to this for certain technical products, but even when there are, once most users have "learned" what they need to learn about the plumbing, it quickly becomes irrelevant. The value has to shine, or people stop paying for the service. That includes products built for technical audiences.
  4. This UI and UX is not fun at all...especially as a first impression.
    It's a needless distraction, it's not fun, and it's got me focused on, "how hard will it be to get this app working?"
  5. The visual design attention (or lack thereof) is undermining the mission of the product. 
    This is the hardest one to teach, but a combination of graphic design choices (probably unconscious ones) here contribute to this UI not feeling particularly safe, secure, and careful. The goal of TORBrowser is to "protect" the user. If you think of words like protection, precision, stability, and safety, then the visual design should reinforce these ideas. The topic of graphic design is hardly something to be captured in an email, but I can leave you with a few suggestions and considerations. Refer to the diagram for a detailed analysis:
    [Image: annotated analysis of the TORBrowser UI]

    1. What could be removed from the TORBrowser UI sample?
    2. Are the invisible things (like padding/margin whitespace choices) consistent, meaningful, and deliberate?
    3. While a single graphic design choice sometimes has the power to impact usability or even the financial bottom line, typically, it is the sum of numerous small design choices that account for the overall perception of your product's quality and aesthetic.
    4. It's possible to follow "all the rules" and still not have a great product aesthetic or utility. (That's why we have designers.)
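Coming back to point 1 above, here is a minimal sketch of the "good default with a fallback" pattern. The function names and the stubbed 90% success rate are invented stand-ins for whatever your product's first risky step actually is:

```python
import random

def connect_with_defaults() -> bool:
    """Try the configuration that works 'in most situations.'
    Stubbed here: pretend the default succeeds 90% of the time."""
    return random.random() < 0.9

def show_manual_settings() -> None:
    """Only now ask the user to make networking choices."""
    print("Default connection failed. Here are the manual settings.")

def first_launch() -> None:
    # Don't open with a settings screen; just try the sensible default.
    if connect_with_defaults():
        print("Connected. The user never saw a configuration choice.")
        return
    # The default failed: NOW the settings pane is the right
    # information at the right time.
    show_manual_settings()

first_launch()
```

Note the design choice being encoded here: the configuration screen still exists, but most users never have to see it.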

What internal analytics practitioners can learn from analytics “products” (like SAAS)

When I work on products that primarily exist to display analytics information, I find most of them fall into roughly four different levels of design maturity:

  1. The best analytics-driven products give actionable recommendations or predictions written in prose telling a user what to do based on data.  They are careful about the quantity and design of the supporting data that drove the insights and recommendations being displayed, and they elegantly balance information density, usability, and UX.
  2. The next tier of products is separated from the top tier by the fact that they limit their focus to historical data and trends. They do not predict anything; however, they do try to provide logical affordances at the right time, and they do not just focus on "data visualization."
  3. Farther down the continuum are products that have made progress with visualizing their data, but haven't given UX as much attention. It's possible for your product to have a *great* UI, and a terrible UX. If customers cannot figure out "why do I need this?," "where do I go from here?," "is this good/bad?," or "what action should I take based on this information?," then the elegant data viz or UI you invested in may not be providing much value to your users.
  4. At the lowest end of the design maturity scale for analytics products are basic data-query tools that provide raw data exports, or minimally-designed table-style UIs. These tools require a lot of manual input and cognitive effort by the user to know how to properly request the right data and format (design!) it in some way that it becomes insightful and actionable. If you're an engineer or you work in a technical domain, the tendency with these UIs is to want to provide customers with "maximum flexibility in exploring the data." However, with that flexibility often comes a more confusing and laborious UI that few users will understand or tolerate. Removing choices is one of the easiest ways to simplify a design. One of my past clients used to call these products "metrics toilets," and I think that's a good name! Hopefully, you don't have a metrics toilet. *...Flush...*

What level is your product at right now?

Video Sample: O’Reilly Strata Conf

This is a recording of my presentation at the O'Reilly Strata Data Conference in New York City in 2017.

Do you spend a lot of time explaining your data analytics product to your customers? Is your UI/UX or navigation overly complex? Are sales suffering due to complexity, or worse, are customers not using your product? Your design may be the problem.

My little secret? You don't have to be a trained designer to recognize design and UX problems in your data product or analytics service, and start correcting them today.

Want the Slides?

Download the free self-assessment guide that takes my slide deck principles and puts them into an actionable set of tips you can begin applying today.

Getting confidence in the value of your data

(As shown to customers in your UI)

I'm talking to a prospective SAAS client right now, and they're trying to expose some analytics on their customers' data so that the customers can derive ROI from the SAAS on their own. The intent is that the data can also be useful to the SAAS sales team, as a tool to help prospects understand what the possible ROI might be.

I had a question for Dave around whether the project would be successful if we talked to the users, designed a bunch of interfaces, solicited feedback on the design outputs, and found out that the data, while interesting, didn't really help the customers derive ROI. Would the design engagement still be productive and a success in the end? Ultimately, I didn't want to take on a project if we had hunches that the data, even at its best and most elegantly presented, might not help the end user or buyer calculate ROI.

Here's what Dave told me:

Yes, the design engagement would still be a success. It provides us a punchlist of what else we need to do, which in and of itself is useful; and presumably defines what the analysis/reporting needs would be once we get that data. Less of a success, or more of a delayed-gratification one, but still useful.

I thought this was interesting to share, and I hoped Dave would say this because it shows that sometimes, you have to do some design to figure out what the final design needs to be. You can't always plan out the right solution ahead of time, and moving from designing on assumption to designing on fact gives you powerful information to inform your product.

Conversely, you can also spec out the entire project, including all the data/queries that customers said would be useful, write it into a spec or backlog, code it up, skip design, and then still have it not be successful because customers couldn't actually experience the ROI that your data was supposed to convey. A product backlog does not = a viable product. It's just a bunch of user stories or features. The glue holding them together, and what helps customers realize the ROI, is design.

Tips to help focus your analytics design/engineering efforts on results

If you are starting out on a new feature design, or analytics effort, can you clearly state what the value will be in quantifiable terms at the end of the sprint?

Are you building an "exploratory" UI, or one that is supposed to drive home conclusions for the customer?

When clients come to me about product design engagements, I spend a lot of time trying to understand, at the end of the project, how success will be measured. Frequently, my clients haven't thought this out very much. I think that's natural; when you're close to your data, you can probably see a lot of ways it could be "visualized" or that value could be pulled out. But, when it's time to get specific, are you and your team able to clearly share a vision of what the desired end state is?

Here are some examples of goals and success criteria I've seen with my design clients. Having these types of success metrics makes it much easier for everyone involved on the project to know if they're on track to deliver value:

  • SAAS Example: Make it easier for the sales team to sell our product by surfacing interesting analytics that help customers see the value of the product. Ideally, a 30-day closing period for a sale drops to a 1-week closing period.
  • IT Software Example: Remove unnecessary troubleshooting time from the customer's plate by running analytics to either surface a problem, or eliminate what isn't the problem. This is a reduction in customer tool-time effort. If we can drop troubleshooting time by 50%, that is worth $X per incident (the business impact time + the manpower/labor time saved; a back-of-the-envelope sketch follows this list).
  • Generic example: Help the customer understand what interesting outliers are in the data so they can take action. There are opportunities to exploit if the outliers are interesting. Our analytics should help surface these outliers, and qualify them as well. If we can save 10hrs a week of "exploration" time the customer has to do by surfacing this data early in the UX, that is a substantial labor savings of $X, as well as overall product quality/delight, since the users can now focus on what's really important in their job (not the "hunt").
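To make the IT software example concrete, here's the back-of-the-envelope sketch mentioned above. Every number is a hypothetical placeholder; swap in your own incident volume, labor rate, and downtime cost:

```python
# Hypothetical back-of-the-envelope math for the IT software example above.
# Every number here is a placeholder assumption, not real client data.

incidents_per_month = 40
hours_per_incident = 6.0          # current troubleshooting tool-time
reduction = 0.50                  # the 50% reduction goal
loaded_labor_rate = 85.0          # $/hour, fully loaded
downtime_cost_per_hour = 500.0    # business impact while troubleshooting

hours_saved = hours_per_incident * reduction
savings_per_incident = hours_saved * (loaded_labor_rate + downtime_cost_per_hour)
monthly_savings = savings_per_incident * incidents_per_month

print(f"${savings_per_incident:,.0f} saved per incident")   # -> $1,755
print(f"${monthly_savings:,.0f} saved per month")           # -> $70,200
```

Even rough numbers like these give everyone on the project a shared, testable definition of "delivering value."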

This is the start of making any design engagement successful.

Got some of your own goals/metrics to share? Hit reply; I would love to hear them.  If you're embarking on a design project and need help getting these defined,  you can schedule a free micro-consult with me below.

“Post-truth,” data analytics, and omissions–are these design considerations?

Post-truth. The 2016 word of the year.

Yikes for some of us.

This got me thinking about UX around data, analytics, and information, and what it means when we present conclusions or advice based on quantitative data.

Are those "facts"?

If your product generates actionable information for customers, then during your design phase, your team should be asking some important questions to get the UX right:

  • What risk is there to our customer if the data is wrong or could easily be interpreted incorrectly?
  • What information might we want to include to help customers judge the quality of the information the product generates? (A small sketch follows this list.)
  • If our technology analyzes raw data to provide actionable information, are there relevant analyses that the product did not run that the customer might need to contextualize the conclusions drawn?
  • Is our product (and company) being genuine, honest, and transparent when appropriate?
    (That's how I roll, at least, and few scenarios suggest this is ever bad advice.)
  • Is the display of supporting data considered and as unbiased as possible?
    (Notably: did you design the presentation of the information before coding it, or did you just dump it into a charting tool?)
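On the second question above, here is a tiny sketch of one way to pair a conclusion with the quality cues a customer needs to judge it, rather than presenting it as bare fact. The fields and thresholds are illustrative assumptions only:

```python
from dataclasses import dataclass

@dataclass
class Insight:
    """An actionable conclusion plus cues for judging its quality.
    Field names and thresholds here are illustrative assumptions."""
    headline: str
    confidence: float   # 0..1, from whatever analysis produced the insight
    sample_size: int
    data_age_days: int
    caveat: str = ""

def render(insight: Insight) -> str:
    qualifier = ("High confidence" if insight.confidence >= 0.9
                 else "Moderate confidence" if insight.confidence >= 0.7
                 else "Low confidence -- treat as a lead, not a conclusion")
    text = (f"{insight.headline}\n  {qualifier} "
            f"(n={insight.sample_size}, data {insight.data_age_days} days old)")
    if insight.caveat:
        text += f"\n  Caveat: {insight.caveat}"
    return text

print(render(Insight(
    headline="Churn risk is concentrated in accounts onboarded without training",
    confidence=0.74,
    sample_size=312,
    data_age_days=9,
    caveat="Does not account for seasonal renewal cycles",
)))
```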

Part of getting a product's design and UX right is knowing what questions to ask.

Let's take a quick example many of us without pensions can relate to: retirement savings.

Let's say you work at a financial institution and you're supposed to design an online tool that can help customers understand how much income they need for retirement, and specifically, what monthly savings target they should have in mind to reach that goal. A wise design-thinking product owner will be considering issues beyond how the UI works, the sliders and input fields on the forms, and the way the output charts look.

If we're talking about "truth" in the context of design, I'd hope the product team considered:

  • How confident are the displayed estimates?
  • Since this is a predictive tool, did the app run more than one type of simulation before generating advice? (A minimal simulation sketch follows this list.)
  • Did the app factor in unique characteristics of the user, such as their own behavior to date (if known)?
  • Does the design clearly mention relevant variables that the tool cannot control for, and also how much those variables might affect the predictions that are shown?
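On the simulation question, here is the minimal Monte Carlo sketch mentioned above. The 6% mean return and 12% volatility are invented assumptions for illustration, not financial advice; the point is that the tool reports a range, not a single number:

```python
import random

def simulate_balance(monthly_savings: float, years: int, trials: int = 2_000):
    """Monte Carlo sketch of a retirement balance.
    Assumes 6% mean annual return, 12% annual volatility (invented numbers)."""
    finals = []
    for _ in range(trials):
        balance = 0.0
        for _ in range(years * 12):
            # Annual mean/volatility scaled down to monthly steps.
            monthly_return = random.gauss(0.06 / 12, 0.12 / (12 ** 0.5))
            balance = balance * (1 + monthly_return) + monthly_savings
        finals.append(balance)
    finals.sort()
    # Report the 10th/50th/90th percentiles instead of one "truth."
    return finals[trials // 10], finals[trials // 2], finals[9 * trials // 10]

low, median, high = simulate_balance(monthly_savings=1_000, years=30)
print(f"10th pct: ${low:,.0f}   median: ${median:,.0f}   90th pct: ${high:,.0f}")
```

A design that surfaces the spread between those percentiles is being honest about uncertainty; a single bold number is not.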

How much, and how loudly, a tool answers these questions depends on the content, the risk, and the customer's problems and needs.

Sometimes these things don't need to be "answered" literally in ink because not all customers will care, or they might just assume that your calculator already does all this magic.  And, there are times when all the ink might just be noise (e.g. weather forecasts).

All that said, I am not sure "post-truth" fits in anywhere with good product design.

Think it’s hard building an analytics product or service?

Try cutting features out of it.

Apparently, that whole quote from Michelangelo about "I just carve away the part of the statue that doesn't look like David" is a myth.

But, it's a good myth for design thinkers. It helps me remember that you can add customer value by removing material from a design.

We talk a lot about what feature to add/change in our products, but how often are you investigating what should be removed? Even the stuff some customers might like?

This is a slippery slope.

You can create a lot of one-off features and content for specific customers in your product, and in the short-term, that may be the right play for a startup trying to get early sales and customers. However, as you grow, removing product without causing friction for customers will become more and more difficult. In almost every product, you can probably point to the handful of people that love/use that one area of the UI. It's much harder to measure the value of "what if we take this out to reduce noise/complexity?" How do you quantify that?

In short, you probably can't–at least not as easily as you can measure the revenue currently derived from the few folks using that one special GUI area you'd like to yank.
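That said, you can at least find out who would feel the cut. Assuming you log UI events somewhere (the event schema below is invented; substitute whatever your product actually records), a rough "removal candidate" report might look like this:

```python
from collections import defaultdict

# Rough sketch of a "removal candidate" report built from UI event logs.
# The event schema and the 5% reach threshold are invented assumptions.

events = [
    {"feature": "export_csv",   "user_id": "u1"},
    {"feature": "export_csv",   "user_id": "u2"},
    {"feature": "pivot_wizard", "user_id": "u3"},
    {"feature": "pivot_wizard", "user_id": "u3"},  # one devoted user
]
monthly_active_users = 250  # hypothetical

reach_by_feature = defaultdict(set)
for e in events:
    reach_by_feature[e["feature"]].add(e["user_id"])

for feature, users in sorted(reach_by_feature.items(), key=lambda kv: len(kv[1])):
    reach = len(users) / monthly_active_users
    flag = "  <- removal candidate?" if reach < 0.05 else ""
    print(f"{feature}: {len(users)} users ({reach:.1%} of MAU){flag}")
```

Low reach alone doesn't prove a feature is safe to cut (it may serve a small but vital segment), but it tells you whose doors to knock on before you swing the axe.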

So, I think the lesson here is:

1) Be mindful when adding content and understand that once it's in the product, it may be hard to take it out later.
2) Acknowledge that you sometimes have to take a short-term hit (take the axe to something) to make a longer-term gain (fewer choices, less noise, less product, but more value). This may mean making an internal "sales" pitch to stakeholders, if you are not the decision maker / product owner, as to why it's time to get the axe out. Usability testing may be able to help show how a different execution/design/workflow eliminates the need for the old UI and makes it less risky to remove.
3) Your few noisy customers–who probably love that feature/area you'd love to yank–may not represent the masses. This also means you have to go out and get wider feedback on your product; don't just listen to the squeakiest wheels. You have no way of knowing if that squeaky wheel is an outlier or not; so talk to your end users regularly. One technique: discuss the thoughts/ideas/complaints of the squeakiest customers with a wider base of your customers, so you can put the squeaky ones' comments in context and see if they're actually a good voice for the silent majority.

So, get your axes out and let me know how it goes.

Not sure what to cut down? I have a lot of saws in my toolbox. Set up a free micro-consult with me on my contact page.

A Venture Capitalist’s Take on Designing Useful Big Data Products

I loved this quote:

“Identify 2-3 need-to-know insights, and make that the focus of the product. Rather than thinking about competing products, think about competing processes. The goal of the product is to be consistently used by all users, not just power users, and the only way to accomplish this is to make it as simple as possible to discover the relevant insights. Reduce clicks and noise.”

While a couple years old, this is from a solid TechCrunch article by Kyle Fugere of dunnhumby Ventures. While I don't entirely agree with everything, such as the comments about "too much information," the spirit of Kyle's article is entirely sound. I also love this: "Most big data startups are simply interpreting and presenting data to end users. However, the ability to do that effectively is something that many startups struggle to achieve."

How can good design help you avoid product bloat and drift?

Let's talk about your product's drift and keeping it in check.

As your product evolves, it will likely grow in size, and get more complex over time. You'll listen to customers, adapt to their needs, and over time will begin to encounter situations where different segments' needs are in conflict, and you either dissatisfy one group of customers, or you attempt to satisfy everyone in your target market. More needs and audience types to satisfy means more UI. More UI usually means a more complicated UX (all you have to do to make it more complex is add more choices). More complex UX can lead to costly redesigns/rebuilds and dissatisfied customers. But, there's a way to grow carefully without taking a lot of extra effort.

I like to use benchmark user stories to keep the design of the product in check.

Benchmark user stories are a set of 3-7 usage scenarios that–no matter what else happens–should be well served by your product's design. They represent those use cases that should not be complicated or impeded as the product evolves and grows. Until your whole team can agree on this set of sacrosanct stories, you're guessing, increasing technical and UI debt, and are more susceptible to chasing fads and squeaky customers whose needs may be quietly derailing you from your product's overall mission.

Before we dive in deep, you can use this benchmarking concept with a legacy product, or a new product/project, so don't feel like you've missed the boat if you didn't start your project out this way.

Here's an example of a fictitious onboarding scenario for a data product. This product collects stats/metrics from a customer's data and tries to provide insights on that data, and so there is a fair amount of setup and configuration required before the analytics can kick in and yield value:

As a ____[one of your customer personas]____, once I've signed up for [your product], I imagine there is going to be some tricky setup to get all the data collectors in place prior to me getting much value out of the tool. I need good guidance on the order of those steps, when I should do them, and how to understand if the infrastructure is set up correctly. Once I know that, it would be great to understand how long it will be before the analytics kick in and I can start to realize ____[some value prop promised to the customer]_____. Since I know it could take days or weeks for enough historical data to be analyzed by your system, ideally the app would notify me when enough data has been analyzed and the first useful conclusions are available. And man, I would be stoked if there was a cool dashboard or report I could show my boss that begins to show the value of this tool immediately, given I had to really sell my boss on the purchase of this service. 

How can you use benchmark stories like this to your advantage?

  • Perform an audit of your product to uncover where your biggest gaps or friction exist. Use that as a baseline from which to improve.
  • Invite some real customers to go through the scenarios. Use your user study findings as a baseline from which to improve. Early on, this is also a great way to validate that you've actually chosen the right benchmarks. Ask the customers if these scenarios represent their reality.
  • Inform what work gets prioritized in your sprints (if you know the benchmark scenarios are performing poorly, should you really be adding a 7th new feature to benefit one squeaky customer?)
  • Prevent feature creep and drift within your product. Anyone in the org can pull the benchmarks out as a defensive weapon. "This proposed feature will make scenario #3 a lot harder. Are we sure we want to go there?"
  • When a new feature or idea will introduce a significant change or impact on your benchmarks, but the stakeholders believe it's the right thing to do, then you can use the benchmarks to establish that your product is making a conscious and deliberate pivot, and not just "drifting" into new territory to the confusion of the rest of your organization. If the benchmarks need to change, great, but your product team should be consciously making these decisions. "Yes, the dashboard is going to change to X and it will mean a serious change to the UX for Persona B who is (today) using it for Y purpose. We understand that, and need to rewrite our second benchmark story as it's no longer relevant to our new design." Remember to re-validate your benchmarks with the techniques above if you're going to change them significantly.

How I generate benchmark stories with my clients:

  1. Gather the product team (product owner/management (PDM), principal designers, UX researchers, possibly an engineering lead, and any additional stakeholders) in a working session with a whiteboard. It could take 4-8 hours spread over a few sessions to get final agreement. That's ok. I will also add that with data and analytics products, I almost always like to include an engineering representative as they typically have very valuable input. That said, keep the session focused on defining the scenarios and goals, and not the tactics and implementations that may exist (or need to exist).
  2. Choose a facilitator to lead the working sessions from the whiteboard. Ideally, everyone is at the board writing and participating, but sometimes it's easier with one facilitator.
  3. Propose, debate, and begin to draft the key scenarios together–everyone participates. You can start with shorthand such as "setup and onboarding" before going into the full detail shown in the above example. I like getting a shorthand list first, before fleshing out the details. Typically, PDM and UXD members will be (and should be) proposing the scenarios. If you're concerned about the session not being productive, consider starting with a smaller team (UXD+PDM) and then repeating the process with the larger group. I would stay at the "shorthand" level if you plan to do a pre-work session.
  4. How do you resolve prioritization debates during these sessions?
    You may find (especially within existing products) that product management, sales, support, engineering, and UXD have widely differing opinions on what constitutes the small list of stories worthy of benchmark status. Ultimately, I believe that PDM has the final responsibility to own and define the goals that will be labeled "benchmarks," however design and UX personnel should be "keeping their peers in check." Because PDM personnel can be strongly sales, marketing, or engineering-biased, UXDs can provide a great balance.  If you haven't figured it out yet, I strongly believe that product managers and designers should be in lockstep and one of the tightest duo relationships that exist within your organization. (For startups, I usually recommend turning that duo into a power trio that includes a design-thinking engineering/technical rep).
  5. Once you have the shorthand list, draft them into full prose format and detail–together. 
    I suggest remaining at the whiteboard for this process (not a computer). Your final deliverable will look something like a collection of paragraphs, similar to the example above. Most stories should be 2-5 sentences in length, and you may want to title each of them so they're easy to refer to later. Don't spend the time typing them up electronically during this working session. Date the whiteboard, and take a photo of the stories so somebody can type them up cleanly later. You shouldn't need a follow-up session if you do it this way, as you already worked out the details in the initial working session.
  6. Post them in a public place and proselytize them to the rest of the organization members who contribute to the product.
    Oftentimes, "public" means a Wiki page or something along those lines. Wherever you post them, you want to be sure that individual contributors and managers are aware of these goals and are cognizant of the fact that their work should improve, but never impede, the benchmarks. If you have the resources, you might even consider producing some cartoons or visual aids to accompany the stories (and make them more memorable to the rest of your team).
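If your "public place" ends up being a wiki or a repo, one lightweight option is to keep the stories in a structured, versioned form that sprint reviews can check against. This sketch is just one possible shape; the titles, personas, dates, and statuses are invented examples:

```python
# One possible shape for posting benchmark stories somewhere versioned and
# visible. All titles, personas, dates, and statuses are invented examples.

BENCHMARK_STORIES = [
    {
        "title": "Setup and onboarding",
        "persona": "Data platform admin",
        "story": "Once I've signed up, I get clear guidance on the order of "
                 "collector setup, and the app notifies me when enough data "
                 "has been analyzed to show the first useful conclusions.",
        "last_validated": "2024-03-12",  # last user study that exercised it
        "status": "at risk",             # ok | at risk | broken
    },
    {
        "title": "Show the boss the value",
        "persona": "Analytics champion",
        "story": "I can quickly produce a report that demonstrates the "
                 "tool's value to the stakeholder who approved the purchase.",
        "last_validated": "2024-01-30",
        "status": "ok",
    },
]

# A sprint-review habit: anything not "ok" gets discussed before any new
# feature work is prioritized.
for s in BENCHMARK_STORIES:
    if s["status"] != "ok":
        print(f"[{s['status'].upper()}] {s['title']} ({s['persona']})")
```

The format matters far less than the habit: the stories stay visible, dated, and re-validated, instead of rotting in a slide deck.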