
122 – Listener Questions Answered: Conducting Effective Discovery for Data Products with Brian T. O’Neill


Today I’m answering a question that was submitted to the show by listener Will Angel, who asks how he can prioritize and scale effective discovery throughout the data product development process. In this episode, I explain why discovery is a process that should take place throughout the lifecycle of a project, rather than a defined period at the start. I also emphasize that understanding the benefit users will get from the product is the main goal, and share ways to make the discovery process more effective.

Highlights / Skip to:

  • Brian introduces today’s topic, Discovery with Data Products, with a listener question (00:28)
  • Why Brian sees discovery work as something that is ongoing throughout the lifecycle of a project (01:53)
  • Brian tackles the first question of how to avoid getting killed by the process overhead of discovery and prioritization (03:38)
  • Brian discusses his take on the question, “What are the ultimate business and user benefits that the beneficiaries hope to get from the product?” (06:02)
  • The value Brian sees in stating anti-goals and anti-personas (07:47)
  • How creative work is valuable despite the discomfort of not being execution-oriented (09:35)
  • Why customer and stakeholder research activities need to be ongoing efforts (11:20)
  • The two modes of design that Brian uses and their distinct purposes (15:09)
  • Brian explains why a clear strategy is critical to proper prioritization (19:36)
  • Why doing a few things really well usually beats out delivering a bunch of features and products that don’t get used (23:24)
  • Brian on why saying “no” can be a gift when used correctly (27:18)
  • How you can join the Data Product Leadership Community for more dialog like this and how to submit your own questions to the show (32:25)

Quotes from Today’s Episode

  • “Discovery work, to me, is something that largely happens up front at the beginning of a project, but it doesn’t end at the beginning of the project or product initiative, or whatever it is that you’re working on. Instead, I think discovery is a continual thing that’s going on all the time.” Brian T. O’Neill (01:57)
  • “As tooling gets easier and easier and we need to stand up less infrastructure and basic pipelining in order to get from nothing to something, I think more of the work simply does become the discovery part of the work. And that is always going to feel somewhat inefficient because by definition it is.” Brian T. O’Neill (04:48)
  • “Measuring [project management metrics] does not tell us whether or not the product is going to be valuable. It just tells us how fast are we writing the code and doing execution against something that may or may not actually have any value to the business at all.” Brian T. O’Neill (07:33)
  • “How would you measure an improvement in the beneficiaries' lives? Because if you can improve their life in some way—and this often means at work—the business value is likely to follow there.” Brian T. O’Neill (18:42)
  • “Without a clear strategy, you’re not going to be able to do prioritization work efficiently because you don’t know what success looks like.” Brian T. O’Neill (19:49)
  • “Doing a few things really well probably beats delivering a lot of stuff that doesn’t get used. There’s little point in a portfolio of data products that is really wide, but it’s very shallow in terms of value.” Brian T. O’Neill (23:27)
  • “Anytime you’re going to be changing behavior or major workflows, the non-technical costs and work increase. And we have to figure out, ‘How are we going to market this and evangelize it and make people see the value of it?’ These types of behavior changes are really hard to implement and they need to be figured out during the design of the solution — not afterwards.” Brian T. O’Neill (26:25)

Transcript

Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill. Today, I’m flying solo, again. We’re going to be jumping into the topic of discovery with data products. And to get this going, we have a question that was submitted on my website, designingforanalytics.com/podcast. This question comes from Will Angel, so I’m going to go ahead and play his question and I’ll give you a little additional context that came up in our follow-up email conversation, and then I’ll go into my response. So, this is shared with permission from Will, so here’s what Will had to say.

Will: Hi, Brian. Thanks for the podcast. How do you deal with discovery in data product development? Sometimes the discovery work is most of the work of building a data product, which makes it harder to work in a consistent process, leading to confused and sometimes disappointed end-users. Just getting to the point of knowing what you want to build and what you’re able to build can take most of the effort, which can represent a significant investment just to get to the point where you can quote-unquote, “start work,” on a data product. My process relies on a lot of intuition, flexibility, and judgment, which is not an easily reproducible system. What guidance can you share on prioritizing this pre-discovery discovery work? Thank you.

Brian: The first thing I wanted to say was actually just on that last comment that Will made, the “pre-discovery discovery work.” I guess I don’t see it that way. Discovery work, to me, is something that largely happens up front at the beginning of a project, but it doesn’t end at the beginning of the project or product initiative, or whatever it is that you’re working on. Instead, I think discovery is a continual thing that’s going on all the time. So, in some cases, we actually design in order to figure out what we need to design and build. We have to actually do some design effort to figure out what’s needed. It’s not something that we premeditate ahead of time.

Anyhow, I’m going to break this episode into two major parts and address two follow-up questions that came up when I asked Will for a little bit more background on his question. Those two questions are, “How do we avoid getting killed by the process overhead of discovery and prioritization?” And then secondly, “How can we efficiently and effectively prioritize which data products to develop?” And just for additional context, I think that Will was actually thinking about my podcast with Nadiem from Mindfuel when he was talking about the data ability aspects of data products as well as this portfolio idea where you’re managing, you know, multiple potential solutions for a variety of different stakeholders. So, if that helps with some additional context.

Will also gave me some background that he’s very clearly into backlog prioritization. It sounds like they’re probably using Agile, they’re trying to do minimum viable product work, et cetera, so the build versus discovery thing, and seeing those as very discrete pieces of work, was also something that came up in our conversation. But anyhow, let’s jump into this first question about how do we avoid getting killed by the process overhead of discovery and prioritization.

So, I don’t have a single answer for all this. I kind of have a bunch of different ideas that I wanted to throw at you, all of you who might be resonating with this question. The first is that discovery work is by definition not efficient. And I actually love this—there’s a sales leader in the creative profession space named Blair Enns whose podcast I really enjoy, and he coined a term called the Innoficiency Principle, which states that innovation and efficiency are mutually opposable goals. In any reasonably functioning organization, one cannot be increased without decreasing the other.

So, if our identities are wrapped up in the writing of code and the analysis of data sets and the building of models or dashboards or things, it may feel very inefficient to be doing, quote, “discovery work,” that does not involve writing code—and I’ll just use writing code as a general catch-all term for the delivery and execution work. However, these days, as tooling gets easier and easier and we need to stand up less infrastructure and basic pipelining in order to get from nothing to something, I think more of the work simply does become the discovery part of the work. And that is always going to feel somewhat inefficient because by definition it is. You can’t really make it efficient. We don’t know where we’re going all the time and we don’t know how we’re going to get there and we don’t even know what the destination is during this early part.

And in order to come up with something new, or to solve a new problem, which is, quote, “innovative,” even if innovation isn’t necessarily the goal, there’s going to be default inefficiency in that. But we have to accept that we’re learning something along the way, and hopefully, as we go through this process of quote, “wasteful innovation,” where we can’t necessarily show any immediate value for something that we’ve learned, there are learnings that we can apply in the future to reduce the amount of uncertainty on future projects. We learned something that we can carry forward. So, we need to stop thinking about this as being efficient because it’s not.

The second idea here is: what are the ultimate business and user benefits that the beneficiaries hope to get from the product? Are those super tangible? And can everyone—especially the people, these beneficiaries—understand those metrics just as well as you? And guess what, they’re probably not going to come to the table with a list of them very clearly spelled out that are all easily quantifiable. We have to dig this out of them during the discovery phase.

And so, it’s important to note, too, that early on, progress metrics may be highly qualitative; they may be the only ones we can really measure early on because we can’t know, did this create business value. Well, it’s not in production, so it absolutely hasn’t yet. So, we have to look at other measurements of success or progress to validate if we’re on the right track. And that means we have to, kind of, define them, and they might be squishy. It may be, “The sales team likes the general direction things are going in.” Or someone says, “This would definitely help me increase the speed of the end-of-quarter accounting work that I do, having this information,” or, “If this was real and I could actually use this, it kind of feels like it would make things faster.”

You may have very squishy metrics like this early on, but that may be all we can do. But it’s important that we have something there in order to know how to measure progress. Otherwise, you’re going to probably default back to project management metrics and things that are easily quantifiable, and measuring that stuff does not tell us whether or not the product is going to be valuable. It just tells us how fast are we writing the code and doing execution against something that may or may not actually have any value to the business at all.

So, the third one here kind of goes in tandem with this: what I call anti-goals or anti-personas. And so, sometimes it’s helpful at the beginning of an initiative to talk about who this is not going to be for. And what is this solution not going to do? Especially if people have inflated expectations about it. What would make this project or initiative or product fail? Like, let’s play out some of the worst-case scenarios here. How could it go way off the rails? And at what point might we make a call that progress has not been made and it’s time to say no? Like, how far do we want to go with this?

And obviously, through experience, if you’ve been doing this a while, you’re going to have more history in your back pocket to pull on here. And we might need to give examples of these things to the user because they can’t necessarily generate this. Like, who is the solution not for? They might not see that until you start talking to them, that well, it’s really not going to help a salesperson understand anyth—like pricing, for example. The model may tell you who are your best prospects to call for the next quarter to close a sale, but it’s not going to tell you anything about the value of that sale.

So, as long as that’s understood, we have to scope this down in order to get you to the higher close rate that you seek; that was our business goal. This is something you might need to volunteer to them because you understand the world of data and they may not be thinking about it that way. So, there’s actually some generative creativity that has to happen here to kind of think through some of these solutions here. They’re not solutions; they are places of possible failure that the beneficiary has not necessarily explored or mentioned to you but may be highly relevant.

The next one is creative work, which is what I call this work where, again, we don’t know what the end state will be and we don’t even know how we’re going to get there yet. By definition, it’s going to have discomfort with it because it is not execution-oriented. There’s not a clear path. “I need you to build this thing and this is what it’s going to look like when it’s done. Here are the plans. Go.”

That is not what most of our product management and design work looks like. It’s nice when you get to that point where there is a clear trajectory and you’ve validated that your trajectory is on track. It’s really rewarding to get to that execution phase, but it’s easy to jump into that execution phase and think that simply by finishing the execution, you will definitely get some value because you’ve launched it. And I think, especially with data products, that’s not true. The other mindset that goes with this, to me, is this idea of owning the problem space and not just the solution, and the more that we’re kind of holding that problem the entire time with us and not attaching ourselves to any solution direction, I think that also can help us with this particular challenge.

The other thing with this is that I think dancing with this uncertainty is something that leaders have to be really good at. And I guarantee you, the CEO that’s running the ship, they’re doing this all the time. They’re taking in all the facts and all the information that’s there, but ultimately, they have to make a decision about where to go. And those decisions are rarely a hundred percent certain, right? There’s always going to be some uncertainty there. So, I think embracing the uncertainty and looking for where are we going to stop and learn something and say, “This isn’t on the right track, it’s time to make a change,” and being aware of our sunk cost biases, and all these kinds of things—that’s just a quality that we really need to jump into.

The next big, kind of, bullet here is research activities. Your customer and stakeholder research activities need to be an ongoing thing. If it’s just project-based, it’s much harder to do this, particularly if you’re bouncing around between different customer or user types in your organization. And I’m thinking largely here about, you know, a data team inside a large business where you have lots of different potential personas that come to you for solutions. The better your team understands the problem space in the eyes of the beneficiaries—again, using that catch-all for users and the business sponsors; sometimes those are the same person, sometimes they’re not—if that’s an ongoing activity, and you know what it’s like to be this person that’s going to use this thing, you know how they’ve done it in the past and how they do it today, it’s that idea of having a camera in their office and being able to talk to them in a way that makes them feel like, “Wow, you really understand what it’s like to be me as a marketing person,” or whatever. This is going to help us make better products more quickly. And again, that’s not writing code, or whatever.

And some of this can be outsourced to the team, but I think really the best way to do this is there has to be that exposure time between the makers, especially your leads on your team, and the users of these solutions, because that empathy needs to be created there. And the other thing that goes with this idea of research—or it doesn’t really go with research, but this idea of confusing delivery work with progress, right? It can feel like delivery work—writing the code, making the models, doing the stuff—is a sign of progress because it’s tangible and easily quantifiable to show, “Hey, look, I checked in this code,” or whatever, but that doesn’t necessarily mean we’re making progress. I would argue that the more insights we have about what the problems of our beneficiaries are, and how quickly we can get into solution mode because we know the problem space really well, those insights are ultimately going to buy you time when it does come time to do delivery because you’re going to spend less time making the wrong things, you’re going to be able to anticipate where things are going to go wrong, and in general, you’re just going to come up with better stuff more quickly.

And the final thing on this research topic is that, you know, if every user base is effectively brand new, you can’t carry your learnings forward. So, if we’re always doing this on a project basis—you know, for two weeks we met with the team, and then we went away for three or four months, and then we came back—we’re not building a habit of listening and really developing that empathy with the people we’re trying to serve. And so, you’re kind of starting over and you’re not able to carry the important stuff forward that we’re learning on the ground, where the decisions are being made. If we’re talking about decision support, application development, or analytics, and these kinds of things, we’re going to check out, and in three or four months—a quarter—a lot of stuff can change, and so we need to find some way to have some regular exposure there, whether it’s a dedicated team, or you have outside resources helping you do that and cluing in your team on occasion when maybe they are heads down in delivery mode.

There’s lots of different ways to do this, but I honestly don’t know how you can possibly get better at prioritizing the work and avoiding the, quote, “waste,” if we don’t understand who it’s for, what their challenges and problems are, and talk about it in a way that they can understand. If we don’t get better at that, we’re basically guessing and just carrying our experience forward, and it’s great to have some of our experience and past history to work with, but we can bring a lot of bias into the situation with that. And the next big idea here is design. To me, at a high level, there’s, like, two different facets of design. I think we can design to refine something that we think is on track—so this is kind of the idea of, like, we know where it’s going, we kind of have an idea what it looks like—and again, this could be an API, it could be a dashboard, it could be an application, but it’s this idea of, like, we generally know what it’s going to be when it comes out, and it’s just not done yet, but we know where we’re going. That’s a different kind of design than designing to figure out what’s actually needed.

So, this is the idea of where we did just enough discovery to potentially make something. And maybe it doesn’t have any real data behind it, but it would be enough to generate additional questions that would then clarify the trajectory we need to go down. And that work might be throwaway work, but when we start getting into the solution mode, you’re almost certainly going to start generating questions that did not come up during the quote, “pre-discovery phase,” or just the initial discovery phase. Things always get different when we start showing people things and we start showing progress. This is where sometimes the most enlightening information comes out because we realize, oh, that’s not what we understood when you said you needed X or that you wanted to have X.

So, I like this idea of getting in there quickly, trying to design something to generate conversation. But the warning is that we don’t fall in love with that too much. And that’s why we want to keep that fidelity as low as possible early on so that we don’t fall too in love with what we’ve made and not want to throw it out, even if the signs are that it’s not really the direction we should be going down.

And finally, again, this idea: just remember, the discovery work is always ongoing. So, you know, even in my seminar, my course, Designing Human-Centered Data Products that I teach, one of the things I talk about in there is that even though there’s eight modules in there, it’s not a linear process. So, you might come into a product or project where, you know, module four is the most relevant place to start applying the design process, or the product and design thinking process, to the work. And eventually, it might drive you back to module two and you realize, “Wow, we really don’t understand the space as well as we thought, but the act of doing some low-fidelity prototyping generated better research questions,” and we had to go back—quote, “back to the drawing table,” as you might call it.

My point is, think of it as a circle and not a line. These are just different types of activities that we will go in and out of, based on where we are. And eventually, we start kind of diverging and moving that way, and we probably do converge. And we get more into a classic development track where we know where we’re going and it’s just a matter of executing. But we should welcome that need to, quote, “go backwards,” or to revisit stuff that feels earlier like this discovery or doing additional research.

I think that’s something that we simply have to do, and maybe even more so with data products, because we don’t always know in advance what’s possible to make, unlike with, say, traditional software engineering, where typically it’s just a matter of execution and you have a fairly good ability to predict: is this thing even technically possible to make? I feel like a lot of times, that’s pretty well understood outside of the data-specific area, whereas with data, it’s much harder to do that. And then the last thing is—we kind of talked about this already—this business value is tied to customer adoption. I talk about this a lot on the show, but I really like this idea of how would you measure an improvement in the beneficiaries' lives? Because if you can improve their life in some way—and this might mean at work—the business value is likely to follow there.

So, we want to be measuring the benefits and not necessarily the outputs. And so, the adoption piece—while it is important to be aware of what the adoption level is of the solutions that we’re making—ultimately, the benefits part is what we need to be most clued into. Are we creating a benefit or not? Are we improving somebody’s life or not? That needs to be the thing that we’re really tied into if we are effectively, quote, “in a service organization,” and we’re serving the customers with our knowledge and our, you know, our hands to build the products and the things that eventually come out. So, that’s kind of all I had to say on that first part of the question.

The second part of the question—or the second question, rather, that Will had offered up—was, “How can we efficiently and effectively prioritize which data products to develop?” The big thing that comes to mind here is that without a clear strategy, you’re not going to be able to do prioritization work efficiently because you don’t know what success looks like. And so, at best, you start making decisions based on perceived scope, or how difficult the solution is perceived to be because of the number of unknowns, or things like this. And to me, early on, that stuff doesn’t matter as much as what is the strategy behind this, particularly on the business side. What are the big goals that leadership has for the entire organization? And do these initiatives that we’re doing have a clear tie back to that?

That’s an easy way to simply cross some stuff off the list, because if you can’t figure out the tie between what the project or product ask is and the overall strategy that’s supposed to be driving everybody’s efforts, then it should be fairly clear what stuff to move out of the way. And then you can get into the implementation difficulty or the size factor or some of these other ways. And there’s ways to do this with, like, you know, two-by-two matrices with sticky notes; this is a good way. Like, we can look at, you know, perceived cost versus, you know, the data ability, as Nadiem likes to call it—like, how easily accessible will the data be in order to potentially make this machine-learning model that we hope will predict X and Y. We can start, you know, doing price versus difficulty, or one could be, you know, change management: what’s the perceived difficulty of the human-factors adoption piece versus the amount of technical effort it’s going to be?

And you can map these on a wall and take a picture of the different matrices and see if you see any patterns there—like, look at all these things in the top right corner. Like, this one’s always in the top right quadrant. Maybe, you know, there’s some patterns that you can detect there to drive prioritizing in an optimal way. The other thing on this prioritization question: at some point, Will had mentioned in his follow-up question that it can be hard sometimes to say no to somebody that has asked for something, particularly if they really want this data product. And how badly they want it is not necessarily a sign of future usage or that you’re going to create any value there. And the reason is because the vision in a customer’s head or the sponsor’s head is almost certainly not the reality of what you can and probably will create.

So, as such, we have to uncover the hidden, unarticulated benefits that they seek behind what is sometimes called the presenting problem—the thing they’ve spoken aloud or written into a requirements document, or whatever format it is in which you receive requests for doing data work. But so often, these presenting problems are actually verbalized as solutions. Like, “We need a large-language model. We need an open, ChatGPT-style solution for our business. What are we doing?” That is a solution in search of a problem.

And so, behind that, what they really want are the benefits that they perceive a large-language model, chatbot-based solution would give, but they may not understand what’s involved with building such a solution. And so, it’s really important to understand what’s behind that beyond just the hype and the FOMO and all of that. So, be very careful with the idea of them wanting it a lot as necessarily impacting whether or not that’s a good project or product to prioritize in your queue, if you only have limited resources like pretty much everybody does.

The third idea here is that, you know, all things being equal, doing a few things really well probably beats delivering a lot of stuff that doesn’t get used and thinking about, you know, I have this big portfolio here and it’s really wide, but it’s very shallow in terms of value. One of the reasons this is really important—and I think we all understand that in principle—but every time you make one of these solutions that doesn’t get used, you can quantify the tax and the cost, the labor, the dollars, the budget that was spent. You have all these metrics of stuff that you can definitely show the value of. And we want the benefits to be easy to compute, not the tax and the cost to be easy to compute. And so, this is why I like getting small wins on the board and doing a few things really well and being careful with the projects that we bite off.

So, try to get some small wins on the board, especially if you haven’t, or you’ve never really gone back and measured stuff. Try to be really, really informed about how we are going to measure success on this next data product or these features we plan to ship. Having some idea, a way to quantify these things, is really empowering, too, because when you really can attach it to some value that’s being created there, it provides energy and kind of a dose of optimism to everybody: the beneficiaries are happy, and the makers—the designers, the data scientists, the team that made the thing—are going to feel more empowered and like their work matters. So, start tracking those analytics on your analytics, effectively, but just know they’re not always going to be digital analytics. They may be more qualitative in nature.

The other thing, too, on this kind of idea of prioritization is focusing on data products that will minimize disruption to how people do their work now. So, especially if you’re not particularly close to the users and, you know, projects get thrown over the wall and you don’t have a lot of one-on-one direct exposure time to the people using your solutions, I would probably be focusing more on solutions that pass the business value test—not even necessarily high business value, but there’s demonstrable business value and it’s aligned with the strategy—but also solutions that would minimize the need to change how people do their work or their workflow now. So, it’s this idea of if the river is already flowing downstream at 12 knots—or whatever; however you measure stream speed, I don’t know—and they’re on a raft, and you can, you know, come up with a canoe that’s going the same way, to the same destination, at about the same speed, but there’s new benefits to your creation that don’t require them to change course, it’s pretty easy to get out of one boat and hop into the boat right next to it. The switching cost is low. Those may be better ones to bite off, simply because you don’t have to play the change management and human factors game as much. You know, anytime you’re going to be changing behavior or major workflows or any of that, the non-technical costs go up, and the non-technical work goes up, to, kind of, figure out how are we going to market this and evangelize it and make people see the value of it. And all these kinds of things are, frankly, really hard to do, and they need to be figured out during the design of the solution and not afterwards.

But I know a lot of teams tend to think about that stuff later, after something is made, and not while we’re in the creation process, in the design process. The way I really advocate teams do it is to think about the adoption, and the quote, “operationalization,” as you’ve heard me say on this show, as part of the design of the solution from the beginning. It is part of the data product. It’s a feature that needs to get developed—if that’s the best way to say it—to make it stick.

Two more things on this. The first is that saying no is also a gift. And this can be as simple as saying, “Look, I’ve really heard you. You feel like this LLM—you want us to build a chatbot. You think it’s really going to help—I don’t know—the HR team answer questions about how to deal with difficult employees, or—I don’t know—whatever the thing is, but the chance of us wasting either your time or your money is super, super high on this project, and we don’t want to waste your time and your money. And the reason we know that is because we’ve done projects like this before that had these parameters.” And you might talk about that a little bit. “It’s missing this. We don’t know what the data is that’s out there. Even if we had it, we don’t know if we can model it, we don’t know if we can predict this thing. And we know on average that, you know, building an X, Y, and Z whatchamacallit can take even 12 to 18 months just to get to the first version of something, and that will not help you get the X advice or benefit that you said that you really want. And so, there’s a ton of risk doing this and we don’t want to waste your time and budget.”

So, that gift there—the way you say it has a lot to do with helping them understand it. Obviously, this is where management, and again having a clear strategy, have to come into play, because that should also be helping answer the question of what we say no to, because we can’t say yes to everything. So, I try to frame those nos as: why is this no in your best interest? It’s not so much that we don’t want to work on it or I don’t want to help you; it’s just that the risk to you is too high. And I don’t mean lying or tricking anybody or anything like that. It’s not a game. It’s about genuinely explaining how it doesn’t align.

And it may be something you simply don’t have the resources to do, and the other projects are much more aligned with the business strategy, they have more clearly defined problem spaces, and the solutions that might work in that space are much more well defined, and so those, by definition, get higher priority on your team, et cetera, et cetera. And then the final thing here: there were also some comments from Will about this idea of the space between when, say, a data scientist or some lead on a data project is in that discovery phase—like, figuring out, should we work on this problem? What does the space look like? They’re just kind of getting their head around it—and then there’s a pause, and, you know, the rest of the team is working on development and implementation of some other project, and there’s this kind of gap between when the discovery work happened and when it’s going to get into the implementation phase. So, this gets into all kinds of, like, project management kinds of stuff, and how do we keep relevant information intact with the solution here?

I think this has a lot to do with just communication and how you set up your teams for execution. But I actually think it’s great in software teams—when I’m looking at a team, I like this idea of having design working upfront to some degree, so we’re doing some design activity, which includes research work, in advance to inform us about our near-term future, but it’s not tied to immediate implementation work. There is also some effort being put on immediate implementation work as well, but that’s more tactical types of work. And we actually do need both of these streams going all the time. And the answer is both of them are required.

The mode switching, I think—again, just a very tactical communication tip that I’ve seen work is, you know, recording a video. Sometimes when I have clients, I do a lot of screencast recording, so when I finally get my head around a space, and I’m kind of ready to give someone a readout, I don’t usually just hop on a call. A lot of times, I’ll just record a short video, it might be 10 or 15 minutes, and I might just say, “Hey, it’s January 2nd, 2024, and I’m giving a status about this project. And refresher: the goals are this, the objectives are this, here’s where we are. This is what I’ve learned so far. We looked at this, we looked at that, we found out this, we found out that. Here’s a one-page doc that summarizes, kind of, the status of what this is.” And basically giving a brain dump.

And it could be about the design of something, or it could be a walkthrough of something—it could be maybe of a Jupyter notebook, and you walk through it, like, “We did some early computations, and we tried this and that didn’t work. And this thing is actually not a working thing, but I’m just walking you through it right now, kind of where we’re at.” And you have some kind of archive here of where that person who is doing the discovery—where their headspace is at that time. And that could be just still at the research phase. There might not even be any prototype or anything to look at at that point, but I think the question was really getting at preserving the knowledge that was captured, quote, “upstream,” versus the gap there between when that stuff might get executed down the road, and making sure you don’t lose that information.

So anyhow, I think most of you probably have different ways you could approach that, but I found some value with recording videos. They’re easy to share, you don’t need to do it on a one-on-one basis, people can rewind and go back, all those kinds of things. So, that can be one tactical way to do that. So anyhow, to Will, thank you for sharing your question here. I hope that was helpful to you and to maybe some other people that were listening. I think these are great questions.

I’m hoping these are the kinds of questions, too, that we can have open dialogs about in the Data Product Leadership Community, so if you’re interested in that, please consider joining. We’re going to be opening—at the time of this recording—in the late summer of 2023. So, designingforanalytics.com/community, if you’re interested in that.

The final thing I’ll say on this is that if you have a question yourself—if you haven’t ever made it to the little end music of the show, there is a link on my website, designingforanalytics.com/podcast, and if you go there, there’s a link to drop an audio question of your own right in the browser. You don’t have to download any software or anything like that. And you can leave a message and I’ll try to answer it here on the show. So, thank you. Until next time, keep making those products useful, usable, beautiful, and drive for adoption because that’s how you’re going to get to business value. Take care.

