
Today I’m talking about how to measure data product value through a user experience and business lens, and where leaders sometimes get it wrong. The first question came from an attendee at my recent talk at the Data Summit conference, who asked how UX design fits into agile data product development. Additionally, a subscriber to my Insights mailing list recently asked how to measure adoption, utilization, and satisfaction of data products. So, we’ll jump into that juicy topic as well.
Answering these inquiries also got me on a related tangent about the UX challenges associated with abstracting your platform to support multiple, but often theoretical, user needs—and the importance of collaboration to ensure your whole team is operating from the same set of assumptions or definitions about success. I conclude the episode with the concept of “game framing” as a way to conceptualize these ideas at a high level.
Key topics and cues in this episode include:
- An overview of the questions I received (0:45)
- Measuring change once you’ve established a benchmark (7:45)
- The challenges of working in abstractions (abstracting your platform to facilitate theoretical future user needs) (10:48)
- The value of having shared definitions and understanding the needs of different stakeholders/users/customers (14:36)
- The importance of starting from the “last mile” (19:59)
- The difference between success metrics and progress metrics (24:31)
- How measuring feelings can be critical to measuring success (29:27)
- “Game framing” as a way to understand tracking progress and success (31:22)
Quotes from Today’s Episode
- “Once you’ve got your benchmark in place for a data product, it’s going to be much easier to measure what the change is because you’ll know where you’re starting from.” - Brian (7:45)
- “When you’re deploying technology that’s supposed to improve people’s lives so that you can get some promise of business value downstream, this is not a generic exercise. You have to go out and do the work to understand the status quo and what the pain is right now from the user's perspective.” - Brian (8:46)
- “That user perspective—perception even—is all that matters if you want to get to business value. The user experience is the perceived quality, usability, and utility of the data product.” - Brian (13:07)
- “A data product leader’s job should be to own the problem and not just the delivery of data product features, applications or technology outputs.” - Brian (26:13)
- “What are we keeping score of? Different stakeholders are playing different games so it’s really important for the data product team not to impose their scoring system (definition of success) onto the customers, or the users, or the stakeholders.” - Brian (32:05)
- “We always want to abstract once we have a really good understanding of what people do, as it’s easier to create more user-centered abstractions that will actually answer real data questions later on.” - Brian (33:34)
Transcript
Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill, and today I’m going to be talking about how to measure data product value from a UX and business lens and where leaders get it wrong. So, why this and why now? This is influenced by two things that recently happened to me. The first was a question about how all of this design stuff I’d been talking about during my talk fits into agile development. This question came from a data engineer.
And secondly, I had someone purchase one of my courses on the website, and when you go through the checkout process, you can leave a comment or a question for me. I didn’t get permission upfront to talk about this, so I’m going to anonymize her question, paraphrase it here, and call her [Alina 00:01:11]. That’s not her real name.
So, she writes in and says, “I just started working for a growing analytics team that is part of a larger platform department. I am the product manager, and we are trying to move away from being seen as a service org to more of a product organization. So, we don’t want to just build dashboards; we want to create reusable SQL that other teams can leverage when needed and provide the data to those teams in a curated data layer, in an effort to empower other teams to self-serve. Any suggestions on how best to measure or track adoption, utilization, and satisfaction? Looking forward to taking your course. Thanks, Alina.”
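To picture what Alina is describing, here’s a minimal sketch of what a “curated data layer” can look like in practice: a reusable view that encapsulates the join and aggregation logic so other teams query one simpler table instead of the raw ones. Every table and column name below is invented for illustration; nothing here comes from Alina’s actual stack.

```python
import sqlite3

# In-memory database standing in for a warehouse; all names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL, ordered_at TEXT);
    CREATE TABLE customers (customer_id INTEGER, region TEXT);

    -- The 'curated layer': one reusable view that encapsulates the join,
    -- so downstream teams don't each rewrite it (or get it subtly wrong).
    CREATE VIEW curated_customer_revenue AS
    SELECT c.customer_id, c.region, SUM(o.amount) AS total_revenue
    FROM customers c
    JOIN orders o ON o.customer_id = c.customer_id
    GROUP BY c.customer_id, c.region;
""")

conn.execute("INSERT INTO customers VALUES (1, 'EMEA'), (2, 'APAC')")
conn.execute(
    "INSERT INTO orders VALUES (10, 1, 120.0, '2024-01-05'), "
    "(11, 1, 80.0, '2024-02-01'), (12, 2, 50.0, '2024-02-03')"
)

# A downstream team 'self-serves' against the curated view instead of raw tables.
for row in conn.execute("SELECT * FROM curated_customer_revenue ORDER BY customer_id"):
    print(row)  # (1, 'EMEA', 200.0) then (2, 'APAC', 50.0)
```

The appeal of a layer like this is that the logic lives in one maintained place. Whether users actually want to consume data this way is the open question the rest of the episode digs into.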
Okay so, again, we’re going to talk about this measurement thing. She asked specifically about measurements, and I thought it was a good time to talk about that, especially since we’re talking to data people and data product leaders; you all know about measurement here. So, there are a couple of different things to think about. The first thing that triggered me was that she says, “We want to create reusable SQLs.” And the first question I have is, “So that… what?” Right?
And there’s this general promise of self-service here, and there’s a reason why: either they’ve been asked to do this or this team wants to do it. But the ultimate success criteria isn’t the creation of the SQLs. I don’t know if the teams or the users even want reusable SQL, so even that is a question, right? So, I think we need to understand what ‘self-service’ means and why we want it. Is it because we want to accelerate decision-making, or because we want to relieve work on the analytics team? The latter isn’t necessarily about improving the lives of our users; that’s about resource allocation.
And so, when we talk about that, we need to be talking about more than what the team’s goals are. That’s not to say the data product team or the analytics team can’t have goals to reduce some of its ad-hoc project work. But ultimately, the question is: what do the stakeholders need or value? What do the users of the data products value? And the customers: how is any of this work going to benefit the paying customers that keep the lights on? I’m going to use the word customers in this episode to mean the people actually served by the business (I’m assuming you’re working in a business). Not the internal customers; I’m going to call those stakeholders, just so we’re absolutely clear.
She does talk about the self-service thing, so I would want to know: who wants self-service, why do they want it, and what is the status quo? Because oftentimes, the status quo can tell us something about why we’re doing this. So, for example, how do we define non-adoption? Does that mean, like, literally, there’s zero adoption? And is that because there’s no solution, or because the existing solution isn’t being used? What’s the benchmark here, right?
Because if we have a starting point, it’s going to be a lot easier to then measure the change. So, part of this (and it’s something I talk about in the modules of my course when we get into the research and problem-finding part) is that eventually there’s a design brief. One thing I think all design briefs need to state is how we are going to measure success in the project. And that success, again, is not just for the data team; it’s for the stakeholders who are paying for the project, the ones that have some kind of stake in it, and for the users, who may not be funding the project but are the ones that get to, or have to, use the solution. Hopefully, it’s something that they want.
And then you have that customer, right? The person paying to keep the lights on? How does any of this work affect them? These are all things that can be measured; these improvements can be measured. She also talks about utilization and adoption.
I think those are kind of the same thing; I’m not sure what she means by the distinction. But satisfaction was the third word she mentioned. There are lots of different ways to measure satisfaction, but it requires us to know something about the people we’re trying to serve. So, I would want to know, again, the current state: how do we know whether or not our users are satisfied now? And the answer to that is we’re going to have to go and talk to them.
And I don’t like surveys for this. I would want the team going out and doing one-on-one research, because you can’t do any of this design stuff routinely well without doing research. You’re not going to know what to design, any more than you’re going to know what to build, except by giving people what they asked for. And as you know from the show, that’s not always a good solution, because people generally present you with a presenting problem, and that’s not always what’s actually needed. Typically, they’re not articulating the full depth of the requirements or the future change they want to see in their world. They’re focused on what they think they need in order to get to that change.
So, what does dissatisfaction look like? If I was talking to Alina, or this was a brain-picking call or something like that, I would want to know how satisfaction is measured now and how users define dissatisfaction. Like, do you know that they’re currently not satisfied? I would want some kind of benchmark for this if that’s the criteria. And another thing to note here: when she says, “Any suggestions on how best to measure or track adoption, utilization, and satisfaction,” there are two factors. There’s agreement on what these words mean between the stakeholders, the users, and the team that’s making the data products (that would be Alina’s team): what do these words mean, and do we have a shared definition for them?
That’s really important, as is the question of whether these are actually the right things to be tracking, the right metrics or not. I don’t know if those metrics came from Alina or from the stakeholders. And sometimes, frankly, your stakeholders won’t know how to measure some of this stuff. This is one of the skills I think design can help with, because of the facilitation that we do and the questions that designers and researchers ask to get things onto the surface. We need to make sure we have the right metrics: not just agreement on what the words mean, but that these are even the right words to measure at all. So, that’s an important part of getting this right.
And to me, once you’ve got your benchmark in place, it’s going to be much easier to measure what the change is because you’ll know where you’re starting from. So, how do you do this? Let’s get a little bit deeper on this. We want to really focus on outcomes and not outputs. The output here would be these reusable SQLs that teams can leverage when needed, but that’s not the outcome. The outcome is some type of improved decision-making. With the kind of analytics she’s talking about, I’m assuming the implication is that better decisions can be made, or facilitated, with the solution, probably without having to go to another party like the data team to get that work done.
But I don’t know, and I don’t know the specifics of that. And the specifics really matter when you’re talking about this user experience stuff, because your business, your industry, your team’s makeup, the business strategy: there are so many variables here that I don’t think you can just have a set of generic metrics that you track for data products. When you’re deploying technology that’s supposed to improve people’s lives so that you can get some promise of business value downstream, this is not a generic exercise. You have to go out and do the work to understand the status quo and understand what the pain is right now from the user’s perspective.
Is it really that they don’t have reusable SQLs, and that’s what they want or think they want? Or is this team projecting that reusable SQLs would be the answer to self-service? They’re thinking that by delivering these reusable SQLs, people will stop coming to them, therefore people will be self-serving, and therefore people will be happy with the change. Not necessarily true, right? Because self-service sounds really good to us, but self-service may actually be a tax on our users, especially if you’ve been hand-servicing them as a service organization. And she said that’s what they are. And I can understand why maybe there’s a legitimate thought that, “Hey, we shouldn’t need to go to the team for every type of analytics question that comes up. There should be some self-service tools.”
This is understood, sure, and there are lots of off-the-shelf tools that make this promise of self-service. So, I think it’s reasonable to talk about that, but the details of self-service really matter, so that we’re not imposing a new tax on users who may just resort to old ways and methods if that self-service user experience is too difficult. That could mean shooting from the hip and taking guesses, it could mean still knocking on this team’s door, or it could be, “You know what? I’m going to spend half a million dollars of our budget and call in a consulting firm, and they are going to become our little shop on the side.” So, now we’ve got our own analytics team running on the side, because I don’t know how to use this team’s data product, this thing that they gave me that was supposed to give me self-service, but it’s never quite right.
So, let’s talk about this idea of ‘never quite right.’ Part of the reason I think these things go wrong is that a lot of data product teams are building platforms, abstractions, plumbing, and infrastructure to facilitate users answering questions later on. The problem was evident to me when I was at the Data Summit conference: one of the questions I asked was, “How many people here have direct impact on the technology that some human being is going to use at the end of the loop?” And as I recall, only about half the hands went up.
And so, what this tells me is that a lot of people really have no idea how the data products they’re building are going to be used downstream. They’re working in the world of abstractions. And that’s okay to a point; there are always going to be some people focused on execution, coding, and delivery work, especially in a larger organization. But ultimately, I think the makers of these solutions need direct access to customers, stakeholders, and users, at least through observing that research if not actually participating in it. If you’re not doing that work, it’s really easy to go native, to not really understand the problem space, and to jump to abstractions.
So, I want to talk about this idea of abstractions, right? The problem with abstractions to me is that a lot of companies do abstractions too early when they don’t really understand the mental model of the users of these systems. Instead, they’re abstracting based on the datasets or the databases or the models that are being used, and maybe they’re thinking, well, this is generally, like, customer data, so we’re going to create this layer of, quote, “Customer data,” over here that theoretically could answer customer-type questions. The problem with that is, if you don’t know what the actual customer questions are that users are asking for help with in terms of making decisions, it’s going to be really hard to get the abstraction part right. And so, you can end up building something that’s never quite right that always has gaps.
And there may not be huge technology gaps, but the gaps may be significant from a trust perspective, a usability perspective, a utility perspective. And that user perspective—perception even—is all that matters, right? Their perception of the quality, usability, and utility of the data product is really reality. That’s the only thing that matters. So, if your job is to create these self-service data products, then you have to understand that their perception of the thing is all that really matters.
It doesn’t matter what the effort was that was put in, it doesn’t matter how accurate it is, how much security there is, how many considerations were put on it, how it scales, all these other kinds of things, all that matters is their perception of does this help me do my job? Does this make my life better in some small way? Their perception is all that matters. So, measuring their perception is really important.
So, how do we do that? Well, we need to understand the jobs, the tasks, the work that the users do here. That’s really important. And if these users are internal stakeholders and not paying customers, we also want to be thinking about how will this affect paying customers on the other end. Hopefully, it’s adding some kind of value to them, and this is why user experience researchers are so relentlessly focused on customers, but we can also focus on internal stakeholders and internal users as well.
But the point is, all these different bodies of people have different incentives, different interests, and different things that they care about. And so, for the project or the product that we’re working on, we need a shared definition of what all these different successes actually mean. And again, there’s no generic recipe for doing this kind of thing. I think you need to go out and do the tough work of interviewing, researching, and getting the unarticulated stuff to the surface. It’s in there, but we have to go and do the work to get it out.
So Alina, I would want to know: who cares about adoption? Who cares about utilization? Who cares about satisfaction? Those sound a little bit like team metrics—and that’s on the right track; I like that her team even cares about this stuff. I think that’s great from a “How is our team doing our work?” perspective, right?
But for the users and the stakeholders and the paying customers, we need to understand what will make their lives better, and then ask how we might measure that improvement and make that the cornerstone of the data product work that we’re doing. That’s where it begins. Why? Because things like adoption can cut both ways. Adoption, again, can be a tax.
Like, using a tool—even if it’s a self-service tool—may actually be perceived as a tax and not a benefit by the user. The data team may think it’s a total win because there are fewer ad-hoc inquiries coming inbound, but the users may perceive it as a tax. And they may not want that tax, or they may not understand the value they’re getting in exchange: the promise that by putting in a little bit of tool effort over here, using some Tableau or an API or whatever the heck it is, you’ll be able to make faster decisions downstream. They may or may not want to do things that way. So, we really want to try to fit the solution into their workflow as much as possible, so that you get the reduction in inbound calls for help and service on analytics projects, and they get better decision-making without a tax.
You know, so reusable SQLs, just even right there. If you’re talking about exposing reusable SQLs to actual users of the system, are you sure that’s what they want? Do they want to learn how to write SQL, and do they actually want to be interfacing with the product that way? The answer isn’t necessarily no. You may be serving a technical audience, and so that might be the most appropriate way.
In fact, a traditional graphical user interface might be really bad for that audience. They may just want an API endpoint, or they want—I don’t know what it is that they want, and that’s my point. These are not generic solutions, right? So, make it purpose-built: go out and figure out what they need. Let’s jump back to this abstraction thing.
The problem with abstraction comes when you’re not doing a lot of ongoing, routine research activity: going out to talk one-on-one, observing one-on-one, or even getting a group of people to go observe somebody doing their work and using data to make decisions. If you don’t have a lot of continuous exposure (I don’t know what the exact number is), you won’t intimately understand the status quo and how data is used to make decisions now. And if you don’t know that, it’s going to be really hard to take all the different use cases you observe in the wild and properly abstract them into reusable data products. It’s much easier for a team to get a somewhat vague inquiry from a stakeholder about what they want (like reusable SQL statements—maybe that came from the stakeholder), look at the data, find inconsistencies, find ways to simplify things so people aren’t doing crazy joins and the data is the right freshness, or whatever it may be, and assume that would be the right solution, that it could be a solution that they want. But that doesn’t necessarily mean it’s going to get used at the end of the loop, right? The abstraction part only works if we intimately know real, solid, and specific use cases that people want to perform using the tool.
Once we have a set of these, we can do the synthesis work. This is where designers can help group and cluster things; synthesizing the research is the kind of activity we often do after research. Through that process, if you had some user experience designers or researchers helping you out, you could do some clustering, for example, and figure out the classes of problems—not solutions, but the classes of problems—that people are either working on today or have voiced that they would like to work on. What decision-making types of activities, what data questions do they have? We could do some clustering there, after researching, to figure out, okay, here’s what it means when they say customer questions. Here are the kinds of questions we’re hearing a lot about, the ones they want to answer.
So, that’s a much more user-driven way, as opposed to, “Let’s look at the customer data that we have, try to create an abstraction of it, serve that up to users, and hope that they might want to use it at some point.” It’s kind of like taking all the raw ingredients for baking a cake and putting the raw ingredients on the shelf. The customers are looking for, I don’t know, flour. And instead—I forget what the three parts of a grain of wheat are called, but you’ve separated out each part, the spelt and the, whatever that shell thing on the outside is called—you’ve taken all three parts of the kernel, separated them into their own bags, and you’re selling all three parts of the flour to them separately. And they’re just like me; I’m an idiot, I don’t even know what those words are. I forget what the different parts of the grain are.
But that’s my point. They’re looking at something that’s been so abstracted out, when really all they want to do, maybe, is occasionally buy flour. You probably have some people that just want to buy a finished cake at the bakery, but you have some people that actually want to bake their own because they want to control the flavor and the size of the cake and all these other kinds of things. So again, you need to know who those people are that are, quote, “shopping,” in your store, and figure that out so that it’s user-driven, it’s needs-driven, it’s problems-driven, and it’s not reverse-engineered from the data warehouse or the data lake or whatever thing you’re using to store your information. That’s not the starting point, at least not in the world that I believe in, the one that I think is going to help you get more traction for your analytics and machine-learning work. It’s putting humans at the center of this work. That’s the whole theme of this podcast, as you know.
It’s really about starting with the last mile. And I call it the last mile because the users are often at the very end of these projects in the data products world. They’re usually coming in too late and so I think of them as being at the end and I want them to be at the beginning. So, I call it the last mile for that reason.
But there’s magic in that last mile. If you go and spend the right time, and you ask the right questions, and you synthesize this data, it’s going to save you a lot of time building the wrong stuff, or what I like to call ‘technically right, effectively wrong solutions.’ No customer wants that. No customer wants—and no user, no stakeholder wants SQL. They don’t really want SQL, they don’t want machine learning, they don’t want analytics, they don’t want dashboards, those are all outputs. They don’t want outputs.
They want a promise. A promise that may or may not be explicitly stated, but there’s a promise behind each of those things, even if those are the words that they use. They said machine learning, but they have in their head an idea of what machine learning is going to deliver for them. And it’s your job—or somebody’s job—and this is where designers and researchers I think can help because they’re good at asking questions, listening, and synthesizing people’s feelings, the words that they say, what they do, into concrete statements that everybody can agree on. That is the game. If you want to lead and you want to do innovative work, that is ultimately the game.
If you want to just give people what they asked for, go write the code, deliver, execute, put it out there, and hope that it gets used, and then move on to the next project. That’s fine. That’s not the world or the people that I want to talk to, the ones that I want to help because we don’t have the same perspective on this. And if that works for you, great, you should keep doing that. I think that recipe is why we have these continued stats about low adoption, low business value. It’s because that’s how it’s been done for 20 or 30 years and that’s not what customers, users and stakeholders really want.
We are not—the industry, the data science and analytics groups are not—spending the time to really understand the downstream decision-making that customers, stakeholders, and/or users are supposedly making, or think that they will be able to make. We really have to hone in on that if any of this technology output is going to serve them. So, the last thing I wanted to talk about here is the difference between success metrics and progress metrics. Success metrics are the qualitative—or possibly quantitative; hopefully, they’re quantitative—things that we want to measure that will tell us whether we did a good job at the end of the project.
But what are not success metrics? Well, there are these other things that I like to call progress metrics. This isn’t my framing or my model; I don’t know where I first heard it. But progress metrics are things that I think a lot of teams track right now: the number of sprints completed; delivering on the outputs or the features that we promised; the business stakeholder asked us for a, quote, “reusable SQL statement that other teams can leverage,” end quote, so we gave them these things. Counting the outputs, counting the sprints.
This can actually happen with design, too. Like, this can be done improperly in the design world as well: counting the number of GUI widgets and screens that we made, or counting the number of hours that we spent doing research with customers. That’s not necessarily a form of value. That is a progress metric.
Now, there’s nothing wrong with these progress metrics, as long as we know that they are not the success metrics, as long as we know that they’re just there to tell us whether we’re possibly on the right track. And I think part of this is really about helping the team itself, and maybe a stakeholder, understand: is the boat pointed in the right direction or not? And that’s about it. But a data product leader’s job should be to own the frickin’ problem, to the point that the problem is all you care about anymore. You don’t even care about the technology that’s being built and all of that stuff. You just intimately understand the problem space, and you want to deliver a solution for it.
And you know how to measure whether you’ve gotten there or not, because you’ve done the work to develop the relationships with the stakeholders, the users, and the customers, and you’ve articulated that to your team in such a way that they really understand it too. That’s what a real data product leader, in my opinion, does. It’s that relentless focus on the problem space and the solution space, and not on engineering metrics or counting increments of work that have been done and these kinds of things. That said, I do think it’s okay to have some of those things, especially if you’re trying new stuff. So, for example, tracking how many hours of research, like one-on-one customer inquiry, the team has done.
Like, what’s our average per analyst, or our average per data scientist? Let’s say our goal is four hours a week, everybody has to spend four hours a week, and that’s your metric. And maybe you’re at 2.2 on average, and you’re trying to move that metric up. It doesn’t mean you’re getting any value, or that you’ve delivered anything of value, just because you’re spending time talking to users.
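If you wanted to actually track that, a minimal sketch might look like the following. The names, the logged hours, and the four-hour goal are all hypothetical, and hitting the number proves nothing about delivered value on its own; it only tells you whether the boat is pointed in the right direction.

```python
from statistics import mean

# Hypothetical hours of one-on-one user research logged this week, per person.
research_hours = {
    "analyst_a": 3.0,
    "analyst_b": 1.5,
    "data_scientist_c": 2.1,
}
GOAL_HOURS_PER_WEEK = 4.0  # the team's (assumed) target

# The progress metric: average research hours per person vs. the goal.
team_avg = mean(research_hours.values())
print(f"Team average: {team_avg:.1f}h / goal: {GOAL_HOURS_PER_WEEK:.1f}h")

for person, hours in research_hours.items():
    status = "at goal" if hours >= GOAL_HOURS_PER_WEEK else "below goal"
    print(f"  {person}: {hours:.1f}h ({status})")

# Prints a team average of 2.2h against the 4.0h goal, with everyone below goal.
```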
Now, I can tell you, in the long run, unless you’re completely ignoring what they say and you don’t care, you’re almost certainly going to create better products over time. And we could go and measure that if that’s what you wanted to do. So, there’s value in that. And it is a progress metric to say, “We’re eating our vegetables, we’re eating less refined carbohydrates, so we should get to the desired outcome, which is feeling better about how I look.” And the metric for that is weight loss.
Well, the progress metrics there can be: how many pushups did you do? How did you eat this week? Et cetera. You may not see immediate impact from those things, but they are things that we can count and we can measure. So, they have their place. But the ultimate goal, if I keep using that analogy, isn’t necessarily to count the number of vegetables eaten.
The goal would be—you know, if you’re the trainer, right—this person wants to feel good about the clothes they wear, they want to feel strong and courageous when they go up and give a presentation at work. They want to feel good when they go on a date, they want to fit into the clothes that they bought five years ago that they no longer can fit into. If you notice, I just used a bunch of words like feel. And those feelings may be the user experience outcomes that we need to measure against. And that might feel really strange, but people make decisions based on feelings all the time and then they rationalize with data.
And I think a lot of data people know this, but it’s a hard pill to swallow that feelings have this much to do with how people do stuff. And I think it’s true. Even in business it’s true. This is not just for personal life. This is also for business life.
So, when we talk about measuring user experience stuff, you may need to measure feelings. How do people feel? Like, “Hey, compared to last year, I feel much more empowered to make better decisions about purchasing, or pricing, or where to spend my ad campaigns.” Or, “I feel like I know how to set my prices correctly. I feel like I’m taking less risk than I used to. I feel like when I need to get a quick answer to what I think is a small question, I’m able to get it more quickly than I was last year.” Those may be the success criteria that you need to track.
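As a sketch of how a feeling like that can still become a number you track: imagine asking users to rate their agreement with a statement like “I feel empowered to make better pricing decisions” on a 1-to-5 scale, once as a benchmark and again a year later. (All the ratings below are invented; and since Brian doesn’t like surveys for this, picture them gathered during one-on-one research sessions.)

```python
from statistics import mean

# Hypothetical 1-5 agreement ratings for the statement:
# "I feel empowered to make better pricing decisions."
baseline_ratings = [2, 3, 2, 2, 3, 2]  # benchmark, gathered before the work began
current_ratings = [4, 3, 4, 5, 4, 4]   # same question, same audience, a year later

baseline = mean(baseline_ratings)
current = mean(current_ratings)
print(f"Baseline: {baseline:.2f}  Current: {current:.2f}  Change: {current - baseline:+.2f}")
# Baseline: 2.33  Current: 4.00  Change: +1.67

# Without the baseline measurement, there is no "compared to last year";
# that's why the benchmark has to come first.
```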
So, Alina, I don’t know if that’s helpful to you, but from a UX perspective, that may be the kind of stuff that you need to track. And if you can get that stuff right, there may be a way to then go in and attach some business value to those feelings. Like, how does that translate, and why does the stakeholder feel like it’s easier than it was last year? Or, “I feel less risky about the decisions that I’m making.” Well, what does a risky decision cost us today?
Like, what’s an example of what a risky decision could mean? If we can measure risk—which is definitely possible—then we should be able to measure the improvement and attach some kind of business value to it. So, this stuff is all intertwined. The UX stuff may sound kind of hand-wavy, and I’m talking a lot about feelings and measuring how people feel about stuff, but you can attach more quantitative stuff directly to those feelings if you’ve done the proper work, again, to understand what it’s like to be the stakeholders, the users, the customers: their world, their jobs, their workflow, the game they’re playing, the score they’re keeping in their head when they go to work. It’s probably different than the score that you’re keeping.
And I like to talk about this game framing. Seth Godin uses this a lot, and I love this kind of framing. I’m not literally talking about games, but if we think about our careers and our work and life, we’re kind of playing these different kinds of games, and we’re keeping different kinds of scores depending on who we’re hanging out with and which tribe or group we’re working with. And at work, each department is playing its own game. The sales team, I guarantee you, if they’re commission-based, they care about the number of deals, their lead flow, how many deals are closing, and what their commissions are. The design team probably has a completely different worldview about what they’re measuring, right? In the finance department, maybe they’re measuring risk, and costs, and keeping operational costs down.
These are all different games, and so it’s really important for the data product team not to impose their scoring system onto the customers, or the users, or the stakeholders, right? It’s our job to simply go out and discover what those things are, together with the stakeholders, and users, and customers. We do it together. It’s really an act of facilitation. I think I talk about this in the first video of the seminar in my course.
Like, a lot of the role of what designers do is facilitating groups of people from different domains and job responsibilities. And this is where the innovation comes in: you get all these different perspectives on things, but you need shared understanding of why are we here? How is this tech supposed to help somebody out? How will we know if we did a good job? We’re here to facilitate that activity.
And this is something that you can also learn how to do. If you don’t have professional designers or user experience people and you want to try doing this yourself, you can. Of course, it’s probably going to be slower than hiring a professional, but it’s possible. And this is really kind of the beginnings of how to do it if you haven’t done it before. I hope this podcast episode was helpful for thinking about the difference between progress metrics and success metrics, and about abstraction and the issues with abstracting too early, especially from a data perspective versus a customer-problems perspective. We always want to abstract once we have a really good understanding of what people do; it’s easier then to create more user-centered abstractions that will actually get reused later.
So, Alina, and to all the users—I’m sorry, all the listeners out there of Experiencing Data—I hope this episode was helpful. You’re always welcome to leave me a comment. And again, if you’re on my mailing list, you know on Tuesdays I always send out the new episodes, so feel free to just hit reply; those replies go right to me, and I’m always interested in hearing from you and knowing whether anything that I or my guests have been talking about has helped you make better decisions or changed how you approach this work of data products. So, best of luck, and I look forward to talking to you again and bringing the next guest onto the show, so stay tuned.