
Today I chat with Chad Sanderson, Head of Product for Convoy’s data platform. I begin by having Chad explain why he calls himself a “data UX champion” and what inspired his interest in UX. Coming from a non-UX background, Chad explains how he came to develop a strategy for addressing the UX pain points at Convoy—a digital freight network. They “use technology to make freight more efficient, reducing costs for some of the nation’s largest brands, increasing earnings for carriers, and eliminating carbon emissions from our planet.” We also get into the metrics of success that Convoy uses to measure UX and why Chad is so heavily focused on user workflow when making the platform user-centered.
Later, Chad shares his definition of a data product, and how his experience with building software products has overlapped with data products. He also shares what he thinks is different about creating data products vs. traditional software products. Chad then explains Convoy’s approach to prototyping and the value of partnering with users in the design process. We wrap up by discussing how UX work gets accomplished on Chad’s team, given it doesn’t include any titled UX professionals.
Highlights:
- Chad explains how he became a data UX champion and what prompted him to care about UX (1:23)
- Chad talks about his strategy for beginning to address the UX issues at Convoy (4:42)
- How Convoy measures UX improvement (9:19)
- Chad talks about troubleshooting user workflows and its relevance to design (15:28)
- Chad explains what Convoy is and the makeup of his data platform team (21:00)
- What is a data product? Chad gives his definition and the similarities and differences between building software versus data products (23:21)
- Chad talks about using low-fidelity work and prototypes to optimize solutions and resources in the long run (27:49)
- We talk about the value of partnering with users in the design process (30:37)
- Chad talks about the distribution of UX labor on his team (32:15)
Quotes from Today’s Episode:
Re: user research: “The best content that you get from people is when they are really thinking about what to say next; you sort of get into a free-flowing exchange of ideas. So it’s important to find the topic where someone can just talk at length without really filtering themselves. And I find a good place to start with that is to just talk about their problems. What are the painful things that you’ve experienced in data in the last month or in the last week?” - Chad
Re: UX research: “I often recommend asking users to show you something they were working on recently, particularly when they were having a problem accomplishing their goal. It’s a really good way to surface UX issues because the frustration is probably fresh.” - Brian
Re: user feedback: “One of the really great pieces of advice that I got is, if you’re getting a lot of negative feedback, this is actually a sign that people care. And if people care about what you’ve built, then it’s better than overbuilding from the beginning.” - Chad
“What we found [in our research around workflow], though, sometimes counterintuitively, is that the steps that are the easiest and simplest for a customer to do that I think most people would look at and say, ‘Okay, it’s pretty low ROI to invest in some automated solution or a product in this space,’ are sometimes the most important things that you can [address in your data product] because of the impacts that it has downstream.” - Chad
Re: user feedback: “The amazing thing about building data products, and I guess any internal products, is that 100% of your customers sit ten feet away from you. [...] When you can talk to 100% of [your users], you are truly going to understand [...] every single persona. And that is tremendously effective for creating compelling narratives about why we need to build a particular thing.” - Chad
“If we can get people to really believe that this data product is going to solve the problem, then usually, we like to turn those people into advocates and evangelists within the company, and part of their job is to go out and convince other people about why this thing can solve the problem.” - Chad
Resources and Links:
- Convoy: https://convoy.com/
- Chad on LinkedIn: https://www.linkedin.com/in/chad-sanderson/
- Chad’s Data Products newsletter: https://dataproducts.substack.com
Transcript
Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill. Today I’ve got Chad Sanderson on the line from Convoy. What’s going on Chad?
Chad: Oh, things are going well. Thanks for having me on Brian.
Brian: Yeah. Is the data behaving?
Chad: It is going about as well as can be expected.
Brian: [laugh]. Well, you have a lot of opinions on that kind of stuff on LinkedIn, which I like. There’s a—you have an opinion and a voice and an idea about what you want to share. And so, I wanted to bring some of that in here. But one of the things that, in particular, I think when we first connected there is you had in your LinkedIn title, not only are you the Head of Product for the Data Platform at Convoy, you’re also a Data UX Champion. What is that? [laugh]. I like the sound of that.
Chad: Yeah, well, as you know, UX is user experience, and I think that there is a huge user experience component when it comes to accessing and working with data. Many people in our company, many different types of stakeholders, from data consumers to product managers, software engineers, analysts, and data scientists, have to access data on a regular basis, and each one of those people has a different workflow. So, I like to think about how to make those experiences more ideal.
Brian: What prompted you to care about this, to feel like this was an important need? I think being mindful about this is, like, the first step of getting better at it, so I’m always curious where people’s experience with this comes from?
Chad: Yeah, well, you know, I came into Convoy as a product manager, and so my job was really to understand our customers’ pain points and help them leverage data better. And one of the first things we figured out is, after getting through your traditional sort of big tech data problems—we’re not able to get the data that we need quickly enough, we’re having data quality issues at scale—the next real breakdown that we saw is that people have a really hard time using the data effectively, accessing the data. And these are, you know, fundamentally experience issues. It’s not that the data wasn’t there or that it wasn’t possible to write good queries on top of it. It was just that the tooling and the systems that we had did not allow people to do that effectively. And once we understood that, then we realized this was a pretty large investment we could start making.
Brian: Got it, got it. And was there a particular moment or something came to a head, or was this just, like, a slow buildup of complaints, or, like, just lack of views? Like, what were the signals that you had a problem with this such that you needed to address it?
Chad: Yeah, well, I think that one of the turning points for us is, for a long time, I was under the impression that data was almost identical to software, where software engineers, they write code, they push that code to production, they monitor it. And all of the tools to do that really exist in the data ecosystem today. There are tools to write your SQL queries, there’s tools to push those queries to production, to define ownership. And yet, despite us having these tools that were very similar to the software engineering workflow, we were still getting a massive amount of complaints. And at one point, at the end of 2020, the data science leaders in our organization essentially said, “Hey, look, like, things are getting so bad for us, it’s so hard to access the data, it’s taking us so much time, working within our data model is so painful that we need to hire a data engineer to embed on every single team at the company.”
Which would be, you know, 25 people. Convoy has a relatively small product and engineering organization, it’s a late-stage startup, but our data team was about 60 people at that time, so you can imagine we’d be essentially increasing our data team by 50% just to deal with, you know, what I felt were usability issues. That was such a large cost that it really caused us to start looking at some of these problems a bit more seriously.
Brian: So, tell me how do—you know, if you’re not coming from a design or user experience background, where did you start? Like, what’s the alternate remedy to hiring more muscle to [laugh] muscle your way through the problems?
Chad: Well, we didn’t know the answer to that either. So, we started just by having conversations. You know, my background is, I’m not a designer, but I was originally a journalist, and so I’m pretty good at asking people questions. And we did customer interviews and user research for about three months. And we didn’t just stop with the people on our own team; we actually went out to other companies, other data teams, other data science teams, and we started interviewing them as well to understand, you know, is it just Convoy that has a sort of uniquely bad experience, or is everybody going through a similar thing, and who’s solved something like this already?
And we found that it was not a uniquely bad experience; it was a common problem shared across many, many different companies. And when we compiled all of this user research together, we looked at the pain points that we kept seeing coming up over and over and over again, and based on all that pain, we were able to try to root cause it.
Brian: Got it. You mentioned you were a journalist, and I think there’s a lot of connection there to the work of research. And one of the questions I get sometimes, like in my workshops and the training here, is like, “Okay, so I need to go out and do this research stuff. What do I ask? I don’t know what to ask.” Where do you go with that? How did you get these people to open up and to open up on the right stuff? Tell me how you think about that.
Chad: The best content that you get from people is when they are really thinking about what to say next; you sort of get into a, you know, free-flowing exchange of ideas. And so, it’s important to find the topic where someone can just talk at length without really filtering themselves. And I find a good place to start with that is to just talk about their problems. What are the issues that you’ve—what are the painful things that you’ve experienced in data in the last month or in the last week? And it doesn’t have to be about any particular issue; you could tell me about a specific incident.
And what I like to do is, as they sort of talk about their pain, really try to drive down to as close to the root cause as I possibly can. So, if someone in data is saying something like, “Yeah, you know, it just took a really long time for me to access a dashboard,” then I want to ask follow-up questions like, “Okay, when you say it took you a long time, do you mean, it took you a long time to generate a query? It took you a long time, like, it was very slow? Is it always a long time? Is it a long time for this particular dashboard? Is it a long time for every single dashboard?” And the more that you understand about the specific nature of the problem, the more you can start to find these, like, similarities across different customers.
Brian: Yeah, I think that’s really great. I often recommend, especially if you’re talking about a particular project or something like this, asking people to show you something they were working on recently, particularly when you’re talking about a problem space. It’s a really good way because it’s usually fresh in the mind; the frustration is probably fresh. And from there, like, all these tangents will probably open up because you’re going to start getting into the immediate problem, and then you might get into the systemic-level problem, and maybe some organizational stuff, and it will just turn into this mass—you’ll probably have too much information, which is the opposite problem of not knowing what to ask about. So, I think it’s really great to start with something recent that they’re still resonating with; there’s some emotion behind it because you’re going to get more of that unfiltered kind of information, which is what we want.
Chad: I find that oftentimes software engineering and data teams can fall into a pattern of asking customers what they would like instead of what their problems are. So, let’s say, you know, “Would you like the data to be faster?” It’s like, well, yeah, of course. Everybody wants the data to be faster. If they’re not doing that, then the even worse approach is saying, “Hey, we’ve decided to make the data faster. Do you like this?”
Oftentimes—this is very common across all software engineering teams—it’s sort of easy to fall in love with your own projects before validating that a customer actually has a need for them. So, by taking the approach that we did, we actually figured out that there was a huge gap, not just in, sort of, our mental model about what a customer data workflow should look like; there was actually a huge gap just in the industry, like, in the current set of tools that existed, that there was no solution for, and we would have to go and build something to fill that gap.
Brian: How do you measure that you’re getting better at serving this audience? Is it just kind of listening for comments at the watercooler? Like, how do you guys know if you’re improving?
Chad: I do think that the qualitative feedback is extremely important for us. We do not have a direct connection to business value in the same way that a team that’s building out a pricing model or something and can directly impact revenue does. So yeah, I actually think those, sort of like, watercooler comments are pretty effective. You can get surveys that are pretty good as well, just for general sentiment. Like, you know, how is our data environment relative to other data environments that you’ve worked in? Is it better or is it about the same? Is it worse? Is it way worse?
If you get a lot of people saying that it’s way better, obviously that’s the place that you want to be. But I don’t think it’s just limited to, sort of, your typical data consumers, like your data scientists and your analysts; once you start expanding that out to folks like, you know, executives or salespeople or marketers, you can start asking some really interesting questions like, “How effective do you feel you are at figuring out what our customers need based on the data that we provide?” Those are some generic questions, but just getting a general trend over time has been a really great way for us to understand if we’re doing a good job, asking good questions, and providing good solutions.
Brian: Does your team do any type of measurement of this tool? You talked about speed, performance, ability to write queries, all these kinds of things. Do you guys measure, especially if you’re building out some of this tooling, custom tooling for these different audiences, do you do any type of evaluation of the user experience directly, you know, usability testing, or something along those lines? Is it mostly build, deploy, put it in production, and then wait for feedback to come back, or—talk to me about that.
Chad: So, for us, because a lot of the things that we build are zero to one, we don’t actually expect the experience of using the product to always be that good. And we actually accept that getting a lot of people who hate our tool doesn’t necessarily mean that we’re doing a bad job. In fact, one of the really great pieces of advice that I got is, you know, if you’re getting a lot of negative feedback, this is actually a sign that people care. And if people care about what you’ve built, then it’s better than overbuilding from the beginning. So, we do get a lot of negative feedback, but we also get a lot of great feedback—generally the trend is you get this amazing amount of extraordinarily positive feedback very early, like when you’re unlocking a new use case that people weren’t able to do before, or it makes their lives, like, dramatically easier.
But then you get a pretty huge amount of follow-on feedback. So there’s, like, a sort of sharp increase in, you know, requests for additional product enhancements and things that would make everybody’s lives easier and not take them as much time. And in our case, the relative value of making great experiences generally tends to be lower than creating an experience that essentially fills a gap where previously people were just doing a bunch of manual work. So, one of the ways that we actually measure is we look at pipeline completion rate—or workflow completion rate, I guess, is another way of thinking about it—where we lay out, in order to complete a task based on data, here are all the steps that one has to go through. You have to do A, you have to do B, you have to do C, you have to do D.
And in some cases, those steps are quite simple and a person could perform them on their own manually with little effort; in some cases, they are extraordinarily complex, and they take days or weeks. And so, our goal is essentially to figure out how we can complete the greatest percentage of the pipeline, provide a solution that makes it radically easier to basically create an end-to-end workflow. And so, the relative level of completion of the workflow is one of the things that we really care about. And then within each step of the workflow, we generally try to measure customer sentiment. Like, can people get their tasks done effectively, leveraging our tools? And then we also measure the relative sentiment of the entire workflow. So, how is this end-to-end experience? And then we compare that over time.
But I mean, one of the problems with the sentiment measures, the sort of, like, NPS scores and those types of things, is that we don’t have a tremendous amount of customers, and so, depending on what that particular customer is focused on, like, that quarter or that month or whatever, it could significantly change our scoring, right? If they’re doing something that’s very, very complex and we don’t facilitate that very well, they’re probably going to give us lower scores. If they’re doing, you know, a bunch of simple things and we make each one of those simple things very easy, they’re probably going to give us higher scores.
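To make the metric Chad describes concrete, here is a minimal Python sketch of a workflow-completion rate with per-step sentiment; the step names, the has_tooling flag, and the scores are all hypothetical stand-ins, not Convoy’s actual workflow, fields, or data.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a workflow-completion metric like the one Chad
# describes. Step names, flags, and scores are illustrative only.

@dataclass
class WorkflowStep:
    name: str
    has_tooling: bool            # is this step served by a product, or still manual?
    sentiment: Optional[float]   # per-step user sentiment from surveys (0-5), if any

workflow = [
    WorkflowStep("define semantic context", has_tooling=False, sentiment=None),
    WorkflowStep("discover datasets", has_tooling=True, sentiment=3.2),
    WorkflowStep("write and run query", has_tooling=True, sentiment=4.1),
    WorkflowStep("validate and share results", has_tooling=False, sentiment=None),
]

# Completion rate: the share of required steps that tooling covers end to end.
completion_rate = sum(step.has_tooling for step in workflow) / len(workflow)

# Average sentiment across only the steps that have been scored.
scores = [step.sentiment for step in workflow if step.sentiment is not None]
avg_sentiment = sum(scores) / len(scores) if scores else None

print(f"workflow completion: {completion_rate:.0%}")
print(f"average step sentiment: {avg_sentiment}")
```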
Brian: Got it. So, it sounds like all your measurement is based on analytics, then, trackable metrics that you embed in the software, or your tool stack, or some kind of something like that?
Chad: At least for the sentiment piece, it just really comes from talking to our customers and asking them questions and doing surveys and things like that. The completion piece is based on our own, sort of, qualitative assessment of what needs to exist in that workflow. There’s no, sort of, objective reality of when a workflow starts and when it ends; it could go on infinitely. Someone’s workflow could be ten steps, another person’s could be fifty, another person’s could be five, and so we have to apply our best judgment and make a case for why we think it should be these twelve steps, and then we grade ourselves on, you know, are we providing solutions for each of these twelve steps.
Brian: First of all, I love that you guys are focused on looking at this workflow stuff. I don’t think this is always thought about, and usually even the person that’s doing the work, the user, doesn’t think about it in terms of we have these twelve steps because no one’s really ever broken it down. Like, plug the microphone in, power the amplifier on, close the windows and turn off the air conditioning, enable Bluetooth on headphones. That’s not how we think about it. It’s like, “Chad’s going to come record on my podcast and, like, I’ll load up Zoom and hit record,” right?
Nope, there’s actually all these other, like, things that had to happen for us to get on here. So, I’m curious, do you think about enabling all the steps, or do you guys do any work to say, “You know what? Step three is where people are just ready to blow their brains out because it’s so difficult. So, even though step one and two, like, there’s some data prep, or some ETL, or some, I don’t know, pre-work that might need to happen, we’re going to double-down and do, you know, 2X effort on step three because that’s where the pain is largest.” Do you do anything like that to kind of weight the efforts versus covering the end-to-end? Like, how do you think about that?
Chad: So, we definitely think about that. I mean, all of this is really sort of our best intuition and logical reasoning. And we’ll write out these hypotheses in a document on where it’s most important to focus. What we found, though, sometimes counterintuitively, is that the steps that are the easiest and simplest for a customer to do that I think most people would look at and say, “Okay, it’s pretty low ROI to invest in some automated solution or a product in this space,” are sometimes the most important things that you can do because of the impacts that it has downstream.
So, to give you an example of this, in data specifically, data consumers are thinking about discovery, right? There’s a whole, sort of, area of data infrastructure around discovery: “How do I find the data that I need?” The way that most tools work today is they sit on top of your data environment and they show you: here’s all of your tables, here’s all of your columns, and all the metadata associated with those things, right? And people will go through that workflow and it’s fine. When we started, like, really understanding the workflow around discovery, what we found is that looking at the tables and the columns was just a single piece of a much longer discovery workflow, where they needed to actually understand the semantic context.
Like, yes, this table might refer to something called shipments, but what does the shipment actually mean? Like, when we’ve defined it somewhere, like, what is that? Or maybe there’s a column called program type, but in the real world, what is a program type? And there’s multiple values of program type; what do those mean? There’s this really important step when the data is actually being defined, of capturing that important context, that nobody was doing. So, it didn’t even get registered as a pain point. Nobody was even doing it.
And so, we realized, like, there’s actually a missing piece in the workflow, where someone has to define the semantic context of, like, what this data actually means, how does the business work, and then we need to map it to what’s being collected in the real world. And if that piece happens, all of those discovery issues that are happening downstream, all those problems, they kind of go away and you get a nice, like, seamless experience where you can go from context to the underlying datasets. That’s something that we learned: just because a particular step in the workflow is really, really painful, we may solve that pain in a different place in the workflow.
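A minimal Python sketch of that missing step, assuming a hypothetical semantic layer: capture the business definition when the data is defined, then map it to the physical tables and columns. Every table, column, and definition below is invented for illustration, not Convoy’s real schema or tooling.

```python
# Hypothetical sketch of the "missing step": capturing semantic context when
# the data is defined, then mapping it to what is physically collected.
# All tables, columns, and definitions here are made up for illustration.

semantic_context = {
    "shipment": {
        "definition": "A load moving from an origin facility to a destination facility.",
        "mapped_to": "analytics.shipments",  # hypothetical physical table
    },
    "program_type": {
        "definition": "The kind of program a shipment was booked under.",
        "mapped_to": "analytics.shipments.program_type",  # hypothetical column
        "values": {
            "SPOT": "A one-off load auctioned on the open marketplace.",
            "CONTRACT": "A load covered by a pre-negotiated shipper agreement.",
        },
    },
}

def describe(term: str) -> None:
    """Let a data consumer go from business context to the underlying dataset."""
    entry = semantic_context[term]
    print(f"{term}: {entry['definition']} -> {entry['mapped_to']}")
    for value, meaning in entry.get("values", {}).items():
        print(f"  {value}: {meaning}")

describe("program_type")
```

With something like this in place, discovery can start from the business term and lead to the dataset, rather than the other way around.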
Brian: Yeah so, you know, in the UX world, we would call this a mental model. The example you gave is a great one: what is a shipment, right? And there might be a shipments table [laugh] somewhere in there. And it looks like it’s, like, point in time; there’s a bunch of date stamps, there’s product IDs in the cart or whatever, and some ID for the shipment itself. And then there’s the conceptual model, the mental model the user has—which may be different perhaps for a data scientist or a business manager or something—about, what is a shipment?
A shipment has two addresses: a from address, a destination; it happens over time; things change along this timeline during the shipment. So, it’s going to which postal carrier? Oh, it’s going to the United States Postal Service. On and on and on, right? And so, when we talk about the data part, it’s like, well, are we talking about the data model perspective, or the user’s perspective about what a shipment is? This is an immediate signal for me [laugh], actually, when I look at data products and user interfaces: when I see a data model that’s literally been put into the user interface, where basically all the object tables are literally displayed and you’re supposed to, like, look at rows and columns of stuff.
It’s usually a red flag. Not always; there are some cases where you do need to actually look at the data model itself, but that’s usually a warning flag that they have not modeled the experience or the design of the product around the mental model that the customer has. I’ve seen this repeatedly happen.
Chad: Yeah. Yeah, that’s exactly right. And one of the things that you find at a lot of modern organizations these days is that modeling is frequently a post-hoc process that people do in order to answer the immediate question that they have. And that has a pretty tremendous amount of consequences because if you are a siloed business unit—like, let’s say I work on the shipment success team and my goal is to understand whether a particular shipment arrived on time or not and if it was late—I’m only going to have a vision of that particular workflow and that particular process. And so, if there’s some important data that could be modeled better, which would be tremendously useful to the customer experience team, or the operations team, or whatever it is, they’re just not going to have access to that data and it’s going to limit the amount of questions that they can actually ask.
We see this over and over and over again: design decisions in the data model from years ago manifest today in questions that are just, like, really, really hard to answer, and it requires days, weeks, sometimes even months of work to produce an insufficient answer to that question, when, if it was modeled correctly, it could have been answered in 30 seconds.
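As a small illustration of the gap Brian and Chad are describing, here is a Python sketch contrasting a flat table row with something closer to the user’s mental model of a shipment; the classes and field names are invented for illustration, not an actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical contrast between the data-model view and the user's mental
# model of a shipment. All class and field names are invented for illustration.

@dataclass
class ShipmentRow:
    """What the data model exposes: one flat row of IDs and timestamps."""
    shipment_id: str
    product_id: str
    created_at: datetime
    updated_at: datetime

@dataclass
class Shipment:
    """Closer to the user's mental model: a journey that unfolds over time."""
    origin: str
    destination: str
    carrier: str
    events: list = field(default_factory=list)  # (timestamp, description) pairs

    def record(self, when: datetime, what: str) -> None:
        self.events.append((when, what))

row = ShipmentRow("S-123", "P-456",
                  datetime(2022, 5, 1, 9, 0), datetime(2022, 5, 3, 14, 30))

trip = Shipment(origin="Seattle, WA", destination="Chicago, IL",
                carrier="United States Postal Service")
trip.record(row.created_at, "picked up at origin facility")
trip.record(row.updated_at, "delivered to destination facility")

print(f"{trip.origin} -> {trip.destination} via {trip.carrier}: "
      f"{len(trip.events)} lifecycle events")
```

An interface built around ShipmentRow shows rows and columns; one built around Shipment shows a journey, which is Brian’s point about designing to the mental model.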
Brian: Yeah, yeah. Maybe—and I didn’t really go into this at the beginning—briefly, tell us what Convoy is, just so our listeners have a little bit of context about what that business is. And then maybe you could tell me a little bit about the makeup of your team? Is it data engineers, software engineers, or UX people? You know what—tell us a little bit about that.
Chad: Yeah. So, Convoy is a really interesting company. It’s what we call a digital freight marketplace. So, that means it sits in the middle of a shipper that’s trying to move some freight around the country and a carrier that is trying to take that freight. It’s a B2B model.
The carriers usually own a fleet of trucks, and Convoy initially was a broker. So, we’re an automated broker. That means a shipper would give us a shipment, we would reach out to our network of carriers and say, “Hey, we have this shipment. Who wants to take this load?” And what we’ve become is more of a marketplace where shippers can, sort of, automatically push their loads to the marketplace, and carriers can then bid on those loads in an auction model. Lowest bid generally wins, but not always, and then we monitor how that load is then taken from, like, one facility to another facility.
And there’s a lot of really interesting things that you can start doing in that business model to make the flywheel spin a lot faster. Now, we just unveiled a few months ago, something called Convoy for Brokers. So, we actually allow other brokers to put their shipments into our marketplace if they’re having trouble fulfilling them. So really, really cool business model. The interesting thing about it, or the different thing about it I would say, compared to previous positions I’ve had, is that it’s not really big data in the typical sense.
We don’t have that many customers. There’s around 50,000 carriers that we work with, a few hundred, maybe just around a thousand shippers, and those shippers are some of the biggest businesses in the entire world. We’re generating a pretty tremendous amount of revenue, but there’s just not that mu—there’s not a massive volume of data, but that data is tremendously complex. There’s a lot of lifecycles that are happening. There’s a shipment lifecycle, there’s a shipper lifecycle, there’s a contract lifecycle, there’s a payout lifecycle, and people need to be able to understand all of these lifecycles pretty effectively.
In terms of my team, I work on a team that’s called Data Platform. We’re a data infrastructure team. And it’s about half data engineers, half software engineers. So, we are simultaneously maintaining all of our data environments and we are building products on top of those data environments to facilitate relevant data-related use cases.
Brian: You said you’re building products on that. Tell me a little bit about what is a data product.
Chad: So… there’s—I find data products can have a few different meanings. One of the best definitions of data products I’ve heard is, like, it’s just a dataset: a dataset that has a clear owner, a clear customer that it serves, and a problem that it answers.
We don’t supply that type of data product. We supply data products in the sense of: there is a product, a software application, that consumes data or facilitates working with data. So, Snowflake is a data product, or dbt is a data product, or Fivetran is a data product. That’s mainly the area that we play in: either bringing new products and, like, SaaS services into Convoy to integrate them together and create a nice end-to-end experience, or, when there’s gaps in that experience, we’ll build something ourselves if we feel it’s valuable enough.
Brian: Is the process and the mentality of designing and building products in the software space, does that fully translate over to building data products? Or, in your experience, are there some, like, gotchas and things that we need to not carry over?
Chad: Yeah, I think there’s quite a few actually. I think one pretty big gotcha is that in sort of the typical B2C product space—or even B2B that’s not data products, like, you can afford to build MVPs that are not robust and you can scale them later. And you say, “Okay, I want to learn something. We’re going to experiment. We might be wrong, we might be right, but we’ll do something sort of very, very lean and then we’ll build on top of it and optimize.”
But that is a lot harder to do if you’re talking about data products because if you’re a part of a critical path, like, a critical workflow, everybody’s going to start taking dependencies on you. So, if we build some system, some event instrumentation system for example, and we make these events real time and everybody can consume them and services can consume them, that actually has to be really, really robust basically from the first day because if we go in later and say, okay, we’re going to do a refactor, we could potentially be affecting machine-learning models, we could be, you know, breaking critical reports downstream; there’s, like, a lot that can go wrong. We have to think about scalability and stability, and hardening a lot of our tools, much, much earlier than I think a lot of other products do. If something falls over, you know, if, like, a feature in an MVP product doesn’t really work, like, okay, it’s fine. You don’t have access to that feature right now.
But if our product falls over and the data is not accessible, that’s a really big deal and we could significantly impact our business. So, that’s one. The other thing I would say kind of comes back to customer conversations: the amazing thing about building data products, and I guess any internal products, but, you know, my specialty is data products, is that a hundred percent of your customers sit ten feet away from you. That is not common in most other, you know, software-based businesses. Like, if you’re doing interviews with your customers, like, you might be getting—you might be pulling from a biased set of users, right? Okay, it’s only the people that actually want to talk to me, and maybe that says something about that particular cohort.
But when you can talk to everybody, you can talk to a hundred percent of people, you are truly going to understand the complete range of—you’re going to have every single persona right there, available to you. And that is tremendously effective for, like, creating compelling narratives about, you know, why we need to build a particular thing. It’s great for getting buy-in, it’s great for getting design partners, it’s really, really easy to bring design partners in to trial stuff. I find that data scientists and data consumers are some of the most willing people to try out data-related infrastructure because, A, it’s fun, it doesn’t require a tremendous amount of work, and it fits right into their workflow anyway; it’s not a massive cost for them.
So, those are two things I would say: it has to be a lot more robust from the beginning, but also your ability to collect, like, very high quality qualitative data about your target persona is much better.
Brian: How do you think about somehow producing low-fidelity work, or the idea of building half a solution that might be hardened, but it’s a half-solution that’s hardened as opposed to a giant thing that may have, you know, problems with utility? Because, you know, on the enterprise side, one of the issues is it simply doesn’t get used; the marketing team goes and hires a, you know, data science team or whatever, to stand up a project outside of the IT department, or whatev—they do end-runs, you know; there’s an option to just ignore it. So, I’m curious, do you think about what does half a solution look like? And is there a way to build a hardened yet half-solution, or to prototype something in order to understand, will this actually make a difference or not? Do you—I don’t know if you think about things that way or not.
Chad: Absolutely. We really like prototypes. The way that we tend to work is our first iteration of something will be, like, a combination of open-source tools, maybe with a little bit of software engineering thrown in. Sometimes we’ll spin up a very, very small service with—you know, do about a day of work, if there’s any, you know, UI that needs to be present, maybe we’ll spend a sprint or two sprints on a back-end. But generally speaking, we’ll try to leverage the products that we already have.
We’ll find a design partner that really believes in our vision and can think a bit abstractly about the problem, right? So, we’ll say something like, “Hey, listen, we want you to imagine what this would be like if there was a full-fledged experience here.” And we’ll sort of take them through each step in that experience, some of which might be fulfilled by tools we already have, some of it might be a process, right? Like, we want you to follow these steps in this order, which we understand is not going to scale until we have a product that actually facilitates that, but just sort of bear with us in the meantime. And if we can get people to really believe that this is going to solve the problem, then usually, we like to turn those people into advocates and evangelists within the company, and part of their job is to go out and convince other people about why this thing can solve the problem.
One of the mistakes that we made pretty early on was assuming that—it was the exact same problem you described. It was, like, building out something that’s, like, relatively full-featured and then launching it. It wasn’t that the thing that we built was wrong, right? It wasn’t that it was the wrong thing to build. It was that I think very technical people have a high amount of distrust for things that disrupt their workflow that they didn’t predict coming, or that they don’t have—like, they didn’t provide the rubber stamp, like, “Yeah, this is the thing that I want.” So, just bringing in someone that’s well-known and well-respected from the beginning and having them, you know, help guide the decision and sign off on it has worked wonders for us.
Brian: Yeah, I mean, just the concept of having a design partner, this is something I advocate for, you know, when I train groups as well. An even better framing I’ve heard recently is designing with your user and not designing for your user. And I liked that idea of, like, they’re part of the creation of this thing. It’s not—even though we might be in a—you know, as a data platform, you might think of yourself as somewhat of a service kind of business in a way, or you’re running a service team—you’re still doing it with them; it’s really hard to do it for them in isolation, and so that regular interfacing there can be really powerful. And making sure that you have some champions there, and… it’s much easier when the idea spreads, right?
Like, [laugh] you know, they start talking about it themselves and get the whole data science team behind it because one person is so jazzed on how, you know, “I get to work on modeling finally” [laugh], you know, instead of, like, just getting ready to work on modeling. [laugh].
Chad: Yeah. I mean, we have found that the products that get by far the most adoption are the ones where we have a really strong evangelist on our customer side. And these are people that really believe in the problem and they believe that we could be doing things better. And I would say that if there is any significant problem at the company, there is almost always at least one person that has a very high, like, vested interest in that problem, just, like, through the law of probabilities: if you sort of plotted everybody’s vested interest in any particular problem, somebody’s going to be on the far right end. And that’s the person that, you know, like, when you find them, and you give them a—you work with them to produce a solution, it becomes as much their work as it is your work.
Brian: You talked about, kind of, you know, working with these partners. And who does this work on your team? Do you have UX people that do this? Do you train your team how to do this? I don’t think a lot of, you know, data engineers, data scientists, analysts know how to do this or want to or think that they should be doing it. So, how do you do that? And do you hire for this? Or, like, tell me a little bit about who does this work?
Chad: Yeah, so I actually train our software engineering team and data engineering team on how to do this. Sadly, I wish we could hire, you know, full-time sort of UX people to come in and do a lot of this work, but as an infrastructure team, we’re just not looked at as the top priority for UX resources, unfortunately. So, we did do a lot of training. And what I did was I came up with essentially a templated process to follow. And I say, if we’re going to be working on a new effort, any new initiative, here’s how you do it.
You start off by having at least X many customer conversations. When you have those customer conversations, here’s how you properly ask questions, right? You sort of—the Five Whys model and different things like that. Once you have those customer conversations, then, like, this is how you, sort of, derive insights from those conversations. Like, you write out, like, what are the most common problems that people seem to repeat over and over again, and what seems to be the root cause for those problems.
If you know the answer to that clearly, then go create a document where you, like, clearly describe what the problem is and what the root cause is. If you’re not really sure, then go back and go deeper and see if you can drill down farther. And then once you have, sort of, this document that describes what are the problems that we have, lay out your vision for the future. And don’t even focus on products, don’t think about features, don’t think about anything like that; just describe the ideal state of the world where this problem does not exist. And once you have some version of that, go back to our customers and see if they agree. See if they agree with the problem, see if they agree with the long-term vision.
And then if everyone agrees, and they say, “Yeah, like, this is really exciting and we think that you’ve captured the problems effectively,” then we start pulling in people from design partners and we can actually start creating requirements. I have gone basically hands-off on that process now and our engineers, each engineer, is doing that on their own. For every new feature, they’re generating a document. I basically only come in at the review stage to ensure, like, yeah, are we actually, sort of, following the process decently enough? And that’s it. So, it’s really compounded our ability to get, I think, meaningful things out the door pretty quickly.
Brian: That’s awesome. I love that you shared that. And this kind of connects me to my last question here, but where can people learn more about your work? I know you have a new—I just saw you have a newsletter, the Data Products newsletter which I am not on, so I will be subscribing to that. Where can people follow you and your publishing? Where’s home base for your brain? [laugh].
Chad: Yeah. So, I release the vast majority of my content on LinkedIn, which is just linkedin.com/in/chad-sanderson, with a dash between Chad and Sanderson. And then, like you said, I also have a newsletter on Substack called Data Products, just dataproducts.substack.com, and every couple of weeks or so I release a new article. I really think a lot about, sort of, data philosophy and team organization and data products and that type of content.
Brian: Yeah, well, if you guys—again, LinkedIn: I’ve been following you for a while, and I love the content. And some of it’s not—I don’t follow all of it on the technical side of it, but you can tell you have a strong opinion. It seems like it’s always rooted in something customer-centered there: the approaches, what’s wrong, and how we can make it better. So, definitely go check that out.
Chad, this has been great. Do you have any closing thoughts for my audience of data product leaders and people who are trying to bring more product and design-driven approaches to building out these platforms and experimentation tools, things of this nature?
Chad: Yeah. I mean, we talked about it quite a bit, but I really cannot recommend enough thinking in terms of end-to-end workflows. The data consumers, I think, are one of the most underserved populations inside any particular company, certainly served much less than software engineers, where a lot of time and attention and focus has been paid to them and their workflows. I look at something like GitHub, which is kind of a staple of any product organization, as a workflow tool. Jira is obviously a workflow tool that serves, you know, product managers and project managers. So yeah, really just pay attention to that workflow, and I think there’s a tremendous amount of learnings to be uncovered.
Brian: Awesome, I totally agree with that. There’s always something that comes before your thing that they’re using and they’re always going somewhere after they use the thing. And the more you understand the A to Z part, right—there’s really not even A and Z; there’s something before A and after Z—but the more you understand that it’s, like, always on a continuum, I think that’s really sound advice here. So, Chad Sanderson, Head of Data Platform at Convoy. Thank you for coming on Experiencing Data.
Chad: Thank you. Great to be here.