Today I’m joined by Marnix van de Stolpe, Product Owner at Coolblue in the area of data science. Throughout our conversation, Marnix shares the story of how he joined a data science team whose solution was too focused on delivering a data science metric and not on track to solve a clear customer problem. We discuss how Marnix came to the difficult decision to throw out 18 months of data science work, what it was like to switch to a human-centered product approach, and the challenges that came with it. Marnix shares the impact this decision had on his team and the stakeholders involved, as well as the impact on his personal career and the advice he would give to others who find themselves in the same position. Marnix is also a Founding Member of the Data Product Leadership Community and will be going much more into the details and his experience live on Zoom on November 16 @ 2pm ET for members.
- I introduce Marnix, Product Owner at Coolblue and one of the original members of the Data Product Leadership Community (00:35)
- Marnix describes what Coolblue does and his role there (01:20)
- Why and how Marnix decided to throw away 18 months of machine learning work (02:51)
- How Marnix determined that the KPI (metric) being created wasn’t enough to deliver a valuable product (07:56)
- Marnix describes the conversation with his data science team on mapping the solution back to the desired outcome (11:57)
- What the culture is like at Coolblue now when developing data products (17:17)
- Marnix’s advice for data product managers who are coming into an environment where existing work is not tied to a desired outcome (18:43)
- Marnix and I discuss why data literacy is not the solution to making more impactful data products (21:00)
- The impact that Marnix’s human-centered approach to data product development has had on the stakeholders at Coolblue (24:54)
- Marnix shares the ultimate outcome of the product his team was developing to measure product returns (31:05)
- How you can get in touch with Marnix (33:45)
Quotes from Today’s Episode
- “The problem brought to the data science team was the data science-specific component that was deemed necessary [to build] instead of [the] problem as a whole, [with] data scientists [...] also in the room trying to figure out, okay, ‘How could we solve this?’” — Marnix van de Stolpe (14:39)
- “There’s no such thing as ‘the organization’. The organization is you. [...] You can wait until the organization changes, but the only way to do it is to start making those moves yourself and hope for the best.” – Marnix van de Stolpe (20:10)
- “This is a chain of people with the best intentions, but nobody saw the holistic, like, okay, if this is the goal, if you actually want to find these specific products, as a data science team, we can help you, but [our questioning needed to go deeper to ensure we are providing the right solution for the right problem].” – Marnix van de Stolpe (09:25)
- “The paradox is that you care for your mission and people, [but not the] bureaucratic consequences to your personal career. You have to really believe that this is a better way of [creating data products] and [...] accept that for your team, it’s probably not going to be the easiest way to a salary increase or more recognition from [your org].” – Marnix van de Stolpe (19:35)
- “No company should be claiming that adoption is the problem. Either you’re not solving the right problem, or you’re not having the right solution.” – Marnix van de Stolpe (23:13)
- “Maybe your solution is perfect for the problem, but just nobody cares about the problem. So, whatever you do with your solution, if the world doesn’t change, nothing is going to happen to your adoption either.” – Marnix van de Stolpe (24:13)
- “Especially with data science, in the way that I think of it—because data science isn’t really a protected term in any way—so everything could be data science to me, it’s very often about understanding the problem so well, so deeply, that you can come up with totally new ways of attacking it.” – Marnix van de Stolpe (27:40)
Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill, and today I have Marnix van de Stolpe from Coolblue on the line. How’s it going, Marnix?
Marnix: Yeah, it’s going great.
Brian: Yeah, yeah. And I also want to acknowledge, you’re one of our original members of the Data Product Leadership Community, so it’s fun to have this chat with you here. And this is also a little bit of a preview chat of an upcoming webinar that we’re going to do. I’m super excited to hear about this, the title is really compelling. Do you want to tell them what it is or do you want me to tell them? [laugh]
Marnix: Yeah, so we decided on the title together, I guess. You came up with most of it. But it’s about how we threw away about 18 months worth of machine learning work and started again and are now trying to adopt a product mindset at Coolblue, especially also for the data part.
Brian: So, you’re a product owner is your formal title there. Can you just give people what is Coolblue—if you’re not from the Netherlands, maybe you don’t know—what is Coolblue? And what exactly are you a product owner of?
Marnix: Coolblue is an e-commerce company in a way, but very different from e-commerce companies like Amazon. So, what we actually do is we make sure that we really understand what the customer wants and we optimize fully for them, end to end. And that means that we actually have a very small assortment, so we’re more like the brick-and-mortar corner store with specialists, but then at scale, and only in the Netherlands—or well, Germany and Belgium as well. We’re known for, for example, washing machines, because nobody goes shopping for a washing machine. When you really want a washing machine, a small disaster has happened: the thing broke, there’s probably laundry in it, the kids want to go to football practice or whatever. So, you want to know right away when we’ll come and fix it, whenever you’re at home. Can be tomorrow, can be tomorrow morning, whatever. So, that’s what we do, end to end.
Brian: And in terms of your product ownership, is there a product line you own or is it the data [laugh] as a product? Like, tell me about that.
Marnix: Yeah also, what we’re going to be talking about, I was the product owner for the [unintelligible 00:02:35] so-called data science core team, so the centralized team that did data science, right? Now, I’m more on the route planning side and on deciding how we actually should use our delivery capacity for delivering those washing machines day-to-day and how we can optimally use that.
Brian: So, maybe give us—I mean, you throw away 18 months of machine learning work. That’s an expensive thing to eat. And it’s not just expensive, like time and, you know, the people’s labor and all this and there’s hard costs associated with that, but that’s also, like, politically and personally, that’s got to be a tough thing for the team to swallow and to acknowledge, like, we need to stop here and we need to reroute and go this way, no pun intended, since you’re working on routing. But tell me about that past and making that decision. Was it a difficult decision? Like, tell me about that.
Marnix: I must say, I wasn’t there for the full 18 months. So, I joined, I think, maybe half a year—no, I think a year, actually—no, half a year before we threw it all away, something like that.
Marnix: I came from a different team, a different product owner role, and I got into this team. And I tried to understand what we were doing, and there was a lot of unclarity about why we were doing things and what people needed from us. Yeah, that in turn led me to believe that probably what we were doing wasn’t what we should be doing. And then the outcome was that we had to stop this team. And there were indeed a couple of people really unhappy with that, and I made some enemies there, but that’s okay. I still think it was a wise choice.
Brian: What was this—like, was this an “I just woke up one day and decided we’ve got to stop,” or was this a gradual decision that was discussed with your management, something you kind of grew into eventually? Like, there was some signal, right? What was that signal?
Marnix: Yeah, the signal was that I didn’t understand what the team was doing at all. Which is okay, usually, for a product owner with this very specialized data [people 00:04:35]. I have been fortunate. I also used to be a data scientist, but I didn’t understand why we were doing certain things or what we were trying to accomplish. And when I asked what the goal was, basically I had this one simple question: “Hey guys, what’s the goal? What are we trying to achieve?” And it didn’t matter who I asked, nobody knew. That was the red flag, I guess.
Brian: [laugh]. Build a model. That’s the goal [laugh].
Marnix: And it didn’t matter who I asked. I could ask my manager, I could ask the people we were building it for, I could ask the people delivering the data we were using—nobody could really articulate it in such a way that what we were building made sense. Everybody had their own little goal that was clear, right, “this is what we’re trying to achieve.” And these were tiny little things, but they were just nothing alike.
Brian: What was the makeup of the team in terms of quantity, job titles? Like, give people kind of a flavor of—and I don’t know, did they report into you or were you kind of like, in charge, but no resources, and politically, you had to kind of build this team to work for it. Or did they actually report into you as a product owner? Like, tell me about that.
Marnix: Nobody reported into me because we want to keep that separate. Because I also can’t teach them anything or evaluate them on their specific data science skills. So, this was a team with three data scientists, I think, at the time, who had been at the company for a really long time as well—I think four or five years, something like that. [Especially 00:05:55] senior data scientists, three of them, and they were reporting into a manager-of-data-science kind of role there.
Brian: So, it was just the four of you, then, together? You plus these three data scientists?
Brian: Okay, got it. So, they didn’t—it sounds like they were having a hard time linking their work back to some type of business or customer or end-user value. They couldn’t express that. Is that effectively what was going on?
Marnix: Well, so they knew. Everybody knew. The issue was that everybody had a totally different understanding of what we were trying to achieve. So, let me give you some context. One of those things at Coolblue, we decided that nobody wants a 30-day return period, or a year-long return period or whatever, for the product. They don’t want to return it at all. Nobody wakes up in the morning deciding, boom, I’m going to return this 20-kilo microwave now. Ha, it’s going to be awesome. Nobody. Ever.
So, we’re very strict on what kind of products we want to sell to people because we want to avoid returns as much as possible. So, this was saying that [unintelligible 00:06:57], so we were avoiding returns. Now, the specific type of returns we were trying to avoid were when we make an error somehow. We put a different-color picture of the product on the website and people started returning it way more because they expected a different product. And we want to catch that as soon as possible. Because returns come in over that 30-day period, and we want to make sure that as soon as we flag it, there’s still some time. Because while we don’t fix it, we’re sending stuff we know will come back, and we will send it to other unhappy customers because they can’t get their own product.
So, that was the goal. And everybody called this the expected return rate. That was the goal. Or rather, that was the solution to all of our problems. If we knew what return rates to expect for these products, then it would be very easy to see which ones are higher than expected, and therefore something could be wrong with them.
Brian: Tell me why that was the wrong KPI to be focused on, in your opinion. And where the disconnect with the data science team was.
Marnix: So, for the data science team, this was great. They were there to create an expected return rate. And that’s really difficult, and actually, they started all sorts of survival modeling stuff and really complicated stuff. But then nobody really asked, how are you going to use it when we have this? And it turned out that they were going to use it in a dashboard and then people had to actually look at their specific products, say washing machines, and then they would have to see on the highest level of all the washing machines whether the return rate this week is higher than expected and they had to drill down to find out which of the products actually was the one with the higher than expected return rate. And then they would have to find that one and decide what might have gone wrong with it to actually start fixing it. And then it doesn’t suddenly sound like such a great idea anymore.
Brian: Right. I just wake up and I just want to look at the return rates on my products. And oh, it’s point six instead of point seven. Should I care? Is that a—should I care about that?
Marnix: Should I care about that?
Brian: So, now I’m on the website, looking at my product images trying to see well the pictures look correct and the spec looks correct and the returns don’t mention anything about the color being wrong, so I guess I’ll go to the next one, right [laugh].
Marnix: That’s about what we were trying to ask of them, right? As I also mentioned, this is a chain of people with the best intentions, but nobody saw the holistic picture. Like, okay, if this is the goal, if you actually want to find these specific products, as a data science team, we can help you. But obviously, they came to us with a, “We need an expected return rate.” Well, “What for?” “Well, to find outliers.” “Okay, I guess we can try and give you an expected return rate.” That’s where our questioning stopped.
Brian: Oh, okay. I see. So, they actually came in with that language, the stakeholders that asked came in asking for an expected return rate?
Marnix: I’m pretty sure. I’m not—I wasn’t there.
Brian: Don’t—got it. Understood.
Marnix: That’s what I got from it when I joined the team, yeah.
Brian: So, was it—I mean, at some point, somebody asked for that, I don’t—the returns team or the product line manager. I mean, I don’t know who started it, but let me oversimplify the question. So, you know that they’re working on expected return rate. “Well, who asked you to do that?” “Well, John did, over in whatever.” “Okay, I’m going to go talk to John about it.”
Tell me more about where the disconnect happened because it sounds like John—or whoever it was—had some—they did have a mission here [laugh]. They did have a benefit in mind of the work. So, where did you feel like you didn’t get enough feedback that it made sense to continue working on this at all?
Marnix: Ah, okay. So, we did actually continue working on it. We just threw away everything that was built up until that [unintelligible 00:10:46] point—
Brian: Oh, okay.
Marnix: Because we understood that that specific solution would never work. But no, what we did—so I came in, and there was indeed somebody who was tasked with reducing the returns, and there were some analysts helping. And I had my team there, and I had just read Continuous Discovery Habits by Teresa Torres, and she uses this opportunity solution tree, and I was eager to start using it. It didn’t exactly fit the purpose that she described, but I couldn’t really care less at that point.
And what it had at the top was a desired outcome and then there’s a couple of opportunities on how you might be able to move the needle on that specific outcome, and then [unintelligible 00:11:27] solutions that could actually address one of those opportunities. So, I knew that there were a couple of oppo—sorry, solutions because we were building those. And I knew that there was some sort of outcome which had to do something with reducing returns. And then my question was, “Okay, guys. Help me connect the dots,” with the people actually asking us to build this expected return and the other stuff. And there, you slowly started to see that there was going to be a disconnect and nothing would ever connect back up to the outcome we were trying to achieve.
Brian: Mm-hm. So, you talked about one of Teresa Torres’ models that you were using, and your staff—I’m curious about the discussion with your staff, I imagine—or not your staff, but the data science team—about mapping the current implementation or solution back to the desired outcome. How was that conversation? I assume that they were involved in that conversation, right? You’re like, “Hey, I’m a product owner. I’m trying to map this back to this benefit which is to reduce returns. Like, the overall goal here is reduce returns.”
It’s not to build a solution, right? It’s the benefit. So, tell me about that conversation? Like, did it click for them? Did it not click? You [laugh] mentioned making some enemies, and like, so I’m [just 00:12:35] curious where the gap was between the technical work and people and your goal, which is to drive this benefit home?
Marnix: They actually really liked it because they were confused as well about what direction we should go. And also the people that we were building this for liked the exercise, even though they weren’t entirely sure what was being asked of them exactly. So, they could also actually take all this information back out to their stakeholders or boss or anyone to help figure out what we are actually trying to solve and what part we should be focusing on. So, I already gave it away that the part we should be focusing on is really these products where we make a mistake and suddenly they become unsatisfactory to our customers. And that’s also why we started a new initiative, which we called ‘Suddenly Unsatisfactory Products’ instead of ‘Expected Return Rate,’ which is a tiny difference in language, if you like, but it’s a totally different mindset about the type of solution we’re going after.
Brian: Can you say that one more time?
Brian: What was the new one?
Marnix: The new one was that we wanted to find products that had suddenly become unsatisfactory to our customer—
Brian: Oh, okay.
Marnix: —because that is the goal.
Marnix: And the expected return rate is only a tool so that some human can try to find those products in a data set where they have to drill down. And that’s actually one of the big connections that we managed to get out over a period of probably still months. And that then helped us to say, “Okay, let’s drop all this work and start a new endeavor to try and get this valuable idea, still, off the ground.” As for the enemies, there were more once I had decided that this way of working didn’t work that well, because this team was a centralized team with only data scientists, and they didn’t know anything of the context, and they were unable to actually talk to the people around them enough to figure out that context, to [communicate 00:14:32] that context, because they were so siloed off. So, they were really brought in only for the data science-specific task.
The problem brought to the data science team was the data science-specific component that was deemed necessary, instead of deciding that we want to solve this problem as a whole, right, and having the people, the data scientists in this case, who were able to solve the problem also in the room trying to figure out, okay, how could we solve this? But then I had to go through some hoops to get that team into a domain instead of a centralized thing. And the decentralized thing is now going well.
Brian: Oh, it is. Okay. That was kind of leading into my next question was did the working model change? Did you bring whoever the stakeholder is on the return side who cares about this issue of, like, let’s not create product problems for our customers and let’s get ahead of those, somebody cares about that. I don’t know what their title is, or whatever; there’s a department, but you can see a whole service here.
It’s like, we want to prevent it, but we’re not always going to be able to prevent it, so when it does go wrong, then this is the process. There could be, you know—there’s service, there could be user experience design, there could be interfaces involved here, there could be workflows, there—I don’t know. There’s, like, a whole bunch of stuff that’s not just building the model for the return rate, or whatever the new model thing was. So, talk to me about, like, who else got involved? Did you get data scientists in a room with these non-data science people to kind of talk about what it’s like to be the returns manager, or—you know?
Marnix: We did, actually, yes. What we already figured out is that even finding products that are getting returned more is not enough, really, because somebody still has to find the root cause of what caused it. So, one of the cases that we had at the time was there were a lot of air conditioners sold, but they were [a bit 00:16:18] yellow because they stood in the wrong spot. So, there were suddenly a lot of them coming back with the comment, “Yeah, they’re yellow. I ordered a white one.” So, that’s totally different from the different picture, really, but you still need to figure that out, and then before sending them, you can check in the box: okay, not yellow. Okay, fine to go.
So, you also wanted to reduce the time to actually find, or to maybe even propose, the most probable reason, or to help people do that as well. So, in the end, it’s all about a return on investment [gain 00:16:48], and the investment is the time of the people who are going to investigate whether we’re going to fix this specific issue, and the return is that we’re not selling the wrong product to customers. And if we make the investment smaller, then we can look at more products, and that’s good. And we do that by aiding this investigation as far ahead as we can. So ideally, if you want to make this a good product, you definitely need a lot more than only data scientists.
Brian: What’s the culture like now when you’re building data products? What’s the culture like in terms of how you do that? Who is involved? How did the data scientists that you work with react to that culture and how you do it? Is it just oxygen? It’s just the way we do it now. We bring in cross-functional teams? Like, tell me about that, what it looks like now?
Marnix: Yeah, well, as I said, so we’re moving. We’re the only team at this point—data team, kind of—that is a cross-functional team, and we, in this case, also really need a couple of back-end developers to connect all of the decision-making into systems, so we’re a bit the odd one out. So, it’s really difficult for people to understand what we’re trying to do and how we’re trying to do it. We actually just had a conversation about it within the current team, where we did add a couple of analysts, but no real, like, systems that can develop [unintelligible 00:18:08], about what do we really do, and how is that maybe different from what other teams do, and how would we like to maybe be evaluated, or how does this work. Because we see results that are clearly a lot better, and we’re building stuff that gets used, that’s useful—so that’s good—and we’re solving the right problems, but it’s so different from what the organization is doing around us, sometimes, that it’s not always easy to keep rowing in this direction.
Brian: For people that are listening, that maybe you’re in your shoes, and maybe they’re trying to make a pivot and they’re not—maybe they can’t, they’re just, like, I can’t throw away… so much sunk cost. Like, politically, maybe they’re not ready to pull that trigger yet and take the dive off the cliff and go to something new. What are some of the things that you need to be aware of? Like, in hindsight, are there things that you would do differently? Are there—and even when you emerge, right, you’re saying it’s not just, like, I emerged, and then everything was rosy on the other side. You’re still wading through this process. Like, anything in hindsight, you can share with people about that?
Marnix: A couple of things. One of them is actually in one of the books I’ve read about a submarine. So, it’s David Marquet’s book, Turn the Ship Around! And he has this line that really resonates with me. It’s caring, not caring. The caring, not caring paradox.
And the paradox is that you care for your mission and people and you don’t care for the bureaucratic consequences to your personal career. And I think that’s kind of what it takes at this point. You have to really believe that this is a better way of working and you have to accept that also for your team, it’s probably not going to be the easiest way to a salary increase or more recognition from the organization around you. But, as Teresa mentioned—Teresa Torres mentioned—there’s no such thing as the organization. The organization is you.
It’s like when you’re in a traffic jam. You’re not in a traffic jam. You are the traffic jam. There’s no other way. So, you can wait until the organization changes, but the only way to do it is to start making those moves yourself and hope for the best. But that is what it takes.
And it takes, in a way, a very special kind of person who is willing to go on this adventure with you. And there’s some backing from management and some backing here or there, and—you know, this company is amazing and has a very go-for-it mentality, a very flexible mentality as well. So, if you have a good idea, go for it. Try it, go try it out. But then you’re doing it with the people who also want to go for it and go try out this new way of working.
Brian: I want to tangent just for a second and address the data literacy advocates out there about what you just said here, which is, if you’re in this traffic jam, like, you can wait for everything around you to change or you can make a change yourself. And the challenge I have with the data literacy camp, it’s not that increasing data literacy is necessarily a bad choice or desire or end state, but this idea that the way to improve adoption of our data science work is to fix all of the people around us by educating them and improving them so that they can finally understand what it is that we do.
Marnix: I think it’s insane.
Brian: Yeah [laugh]. Well, the other way is, well, maybe if we try to make our work more digestible, more applicable, so it doesn’t require as much training or understanding of statistics or whatever it may be—or a complicated visualization—maybe our work would have more impact quickly. That’s the other mindset, which is, why don’t we get better at the business and the user experience piece instead of waiting for the organization to catch up with literacy? It’s training, you know, 50,000 people versus training ten people to change how they work, and I think that’s a really profound thing. And maybe there’s a cost to it. Maybe, politically, there can be a cost to that.
And I think that just shows real leadership that you’re willing to stick your nose out there and say, “We’re going this way.” Like, “I’m going over here. We’re going to do things this way. It’s better for you, it’s better for them. It’s actually better for all of us. It’s not just about me or anything like that.” It takes real leadership to do that, so I applaud what you’re doing. I mean, even if it doesn’t work, it’s like just having the bravery to go and say, “There’s a better way here for everybody,” it takes a lot. So, congratulations to you for doing that. And sharing and being public about it. I think that takes a lot.
Marnix: Yeah, thank you.
Marnix: [laugh] [It’s 00:22:57] always good to hear. But I completely agree about the literacy. It’s an uphill battle; you will never win it. And also, it doesn’t make too much sense to me, realistically. And even the whole adoption thing, I don’t know—no company should be claiming that adoption is the problem.
Or at least, they could be claiming that adoption of their product is a problem, but probably either you’re not solving the right problem, or you don’t have the right solution. It’s just that there’s no value, apparently, right? So—and yes, that might mean that there actually is some intrinsic value in the technical part, and if you add the user understanding, people will start using it, right? So, there are ways to overcome this, but if you’re trying to market a solution and you’re searching for adoption, it’s kind of a backwards game if you’re not changing the solution.
Brian: It’s a leading indicator, right? It’s a leading indicator that something’s off, but it’s not the real problem. Like, “Oh, the adoption is low. Increase the adoption.” It’s like, well, maybe just getting more adoption isn’t—you’re totally right. But it’s a strong leading signal that something’s wrong, you know?
Marnix: Definitely, definitely. But it doesn’t necessarily mean that the solution is [wrong 00:24:09]. It could mean that the problem you’re solving isn’t interesting. And maybe your solution is perfect for the problem, but just nobody cares about the problem. So, whatever you do with your solution, if the world doesn’t change, nothing is going to happen to your adoption either.
But it takes the human-centered approach to figure out, hey, this is not the right problem, even though we thought it was. But actually, we can [unintelligible 00:24:29] and so we can do something with it, and then obviously, adoption should go up. But this sort of search for adoption of a data product, I don’t know.
Brian: Yeah, I agree. You have to be thinking a couple of steps ahead and look beyond the surface, the leading indicator, to understand the reasoning for it, and that’s not a technical skill. It’s a different kind of skill that’s required to do that, and I think it’s something that can be developed. But tell me a little bit about these days—and I know you’re in progress here—are you seeing any change in the stakeholders you work with? I assume they probably don’t see all the sausage-making, how you’re changing the factory where the sausage is made, but are they seeing any difference that you can tell? Or maybe you’re working with these stakeholders and users more frequently?
Is there anything you’ve perceived they’re getting out of it, or benefits that they’re seeing, even if it’s just qualitative feedback that you’re getting? And this is so that people understand the leading indicators: if I make this change, I might know it’s working because I’ll see—and then you fill in the blank [laugh].
Marnix: So, the people really impacted are happy that you’re there, I think. And they’re happy with the honesty and transparency that you bring—that sometimes things also don’t go well. And they understand what kind of environment, maybe, you’re working in or what systems you have to deal with. And they will maybe even applaud you. Like, they will say, “Hey, okay, the current state isn’t ideal, but you’re at least trying to understand what I’m trying to do, therefore I’m okay with whatever you decide to go build. And I will also trust you if you tell me that you’re not going to help me for at least another half a year because it just won’t help me yet, or it’s too difficult, or you’re first focused on this other thing that, I can understand, makes good sense that you want to fix first.” I think that’s the main advantage.
Brian: It sounds like—if I replay that back—your stakeholders, the end-users of these data products you’re making, they feel a genuine sense of empathy, that you actually give a shit about what their life is like, what it’s like to be the returns manager or whatever. And the simple act of questioning and being there and trying to understand them builds the trust, so that when you need to make a solution, or if you say, “Sorry, I can’t help you right now. We need to go fix this other—you know, maybe it’s a data quality issue or some other thing,” they now know that you have their best interests in mind. And your goal is not to just build some visualization and throw it over the wall; it’s actually to, you know, reduce these returns that are coming in, that you guys are losing sleep over because the yellow dishwashers keep coming back [laugh], you know?
So, is that basically what you’re saying? Like, they’re feeling the empathy, they’re feeling that you guys actually care?
Brian: Yeah. It’s a great first step. I mean, you’re building the trust there and this is how I think great products are built, especially internal data products, is that understanding. There’s a two-way understanding there and now we’re all working towards the same goal, you know?
Marnix: Yeah. And especially with data science, in the way that I think of it—because data science isn’t really a protected term in any way, so everything could be data science or [data-y 00:27:49] to me—it’s very often about understanding the problem so well, so deeply, that you can come up with totally new ways of attacking it. And you need your team to understand the real core problem that deeply, because they are the ones that can come up with these different angles of attack. I mean, I’ve been a data scientist for a year, so I can come up with stupid angles of attack, they’re not brilliant, but fortunately, I have a team that, once we understand the problem, can also challenge me or my understanding of the [problem 00:28:26]. They can challenge the people around us on that: “Okay, but is this really what we’re trying to achieve?”
And that’s a really core question that also, as everybody [unintelligible 00:28:34], maybe also with stakeholders, where you suddenly start to think about edge cases. Okay, sure. So, you’re saying that this is what you want to achieve, but if we built something like this, would it help you? Why would that help you? Explain, like, tell me. And then they’re like, “Yeah, no, that wouldn’t help me at all because blah, blah, blah, blah, blah, blah.”
And via that training and understanding, and by teasing out what really would and wouldn’t work, yeah, you get to better solutions. And the people that are really impacted can also really help you in the ideation, if you’re lucky. And that’s where, yeah, suddenly a lot is possible.
Brian: Is that—when you were kind of—I was picturing you in a meeting room with, like, a whiteboard, and like, “Would this help you?” And you’re drawing, and they’re drawing. And you’re like, “No, that would not, because that leads down this rat hole and I don’t want to spend my time looking in Tableau.” And then you go this way and—is that what it was like? You’re kind of rough prototyping with them on the fly? Or tell me about that.
Marnix: Yeah, it’s even a what-if, a hypothetical scenario. So, let’s say we build our expected-returns thingy, because that’s actually one of the tricks that I often use. Okay, say we have it. It’s there. We built it. Now what? What do you do with it?
So, say that it actually shows, for whatever product, that it is high. Would that help you? And then they start, “Well, no, not really, because I don’t know which product.” “Okay, so that’s a problem then,” right? So, this is already enough. And even the other way around, you could say, “Well, I guess you’re saying that on average—because you’re still looking at the highest level, at washing machines—if that shows zero, but actually it turns out that there’s three that are really getting returned way more, and there’s three that exactly cancel them out. Would that be okay? Would that be bad?”
That’s enough. Because obviously, they will shout, “Well, that’s bad.” But then immediately, you’ve also made them realize that the whole idea of going in from the top and trying to figure it out from that aggregate level will not work. And that’s enough of an example. And yes, you can use a whiteboard, but sometimes even these kinds of weird examples work.
So, I’m coming from a physics background, and what we often do is these sort of thought experiments, where you try to figure out where something goes if you push it to infinity. Those work wonders if you can translate them back to something that people, you know, can still relate to. That’s the other option.
Brian: And kind of wrapping up here. Have you had a chance yet to push any of these data products into production, to get feedback from these people, maybe before that, and see any results or benefits of this process? Like, hey, they’re actually using the new—not the return rate, but whatever the, I forget what the previous metric was—
—that we’re talking about, but have you seen any result now of this way of working?
Marnix: So, yes and no. So, from this whole return exercise—because it took, well… a year and a half to build the wrong thing and then maybe a month or two to build the right thing—by then, the returns department got shuffled around. So actually, we decided that this is a way of customers giving us feedback: they’re telling us that this is not the right product. So, we moved it to a different department as well, and this thing just got canceled.
So, we really threw everything away and then we stopped with the [team 00:31:56]. However, immediately after, I started the new team, which is now in this delivery space, where we have to figure out how much work we can accept, basically, to make best use of our capacity, and that’s being used. And the way we did it is by constantly making sure, okay, what are you doing now? So, we didn’t go into the old data science-y, math-y way. We actually went for the impact-y way, which is way more data engineering, way more software engineering. Well, actually, we automated somebody’s [gut 00:32:30] feeling: whatever they were doing, we were trying to emulate that as best we could—because this person was doing it on their own—until they were confident enough that we were allowed to be in the driver’s seat.
From that point on, we were in the driver’s seat. And then we could start working on the math and making the decisions more clever, better, et cetera. So, that’s where we’re at now. And it’s getting used constantly. The other approach would have been to lock ourselves up for half a year or so, trying, in some lab-like environment, to figure out some hypothetical way that could be better, and then trying to see if anybody would ever like to adopt it.
And interestingly, also, the guy that was doing it left and [unintelligible 00:33:15] went to a different job. So, I’m not sure if it would have gotten used, just because of the lead time it would have taken. Because this guy wasn’t there, would somebody else have had the guts to put this weird type of computer scientist at the steering wheel? I don’t know. So, because we were already doing it, and we had shown that we knew what we were doing and had built that empathy, we are now allowed to make and [unintelligible 00:33:44].
Brian: Marnix, thanks for coming on here and sharing all this with us. I wanted to give you the last word, and also, tell people how they can get in touch with you. Besides the DPLC, if you’re not in it, is it LinkedIn or Twitter? So yeah, last word to you.
Marnix: Oh, yeah. Thanks for having me. There’s no last words. I just loved being on the show.
So, it’s nice. I’ve been listening to this show for, I don’t know, two years or so. I don’t know, a really long time. I think [since really 00:34:07] the intro.
Brian: Oh, wow. Well, thank you.
Marnix: Because it really helped in this journey. So, thank you for doing that.
Brian: Thank you. I’m glad.
Marnix: And I am really bad at any social, reaching-out stuff, but LinkedIn is your best bet. I will try to actually—I get notifications on my phone, so I will accept. So, let’s do LinkedIn. And it’s @marnixvdstolpe.
Brian: Excellent. We’ll link that up in the [show notes 00:34:34], for sure. Well, great. Marnix, thanks again for being so transparent. I really applaud your transparency and your willingness to really open up the—I always use the sausage metaphor because I like meat, but [laugh] showing us how it’s being made over in your little corner of the world and letting others learn from your experience. So, thank you so much for doing that.
Marnix: Thank you.