Berit Hoffmann, Chief Product Officer at Sisu, tackles design from a customer-centric perspective with a focus on finding problems at their source and enabling decision making. However, she had to learn some lessons the hard way along the road, and in this episode, we dig into those experiences and what she’s now doing differently in her current role as a CPO.
In particular, Berit reflects on her “ivory tower design” experience at a past startup called Bebop, where she quickly learned the importance of engaging with customer needs and building intuitive, simple solutions to complex problems. Berit also discusses the Double Diamond Process, how it shapes her own decision-making, and the various ways she carries out her work at Sisu.
In this episode, we also cover:
- How Berit’s “ivory tower design experience” at Bebop taught her the importance of dedicating time to focus on the customer. (01:31)
- What Berit looked for as she researched Sisu prior to joining - and how she and Peter Bailis, Founder and CEO, share the same philosophy on what a product’s user experience should look like. (03:57)
- Berit discusses the Double Diamond Process and the life cycle of designing a project - and shares her take on designing for decision-making. (10:17)
- Sisu’s shift from answering only the “why” to also answering the “what” - and how they approach user testing and the product’s metrics layer. (19:10)
- Berit explores the tension that can arise when designing a decision support tool. (31:03)
Quotes from Today’s Episode
- “I kind of learned the hard way, the importance of spending that time with customers upfront and really digging into understanding what problems are most challenging for them. Those are the problems to solve, not the ones that you as a product manager or as a designer think are most important. It is a lesson I carry forward with me in terms of how I approach anything I'm going to work on now. The sooner I can get it in front of users, the sooner I can get feedback and really validate or invalidate my assumptions, the better because they're probably going to tell me why I'm wrong.”- Berit Hoffmann (03:15)
- “As a designer and product thinker, the problem finding is almost more important than the solutioning because the solution is easy when you really understand the need. It's not hard to come up with good solutions when the need is so clear, which you can only get through conversation, inquiry, shadowing, and similar research and design methods.” - Brian T. O’Neill (@rhythmspice) (10:54)
- “Decision-making is a human process. There's no world in which you're going to spit out an answer and say, ‘just go do it.’ Software is always going to be missing the rich context and expertise that humans have about their business and the context in which they're making the decision. So, what that says to me is inherently, decision-making is also going to be an iterative process. [...] What I think technology can do is it can automate and accelerate a lot of the manual repetitive steps in the analysis that are taking up a bunch of time today. Especially as data is getting exponentially more complex and multi-dimensional.”- Berit Hoffmann (17:44)
- “When we talk to people about solving problems, 9 out of 10 people say they would add something to whatever it is that you're making to make it better. So often, when designers think about modernism, it is very much about ‘what can I take away that will help make it better?’ And I think this gets lost. The tendency with data, when you think about how much we're collecting and the scale of it, is to assume that adding is always going to make it better. And it doesn't make it better all the time. It can slow things down and cause noise. It can make people ask even more questions. When in reality, the goal is to make a decision.”- Brian T. O’Neill (@rhythmspice) (30:11)
- “I’m trying to resist the urge to get industry-specific or metric-specific in any of the baseline functionality in the product. And instead, say that we can experiment in a lightweight way outside of the product - help content, guidance on best practices, etc. That is going to be a constant tension because the types of decisions that you enact and the types of questions you're digging into are really different depending on whether you're a massive hotel chain compared to a quick-service restaurant compared to a B2B SaaS company. The personas and the questions are so different. So that's a tension that I think is really interesting when you think about the decision-making workflow and who those stakeholders are.”- Berit Hoffmann (32:05)
- Sisu: https://sisudata.com
- Berit Hoffmann on LinkedIn: https://www.linkedin.com/in/hoffmann-berit/
- Sisu on LinkedIn: https://www.linkedin.com/company/sisu-data/
Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill. Today, I have Berit Hoffmann on the line, the Chief Product Officer at Sisu. How are you, Berit?
Berit: Doing great and excited to be here. Thanks, Brian.
Brian: Excellent, excellent. So, you guys have an intelligence product, a business intelligence analytics software application, and I thought your profile looked interesting. And you were referred to me by a past guest we had on the show, which is often how I find my current guests. So, welcome. It’s really nice to have you here.
And we’re going to talk a little bit about, kind of, the design/product relationships, specifically in the context of analytics because a lot of places that build analytics solutions in house, like enterprise teams, don’t typically have either product management thinking or design in those roles. There’s dashboard and UI developers and analysts and data scientists. And so on the software side of the world, it’s very normal for us to have product management and design at the table. So, I wanted to talk a little bit about that today.
But the first thing I want to ask you about when we did our screening call, you said somethi—you said to ask you about the ‘Bebop ivory tower.’
Brian: What is the Beb—it’s a great name, by the way. I like—I’m going to write a jazz composition called that. But—[laugh].
Berit: Yeah, no, absolutely. So, what I was referencing there, so Bebop was a startup that I was at… almost ten years ago, now. Jeez, time flies. But this was a startup that eventually got acquired by Google in 2015, but spent several years there before the acquisition. And when we were chatting and talking a little bit about the importance of design thinking and really getting close to your user whenever you’re building a product, I immediately went to Bebop because that’s where I learned, I would say in some ways the hard way [laugh] about the importance of really grounding everything you do in the user and spending that time up front.
And you know, when I mentioned the kind of ivory tower design experience at Bebop, I would say that was kind of where we started, mistakenly, was a lot of what we were building, we were basing off of our own personal experiences. And that’s what I call, kind of, the, you know, sitting isolated in an ivory tower, just, kind of, thinking, “Well, wouldn’t it be cool if…” and, you know, “Let’s go build XYZ because we personally think it would be nice, and that’s how we personally would have wanted it to work.” But so often that leads you really astray. And that led to, at Bebop, us doing several [unintelligible 00:03:10] rebuilds, almost, of the product we were building. And so that’s why I say I kind of learned the hard way the importance of spending that time with customers up front, and really digging in to understand what problems are most challenging and difficult for them because those are the problems to solve, not the ones that you as a product manager or as a designer think are going to be the most important.
And it just, you know, is a lesson I carry forward with me in terms of how I approach anything I’m going to work on now. You know, the sooner I can get in front of users, the sooner I can get feedback and really validate or invalidate my assumptions, the better because they’re probably going to tell me why I’m wrong.
Brian: [laugh]. Yeah, as I recall, you’re fairly new in your current role as Chief Product Officer here at Sisu. How has that translation to your current role happened, especially in the, kind of, data analytics space where I think design is still fairly new, and the idea of product management in the enterprise AI space is also very new? How does that apply here? Were you glomming onto something from the traditional software world that didn’t neatly fit, or did it slide in really easily? Like, tell me how you guys do it—
Brian: —bake the cake.
Berit: Absolutely. So, this was actually a big part of—you know, a question I got, so it was a big part of my decision whether to join Sisu. I spent a lot of time during the interview process talking with Peter, our CEO and founder, and really trying to suss out whether he saw design and product experience as critically strategic to the company’s success. Because at the end of the day, that style and brand of product thinking and product leadership is what I bring—a consistent theme for me in my career has been gravitating towards really complex underlying problems and wanting to build really simple, intuitive solutions for them. And so there’s—you know, as we just started talking about, kind of an approach and a philosophy that I believe in, in order to make that happen, which is very grounded in the user, it’s very grounded in doing that, whether it’s contextual inquiry with users, all the way through concept testing, all the way through, actually, usability testing.
But it’s an intensive process to—you know, a lot of people like to pay lip service to, we want to build an intuitive experience, and you know, we want to be a design-first or, you know, design-forward company, but it requires quite a bit of investment in terms of people and time. And so that was something—I share all of that to say, it was something that, for any role that I was going to take, I wanted to make sure that from the CEO, and just shared values across the leadership team, that we all fundamentally believe design and the product experience that we were building was going to be critical to our success. And that’s something that I am happy to say, both came through in the interview process with Peter and has stayed true since joining. So, I joined about a year-and-a-half ago. And you know, he—so Peter’s background, he has an academic background, incredible, incredible experience and trajectory as a tenured professor at Stanford, and when he decided to start Sisu, and has actually subsequently left Stanford, the reason was because he saw a big opportunity to have more impact on the world by delivering and developing a product that companies could use in their day-to-day to actually get more actionable insights from their data, rather than just kind of researching how might we—or kind of even being embedded in various research efforts with companies.
And so, I think he really recognized the importance—to realize that impact and make it generally accessible and available to more and more people—of the product experience, and of making it intuitive: something that not just, you know, super sophisticated statisticians could understand, but something for those of us who are working in various business functions and trying to make data-driven decisions, but maybe don’t have the extremely sophisticated capabilities around data science, or at least are bottlenecked on that, right? Because every company, even if you have dedicated analysts and data scientists, is almost inevitably bottlenecked. And so I think, long way of saying, he really understood and saw the importance of building an intuitive experience to help broaden and maximize our impact, and that has really played out. I’d say probably one of the biggest tests of that was shortly after joining. I pushed pretty hard for us to do an entire redesign of the product because I felt very strongly that we didn’t have the right mental model and, kind of, foundation to build on. And that was evidenced—again, this wasn’t just my opinion, this was evidenced by going and talking to customers and doing some of that user research and trying to dig into how they were thinking about things.
And you know, I think that was, like I said, a big test was are we going to put in the six, nine… you know, sometimes it can be twelve-month cycle to fundamentally rebuild this product because that’s how much we value getting the foundation and user mental model right. And we did. And I’m really happy that it’s played out.
Brian: That’s great. I was going to ask you to respond to the—hypothetically—it was Peter, correct?
Brian: If the response was, “That sounds very expensive and slow, what you want to do. Tell me why it’s worth it for us to go do that.”
Berit: Yeah. Well, I think probably the most important thing I would say is that’s the point where I should not and cannot be relying on my opinion, exclusively, to make that argument, right? It has to be grounded in what have we seen, heard, learned from either current customers [unintelligible 00:09:34] process we talked to, that gives us conviction we don’t have it right. And so that was a big part of the first few months: really digging in and trying to understand that, get the evidence, not just to convince Peter but, candidly, to develop my own conviction, right? [laugh]. Because—
Brian: Right. Yep.
Berit: —it is a big investment. You know? I guess my response to that question would be, “You’re absolutely right. It is a big investment. We shouldn’t take it lightly, but if we’ve really done our homework, then we can develop, I think, enough conviction either way to say, it’s the right bet to place, or it’s not.” And in this case, I think it was.
Brian: Yeah. Yeah, one of the things I see people struggling with that go through my training and stuff like this, especially coming from the technical side, the data side, is there’s a tendency to look at the building of data products—whether they’re internal solutions, or software, or what have you—as: look at all the available data, then think about how many screens we need and which visualizations, and then tooling around that stuff, and then whatever that is at the end, that’s the design. And we don’t call it design because those are dashboard developers or whatever, but that’s basically the approach. And, you know, as a designer and product thinker, for me the problem-finding is almost more important than the solutioning, because the solution gets so easy when you really understand the need. It’s not hard to come up with good solutions when the need is so clear.
Which you can only get through conversation, inquiry, shadowing, all these exercises. So, I’m curious, you had mentioned you had a design leader in your past that taught you a bit about problem framing. So, I’m wondering if you could tell me, kind of, what was the before Berit and what was the after Berit of that? And maybe it wasn’t just this one person that helped you out, but what changed in your thinking? Like, help paint that picture before, after around that.
Berit: Yeah, absolutely. I think the biggest difference—and you kind of hit on this—is the percent of time you spend on understanding the problem as opposed to thinking about the solution. I would say the before state, it was diving into the solution and kind of assuming that I already understood the problem, or glossing over that in some sense. And then, you know, very quickly—because I think that can actually sometimes feel very tempting because it’s the fun part. You’re ideating, you’re coming up with all these different solutions, and to your point, the end result then is a million different fill-in-the-blanks, right, and a million different dashboards, a million different ways of doing something.
Berit: Outputs. Exactly. So, the biggest delta is just how much time we spend. And I have now come to think of my job as probably 80%: are we solving the right problem? And there’s a great visual that’s always in my mind for this process, which folks will refer to as, kind of, the Double Diamond Process, which is, you know, if you think about the lifecycle of defining or designing a project or something you’re working on, there’s this first stage up front, which is all about the problem context. And that’s kind of the first diamond in this process.
So, again, it’s actually important to take time to diverge on what problems could we solve, right? There’s so many different problems we could solve; we need to probably understand the landscape of those problems before we can converge on what’s the right problem to solve. And that first diamond is basically where we are figuring out: are we designing the right thing, right? Have we found the right problem to solve? It’s only once we’ve aligned on that, that we actually then can move into, how do we design the thing right? And that’s your solution context, right?
And again, that’s the point at which you can go back and—your second diamond—diverge again on all the different solutions, ideate around those, and converge on the right solution. But like I said, I think the biggest delta for me has been how much time I spend on that upfront problem context. And interestingly, that, kind of, double diamond—I guess the last part of it is that you actually build, you ship, you learn, and you iterate. And you’re right back to what problem do we need to solve based on what we shipped? So, we actually now at Sisu map our internal project review meetings—our, kind of, cross-functional check-ins on how a project’s going—to the context of that double diamond process, and we say, you know, “Okay, this is project review meeting number one—which is all about the problem context—we’re not even going to talk about a single solution in this meeting. We’re literally just going to talk about what problem we are solving.” And that’s, I think, been the biggest change.
Brian: Yeah, yeah. Talk to me about—is there anything unique about designing for decision-making? Which, I’m not sure in Sisu whether or not a user actually takes action on a decision within your tool such that feedback is somehow recorded, or whether or not it’s like I, you know, I leave your app and I go elsewhere in my job and I then take action on this thing. “I’m going to buy more widgets because you guys told me I should and I believe the analysis I got.”
Can you talk to me about this act of designing for decisioning? Where does it break down? Have you learned anything about how to do it better that’s maybe not the same as traditional software development, where it’s different with data products? So, especially when nothing is going to tell you the perfect future, right? You absolutely should do this and we’re a hundred percent certain—you know, unless you’re [unintelligible 00:15:29] machine learning, and even then—I mean, there’s always probability still anyways—but tell me about that. Is there something you could share there that’s different about designing for decision-making?
Berit: Yeah. Well, there’s two things that you said that I want to underline and I think are probably the biggest takeaways for me. First is you said, designing for decision-making. And that is, it sounds simple, but that’s one of the most fundamental things is, we’re in a space that people—it’s a crowded space and people typically call data analytics, right? Or AI analytics, et cetera.
And if you’ve just—again, it’s nuance and it’s just in the way we’re speaking, but it matters. When you hear people saying, “You’re building a data analytics tool,” that’s really different than saying, “You’re building an engine or a platform to make decisions.” Right? Because—and this is where I actually will say to even candidates when I’m interviewing them, “If all we end up doing is building an analytics tool, we will have failed. The world doesn’t need another analytics tool.”
There’s a million and one analytics tools out there. And, again, I think that would be such a missed opportunity. And yes, we could apply, you know, a little bit smarter ML to the analytics engine, or more scalability, and all of those things need to happen, but I think the mistake would be thinking that we just are building for the act of actually doing the analysis. Where the gap is, and we can call this the decision gap, right, the gap is not just about can you do the analysis, but it’s actually how do we go from analysis to decision? Which at the end of the day is a multi-step, multi-user workflow.
And if you think about traditional BI tools, and dashboarding tools and, really, any analytics tools out there, they weren’t fundamentally designed for a workflow. They weren’t fundamentally designed for collaboration around consuming those insights and iterating, right? The second part of what you said that I wanted to make sure we underline is that decision-making is a human process. There’s no world in which you’re going to spit out an answer and say, “Just go do it,” and people are either going to trust it, or—it’s just—software is always going to be missing the rich context and expertise that humans have about their business and the context in which they’re making the decision.
So, what that says to me is inherently, decision-making is also going to be an iterative process. We’re going to ask the question; we’re going to find an answer that might prompt another question. And different types of questions, right? We’re going to ask “what” questions: “What happened to the metric?” That’s going to prompt a why question, “Well, why did it happen?” That’s going to prompt, maybe, another what question, “Well, I found this interesting driver. Let me go now and ask what happened to that driver over a longer window of time?”
And you’ll kind of jump back and forth between that, and eventually maybe ask a, “Now, what?” question. To your point, “How should I think about if I took this action, what would the impact be?” Or even executing that action. So, I think the key here is saying, again, iterative workflow, where the end goal is decision; the end goal is not analysis output.
And acknowledging that it is a human process. And what I think technology can do is it can automate and accelerate a lot of the manual, repetitive steps in the analysis that are taking up a bunch of time today, and especially as data is getting exponentially more complex and multi-dimensional, the technology can help you hone in on where out of that incredibly complex world of all the different things I could be looking at, what are the most important things? How do I prioritize my time? How do I focus? But then it’s that human who’s going to be part of doing the iterating, part of doing the, “Let me think about this in the context of our business. Let me ask this next follow up.” And then ultimately, executing the decision.
Brian: Has there been any—in the process of maybe going through that redesign you talked about, or even just part of your regular diet of shipping or doing testing and this kind of thing, where you’ve learned, “Hey, you know, our assumption about this was totally wrong. You know, when we got XYZ in front of somebody, here’s what we learned, particularly around this decision-making.”
Berit: Probably the biggest thing that sticks [laugh] out to me is when I joined… Sisu—and I absolutely bought into this kind of directive as well—we said we were really focused on answering these why questions. So, I mentioned before this, kind of, what questions about what happened to your metric, and then there’s why. Which we felt there’s a big gap in the market for helping answer why, and helping dig in—and when I say ‘answer why,’ what I mean is being able to look at all the different subgroups in a data set that could be influencing or impacting metric performance, and pulling out the ones that actually had statistically significant impact on how the metric’s performing, or how it changed over time. And it is a really important step that I think has been underserved, and candidly, that’s where Sisu’s initial traction in the market has come from: helping fill that gap around why. Because at most companies, it’s a very manual process of analysts going in and, kind of, hypothesizing; like, “I think it could be this. Let me check.” Rather than saying, “Out of all the hundreds of millions, or even billions, of combinations, tell me which ones actually mattered.”
So, that’s kind of the bread and butter of where Sisu started. Now, what I think we assumed that was wrong: we kind of said, “We’re not going to build dashboards. We’re not going to do the simple what questions. We’re not doing the kind of traditional dashboards, and some of those what questions—they can be done in other tools, and we really want to focus on this why.” And what has been really interesting [laugh] is we actually heard customers and got feedback that they were coming into Sisu to answer that why question because they couldn’t do it in their existing BI.
But it was really painful because they’d have to go back and forth, right? They’re starting in a dashboard to say, even, “What why question should I even be looking at?” They’re jumping into Sisu to answer it, and then on the other end of that, they’re saying, “I found something in Sisu. Now, I want to create a simple visualization to be able to communicate it to the rest of my business.” And so there was this really, again, inefficient jumping between two different worlds and two different tools.
And we ended up, I guess, as a result of that insight, saying, “We’re going to go back on this stance of, like, we’re not answering what questions and we’re not allowing any, like, basic visualizations or dashboards.” Because again, the reason was not to say, “Let’s go and build the capability to create a stacked bar chart just because every BI tool has it.” It was because we’re seeing customers have that need on either side of what they’re already doing in Sisu. And so, that’s kind of what drew us in—you know, and we actually have just recently launched that capability: those kind of exploratory what questions, and creating some of the visualization around those. And then, you know, I think what’s really exciting—and powerful—is to be able to then go from one of those and immediately kick off your why question, which is where Sisu really differentiates, and those handoffs, I think, become really powerful.
So, that’s an example where, like I said, we almost had the opposite stance up front of, “We’re not going to build this,” and we’ve since changed that and spent about two quarters, really investing and building a V1 of that capability because it was clear that it was so essential to the workflow that our users needed to follow.
Brian: So, if I follow this correctly then, the what part of the question is not so much supported, and you’ve really kind of doubled down on the why part? Is that correct? Like, answering the why?
Berit: Yeah. Well, that’s very much where we started, but now, as I was saying, we’ve recently launched into that more what capability because we are seeing that handoff—
Brian: Oh, okay.
Berit: —is so critical in the workflow.
Brian: And the expression of that experience is different—
Berit: Exactly. Yeah it was—
Brian: —than it was originally?
Berit: —it was not even something that we planned on addressing, initially, right? We didn’t think that that was an important part, that it could be served in other capabil—you know, other tools and other capabilities.
Brian: Right. Okay, I think I understand now. So, and I wanted—this actually dovetails in my next question because one of the things, one of my first analytics software clients, a [CEO 00:24:43] called it the metrics toilet, and I think it was a great—“Our product is a metrics toilet.” Anything you want to see, divided by anything this way, compare anything you want, any chart type, anything, and then that way they can solve every problem they will ever have without ever calling us, right? That’s the theory.
How do you control for this as a product manager, when you know that you don’t have the entire scope of all the problems, when you’re building a tool that is used to design solutions to these questions: you don’t know all the questions, all the industries, all the types of users, all of that, all the data, all the fields, et cetera, et cetera, et cetera. So, some of this you have to abstract out, like, this is a class of question. So, like, “Yeah, it’s customer data, but it could be vendor data, so let’s not lock in customer, we need to give them a field which lets them select from the vendor table instead of the whatever table.” But now you’re on this—now this can easily get out of control, right? “Well, maybe they like to join on inventory data as well as customer order data, just in case because I can theoretically think of this thing,” and on and on and on and on. How do you control for reduction of scope to focus narrowly on quality, versus making sure you’re not too constrictive where you’ve, like you know, doubled down on one class of problems so far, but it’s not really a platform now, it’s very customized to a very specific thing like that’s—
Brian: —how do you balance that?
Berit: The short answer is, it’s hard, and we’re figuring it out. It’s not—[laugh] I’m not going to pretend that I’ve got, you know, a magic answer to that. I think what you’re talking about is—we think of it in the product as explicitly a metrics layer, right, where the business is actually doing some of that up front, and it’s probably going to be an analytics engineer, or an analyst of sorts, who’s doing some of that upfront modeling to reflect the business logic and say, here’s how we define this metric and whatnot. And to your point, I mean, there are just rat holes upon rat holes of, you know, edge cases and things that you can design for with that. So, you know, our approach—and I guess, you know, let’s circle back in a year or two and see how this works—but our approach here is, I know I’m probably being a kind of [unintelligible 00:27:08] redundant, but it’s coming back to that, you know, design thinking process, and getting things in front of users, testing. This is a good example where having a really clear hypothesis going in that actually takes, maybe, a more conservative stance—I found that can actually be really good for this type of problem, where you could think of, again, every edge case you could possibly design for—if you start with something where you’re intentionally saying, “We’re not solving XYZ edge cases, we’re not. And let’s see.”
And look, users are going to tell you they need to do it, right? They’re going to tell you, “I absolutely need to be able to do that.” But when you actually put it in front of them—and actually maybe ship a V0 or a beta that doesn’t have that capability—one of two things is going to happen. Either they’re going to continue to pound the table and say, “This is super painful. I absolutely need it.” And then you know they really need it. Or, it’s never going to come up again and you’re going to realize that maybe they thought they needed it, and they didn’t. And so that’s where I think, in this ongoing battle that is true, for sure, in data and analytics, and especially at this kind of data modeling metrics layer, but it’s true in a lot of enterprise software contexts, where there’s this tension of, I want a simple, intuitive experience, but I also want to be able to configure and customize till the cows come home—I think the best way to test that is starting simpler, shipping that simpler thing, and then seeing where people truly push back, versus where maybe they thought they needed something that in reality maybe isn’t quite as critical as they thought.
Brian: Sure, sure. I sometimes call those anti-personas or anti-goals, and they’re used as anchors to help us, especially when you have what I call outer-ring stakeholders who are not maybe in the day-to-day, but they drop in. You have to give that context so that they don’t immediately derail everything because they don’t know what you’re not doing—that we’ve actually thought about this and we’ve declared we’re not working on that type of problem in order to keep our scope small and focus on a need. It can be really powerful. And it’s, “As compared to what?” Right? Like, we’re doing this. Compared to what? Not that stuff is what it’s compared to.
Brian: So, I love that you mentioned that, and I wanted to actually plug a book I haven’t read, but I heard the author interviewed, which was great. It’s called Subtract: The Untapped Science of Less, and it’s basically about the idea that when you ask people in these studies, “How would you solve this situation?” nine out of ten of them would add something to whatever it is that you’re making to make it better. And so often we forget—and designers especially, when we think about modernism and all this, it was very much about what can I take away that will help make it better? And I think this gets lost, especially with data, when you think about how much we’re collecting and the scale of it—the tendency is to assume that adding is always going to make it better. And it doesn’t make it better all the time. It can slow shit down, it can cause noise, it can make people ask even more questions when in reality, like, the goal here is to make a decision. It’s never going to be a perfect decision; let’s accelerate the decision-making so that there’s a learning cycle that happens better—
Brian: —you know, next step. Maybe I’m preaching to the choir here, but—

Berit: I love the way you said it, and I’ll have to check out that book. I’ve also not read it, but it sounds great. You know, I joke sometimes that probably the most important part of my job is saying no, right? Or pulling things back. So yeah, I’ll definitely check out that book.
Brian: Is there one particular thing that you found is most difficult about designing a decision support application or tool? What’s the hardest thing there? And any tips on how either you’re currently tackling it, or just any findings that you might want to share?
Berit: One thing that popped into my mind right away—and I don’t know if this is the hardest, but it was something that popped into my head as you were asking the question—is trying to decide when to get specialized, and if so, how. And what I mean by specialized is, there’s a world where you could go and create a bunch of very industry-specific or very function-specific types of experiences, right? We could say, “We’re going to tailor and deliver a Sisu for fill-in-the-blank industry,” or, “for fill-in-the-blank function.” And that is something I’m constantly thinking about—and again, in full transparency, I’ll say, “Let’s circle back in a couple years and see if this is the right decision”—but what I’m finding myself doing right now is really trying to resist the urge to get industry-specific or metric-specific in any of the baseline functionality in the product, and instead say, “We can actually experiment in a pretty lightweight way outside of the product: help content, guidance, best practices, things like that.” But I think that is going to be a constant tension because, again, the types of decisions you enact and the types of questions you’re getting into are really different depending on whether you’re a, I don’t know, massive hotel chain compared to a quick-service restaurant, compared to a B2B SaaS software company, right? [laugh].
And the personas and the questions are so different. So, that’s a tension that I think is really interesting when you think about the decision-making workflow and who those stakeholders are: how specialized or specific do we want to get? And that’s in the research—who we’re thinking about as our target customers and target personas—in how we evolve the product, but also in how we serve them outside of the product. So, that’s an interesting tension, and like I said, my intuition here is to stay general to start, and see where we get pulled by the market.
Brian: Got it. Got it. Yeah, I can understand that, and you know—it gets into marketing and positioning, right? Who are we helping? And are we speaking their lingo? Because the hotel chain and the Burger Kings of the world probably don’t have the same lingo in what they’re trying to do. For one it’s all about operations, and for the other it’s all about customer experience, which is not about—
Berit: [laugh]. Exactly. Exactly.
Brian: —efficient burger shipping. Like, you know, the hotel guest came here to spend time at the hotel. That is actually the—
Berit: [laugh]. Exactly.
Brian: —goal is to relax here, not to leave. You know? So, the lingo—the way we talk to them—is different, and it probably gets down to the tooling level, too, with what you’re talking about. So, I’ll be curious to hear how you guys—
Brian: —come out of the end of that, you know? How can people follow you? And anything coming up in your world you want to share? Any events, or speaking, or books, or anything like that that you want to mention?
Berit: Yeah, absolutely. So, follow me on LinkedIn, or feel free to just shoot me a message there—I’m always happy to connect, especially about anything we talked about today. This is the type of stuff I love talking about: this intersection of data-driven decision-making, designing for that, and thinking about all of the fun [laugh] challenges and questions that come with that. So yeah, I would absolutely welcome people reaching out on LinkedIn and connecting there. In terms of events, we’ve got a lot of really exciting events coming up [at Sisu 00:35:02]. Depending on when this airs, we may be before or after the [unintelligible 00:35:07] Future Data Conference on October 13th, which we’re really excited about. And we’ll have a number of different events and announcements coming up from Sisu, so I’d also encourage people to follow Sisu on LinkedIn and check out our website, sisudata.com. And I really appreciate you having me here, Brian. This was a really fun conversation. So, thank you so much.
Brian: All right. Excellent.