085 – Dr. William Báez on the Journey and ROI of Integrating UX Design into Machine Learning and Analytics Solutions

Experiencing Data with Brian O'Neill (Designing for Analytics)

Why design matters in data products is a question that, at first glance, may not be easy to answer for some until they see users try to use ML models and analytics to make decisions. For Bill Báez, a data scientist and VP of Strategy at Ascend Innovations, the realization that design and UX matter in this context grew over the course of a few years. Bill’s origins in the Air Force, and his transition to Ascend Innovations, instilled lessons about the importance of using design thinking with both clients and users.

After observing solutions built in total isolation, with zero empathy for or knowledge of how they were being perceived in the wild, Bill realized the critical need to bring developers “upstairs” to actually observe the people using the solutions that were being built.

Currently, Ascend’s consulting is primarily rooted in healthcare and community services, and in this episode, Bill provides some real-world examples where their machine learning and analytics solutions were informed by approaching the problems from a human-centered design perspective. Bill also dives into where he is on his journey to integrate his UX and data science teams at Ascend so they can create better value for their clients and their clients’ constituents.

Highlights in this episode include:

  • What caused Bill to notice design for the first time and its importance in data products (03:12)
  • Bridging the gap between data science, UX, and the client’s needs at Ascend (08:07)
  • How to deal with the “presenting problem” and working with feedback (16:00)
  • Bill’s advice for getting designers, UX, and clients on the same page based on his experience to date (23:56)
  • How Bill provides unity for his UX and data science teams (32:40)
  • The effects of UX in medicine (41:00)

Quotes from Today’s Episode

  • “My journey into Design Thinking started in earnest when I started at Ascend, but I didn’t really have the terminology to use. For example, Design Thinking and UX were actually terms I was not personally aware of until last summer. But now that I know and have been exposed to it and have learned more about it, I realize I’ve been doing a lot of that type of work in earnest since 2018.” - Bill (03:37)
  • “Ascend Innovations has always been product-focused, although again, services is our main line of business. As we started hiring a more dedicated UX team, people who’ve been doing this for their whole career, it really helped me to understand that what I had experienced prior to coming to Ascend, and part of the time I was here at Ascend, that UX framework and that Design Thinking lens, it really brings a lot more firepower to what data science is trying to achieve at the end of the day.” - Bill (08:29)
  • “Clients were surprised that we were asking such rudimentary questions. They’d say, ‘Well, we’ve already talked about that,’ or, ‘It should be obvious,’ or, ‘Well, why are you asking me such a simple question?’ And we had to explain to them that we wanted to start at the bottom to move to the top. We don’t want to start somewhere midway and get to the top. We want to make sure that we are all in alignment with what we’re trying to do, so we want to establish that baseline of understanding. So, we’re going to start off asking very simple questions and work our way up from there...” - Bill (21:09)
  • “We’re building a thing, but the thing only has value if it creates a change in the world. The world being, in the mind of the stakeholder, in the minds of the users, maybe some third parties that are affected by that stuff, but it’s the change that matters. So what is the better state we want in the future for our client or for our customers and users? That’s the thing we’re trying to create. Not the thing; the change from the thing is what we want, and getting to that is the hard part.” - Brian (@rhythmspice) (26:33)
  • “This is a gift that you’re giving to [stakeholders] to save time, to save money, to avoid building something that will never get used and will not provide value to them. You do need to push back against this, and if they say no, that’s fine. Paint the picture of the risk, though, by not doing design. It’s very easy for us to build a ML model. It’s hard for us to build a model that someone will actually use to make the world better. And in this case, it’s healthcare or support, intervention support for addicts. ‘Do you really want a model, or do you want an improvement in the lives of these addicts?’ That’s ultimately where we’re going with this, and if we don’t do this, the risk of us pushing out an output that doesn’t get used is high. So, design is a gift, not a tax...” - Brian (@rhythmspice) (34:34)
  • “I’d say to anybody out there right now who’s currently working on data science efforts: the sooner you get your people comfortable with the idea of doing Design Thinking, get them implemented into the projects that are currently going on. [...] I think that will be a real game-changer for your data scientists and your organization as a whole...” - Bill (42:19)

Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill. Today I have my pal, Bill Báez on the phone. What’s going on Bill?

Bill: Not much, Brian. How about yourself?

Brian: I’m doing great. And you are a data scientist that I see as somebody—so we worked together in the seminar, and I feel like you had a little bit of a conversion in your head or something happened here, and so I wanted to talk about this sense of the before and after of, kind of, having a feel for why design matters in the context of doing data product work. That’s what I want to talk to you about today. But before we do that, why don’t you give people an idea of what you do at Ascend Innovations, your role there, et cetera, et cetera.

Bill: Yeah, so here at Ascend Innovations, I oversee most of the day-to-day operations, working very closely with our technical teams and my sales and capture teams, going after, you know, those who are working in the community to help solve those complex community problems and really bringing those data-driven services and products to them to help meet their organizational goals and objectives.

Brian: Got it. And is this primarily in healthcare?

Bill: It’s healthcare adjacent. We are owned by the local three major hospital networks here in the Dayton Ohio area, but we’re not sworn to working primarily in healthcare. We’ve certainly bled over into some of the more community needs, especially around addiction, drug abuse, and mental health, which is an ongoing concern here in Dayton, Ohio. So yeah, not exclusively healthcare, but we’re not too far from it.

Brian: And so, as VP of Strategy, I assume you’re kind of a leading face in the early conversations with clients about diagnosing problems, needs, that kind of situation?

Bill: Right. My sales director and I are often looking for the next company to work with, the next organization to work with, and yes, we together partner to figure out, like, what is that needs assessment that we can help them fulfill.

Brian: In your background, data science, some air force, as I recall?

Bill: Yeah. So, I’ve been 23 years now, as of this year, in the Air Force. I did 12 years active duty, and then I’ve been a reservist since 2011. And I switched to the reserves because I was accepted into the Physics PhD program at the Ohio State University. So, I completed that back in 2018 with a concentration in bioinformatics and theoretical biophysics, but then I switched away from academia back into the private sector, working originally as a senior scientist for a defense contractor at Wright-Patterson Air Force Base, and then this opportunity with Ascend Innovations opened up back in the summer of 2019. And I made the leap from defense to healthcare and haven’t looked back since.

Brian: Nice, nice. Well, tell me a little bit about, where did you first hear about design? And maybe this wasn’t your experience; for a lot of people who are not designers, we tend to see design for the first time on the surface. And then as we go into the product world, it starts to go deeper in that. Where did you first hear about it, and particularly where did you first think about design as having anything to do with data science and analytics work?

Bill: Yeah, so my journey into Design Thinking, it started in earnest when I started at Ascend. I originally started out as director of R&D with that product focus. Various mechanisms in play [to 00:03:50] looking at early startups or early technology and figuring out how to commercialize them. But I didn’t really have the terminology to use. For example, Design Thinking and UX were actually terms I was not personally aware of until last summer.

But now that I know, have been exposed to it, and have learned more about it, I realize now I’ve been doing a lot of that type of work in earnest since 2018, actually, through an Air Force experience I had right after I graduated, or defended my dissertation. I did a two-month stint down in Florida with a unit that captures a lot of full-motion video data, and has been doing it for the better part of a decade. So, they had a couple petabytes of data that they had been collecting and analyzing. And so I got a chance to get in the saddle with them and really understand what their process is and look for a new way forward.

Because they’d been using a lot of technologies that you wouldn’t expect to see in a high ops tempo environment like that: Excel, Microsoft Access, things that, once you have multiple users trying to access them at the same time, crash. And that, once the data has been collected and you have people on the back end doing analysis, may not be the right tool for what they’re trying to achieve. So, there was a lot of effort to just [unintelligible 00:05:09] to re-wrangle some of that data. And so, that was a very eye-opening experience for me, especially from a des—now that I know it’s called the Design Thinking perspective.

Because what I found out during that time was there was a company working within that unit, creating the new suite of software for data collection of the future for that particular mission. And come to find out that the developers who were working downstairs in the building had not spent a lot of time with the users upstairs and really understanding, like, what is it that they’re doing on a day-to-day basis, from the moment they’re collecting to the moment when they’re doing that initial analysis on the back end? It was really clear to us that they had really foregone that user experience, or just ignored it, and really hadn’t established that empathy with everyone from those watching the screens to those actually doing some of that initial analysis with their various customers.

Simple things like, how do people enter dates and times such that you can actually do analysis, large-scale analysis, on the backend? None of that stuff was standardized to begin with, but then watching them create the software, it was not standardized there as well, so it was clear, like, some of these developers, these coders, and the people who were creating these new [unintelligible 00:06:24] tools, like, hadn’t spent the time to really understand, like, what are those unarticulated pain points that could be addressed by just fixing it right in the new tool that they’re creating? That, again, was a very eye-opening experience for me, but it wasn’t until recently that I knew that there was a whole career field called UX [laugh] that actually tries to prevent that stuff and really understand, like, what is it that these people do, and how are we creating solutions for them to do it better?

Brian: Maybe the reason is because why are the technical people always in the basement or downstairs, and maybe it’s because they always have to walk up to go see users, and it’s more work? I don’t—[laugh].

Bill: You’d be surprised. We took one of them upstairs one day—my partner and I at the time who were working on this project—we took their lead engineer upstairs and just had him watch us do this as if he’d never seen it before and we’d never spoken before, and just kind of did that kind of scenario.

Brian: Yeah.

Bill: It was eye-opening to us at how much he didn’t know, even though he’d been in the building for the better part of a year working on this.

Brian: Yeah.

Bill: It was mind-blowing.

Brian: Yeah, yeah. What triggered that need to feel like you needed to do that? Was it just, like, common sense; this just feels off? Or was it like, “I’m hearing this pain and complaints from the users?” Or, something changed that made you say, like, “I really need to get the makers closer to the users, and now’s the time to do it.” What changed there?

Bill: It was talking to the users, coming up with that list of concerns, gripes, and complaints that they had, and then just casually speaking with the developers. And it was as if they’d never heard these things before. So, that’s when we said, “Hey, why don’t you come upstairs with us and let’s see what happens.”

Brian: And so how long—what was the gap between that experience and now at Ascend what you’re doing there? Like, did you come in the door at Ascend and say, “Definitely not recreating that environment here.” Was that the thinking? Or tell me about that transition.

Bill: So, that transition happened, like I said, last summer. There were certainly elements of this Design Thinking, and Ascend Innovations has always been product-focused, although again, services is our main line of business. As we started hiring a more dedicated UX team, people who’ve been doing this for their whole career—and them sharing your podcast with us, of course, too—it really helped me to understand that what I had experienced prior to coming to Ascend, and part of the time I was here at Ascend, that UX framework and that Design Thinking lens, it really brings a lot more firepower to what data science is trying to achieve at the end of the day.

Brian: Tell me about that dance between your user experience team, your data science team, especially as you upskill the data scientists about what the designers are doing. Is this a, “This is their job, but we want to know about it,” or, “This is our job, and they’re the accelerants and facilitating it,” or some third version? Like, how do you frame it in your head about whose job is the user experience and the value and all of that?

Bill: So, we’re still early on in this process. And it’s like I said, it’s only been a few months since we’re figuring this out, so there’s still a lot to be learned here, but what we’ve decided is that every data scientist should have some of the basics of Design Thinking, walking clients through that question-and-answer scenario, especially upfront during discovery, they should be comfortable talking to the client in a way that isn’t focused on the ones and zeros that they’re analyzing.

Brian: What, you let data scientists talk to clients? [laugh].

Bill: We heavily encourage them to talk to clients.

Brian: Actual human clients? [laugh].

Bill: There has been pushback.

Brian: [laugh].

Bill: I’m not going to, uh—

Brian: [laugh]. Not my job—

Bill: —I’m not going to lie. I’ve heard that more than once.

Brian: Yeah.

Bill: And, you know, we frame it in the way that it helps them broaden their career, right?

Brian: Yeah.

Bill: Especially the junior data scientists, I think most of them are actually on board. Some of the senior, more seasoned data scientists are a little bit more resistant because they want to hide behind their Jupyter Notebooks or, you know, whatever their favorite IDE is, to just analyze code. But at the end of the day, like, if you really want to be a heavy hitter, a lead, or a principal data scientist somewhere, like, really being able to empathize with the customers, really understanding what is it that they’re trying to achieve with this data? What are those questions they need answered, those goals to be met, et cetera? That’s not going to be found in a formula. You’re going to have to talk to these people and really understand what is it that they do on a day-to-day basis, and how can you help them do it better?

Brian: And how do you negotiate the way the UX team is—or maybe they aren’t—talking to clients, versus when you have data scientists on that front-facing thing with the stakeholders? Is this a, “Everyone needs to know how to do this,” or, “You do this?” Kind of—you know, “The interview has two parts. We kind of want you doing this part, we’re going to handle this part.” Like, how do you think about that?

Bill: For now, the way that we’re trying to construct this is that every project will be led off by UX, but there will always be a data scientist or some data representative in the room. There’ll have been some pre-gaming ahead of time, like, “What are those questions we need to ask? What is it we know today about what they do?” Try to do some of that homework upfront, especially like when me and the sales team are on those qualifying calls, like, try to extract as much out of them so that we can feed the team a little bit of that ahead of time.

But then let UX, kind of, lead those initial discussions, but make sure that the data team is participating because we want them to learn, learn by watching, and hopefully, then, once the project really gets going and most of that discovery is done, just hand it off to the data team to continue to move that ball further down the field.

Brian: Let’s dig into this, I don’t want to call it the stereotype. You talk about hiding behind the Jupyter Notebooks and, like, don’t really want to talk to customers, “Leave me alone. I was hired to do this really technical work no one else can do.” I do hear these stereotypes a lot. I don’t like to propagate them too much. I joke about it, but is that, to you, a real thing? And I’m curious, as a leader, does this mean I need to change who I’m hiring, or is it like, “Nope, I still hire for that thing, but then I want to upskill them?” And how much are you trying to change water into wine a little bit there? Or do you see it as, like, no, it can happen? It’s just, I’m always going to look for that tech talent first and then open them up to this? Or what’s your thinking there?

Bill: So, let me preface it first, with those characteristics are not imaginary. And coming from a physics background, you know, Big Bang Theory-type people exist, but they’re not all like that. [laugh]. There are some of us who are real people and can socialize. And the same is true of data scientists.

Whenever we can get some of those people in the door who are open to it or have a little bit of experience with it, all the better. But we do look for those people who are more willing to throw themselves out there and get in front of the customer, so we will ask about that during our initial interviews with candidates for our positions. It’s a huge—and we have a small team. You know, we’re growing, but we’ve got a small team, we’ve got a lot of work that we need to share, so we need everybody to be comfortable sitting at the table with the client and willing to ask those questions that may feel like they’re prying into their business, but at the end of the day, like, what we’re trying to do is solve problems, both the problems they think they have and the problems they actually have.

Brian: And the way your data scientists and user experience teams are working together, is there more of that training-ish model where the UX people are trying to help the data scientists be self-sufficient once that project is kind of going, or is it more like, we each own different parts of it and at least, you know, the data scientists know what we’re doing: Why are we testing it? Why are we talking to users now? Why are we doing whatever it is that they’re doing? Is it more just so they understand this process is not just about modeling the data, and cleaning it up, building the connectors, spitting it out into a plot, and then now it’s done; next project. Talk to me a little bit about that.

Bill: It’s a bit of both, right? So, we want the teams to work together seamlessly, so the two need to understand where they’re both coming from. Now, it also depends on the parameters of the project itself, right? We’ve got a lot of efforts that don’t really require UX per se, so as I said earlier, like, we are owned by three hospital networks here in the Dayton region, so we have access to all the patient encounter data that comes to the hospitals on a daily basis. We have real-time feeds, we have a variety of clients that we sell that data to in various analyzed forms, or aggregated forms depending on what it is they need and depending on what legal frameworks you have to work in.

So, those types of projects don’t necessarily need UX; however, having gone through multiple iterations of those types of projects, the data scientists need to be comfortable asking the questions of those who are requesting that data: what is it, at the end of the day, you’re trying to—those insights you’re trying to actually get from this data set? What are the questions you want answered? What goals are those questions tied to? Because I can just give you data, but it may not actually be what you need. You may need something a little bit more raw or a little bit more calculated and curated. And you’d be surprised, not everybody is comfortable asking those questions and being in that type of situation.

Brian: Sure, sure. I was reading a book, I forget what it was called, but it calls that—this is the presenting problem. So, the presenting problem is, “I need the data about how many hospital admissions happened because of Covid during this time window. Period.” [laugh].

Bill: Right.

Brian: That’s a presenting problem. It’s like, you could just give that to them, right, or you could understand why it is they want that. And then later you find out it’s like, well, do I need to, like, change the number of beds that we’re predicting will be available because of the number of things. Like basically, do I need to resource plan differently? “Oh, so you’re trying to figure out if you need to hire more nurses?” “Yes.” “Okay. That’s not what you said at the beginning. And yes, maybe you still do need to see the Covid admissions, but really you’re trying to inform a resource planning thing, is that right?” “Yes.” That whole dance, is that kind of what you’re talking about?

Bill: Right. Right.

Brian: Yeah.

Bill: So, one example of that is we’ll have clients come to us and want the—so we have access to the hospital data; we also have access to various other community outreach organizations doing mental health, drug addiction, and alcohol abuse stuff, too. And so we try to sprinkle in some of those social determinants of health, like, really understanding, like, what is going on with this person, not just physiologically but sociologically as well to really understand how can we serve them better and figure out is what we’re doing actually having a positive effect on the other side. So, making sure that the data scientists are comfortable talking about it, they’re able to communicate to the clients what’s available, what’s not available. And not necessarily like is it there or not, like, what’s available to them legally or not, too, because there are a lot of HIPAA constraints, there’s another constraint that we have to deal with called 42 CFR Part 2, which has to do with substance use disorder. So, really understanding, like, what we can and cannot do, and having some of those more legalese questions, it’s a learning process [laugh] for them.

Brian: Sure. What’s the—and I realize you’re early in the journey here, and actually, this is, I think, really informative because it’s always going to be rocky along the way when you’re trying to change the way you’ve done something before, or the status quo is just, like, it’s always there. And that’s the thing I tell teams, especially when I talk to private coaching clients, it’s like, you’re already deciding what you’re doing. You’re doing it the old way, and every day you come in the status quo was pulling you to do it that way. There is other ways to do things, but you’ve made a decision, tacitly, like that.

So, what signals have you—if you’ve seen any results of changing this process in terms of your clients’ and stakeholders’ perceptions, do you get some sense of feedback that, hey, we’re seeing value in doing it this way? Or not. Maybe you haven’t, yet.

Bill: Oh no, we’ve actually seen some stark returns on the investment of going all-in on this Design Thinking. In particular, two of our clients, we’ve worked with one of them for quite a while, it’s actually our sister organization—Greater Dayton Area Hospital Association, [reports to 00:18:56] our owner—but they’ve actually noticed the change in the way that we talk with them during some of these discovery sessions. So, what we hear is that they felt heard. So, there’s a lot of emphasis on empathizing with them and really digging deep and what is it that they’re trying to do? Because in the past, our reaction would be like, “Well, here’s the data. Good luck.”

Brian: Right.

Bill: But really trying to get in deep to what is the problem they’re trying to solve and what is the message that they’re trying to communicate to the community, that has been quite a game-changer for our relationship with those clients.

Brian: And are you guys still at the process where you’re in progress of building the solutions, or—because you’re talking about, kind of, the upstream—at least in the problem definition space, they’re saying, like, “You get it. It sounds like you get us; you understand what we need.” Has that carried through at all to the solution side?

Bill: It has. It has.

Brian: Okay.

Bill: So we, in various efforts with them, we hear it at the beginning, but it does carry through in some of the finished efforts and the ongoing efforts.

Brian: Do you listen just for, like, verbal feedback from them? Or, how are you knowing what they think about the solutions that you’re building once you’ve gone through the problem discovery part and the ideation and now you’re into developing some tooling or interfacing, that kind of stuff, how do you get that feedback? What’s that process?

Bill: Well, we ask them, “How are we doing? Do you feel like we’re providing what you need, or what you think you need?” It’s just the easiest way to get to that answer: just ask them. They’ll be quick to tell you one way or the other, too. Especially if you’re going down the wrong path. Our experience has been that people are quick to say, “This isn’t what I was thinking.”

[midroll 00:20:35]

Brian: Was the effort for you to get this—in the ideation and problem discovery work that you started to apply upstream in the process of these clients, was it difficult or lengthy to get to the point where you started to hear those reactions from your clients that they started to feel better, more heard? Was that difficult? What were the challenges along the way to doing that, if any?

Bill: So, some of the initial challenges that we had—and I wasn’t on all of these calls, so some of this feedback is second-hand—was that clients were surprised that we were asking such rudimentary questions. So, they’re like, “Well, we’ve already talked about that,” or, “It should be obvious.” Like, some of those, like, “Well, why are you asking me such a simple or, like, a Socratic question?” And we had to explain to them, like, well, we want to kind of start here, you know, at the bottom, to kind of move to the top. We don’t want to start somewhere midway and get to the top.

We want to make sure that we are all in alignment with what we’re trying to do, so we want to establish that baseline of understanding. So, we’re going to start off asking very simple questions and work our way up from there. And so there was some initial pushback there, but as we’ve really, like, refined the process and worked with new clients—so some of that pushback was from clients we’ve already had, so we had to go through those pains of explaining, like, “Why are you suddenly changing your tone?”—but overall, it’s been very positive.

Brian: Was it kind of like, “This is a waste of time, and you’re billing our ti—” I mean like—

Bill: Yeah.

Brian: Like, running the meter up—

Bill: Oh, yeah. [laugh].

Brian: —and all this kind of stuff? Yeah.

Bill: Right. And that’s—we tried to explain to them, like, you know, we want to spend as much time upfront understanding the problem so that we can really get to the solution faster, right? The more time we spend now, the less time later, and the less time having to possibly fix something that we may not have caught at the beginning.

Brian: Mm-hm. And do you think the solutions that are starting to come out the other end of the pipe, are they coming out better and faster? Or just better? Or just faster? Like, how do you think about that?

Bill: So, they’re coming out better. Faster is still—we’re still tuning those parameters. The team is still trying to fuse these two things together. I mean, data science still takes time, which is something that the UX-ers need to understand. So, back to that earlier question about, like, when we’re at discovery, having the data science there, like, what does that cross-pollination look like? It’s really two ways.

Like, we need the data scientists to understand, like, why are we spending so much time empathizing with them, but I need the UX-ers to also understand, like, doing this analysis doesn’t just happen, right?

Brian: Yep.

Bill: It takes time to really sort through this data, understand what’s there, what’s usable, what’s not usable, do the insights it’s telling me really make sense? Is there a level of explainability there, especially once you move into machine learning, as I know you’ve talked about in this podcast before? You know, more complex models, neural networks are notoriously unexplainable, versus, you know, like a tree, some type of random forest. So, getting there, getting the UX-ers to understand that is also part of the hurdle that we’re trying to clear.

Brian: Got it. Yeah, you jumped right into my next question. I wanted to hear your thoughts on—I don’t know, your reporting structures and if the design and user experience practice reports up into you or not, but if you were hiring designers and bringing them in, what’s the message you need to communicate as a data scientist who wants to use design in your process, “But I need UX-ers and designers who think like this,” or, “They’re aware of this.” What’s this that designers and user experience people need to know?

Bill: So, what they need to know is that analysis takes the time it takes to be completed. It’s not fast, it’s not a straight line to the solution, and you may go down multiple paths with dead ends and you have to backtrack and start over. So, really having them understand that and possibly even start communicating that to the client based on what they need from the beginning is key. And I’ll say, like, we’ve been kicking around this idea—of course, it takes time to do this, too—but for the new people coming in, whether they’re on UX, data, software engineers, or even, like, some of the salespeople or product managers, giving them these fundamental masterclasses around, like, what is UX, what is data science, so they have that baseline understanding of what is it that data scientists do when they sit down at their computer, fire up their Jupyter Notebook, and then start looking at this data. What does that thought process look like?

And we try to use that CRISP-DM model, which is the classic model that many use, going through that exploratory data analysis, playing around with various models or visualization attempts to really get your arms around what is it that this data is telling you. And then on the flip side, when any new data scientists come in, have them go through that UX process. Like, what is UX? What is the problem space? What is the solution space? How are the two connected? How do you get from one to the other?

Brian: Yeah.

Bill: Because what we’ve discovered, using that UX mindset and that data science, specifically the CRISP-DM mindset, is that they supplement one another in the sense that CRISP-DM, in my opinion, all the things I’ve read about it, it glosses over that business understanding which is step number one. It very quickly gets into exploratory data analysis, model creation, all that stuff, without really spending the time to say, what are we trying to do here? What is the problem I am trying to solve? Which is what I think the tools from UX and that mindset really bring to bear in those efforts. So, in my opinion, that’s where the two naturally sit is right at that first step. What are we doing here? And how are we trying to solve this?

Brian: Yeah, outcomes and outputs is still the best framing I’ve heard for that, and it’s really, yes, we’re building a thing, but the thing only has value if it creates a change in the world. The world being, in the mind of the stakeholder, in the minds of the users, maybe some third parties that are affected by that stuff, but it’s the change that matters, so what is the better state we want in the future for our client or for our customers and users? That’s the thing we’re trying to create. Not the thing; the change from the thing is what we want, and getting to that is the hard part. Sometimes.

Bill: That question, the question we love to ask is, you know, “You have a magic wand. What would you change to make your life easier?” That magic wand question, I’ll say, is a game-changer. [laugh]. So, anybody listening, that should be something that your data teams adopt and ask upfront. Like, “What does that world look like, that ideal state look like? If you had a magic wand, what would you change?”

Brian: Is there an example of you asking that question you can share, with a client where, like, the presenting problem was A, and when you started digging into questions like that, and Socratic questioning, B came out the other end? And it’s like, lights go off, and everyone’s feeling like, “Wow, I never thought of it that way. Yes, that’s what I need.” Is there an example that comes to mind that you could share?

Bill: So, one project we had that we worked on for quite a while was with our Department of Public Health here in Montgomery County. They have what’s called a quick response team, so anytime a person goes to the emergency room due to an overdose, we’ve set up the pipelines in such a way that they get a notification and they send out what’s known as a peer supporter. So, these are people who’ve been addicted to some type of drug; they’ve since recovered and now they’re trying to help those who have overdosed get into treatment as fast as they can. So, they know how to talk to these people; it’s not like law enforcement or first responders, people who may not have ever experienced that. So, we’ve set up that system to do that notification, but they’d asked us, like, is there a way that we can classify these people or do some type of risk assessment on them?

So, we spent a lot of time developing a risk notification model, and going through our historical data, trying to understand, like, what are the drivers for a high-risk person versus a low-risk person? What does ‘low-risk’ mean? What does ‘high-risk’ mean? Given a scale from 0 to 100, you know, what’s the difference between somebody who’s, like, a 52 versus a 47? All that work, just for—and the Public Health Department liked it, epidemiologists like that kind of stuff, but the people who are at the frontline, the peer supporters, after all that work, we heard from them that it actually was not useful for them. The number meant nothing to them.

What they really needed was all the information that went into the model to give them that context around why did this person overdose? Was there something in their past? Was it some kind of chronic illness, such as cancer, that may explain why they had this overdose, or was there just no prior data in the local area and they came up as higher? Was it their first time, and they actually came up as low? So, it’s that contextual information that went into the model that was actually more valuable to them, giving them that historical picture to really then tailor their outreach efforts.

Especially if you have repeat users or repeat people who overdose, who obviously are going to come up as high utilizers, but to these peer supporters, they already know these people and they know that those individuals are not necessarily ready for treatment or outreach efforts; they’re just going to blow them off. So, having that contextual, that story behind the number was really more important to the people on the front line than the people on the back end who actually requested that those models be created. And that’s some of the disconnect that I think you’ve talked about, and we talk about here. Like, the disconnect between the people who are actually doing the work versus the C-suite types or the leadership of whatever organization who want, you know, the newest, sexiest machine-learning tools.

Brian: “Does it use AI?” [laugh].

Bill: Yeah. “Does it use AI?” “No, but it has a dashboard that people seem to like and that is actually more effective.”

Brian: [laugh]. So, in the end, did you actually present a model that had a score that was then explainable? Or you got rid of the model and you simply like, I don’t [unintelligible 00:30:53] some other way; you were just showing some correlated features, or like, how did that work in the end?

Bill: We had a model, had a score, gave, you know, the top drivers across certain segments of that population. Again, the peer supporters did not care about the number. They said, “We do not care about this number. Just give us the data that went into this. Tell us about this person.”

And that’s ultimately what we had to do: retool it. We still left the number in there, but we gave a much broader data set to them about what went into that model and why that score is there.
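The retooled output Bill describes, a score paired with the record that produced it, can be sketched as follows. The field names and values are hypothetical, purely to illustrate the "context alongside the number" idea:

```python
# Sketch: surface the underlying record next to the risk score, so frontline
# workers get the story behind the number rather than the number alone.
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    score: int     # model output on a 0-100 scale
    history: dict  # raw fields the model saw (invented keys)

def present(assessment: RiskAssessment) -> str:
    lines = [f"Risk score: {assessment.score}"]
    lines += [f"  {k}: {v}" for k, v in assessment.history.items()]
    return "\n".join(lines)

a = RiskAssessment(52, {"prior_overdoses": 2,
                        "chronic_illness": "none",
                        "first_contact": False})
print(present(a))
```

The design choice is the point: the score stays in the payload for the epidemiologists, while the contextual fields serve the peer supporters.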

Brian: Right. Well, you talked about interventions, and this is the classic thing, right? The change in the future is, it sounds like, “We want to know when and how and what interventions to take next. My request is not a score. The score facilitates me making a decision about intervention. Help me decide how to intervene properly, based on what happened with this patient.”

That’s the thing that they actually want to do. The score is just a component of that, the number itself not being particularly actionable; it’s the insights that go with it, it sounds like.

Bill: Yeah, and I will say that a lot of that discovery came out of embracing this UX and Design Thinking mindset. Because before that we had never actually spoken to or interviewed the peer supporters. We’d always listened to those above them who are funding a lot of this effort, which, to their credit, like, they’re not trying to hamper the peer supporters’ efforts at all, just, they are not there on the front line doing the work, so they may not know what’s actually usable to these people. So, once we went all-in on this Design Thinking, we started going to their weekly meetings. We’d have them here in the office and talk to them and really try to get deep into what is it that they need from us to do their job and provide help to these people?

Brian: Right. You just said something really important I want to reiterate here, and I want to see if you have any closing thoughts to share on this. Just getting that access to those people, that is a challenge for some teams. And this is probably more true in the enterprise, internal data science groups where there’s all these change management organizations, different business units, they might be spread across the globe. It’s like, “We know what they need, don’t worry.”

You’ll talk to the head of the department, who’s looking at 15 regions and has 5,500 staff that are the actual users, telling you their, quote, “requirements,” and they don’t feel like it’s necessary to go down. Like, “We know what they need. Don’t worry.” Did you have to, kind of like, negotiate to get that time with those frontline workers to actually say this time is well spent? “And yes, I know you want a machine-learning algorithm to generate a score, and we can do that, but we need to get that face time with the last-mile users.” Was that hard to get, or was it an easy—just tell me about that process of getting that time and access?

Bill: So far, that’s actually been really easy to get.

Brian: Oh, that’s great.

Bill: Once you ask, like, “Hey, I want to see the people who are doing this work.” “No problem. Just let us know when you’re available, and we’ll get you where you need to go.” And it’s been surprising because that’s actually an ask that I’ve noticed that a lot of the data scientists or, you know, people from my organization, they’re not [snaps fingers] quick to ask that question.

So, training people to be like, “Hey, I want to go see these people doing their job today.” Training people to have the courage to ask that question or even just the wherewithal to acknowledge that needs to be done is something that we continuously have to work on. Because going to where these people do their job, I think, just brings a wealth of information that you can’t get either through Zoom or through, you know, those who are pulling the strings above them.

Brian: Sure. Sure. Well, and I’m just going to toot my horn again and say that for people out there listening that feel like this is a tax, and that you’re impeding, you’re challenging their knowledge about what they need and all this, you are giving a gift. This is a gift that you’re giving to them to save time, to save money, to avoid building something that will never get used and will not provide value to them. You do need to push back against this and if they say no, that’s fine. Paint the picture of the risk, though, by not doing this.

“It’s very easy for us to build a model. It’s hard for us to build a model that someone will actually use to make the world better.” And in this case, it’s healthcare or support, intervention support for addicts. “Do you really want a model, or do you want an improvement in the lives of these addicts? That’s ultimately where we’re going with this, and if we don’t do this, the risk of us pushing out an output that doesn’t get used is high.”

So, this is a gift, it’s not a tax. That journey can feel difficult, as you said. Like, sometimes it is—I mean, actually, you said it wasn’t in your case, but I know for teams, it can be difficult, it can feel like you’re challenging authority structures and all these kinds of stuff. But.

Bill: So, I will say, like, getting access to the people doing the work hasn’t been hard yet, but convincing the people doing the work that there might be an easier way, sometimes—

Brian: Oh yeah.

Bill: —we experience a little bit of that [crosstalk 00:35:55].

Brian: [crosstalk 00:35:55] behavioral chan—from—yeah. Now—at least you have the users, but now it’s like getting them to change their mind, that’s a whole ‘nother—yeah.

Bill: Yeah. That’s a whole other challenge. And really working through that, especially—we had a hospice client who wanted some machine learning, which gets dark very quick, but it’s still very interesting, and very—there’s a lot of empathy to be established there with these nurses who are caring for people at end of life. So, the organization wanted a tool to help figure out what is the, what’s known as the acuity of this patient when they’re first admitted into hospice? How sick are they, and how many resources do we have to devote to this person in their final days?

So, we actually created two models, one that classifies them from the get-go—the hospice that we worked with, and I think this is true in all hospices, they have a term called short-stay patient. So, this is based on the clinical observation or the clinical judgment of the nurses and the doctors, like, is this person going to die in the next seven days? And I don’t mean to get dark [laugh] just let me throw that out there—

Brian: It’s okay. Yeah.

Bill: —there’s a lot of human judgment in there. And luckily, this hospice gave us access to all the historical patient data where we were able to then go in and really do some of that exploratory data analysis, like, what are some of the drivers of the people that they had previously identified as short-stay versus those who are not? And can we then develop a model to help compare to the nurses’ judgment? And surprisingly, we were able to do that and actually beat out the nurses’ judgment by a little like a couple of percentage points, I think it was close to about 20 percentage points on average. It was pretty cool. [laugh].

So, once they’re into hospice, the next question they want to ask as each day goes on and they’re checking in with them, it’s like, “Is this person who is maybe not the short-stay, are they getting closer to the end of their life where then they would switch to being short-stay within that seven-day window?” So, then we created a second model to then analyze the patient data as it’s coming in from the daily assessments after admission, to say, this person is within that two-week window or one-week window, or maybe is not. Because there’s also reimbursement from the government, as far as Medicare and Medicaid, since hospice is paid for out of Medicare and Medicaid. Once they’re switched into that status of being short-stay or what’s known as service intensity add-on, the reimbursement rates actually go up.

But if you look at the historical data, they don’t always get it right. You know, people suddenly take a turn for the worse and it was unexpected, and then they’re gone. So, if they could figure out how to predict, even within just a day or two, like, are they going to start taking that turn, it helps them to then really call in the cavalry, give them the comfort care that they need for their final days, and they’re not getting a call, you know, late at night from their spouse, loved one, children who were taking care of them at home, and so they’re not suffering. So, at the end of the day, hospice is there to mitigate suffering at the end as much as they can, so bringing some of these tools to help get ahead of that is really the goal there. But from the business end, there is a benefit, too, to getting them into that higher care status. So, getting more reimbursement.

Brian: Yeah, you can—it’s kind of like, is the experience going off of a cliff or it’s slowly going down a hill as you transition to death. And that’s a difficult thing for the whole family to experience, and all of that. And it’s like, I would much rather prefer to go through something that was, you know, not that you want to draw it out, but, like, you may not want this sudden surprise, kind of, experience at that time of your life, and having some insight on when to put the right resources in play, makes a lot of sense.

Bill: Yeah. So, what’s interesting there, talking through with the nurses, they have a lot of systems that they have to juggle throughout their day. Any given patient that they see, they’ve got several different electronic medical records they’ve got to go through, their own native systems to manage all that data and coordinate care. So, what we didn’t want to do was now create another thing for them to have to go into to get this assessment from these machine-learning models. Which, you know, these people are experts in what they do, so that adoption rate is something that we were concerned about, like, are they really going to trust what these models are telling them?

Brian: Right.

Bill: So, what we wanted to do, at least if they can’t necessarily trust it right off the bat, is make it easy for them to see and then help them make that decision, like, is this right? Because we always want to give them the ability to override what the machine is saying. Like, we don’t want the machine making the decision; we want it advising them. So, figuring out how does it organically fit into their current workflow such that it doesn’t increase the burden that they already have to endure, was a huge part of that long discovery process, even after we created all the models. And really figuring out, like, how do you best want to be told this?

Brian: Right. Did that affect, then, where the UX actually occurs? Like, well, it’s actually in the healthcare record, or it’s in this application the nurses are using or, like, where did it—oh, it was going to be a Tableau dashboard because that’s what we do all of our tooling in, and then you’re like, “They’re not going to open Tableau when they’re in the hospital room. No way. Like, this needs to be at the point in time where they would make a decision. Where is that? Can we get that into that system?” Where did you end up putting that if you can—I don’t know if you can talk about that at all, but just thinking about that.

Bill: So, they had some of their own internal dashboards that were part of their workflow already, so we found a natural place. And we’re in the process of deploying those models on their networks right now. They have a couple of new dashboards that they’re trying to implement that simplify that process. So, we want to put that right in there so anytime they click to a specific patient, they kind of get those numbers at that time, thrown into their face, along with all the other information they need to know about that person at that time.

Brian: Well, this has been great to talk to you about these experiences you’re having, and I really appreciate you being open about your journey and where you guys are at and what’s working and all of that. So, just kind of in closing, are there any other thoughts you’d like to share, any perspectives on what you’ve learned so far that you might say, “Hey, don’t do this again. I learned it the hard way. Try it this way.” Anything like that you’d like to share?

Bill: I mean, I’d say to anybody out there right now who’s currently working on data science efforts, the sooner you get your people comfortable with the idea of doing Design Thinking, get them implemented into the projects that are currently going on, it may be slightly painful right there, especially from the customer perspective, like, “So, why are these people suddenly asking me all these questions they didn’t know the answer to?” But if you just explain it to them, like, “Hey, we’re just trying to change the way we do business, and we really want to be able to empathize with you guys and, you know, those coming after you.” I think that will be a real game-changer for your data scientists and your organization as a whole.

Brian: Cool. Thanks for sharing that. By the way, anything coming up you want to share? Any news we might want to know about?

Bill: One thing we had recently, which is not a data science effort yet, although it will be soon, is there’s an organization in this area of the country called CareSource, which does a lot of the managing of Medicare and Medicaid claims. They recently announced that they were an investor in a product that we spun out, called Positiv.ly, which is a social emotional learning platform for high school students, kind of like a safe space that the schools are able to control—control is a loaded word, but they’re able to curate what the student sees and not necessarily track how the students are using it, but it gives them a safe space to talk about things that are going on in their life, such as parents getting divorced, a death in the family, a girlfriend and boyfriend breaking up, just somewhere where they can open up and be a little bit more vulnerable with each other than they would necessarily in the classroom, in person or, you know, however students are going to school these days.

Brian: Well, congratulations on that. And that’s something we could see online if we put a link in?

Bill: Yes.

Brian: Cool. We’ll definitely put that in. And where can people find and get in touch with you? Is it LinkedIn? Twitter? Where do you—

Bill: LinkedIn is the best place.

Brian: LinkedIn? Okay.

Bill: Yes, sir.

Brian: All right. We’ll definitely link that up in the [show notes 00:44:09] in the profile. Bill, thanks again. This has been Bill Báez, VP of Strategy at Ascend Innovations. Thanks for coming on and sharing your ideas with us.

Bill: Thanks, Brian. Been great.

Brian: Take care.
