090 – Michelle Carney’s Mission With MLUX: Bringing UX and Machine Learning Together  

Experiencing Data with Brian O'Neill (Designing for Analytics)

Michelle Carney began her career in the worlds of neuroscience and machine learning, where she worked on the original IPython Notebooks (which later became Jupyter Notebooks). As she fine-tuned ML models, she started to notice discrepancies in the human experience of using them, and her interest turned toward UX. Michelle discusses how her work today as a UX researcher at Google impacts her work with teams leveraging ML in their applications. She explains how her interest in the crossover of ML and UX led her to start MLUX, a series of meetup events where professionals from both data science and design can connect and share methods and ideas. MLUX now hosts meetups in several locations as well as virtually.

Our conversation begins with Michelle's explanation of how she teaches data scientists to integrate UX into the development of their products. As a teacher, Michelle uses the IDEO Design Kit with her students at the Stanford School of Design (d.school). In her course, Designing Machine Learning, she shares some of the unlearning that data scientists need to do when trying to approach their work from a UX perspective.

Finally, we discuss what UX designers need to know about designing for ML/AI. Michelle talks about how model interpretability is a facet of UX design and why model accuracy isn't always the most important element of an ML application. She ends the conversation with an emphasis on the need for more interdisciplinary voices in the fields of ML and AI.

Skip to a topic here:

  • Michelle talks about what drove her career shift from machine learning and neuroscience to user experience (1:15)
  • Michelle explains what MLUX is (4:40)
  • How to get ML teams on board with the importance of user experience (6:54)
  • Michelle discusses the “unlearning” data scientists might have to do as they reconsider ML from a UX perspective (9:15)
  • Brian and Michelle talk about the importance of considering the UX from the beginning of model development  (10:45)
  • Michelle expounds on different ways to measure the effectiveness of user experience (15:10)
  • Brian and Michelle talk about what is driving the increase in the need for designers on ML teams (19:59)
  • Michelle explains the role of design around model interpretability and explainability (24:44)

Quotes from Today’s Episode

  • “The first step to business value is the hurdle of adoption. A user has to be willing to try—and care—before you ever will get to business value.” - Brian O’Neill (13:01)
  • “There’s so much talk about business value and there’s very little talk about adoption. I think providing value to the end-user is the gateway to getting any business value. If you’re building anything that has a human in the loop that’s not fully automated, you can’t get to business value if you don’t get through the first gate of adoption.” - Brian O’Neill (13:17)
  • “I think that designers who are able to design for ambiguity are going to be the ones that tackle a lot of this AI and ML stuff.” - Michelle Carney (19:43)
  • “That’s something that we have to think about with our ML models. We’re coming into this user’s life where there’s a lot of other things going on and our model is not their top priority, so we should design it so that it fits into their ecosystem.” - Michelle Carney (3:27)
  • “If we aren’t thinking about privacy and ethics and explainability and usability from the beginning, then it’s not going to be embedded into our products. If we just treat usability of our ML models as a checkbox, then it just plays the role of a compliance function.” - Michelle Carney (11:52)
  • “I don’t think you need to know ML or machine learning in order to design for ML and machine learning. You don’t need to understand how to build a model, you need to understand what the model does. You need to understand what the inputs and the outputs are.” - Michelle Carney (18:45)

Links Referenced

  • MLUX meetup
  • IDEO Design Kit
  • Hugging Face Spaces
  • DALL·E mini
  • TensorFlow Embedding Projector

Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill. Michelle Carney, welcome to the show. You’re here, finally.

Michelle: Hey, thank you so much for having me. I’m so grateful.

Brian: Yes, Michelle is the founder of MLUX. So, we’re going to talk about, well what I really want to talk about today is why ML people need anything to do with UX. And maybe why UX people need anything to do with ML. Because you put them together—

Michelle: Right. My favorite topic.

Brian: Yes. But first, was there a single moment in your life when you turned from being a neuroscience researcher to becoming a user experience professional? Was there just, like, a bolt-of-lightning moment? Because that’s quite a shift.

Michelle: Yeah. I mean, totally. I don’t know if I could pinpoint the exact date and time. I am a pretty meticulous calendar person, but I do remember—so I did my undergrad in computational neuroscience working on some of the original IPython Notebooks, which later became Jupyter Notebooks; you might use Colab or something similar internally. But what I was doing was really optimizing different hyperparameters of these models, and tuning these models, and the models would say, “Hey, this is, like, 30% better,” or whatever.

And I was working on, like, hearing-aid, cochlear-implant-type algorithms. And just because the model says that it’s better, you put it in front of people and they don’t actually perceive a difference. And if they aren’t hearing a 30% difference, that to me was the really interesting part: what does it take? It’s not enough to just optimize it on the ML side, but how do we actually get the human perception to match that?

Brian: Got it. So, do you have a general approach or a change in thinking now? And can you kind of summarize what that is? I mean, you said—you gave us an example of that, but how do you apply this to any problem that comes up in the future like this—

Michelle: Totally.

Brian: —that’s not a cochlear implant or something? How do you think about this now?

Michelle: Oh, my gosh, yeah. So, by day, I’m, like, a UX researcher embedded on different ML teams. And one of the big things that I do is actually go in and help the teams understand that the models themselves also need to be designed because you can make a really great model, but if it doesn’t have the inputs and the outputs that users are expecting, it’s not going to work very well when we launch a product. So, really focusing on, like, well, how do we get this in front of users early and test often and get their feedback? And also, the other big thing to remember, too, is that these models are not—they don’t exist in a vacuum. Like, they are coming into these users' lives and the users have so much else going on around them.

For example, like, Alexa devices, or Google Homes, and stuff like that, too. Whenever they’re marketed, you notice, “Oh, nothing else is on the living room table, and, like, they’re just in the center. And oh, the audio would work perfectly, right?” But in real life, I don’t know if you have one—I see you smiling and laughing, so this must resonate with you—but for me, it’s, like, on my kitchen counter, hidden away, and, like, the microphone probably gets a lot of echo and all this stuff, too. And, you know, all this noise is going on.

So, that’s something that we have to think about with our ML models, too, is that we’re coming into this user’s life where there’s a lot of other things going on and our model is not their top priority, so we should design it so that way, it fits into their ecosystem. Does that kind of make sense?

Brian: Yeah. So, I would imagine there is some change that needs to be req—[laugh] required for some of the team to maybe even begin to care about this, let alone to participate in that activity of thinking about the context in which the data product and the model, the work that they’re doing on the technical side, gets experienced in the real world. Is there a particular approach you guys use? You’re currently at Google, right, working on ML/AI tooling, is that correct?

Michelle: Yeah, for sure.

Brian: Okay. For technical audiences, is that right? Like, most of the work you do is going to be used by a technical audience?

Michelle: Yes. Yes, for sure. [laugh]. Yeah. So, I work by day at Google, but I love this topic so much and I’m here to share kind of my overall breadth and knowledge and expertise because I’ve been running the meetup for now, since 2017. Oh, my gosh, five years.

Brian: What meetup? Tell us what MLUX is—

Michelle: Oh. Right. [laugh].

Brian: —if you have to—we haven’t even—I haven’t even asked you that. So, for people listening who don’t know, why don’t you tell us what MLUX is?

Michelle: Oh, my gosh. Thank you. Okay. Yeah. So—oh, yeah, the meetup. So, it’s the machine learning and user experience meetup, or MLUX meetup, founded in 2017 when I was very interested in both ML and UX and I would apply for different types of jobs and they’d be like, “Mmm, if you do both ML and UX, you must not be good at either.” And I was like, “What? No, there must be other people like me out there.”

So, I thought it would be, like, a 20-person pizza party maybe once a quarter or something. And our second event, we had, like, over 200 people. And we’ve just been growing over the last five years. Now, we do all of our events virtually as well. And I know that you’ve been to some of the events too, which is awesome, so that’s how I was all, like, “Oh, I should chat with Brian because I see him on the invites.”

So, our topics really range from, you know, “How do I be a UXer in general, just, like, an interaction designer or visual designer, working with an ML team,” to, “How do I, as an ML specialist, build out products for users,” and everything as well. As well as methods for UX researchers: so, like, unsupervised learning methods for data-driven personas on UI click metric data. So, really bringing together people from UX, data, ML, AI, and everywhere in between, PMs, software engineers, and stuff, too. People who are interested in making more usable ML products as well.

Brian: Right. Right. So, you know, given the… again, ironically, no analytics about my show, but I know who I intend to speak to when I talk to those of you listening right now, who are probably coming more from the technical, from the data science side, maybe technical product management or analytics, let’s call them ‘design curious,’ ‘UX curious.’

Michelle: Perfect.

Brian: If I’m bought in that some of this UX stuff isn’t fluff, and maybe I’ve seen how people aren’t able to use the things that I put out with my team, or they don’t want to use them, or they don’t believe them, they have some of these kinds of problems, “I hear this UX mojo can help me out.” What do I need to change or do in my team to get going with this? Especially if maybe I’m not going to run out and hire, like, a UX professional because I wouldn’t even know how to fit them into my team. What’s the thinking change? What stuff can my data science team do themselves? Give me suggestions for how to get going with changing the way we approach the work.

Michelle: Yeah, that’s a great question. I think, for me personally, like, you don’t need to be a UXer to do design thinking or anything, too. So, that’s another thing that I do outside of my normal day job and running a meetup, I teach at the Stanford School of Design, or the d.school. And one of the big things that I teach is a class aimed at those types of folks, PhD students in computer vision and NLP and that kind of stuff, too.

There’s a really great website, the IDEO Design Kit that I absolutely love. I don’t know if you’re familiar with it, but it has a bunch of different activities and exercises to kind of get those questions flowing about, who are your users? How are they wanting to use this? One of the biggest things that I always hear is, “Well, everyone is my user.” And so, then I’m like, “Well, is a grandma in Australia your user?” “No.” “Okay, so not everyone.” Let’s try to walk it back a little bit in everything, too.

So, starting from the large “boil the ocean” to maybe let’s start with a cup. But also putting things in phrases of point-of-view statements: “As a blank, I want to blank so I can blank.” The first blank is the type of customer, the second blank is maybe a feature or thing that they have to do, and the third blank is really their motivation and their goal.

And I think just doing a couple of those will get your team started on really thinking about who’s coming to this product? Why are they using it? How are they using it, and everything, too? And you probably—I mean, I assume the teams probably know this because they’re building it because of some type of feedback, whether that’s through a forum or, you know, key enterprise customers, or something like that, too, and so like, that’s enough to at least get started on this before you potentially bring on a professional UXer to conduct that research, validate it and everything like that.

Brian: So, tell me, is there anything that, particularly a data science team working with machine learning who, you know, obviously, they were going to have technical skills and all that, is there any particular thing that they need to approach user experience work differently than maybe a different kind of person because of their, we’ll call it a bias and their knowledge around data science and all of that, or just a particular worldview about how they see the world that you kind of feel like, “I got to undo that a little bit. I got to unpack this in order to repack it with a different perspective.” Any broad, general—I mean, we’re talking about generalizations here, but have you seen any patterns like that you’re kind of like, we need to unlearn this and then relearn this? [laugh].

Michelle: Yeah, for sure. One of the first things we do when we start off our d.school class—myself and my co-teacher, Emily Callahan—is we set up a shared vocabulary with the students. And the reason is because of that, kind of like, preconceived notion. I’m sure all the data scientists who may be listening probably understand this but a computer vision person has a different way of thinking about problems than an NLP person, or, you know, someone who works with time-series data or something like that, too.

And so, just because you come from a certain background doesn’t mean that everyone kind of thinks the same way that you do. Or maybe one person’s training data is another person’s validation set. I don’t know, like, things like that, too, where you’re kind of using interchangeable terms, but you’re like, “Wait, hold on. Where does this go,” and everything, too.

So, the first thing that I also try to help teams understand, just as you’re not your users, you might have really deep expertise because you’re the ones who are building out these tools, but what might your users be trying to accomplish? And so, really thinking about, like, how can I help them where they’re at? What should be something that should be explained? Or even something, too, where you can—one of my other favorite types of UX research techniques is called a contextual inquiry, and it’s really just, like, sitting and kind of like a ridealong with the user about what are you doing? What does that look like?

And understanding, too, I’m not there to judge you on how you’re using this tool. You are the expert here and what I’m trying to do is test out the usability of my tool or of this tool, and your feedback is incredibly valuable because it helps me shape that.

Brian: And are you seeing that your students and just maybe the community out there, there’s a broad stereotype sometimes that, like, it’s not my problem if they don’t understand it. My job is to do the work and to produce the model and make it accurate. And, you know, if they don’t understand it, I don’t know what I’m supposed to do with that.

Michelle: Oh, my gosh.

Brian: Do you get that at all, or not so much?

Michelle: Do I get that, like, people telling me that? Sometimes. Do I get that from, like, a do I understand why people say that? No, I don’t understand that.

Brian: [laugh].

Michelle: It is your problem.

Brian: Well, kind of like, it’s someone else’s job. Or it’s, “Well, that sounds like a data literacy problem which I’m not here to solve.” Or—

Michelle: Sure.

Brian: —something. And again, I’m broadly stereotyping things that I’ve heard just from doing this for such a long time now. [laugh].

Michelle: Oh, my gosh.

Brian: But this is sometimes a problem where, you know, it’s technically right, effectively wrong is the label that I use for this.

Michelle: [laugh]. Totally.

Brian: But it’s like, the effective part isn’t my problem. So, it’s like, well, whose problem is making it effective?

Michelle: Yeah, and I hear this all the time, too. Back when I did more privacy things, it was, like, the chief privacy officer, okay, they’re in charge of privacy, and now we’re hearing it with ethical AI: oh, well, there’s an office of ethical AI, so it’s their problem. But if we aren’t thinking about privacy and ethics and explainability and usability from the beginning, then it’s not going to be embedded into our products. If we just treat, like, usability of our ML models as a checkbox, then it kind of is just playing the role of a compliance function. And imagine, too, it should not be that big of a lift for a data scientist or ML researcher who’s building out these models and wants them to be successful to really think about how people might want to use them, so that way, you are able to deliver it in a way that really resonates with people. And it’s still part of your job [laugh] and everything like that, too. It shouldn’t be a giant lift of, oh, this is someone else’s role to accomplish. Because you probably want your model to do really well, too.

Brian: You would think so. [laugh]. I know a lot of the—I guess the simplest framing that I’ve heard, that I think about this, especially in the context of enterprise and more business applications of machine learning, is that, okay, so business value is the promise of these data science teams, right? We’re investing all this money in data science, and those smart PhD people are going to go crank something out of the basement and we’re going to get some magic coming out the other end. And to get to business value, you first have to go through the hurdle of adoption.

Michelle: Yeah.

Brian: Like, someone has to be willing to even try or to care before you ever will get to business value. And so, I see a lot—I guess, one of my peeves in this space is that there’s so much talk about business value and there’s very little talk about adoption and providing value to the end-user who is the gateway to getting any business value. If you’re building anything that has a human in the loop that’s not fully automated—and even in a fully automated system still has some human touchpoints in it—you can’t get to business value if you don’t get through the first gate of adoption: Want, care, usability, utility, that has to provide—it has to improve someone’s life in some way or you will never get to that. That’s my soapbox on that.

Michelle: Preach, Brian. I totally agree. Because, yeah, business value also totally abstracts away any, I don’t know, accountability. It’s like, “Who’s responsible?” “The business.”

But it’s, well, our users. You could really think about who our users are. And actually break that down into, well, we’re aiming for these types of enterprise users in these types of contexts and we want adoption on this type of thing. And it actually leads to better metrics overall, too, when you’re able to be that specific. So, that’s another big thing that I end up helping teams work on, too. But the other thing is, too, this is where I get UXers involved in stuff because a lot of UXers are like, “Well, I don’t work on AI or ML. I don’t know how to do it.”

And I’m like, “But you’ve been building out UX. You are an expert in UX and think about this machine learning or AI or data science model or something, too, as another thing that can be designed.” And being able to present it in ways that really resonate with your users, that could come up with even better ideas in the next quarter or a year or something like that, too, that really are solving these larger business problems by involving the user. It’s not just, like a… I don’t know, like a checkbox for data literacy, or, “Oh, do I really have to explain this,” but it’s like a way for your company to continue innovation and stuff, too.

Brian: Mm-hm. Okay, so let’s say, “Okay. I’m sold this stuff sounds good. I’m going to go try to do some of this. But how do I know it’s working?” Like, how do I measure this? How does my team know that we’re—I mean, my model, I can see if we get from 30 to 46% accurate? Like, that’s some pretty concrete feedback right there. How do you measure this UX stuff, like that we’re making a difference? Like, how do we know?

Michelle: For sure. Accuracy is only part of the equation. You also want to look into—I think you even mentioned it, too—like, adoption and how often that model is being used and in what context, and everything, too. Some other things, too, that I have seen that are really successful about just gauging some user feedback include if you’re able—I’m sure we’ve all seen those little surveys that kind of pop up, and they’re like, “How satisfied are you with this?” And whatever, they might have some questions, and everything, too.

That’s a great way to just, quarter-over-quarter, be like, who are our users? What are they coming here to do? What are they trying to do? At least get some type of feedback. Another thing that I’ve seen, too, is like, some companies may actually read the Twitter or Reddit comments or something, too, about their product, or wherever your users are kind of—if there’s an open forum on StackOverflow, or something like that, too.

So, you can see the types of problems, and people sometimes will be like, “Oh, I’m working with this type of data, and,” blankety-blank-a-blank or whatever. So, you really get a sense of what the users are doing. But then how do you measure it, right? I think that this is where you really put yourself in the mind of the user where you’re like, “Well, what I really want as an enterprise user in a healthcare company is to be able to deploy this without any crashes,” or something like that. So, maybe then you start tracking crashes. Or, “I want to deploy this and be able to have it serve across multiple devices,” or something like that, right? Or that kind of thing, too.

So, really think about, like, how does the model live in this user’s ecosystem? What does it look like? What are all the touchpoints in which people will be interacting with it, and then treat those other, kind of like, freeform text and, you know, little surveys as kind of sprinkles of anecdotal data? Because one of my favorite quotes is, I think, “One story is just an anecdote, but multiple anecdotes is actual data.” Because, you know, you’re getting it from multiple sources, and it’s over time, and everything, too. So, think about it that way.

Brian: So, I’ve picked on my data science listeners enough. I’m sure some of them are now unsubscribing. Let’s talk about what design and user experience professionals need to learn about machine learning and data products in general. My general feeling is there’s a very small audience out there that are looking at this as maybe anything more than a fad, kind of like it’s—

Michelle: Yeah.

Brian: I don’t know, it’s just another software thing, this whole machine learning thing. I don’t look at it that way; I know you don’t see it that way. What do design leaders need to be thinking about with all of this?

Michelle: Oh, yeah, for sure. My two cents is that we used to have ML and AI be, like, a thing within the experience. It used to be, like, an AI recommendation is, like, here, or something like that, too, right? Like, your Spotify Discover playlist or something, right? But the rest of it may be curated, or here’s what our editors picked for you, and that kind of thing. Or, like, spring stuff, whatever.

But now AI is becoming more and more the product itself, and so it’s really important to think about it’s not just a one-off, oh, I need to design this one thing, but what happens when the entire interface is potentially changing for different people? And how does that presentation matter? And what are things, too, that maybe you really do want to have a sense of what makes a good overall experience for this user? And I know a lot of the times some designers are like, “Well, I don’t know machine learning; I don’t know ML.” But I don’t think you need to know ML or machine learning in order to design for ML and machine learning.

You don’t need to understand how to build a model, but you need to understand what the model does. You need to understand what the inputs and the outputs are. And personally, I think it’s going to go the way of—I don’t know if you remember the fad, like, uh, ten-plus years ago, where everyone was like, “I’m a mobile designer.” But like now, if you say, “I’m a mobile designer,” people are like, “What is this 2012 or what?” [laugh].

Brian: [laugh].

Michelle: Because mobile is just a part of the design language itself. Now, when you design something, you think about where your users are and potentially make a mobile app and that kind of thing, too, or just go with responsive or something like that. So, I think, we already see a lot of—I actually see a lot of job postings now and then about, “Oh, AI designer,” or, “ML chatbot designer.” Voice user interface designer is probably a big one that I see a lot of, which I think that one is going to be here to stay because that one’s a unique thing about voice, but I actually think that designers who are able to kind of design for ambiguity are going to be the ones that tackle a lot of this AI and ML stuff.

Brian: What’s driving that increase? Why are these—I think most of the [contents 00:19:55] what you’re talking about are software product teams, but why didn’t they need this five years ago? Why didn’t they need a designer on these teams five years ago? What changed that all of a sudden there’s a feeling they need user experience help?

Michelle: Yeah. I mean, I think that it’s because of access. I think that doing ML and data science and AI has become so much easier with a lot of the tools that we already have, especially a lot of the off-the-shelf models, and everything, too. Brian, I don’t know if you’ve seen Hugging Face Spaces; it’s currently what I’m most obsessed with. I’ll ping it to you, but it’s basically, how do you just demo an ML model and understand what it does before committing to putting it on your website, right?

And so, I’m sure you saw the hype with DALL·E, too, that just came out last week that’s OpenAI’s new text—like, you write text and it turns it into an image, and everything, too. But there’s a waitlist and stuff. But they actually made a DALL·E mini where you could just play with it on your browser, and you could just test it out and write something.

I always hear the song, “Sweet dreams are made of bees,” so that’s my go-to, like, [laugh]—

Brian: [laugh].

Michelle: Test out this ML model of what shows up when I type that, that kind of thing, too. So, access is a big one, and I think there are some design patterns that really matter. I think Hugging Face Spaces is a really great example of that. I think things like Colab notebooks, where you’re able to quickly draft up a working doc—it’s basically like a Google Doc, but of code, normally Python code and stuff, too—and quickly share it across your team. You don’t need to be, “Oh, okay, go to my Jupyter Notebook repository,” and all this stuff, too.

I think those kinds of innovations in technology have taken ML and AI from being something one person on the team can do to more like anyone can kind of check in and see how the model is doing and all that stuff, too. And so, I think that’s really important.

Brian: Cool. I meant to ask you about this earlier. In some of the work that you’re doing right now in your full-time gig, talk to me about designing for technical audiences, and what does it mean to have good user experience when you’re talking about designing technical tools? Should they be easy? There’s always this trade-off—

Michelle: Yeah.

Brian: Of flexibility versus… you know, it’s like you don’t always need a purpose-built tool; sometimes you need a platform: a platform for experimentation or a platform do this. “I don’t need a piece of art; I need a Photoshop.” It’s a different kind of design. Talk to me about how do you think about this. How do you approach it when data scientist are the users—

Michelle: Yeah—

Brian: —of the tool?

Michelle: Definitely. So, one of the things that I think about and talk to my teams about is—I know we talked about, you’re not the users, but just because we spend eight hours a day thinking about these tools doesn’t mean our users should spend eight hours a day thinking about this tool, right? And so, how are they going to get to this tool? How are they going to leave this tool—because that’s the other thing; we don’t want them there forever—and so how do we support them in their overall journey from maybe they are testing out something very early stage, maybe they’re already dealing with a model that’s already in production, like, how do we help support them wherever they’re at so this tool becomes a vital part of their work stream? That’s something that is an overall user experience problem, where it’s, “Yeah, I can provide the best user experience on my platform. I can’t change the other platforms, how they come in and how they leave, but I can at least make sure that the connections that I’ve established can actually work for them.” Right?

So, it may be something of integrating into GitHub or being able to quickly visualize it on a Tableau dashboard. I don’t know, I’m just thinking about some common things that I’ve seen before, too. And that still could be a huge critical value to the users because it’s not just about the model, it’s also about how do you interpret what the model is doing? How do you deploy the model? How do you change the model? How do you tune its hyperparameters? That kind of thing, too. Which model do you choose? Oh, that’s a big one. I don’t know—I know for me when I develop ML models, I have, like, image net 7, image net 403, image net 9002. It’s, “Oh, okay. These are my three really good ones because I’ve tried so many other ones, and how do I decide which one do I go with?” So, that kind of thing, too.

Brian: Got it. When you’re testing these solutions with technical users, is it harder to get clear signals about when you’re doing a good job or not, when the audience is technical? Or do you have pretty good, like, formal structu—you know—

Michelle: I mean, yes and no. So, I think sometimes for enterprise users—which tends to be data scientists using data science tools, right—there’s just a lower threshold for what’s okay. And so, [laugh] some people are like, “Oh, yeah. This works. This is fine.”

But this is kind of where I put on my UX research hat and I’m like, “But is this actually the way that you want to do it? Like, walk me through your workflow.” That’s another big one, too about how do you go from I have a model here, and now I’m trying to—or you’re using a model on my system or platform, and what do you do next? Like, why do that? What are the types of things that would make you want to change that model? That kind of thing, too.

Brian: Got it. Got it. I want to, actually, double back on one other thing you had talked about: interpretability. Talk to me a little bit about the role of design around model interpretability and explainability as well? I’ve had multiple guests on the show to talk about this, and I think these are important parts of the design—

Michelle: Yeah.

Brian: —important parts of the user experience? How do you think about that? When do we need to expose the guts? How do we expose the guts? How do we convince somebody—or maybe convincing is not the right word, but how do you approach the design of interpretability and/or explainability?

Michelle: Totally. I think you—probably your other guests, too—have felt the same way: calling things “black box” doesn’t help in terms of what we can actually look into. I think, you, me, probably your audience, we understand that when you make a model, there are things that make this model different than that model, and we’re able to heuristically look at that and understand, “Oh, this was performing better because of X, Y, or Z,” right? Or at least some general heuristics. I think that’s the type of thing that I would love to see surfaced to our users in terms of explainability and interpretability: you don’t need to explain everything.

One of the things that I think about, too, is I don’t know if you’ve seen the TensorFlow of embeddings projections—

Brian: Mm-mm.

Michelle: —online?

Brian: No.

Michelle: Okay. I’ll link it to you, but it’s like, imagine you could plot a principal component analysis of [unintelligible 00:25:55], or something. So, you see a cluster of, you know, ‘royalty,’ ‘queen,’ ‘king,’ royalty kind of things, and you can literally explore it in three-dimensional space, and you can change out the different principal components. No, that would not work for, like, a business manager [laugh] or something like that, too. “So, like, how do I predict my sales?” Like, no, they don’t need to know all that.

But they might want to know, “Hey, another similar type of business to medical applications is health care or optometry, or here’s different types of”—you know, so it’s like, how do you take this n-dimensional space and be, like, “Well, what are some heuristics that I look for?” Right? I look for things that might be similar. Here’s how I present the similar things to the users.

I think those types of design patterns actually get at explainability and interpretability a lot more than being, like, “Here. Now, look inside the hood of the model,” and that kind of thing, too. And so, it’s things that we actually know how to do that we’re doing already when we go about building the model, but we’re not surfacing them to the end-user. Does that make sense?
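The embedding-projector idea Michelle describes can be sketched in a few lines: project word vectors onto their top three principal components so related words (“king,” “queen,” “royalty”) cluster visibly in 3-D. The vectors and words below are invented for illustration; a real projector loads trained embeddings.

```python
# Toy sketch of an embedding projector: PCA via SVD down to 3 components.
# The 50-dimensional "embeddings" here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
royal = rng.normal(0.0, 0.1, size=(3, 50)) + 1.0   # "king", "queen", "royalty"
fruit = rng.normal(0.0, 0.1, size=(3, 50)) - 1.0   # "apple", "pear", "plum"
embeddings = np.vstack([royal, fruit])
words = ["king", "queen", "royalty", "apple", "pear", "plum"]

# PCA: mean-center, take the top 3 right singular vectors, and project.
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:3].T   # (6, 3) points you could plot in 3-D space

for word, point in zip(words, coords):
    print(word, np.round(point, 2))
```

In the projected space, the two made-up word groups land far apart along the first component, which is the kind of at-a-glance clustering the projector surfaces for users.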

Brian: Yeah. Yeah. This is good. MLUX, where are you going with it? What’s next?

Michelle: [laugh]. Yeah.

Brian: What do you want to see—what do you want to see out of this meetup? Is it going to stay a meetup? What’s its vision?

Michelle: Oh, gosh, yeah. When I made it in 2017, I was like, “Yep, it’ll be around for a year or two, and then we’ll just normalize that ML and UX go together, and then I’ll stop running it, and we’ll be fine.” But still, I find that, you know, people are like, “Whoa, you do ML and UX?” I’m like, “Yeah.” What?

So, I guess it’s going to still be a thing until we have more jobs and normalize that ML and UX go together. We’re still doing virtual events, too. I mean, we were doing events in New York and Seattle, so we had already branched out to three different locations—SF Bay Area, New York, Seattle. We’d done events at the World Trade Center with Spotify, and events with the head of data science at Getty Images. But you know, we still want to be accessible to everyone, and by doing things virtually, we found that we’re able to engage the community in South America—we get a ton of people from Brazil and Guatemala joining—and Australia, people in parts of the world we weren’t able to support before. And so we’re trying to mix it up: some events at 9 a.m. Pacific Time, some at 5 p.m. Pacific Time, just trying to be accommodating for people’s schedules. We have events coming up, so I want to get this on your radar. I don’t know when this podcast is going to come out, but the next one is going to be May 12th at 9 a.m. with some folks from the Expedia group. This might be of particular interest to your audience because it’s going to be their head of data science and one of their senior designers on how they actually work together—as a UX designer for data scientists, and as a data scientist working with UX designers, and all this stuff, too.

Brian: Awesome. How do they get that info?

Michelle: Yeah. Oh, my gosh, thank you. Follow us on Twitter, at @mluxmeetup. We’re also on LinkedIn and stuff, too.

I think our Twitter’s probably the best and I’ll just tweet out when stuff is happening. And from there, you can also see we have a newsletter, and so I try to send that out when we have a couple events. And if your audience is ever interested in our past events too, we actually have them all up on our YouTube channel. So, bit.ly/mluxyoutube is where to find all of them, too. And there’s a couple of dozen videos at this point. [laugh].

I’m trying to think—the one that might be of particular interest is actually Salesforce’s data-driven personas. I know I mentioned it at a high level, but it was actually pretty interesting: they took the metrics data from their users to create personas. And so, how does that kind of work? And maybe that’s something, too, where your audience actually has metrics on who’s using it [laugh] in a general sense, and how do they use unsupervised learning to discover, oh wait, this is one type of user group—and maybe, what are some motivations that they might have? So.
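The “data-driven personas” idea can be sketched as a small clustering exercise: group per-user product metrics with k-means so each cluster becomes the seed of a persona. The metrics and users below are invented for illustration, and this plain-numpy k-means stands in for whatever method a real team would use.

```python
# Minimal sketch: cluster synthetic per-user usage metrics into 2 groups
# (k-means from scratch), where each cluster could seed a persona.
import numpy as np

rng = np.random.default_rng(1)
# Columns: sessions/week, avg. session minutes (two made-up usage metrics).
power_users = rng.normal([20.0, 45.0], 2.0, size=(10, 2))
casual_users = rng.normal([2.0, 5.0], 1.0, size=(10, 2))
metrics = np.vstack([power_users, casual_users])

# k-means, k=2: assign each user to the nearest centroid, then move each
# centroid to its cluster's mean, and repeat until it settles.
centroids = metrics[[0, -1]].copy()
for _ in range(20):
    labels = np.argmin(
        np.linalg.norm(metrics[:, None] - centroids[None], axis=2), axis=1
    )
    centroids = np.array([metrics[labels == k].mean(axis=0) for k in range(2)])

print(labels)                    # each user's cluster assignment
print(np.round(centroids, 1))   # cluster centers: starting points for personas
```

Each cluster center summarizes one usage pattern; a researcher would then interview users from each cluster to fill in the motivations behind the numbers.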

Brian: Cool. Michelle, this has been fun. Any—just wanted to give you a chance; any closing thoughts you’d like to share before we call it a day? It’s been great to talk to you, and thanks for your work in this space.

Michelle: Oh, my gosh, thank you for the opportunity. I guess my closing thoughts are, we need more folks who have a background in design, data science, but also anything in between. Like, archaeology, I’ve met—some of my favorite folks doing ML and UX stuff are from sociology or social work and everything, too. And so, if this is of interest to you, too, we need a lot of different voices, a lot of different perspectives, and everything, too. Because AI and machine learning is something that impacts everyone.

And so, if we want it to be designed for everyone, we need everyone to design it. And so, I want to encourage folks who may be listening and from different backgrounds of, like, “I don’t know if I fit in,” and all this stuff. No, keep on doing it. Think about how your domain expertise—we’re all an expert in something—applies to this. And excited to see what y’all do.

Brian: Awesome. I love it. Michelle Carney, founder of MLUX. Thank you for coming on Experiencing Data.

Michelle: Thank you, Brian. Thank you for hosting this. This has been really fun.

Brian: [laugh].

Michelle: Great way to start my Monday morning. So. [laugh].

Brian: [laugh]. Awesome. We’ll take care of—we’ll get the links up to the [shows 00:30:42] and oh, yeah. And you? How do we follow you? Is LinkedIn, Twitter—

Michelle: Oh right.

Brian: Where do people follow Michelle?

Michelle: Yes, @michelleRcarney on Twitter. My April Fool’s joke this year is that I changed my middle name to R because I used to teach data science and I loved the programming language so much.

Brian: [laugh].

Michelle: Yeah. Thank you.

Brian: Leave it with a data science joke. [laugh].

Michelle: Uh-huh. I think—

Brian: Okay. [laugh].

Michelle: Twitter liked it more than my students did. It is—anyways, michelleRcarney. The other Michelle Carney is a famous soccer player, so I’m the R one because I love data science.

Brian: Excellent. Cool. Well, we’ll talk soon, and thank you so much for coming on.

Michelle: Thank you, Brian. And thank you folks for listening.
