070 – Fighting Fire with ML, the AI Incident Database, and Why Design Matters in AI-Driven Software with Sean McGregor


Episode Description

As much as AI has the ability to change the world in very positive ways, it also can be incredibly destructive. Sean McGregor knows this well, as he is currently developing the Partnership on AI’s AI Incident Database, a searchable collection of news articles that covers questionable use, failures, and other incidents that affect people when AI solutions are poorly designed.

On this episode of Experiencing Data, Sean takes us through his notable work on using machine learning in the domain of fire suppression, and how human-centered design is critical to ensuring these decision support solutions are actually used and trusted by users. We also cover the social implications of new decision-making tools leveraging AI, and:

  • Sean's focus on ensuring his models and interfaces were interpretable by users when designing his fire-suppression system and why this was important. (0:51)
  • How Sean built his fire suppression model so that different stakeholders can optimize the system for their unique purposes. (8:44)
  • The social implications of new decision-making tools. (11:17)
  • Tailoring to the needs of 'high-investment' and 'low-investment' people when designing visual analytics. (14:58)
  • The AI Incident Database: Preventing future AI deployment harm by collecting and displaying examples of the unintended and negative consequences of AI. (18:20)
  • How human-centered design could prevent many incidents of harmful AI deployment — and how it could also fall short. (22:13)
  • 'It's worth the time and effort': How taking time to agree on key objectives for a data product with stakeholders can lead to greater adoption. (30:24)

Quotes from Today’s Episode

“As soon as you enter into the decision-making space, you’re really tearing at the social fabric in a way that hasn’t been done before. And that’s where analytics and the systems we’re talking about right now are really critical because that is the middle point that we have to meet in and to find those points of compromise.” - Sean (12:28)

“I think that a lot of times, unfortunately, the assumption [in data science is], ‘Well if you don’t understand it, that’s not my problem. That’s your problem, and you need to learn it.’ But my feeling is, ‘Well, do you want your work to matter or not? Because if no one’s using it, then it effectively doesn’t exist.’” - Brian (17:41)

“[The AI Incident Database is] a collection of largely news articles [about] bad things that have happened from AI [so we can] try and prevent history from repeating itself, and [understand] more of [the] unintended and bad consequences from AI....” - Sean (19:44)

“Human-centered design will prevent a great many of the incidents [of AI deployment harm] that have and are being ingested in the database. It’s not a hundred percent thing. Even in human-centered design, there’s going to be an absence of imagination, or at least an inadequacy of imagination for how these things go wrong because intelligent systems — as they are currently constituted — are just tremendously bad at the open-world, open-set problem.” - Sean (22:21)

“It’s worth the time and effort to work with the people that are going to be the proponents of the system in the organization — the ones that assure adoption — to kind of move them through the wireframes and examples and things that at the end of the engineering effort you believe are going to be possible. … Sometimes you have to know the nature of the data and what inferences can be delivered on the basis of it, but really not jumping into the principal engineering effort until you adopt and agree to what the target is. [This] is incredibly important and very often overlooked.” - Sean (31:36)

“The things that we’re working on in these technological spaces are incredibly impactful, and you are incredibly powerful in the way that you’re influencing the world in a way that has never, on an individual basis, been so true. And please take that responsibility seriously and make the world a better place through your efforts in the development of these systems. This is right at the crucible for that whole process.” - Sean (33:09)

Links Referenced

Twitter: https://twitter.com/seanmcgregor

Transcript

Brian: Welcome back to Experiencing Data. This is Brian, and today I have Sean McGregor on the line, who has done a lot of work with AI and machine learning. We’re going to be talking about firefighting, or fire prevention, using machine learning, and a project that he’s been leading called the AI Incident Database. So, first of all, welcome, Sean, to Experiencing Data.

Sean: Oh, thank you. It’s great being here.

Brian: Yeah, yeah. So, you’ve done a lot of impressive work. You’re a tech lead for the IBM Watson AI XPRIZE, and you have this other project; pet project is probably not the right name, I think it’s pretty important, actually: the AI Incident Database. So, why don’t you give people a little bit of background? We’ll go into the database in a second, but you have this kind of interesting background that includes user experience work as well as machine learning and AI. Tell me, how did you get started, and how did those two worlds come together for you? And why do you think design matters when we build machine-learning solutions?

Sean: Sure, it’s good to start with the fire suppression case here. About 11 years ago, I started a project that was trying to make recommendations for the fire suppression world. Basically, should you suppress the fire? Should you let it burn? As you can tell, this is not in homes; it’s actually in forests. You always suppress fires if they’re in homes, but in forests, there are many instances where you want to let those fires burn because the ecosystem services depend on it.

There’s a lot of, actually, value even in timber and forestry, where having periodic low-intensity fires through an area is actually quite good for it. The problem that I encountered when working to optimize the system, to decide when you should suppress the fire and when you shouldn’t and make a recommendation to a firefighter, is both on the side of the person receiving that recommendation—they should have some level of skepticism for the thing that you’re telling them to do, particularly if you’re telling them to do something that isn’t intuitive for them, like a firefighter wants to suppress fires; that’s their job; they regard it as an almost holy war-like struggle against the flames and the waste of the burning—and on my end, as someone that really was going to go into the ideas market with something that is very black-box in nature, where you don’t really know why it’s telling you to suppress or not suppress a fire. I really did not ever want to have that really, kind of, visceral example of failure: that I gave a bad recommendation to someone and they took it, and now there’s a forest that is just a charred remnant of itself because it had a high-intensity stand-clearing fire that it’s very difficult for the ecosystem to even recover from.

So, it was very early, as a result of having a sense of the brittleness of machine-learned models and how they are or are not serving the world, that I decided I didn’t want to just solve the optimization problem of, you know, “According to my simulations, this says”—my system, which is on the basis of a high-performance compute cluster running the equivalent of thousands of years of experiments—I didn’t want to just say that the recommendation is, you let that fire burn. I wanted to have something that is more of an interpretable machine-learning system, and actually, in some ways, go beyond the current trend of explainable machine learning—which is the model explaining its decision to you—and do a lot more of a sense-making system where you have the ability to move through the decision space and understand the basis of the recommendations of the system in terms of the rewards that it’s optimizing, the system dynamics of how fire is spreading over the landscape, and really tie this all together in one visual analytic environment that tells you what, why, and how it arrived at that recommendation.

Brian: Got it. So, who would be the person sitting down to use this? Where would they be sitting down to use it? Are they in a hurry? Is this like, the fire’s last weeks and so this is like, I check it every morning at nine and I get kind of a status, and the model might be changing its recommendations based on how much wind blew the fire east? And help us visualize what it’s like to use it and maybe some of the skepticism; where does that sink in, and how does your software try to balance those human interactions?

Sean: So, I found in the production of the system that—at least the people I was talking to, and there’s going to be a variety of users out there that have their own suspicions or eagerness to adopt these systems—a lot of people I was talking to wanted really cool AI techniques and solutions to bring to their problem. And the ones that were most eager to do that, and probably the ones for whom it was most appropriate to adopt these tools, were what’s called a land manager: someone who might have thousands or even millions of acres that they’re responsible for, and who needs to, at least in the case of public lands, come up with a land management plan. That’s basically, “This is the rulebook of what we’re going to do with this land,” and it includes elements of timber cuts, where you can take things off the land, and it includes elements of, this is our plan in response to wildfire when it happens. There’s a lot of additional intensity behind that plan because it’s also, in the case of public lands, very subject to litigation, people fighting out whether it’s appropriately balancing all the varied interests and joint uses of a lot of public lands.

And so kind of the thing that our user and the system are most concerned with is how do you write that document, and then how do you justify the recommendations made within that document? So, in terms of building a user interface and making it so that someone can use it for that job, the important thing is, first, how do you affect the mental state of the land manager so they understand it internally and are able to figure out what is best on the basis of your tools and their expert knowledge, being someone that really understands the land on an intuitive basis; how do you inform that person? And then, how do you produce things for that person that then allow them to justify their own sense-making process, which they execute on the basis of your tools, and other data sets, and other things that are out there? So, you know, the one thing they could put in there is just, “This is a recommendation in response to this type of fire meeting these conditions.” But the thing that this work additionally enables is talking about the distributions of futures that are produced by those decisions made in the present.

So, one chart I really like is a fan chart where, as you go from now in the present into the future, the quantiles of the distribution of outcomes actually get wider and wider, and you can see and compare these fan charts as you generate a large number of samples of futures, see how that shifts, and appropriately present the uncertainty that is present within the system. So, these fan charts were just one example of something you could drop into a forest plan, a land management plan, and use and compare between to justify what your management actions are.
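
For readers who want to picture the mechanics, here is a minimal sketch of how such a fan chart can be built: generate many simulated futures, then plot widening quantile bands over time. The simulator below is a random-walk placeholder, not Sean’s landscape fire model.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder simulator: each rollout is one possible future of some outcome
# metric (e.g., standing timber value). Sean's system uses a landscape fire
# simulator on a compute cluster; this random walk is only a stand-in.
rng = np.random.default_rng(0)
n_rollouts, n_years = 5000, 100
futures = np.cumsum(rng.normal(0.5, 2.0, size=(n_rollouts, n_years)), axis=1)

# The quantiles widen as uncertainty compounds over time -- the "fan."
years = np.arange(1, n_years + 1)
q = np.quantile(futures, [0.05, 0.25, 0.5, 0.75, 0.95], axis=0)

plt.fill_between(years, q[0], q[4], alpha=0.2, label="5th-95th percentile of futures")
plt.fill_between(years, q[1], q[3], alpha=0.4, label="25th-75th percentile of futures")
plt.plot(years, q[2], label="median future")
plt.xlabel("Years from now")
plt.ylabel("Simulated outcome")
plt.legend()
plt.show()
```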

Brian: Got it. So, it sounds like the technical work you’re doing was actually being used to write the rulebook that will then be used at the time there is an incident. Is that more the workflow?

Sean: Yeah. Like most things with AI, it’s very inappropriate to just throw it all onto the intelligent system and say it knows better than we do. It’s always the work of humans; the human in the loop, or a human-centered view of things where you’re trying to brief that person, is the proper way to view this activity and to view, frankly, most activities where a computer is giving a decision.

Brian: What was the enemy in your process in terms of the thing that I need to make sure that this user experience or the design of a system like this needs to overcome to get people to believe it? Could you name the enemy? And I don’t mean a person, but I mean, the biggest challenge you had in terms of the human adoption of a system like this; what was that? Was there one singular thing that, like, the trick is to get them to do X or to believe X, or to—what was that?

Sean: Here, there’s 7 billion enemies, and uh—

Brian: [laugh].

Sean: —it is people. The problem in forestry specifically is that a lot of the management actions you would take are very subject to the values and interests of all the people that are involved in it. So, one thing we’re finding in optimizations is, depending on what you put into the—what’s called the reward function or basically the thing that’s being optimized, depending on what you put in there, the actions you would take in response to the fire just vary drastically. There’s a very narrow window where there’s much nuance; at least in the study area that I work in, there’s very little nuance to your decisions. If you value not breathing in smoke, it almost always makes sense to suppress the fire because you’re going to have fewer smoky days in expectation over the course of 100 years.

If you value ecological services, if you value having diverse species on the landscape, then you want a lot of low-intensity fires very frequently and a lot of burning; a lot of days of smoke. And the timber industry has its own interests, and this is true across the board: as you keep on throwing things into the basket, that enemy, or that surface of how you reconcile all these things, gets a lot more complicated. And this is where the system that I built was really meant to allow people to change the values ascribed to each of these different elements—so smoke inhalation, ecosystem services, timber—be able to change those values and reoptimize subject to that, and look at what you should do in that case. And it really, to me, helped me understand the enemy of the other interests, of what other people are wanting to do, which hopefully then allows for finding what the middle point and the negotiation space are, and understanding that a lot better.
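
A minimal sketch of the reweight-and-reoptimize idea Sean describes: each interest (smoke, ecosystem services, timber) becomes a weighted term in the reward function, and changing the weights changes which management policy comes out on top. The policies, outcome numbers, and weights below are illustrative placeholders, not outputs of Sean’s system.

```python
from dataclasses import dataclass

# Illustrative per-policy outcomes, e.g., averaged over many simulated futures.
# The names and numbers are placeholders, not outputs of Sean's model.
@dataclass
class Outcomes:
    smoky_days: float        # expected smoky days per century (lower is better)
    ecosystem_health: float  # benefit of diverse species / low-intensity burns
    timber_value: float      # harvestable timber value

POLICIES = {
    "suppress_all_fires": Outcomes(smoky_days=40, ecosystem_health=0.2, timber_value=0.9),
    "let_low_intensity_fires_burn": Outcomes(smoky_days=220, ecosystem_health=0.9, timber_value=0.7),
    "suppress_near_communities_only": Outcomes(smoky_days=120, ecosystem_health=0.6, timber_value=0.8),
}

def reward(o: Outcomes, w_smoke: float, w_eco: float, w_timber: float) -> float:
    """Weighted reward: stakeholders set the weights, then we reoptimize."""
    return -w_smoke * o.smoky_days + w_eco * o.ecosystem_health + w_timber * o.timber_value

def best_policy(w_smoke: float, w_eco: float, w_timber: float) -> str:
    return max(POLICIES, key=lambda name: reward(POLICIES[name], w_smoke, w_eco, w_timber))

# Stakeholders who weight clean air heavily vs. those who weight ecology heavily
# arrive at different recommendations from the same underlying outcomes.
print(best_policy(w_smoke=1.0, w_eco=10.0, w_timber=1.0))   # -> suppress_all_fires
print(best_policy(w_smoke=0.0, w_eco=100.0, w_timber=1.0))  # -> let_low_intensity_fires_burn
```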

Brian: I would imagine that with something like that, even when you give someone a toggle to say, “Well, how much smoke inhalation will you accept: high, medium, low, or zero?” the hard part is getting people—especially if there’s a group of stakeholders—to decide that, “Okay, we’re going to say that we don’t care about smoke inhalation.” It’s easy to do the model part, but isn’t it difficult to get a team to say, “Ecological diversity, yeah, well, we’re going to take a hit on that. That’s okay”? You give them this lever, but the ability to set it with confidence, and then to own up and say, “We didn’t suppress the fire, and here’s what we”—[laugh] and someone raising their hand in the meeting afterwards saying, “What the eff were you thinking? We have the Eastern Lizard Toad, you know—

Sean: [laugh].

Brian: —[laugh]—“Didn’t you know?” Is that right, or—

Sean: We must all think of the Eastern Lizard Toad. And—

Brian: Right. [laugh].

Sean: —I think this is the challenge in computing, and we are now running into the challenges of democracy. And that’s something that people in computing have not ever had to deal with before; people in engineering, by and large, have not had to deal with this in a lot of ways. As soon as you enter into the decision-making space, you’re really tearing at the social fabric in a way that hasn’t been done before. And that’s where analytics and the systems we’re talking about right now are really critical because that is the middle point that we have to meet in and to find those points of compromise. That’s not easy; it’s not going to be comfortable, and it’s drawing into much sharper focus all those conflicts of the past, where we didn’t have numerical backing for arguments and numerical realizations that these are disparate interests.

Brian: It’s something I talk about a lot in my training: this act of decision-modeling and trying to do the last mile as the first mile of the project. So, it’s deciding, like, how would we choose to weigh smoke inhalation versus ecological diversity, and having discussions with the people that are going to use this before we ever build that solution, to see if we can find out—“Hey, there’s already legal specifications on what we would allow,” or, “No, we’ve never even thought about having to make a collective decision about how much diversity we accept in a quantifiable way. Maybe we should go spend some time on that before we actually put this button here, because no one’s going to know how to set it and no one’s going to want to be responsible to say, I accepted this and took action on this because of that.” And to me, this is classic design work: it’s understanding the new problem space that the technology actually creates for us.

Sean: Yeah. Yeah, and it’s something that’s deeply uncomfortable to pretty much everyone, but it’s so powerful in its ability to just realize the surplus of more intelligent, more thoughtful decision-making. And this is where we need to bring a lot more people into technology, and frankly, have technology go to a lot more people so that we can actually produce that middle ground. Right now, we’re still having debates over who’s responsible for developing that sense for computational thinking or the ability to understand technology in a deeper way than clicking through a GUI. And I think the most transformative thing that we can have moving into the future, beyond the technologies or what comes next, is education and making it possible to understand this.

Brian: Yeah, yeah. Last question on this. Was there one thing you learned and maybe changed about the user experience or the interface design when you were working here that, like, wow, I never would have known that? Or [laugh] when I showed this to people, they freaked out or they totally misinterpreted something. Was there any moment like that?

Sean: The really challenging point, at least in visual analytic design—for me at least—is when you have high-investment versus low-investment people. So, like a land manager: if they’re basing all their workflow around this, you can actually put in a lot of visual renderings that are less intuitive but are more informative to a person that understands it. If you’re talking about the general public, though, someone that’s going to do a drive-by on a New York Times article or something like that which explores the properties, you need it to be based on a rendering that they’ve seen before, where they understand the dimensions, the way that your X and Y are getting carved up, and how color is being used in everything. And the hard part is really figuring out what in your toolbox you can use for your user, because if you are on the low-investment side of things, or even if you’re on the higher-investment side and you’re just trying to convince them to use it, if you go straight to that end state, that end visualization, they’re probably going to reject it or not want to use it. People have come to expect that there is no manual you read; you need to be able to just jump in and start getting the benefit from the technology immediately. Which I don’t think is wholly unreasonable. [laugh].

Brian: Yeah, yeah. Was there something that you ended up changing? Maybe a first cut or some iteration that you learn something through showing it to a land manager and, like, “Okay, this needs to be redone or rethought?”

Sean: Yeah. There was a lot of what I think was clever design in the use of parallel coordinates that I quite liked, and I got a really strong sense internally for what it was I was seeing, adding additional context within the parallel coordinates rather than just having a series of vertical bars with lines connecting them. I was finding that most people were not seeing what I was seeing; most people weren’t understanding that. It was something that was powerful on the high-investment side of things: if you got someone to do a workshop and worked with them for a few hours, then they would be solid with it. But in the end, most of the time, they’re not going to do that, and you can’t base your system design around a workshop existing and the delivery of it. So… I really had to kill the feature. There wasn’t much of a place for it.

Brian: I don’t think everyone out there, and especially in data science necessarily, thinks that. I think that a lot of times, unfortunately, the assumption is, it’s like, “Well if you don’t understand it, that’s not my problem. That’s your problem, and you need to learn it.” But my feeling is, “Well, do you want your work to matter or not?” Because if no one’s using it, then it effectively doesn’t exist. [laugh].

Sean: Yeah. And it’s not about making the most beautiful, best presentation of things. Yeah. It’s, you’re bringing something to market; you’re looking to change the world, and for that, you’re dependent, unfortunately, on people that just don’t get your art. [laugh].

Brian: Yeah, yeah. I understand. Let’s jump over to the AI Incident Database. So, first of all, why don’t you tell people what it is?

Sean: Sure. So, the AI Incident Database is really inspired by the experience of the aviation industry, where over the course of the last century, we’ve been recording all the cases of accidents and incidents where planes have crashed or fires have ignited, just bad things happening with planes. And a big reason that aviation has such a safety factor, as compared to its historical [laugh] unsafety, is this recording of incidents, and then the design and engineering of human and mechanical systems that make aviation safer. You have to measure before you can improve; you have to record before you can learn from history. And where this is an important lesson for artificial intelligence is that we, up until the AI Incident Database existed, didn’t really have a systematized collection of all the AI harms that exist in the world, in a way that you could visit that collection, find things related to the products you’re producing, find things related to the application domain that you’re applying them to, and then learn from those past incidents of harm.

So, it’s a collection of largely news articles, people writing about the most sensationalistic and bad things that have happened from AI, put into a full-text incident search database where you can search for facial recognition and find all the incidents of things that have happened with facial recognition, and predictive policing, and public security cameras. There’s a ton of examples in this that really leverage the power of example to try and prevent history from repeating itself, and to realize more and more of the unintended and bad consequences from AI, as opposed to all the good things that we can do in the production of AI systems.
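
The kind of full-text incident search Sean describes can be sketched in a few lines. This example uses SQLite’s FTS5 module over made-up records; it is illustrative only and is not the AI Incident Database’s actual implementation or API.

```python
import sqlite3

# Toy incident records -- illustrative stand-ins, not entries from the real database.
INCIDENTS = [
    ("1", "Facial recognition billboard shames a woman whose photo was on a bus ad"),
    ("2", "Predictive policing system concentrates patrols in one neighborhood"),
    ("3", "Public security camera system misidentifies pedestrians at night"),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE incidents USING fts5(incident_id, description)")
conn.executemany("INSERT INTO incidents VALUES (?, ?)", INCIDENTS)

# Full-text query: find past incidents relevant to the product you're building.
query = "facial recognition"
for incident_id, description in conn.execute(
        "SELECT incident_id, description FROM incidents "
        "WHERE incidents MATCH ? ORDER BY rank", (query,)):
    print(incident_id, description)
```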

Brian: Mm-hm. Are there any particular incidents recently that are really standing out to you as learning opportunities?

Sean: Sure. So, one of my favorite examples, because it brings in elements of technology, unexpected system behavior, and even culture, is one where a woman in China was publicly shamed on a billboard for jaywalking, because they use a facial recognition system in China to detect a person moving across the intersection, and then they shame people to try and prevent them from jaywalking in the future. So, on the element of culture, we—[laugh] I can’t imagine that happening in the United States. That’s a… very different set of cultural norms. But we can actually learn from this application deployment that if and when we do bring more computer vision systems into the real world, the thing that you need to recognize is that this particular woman in China wasn’t actually in that intersection; she didn’t move through it. A picture of her was on the side of a bus, and the bus went through the intersection, and the cameras detected her picture on the side of the bus and then shamed her picture on the billboard.

So, we can learn from this that there are images of people moving through the world, and if you have, in particular, a safety-critical system that is looking to detect people, you should probably adequately test what happens if you have, like, a cardboard-cutout kind of person going through there that lacks any kind of third dimension, and you can engineer safer systems and learn from that. That’s one of my favorite examples, at least. There are a great many others in the database as well.

Brian: Do you think design and designers—human-centered design—is an antidote to some of these problems? Or do we need something else to get your database to zero?

Sean: Human-centered design will prevent a great many of the incidents that have and are being ingested in the database. It’s not a hundred percent thing. Even in human-centered design, there’s going to be an absence of imagination, or at least an inadequacy of imagination for how these things go wrong because intelligent systems as they are currently constituted are just tremendously bad at the open-world, open-set problem. New experiences, if they’ve not been experienced in the course of training, are going to—we just don’t know what they’ll stumble to; it’s going to behave in very strange and unpredictable ways. And so, human-centered design helps prevent a large number of things but a lot of what we need to do is just close the gap between the engineering of systems in industry and the deployment of those systems, and what it looks like in that generalization of the model. If you have any gap between those two, you’re always in a space where imagination probably isn’t going to keep up.

Brian: What’s the gap exactly?

Sean: The open world is complicated because you’re talking about a system, like a computer vision system, that you’re going to have some set of experiences that you bring into the training process in the production of that model, and you’re not going to be able to bring that model to every country in the world to look at how it responds to different physical infrastructure; you’re not going to be able to bring it everywhere in the world to test it against different audio environments, different accents, you’re never going to have a complete sampling for everything that can exist. And the intersection of all things that can exist and your model is quite large. It’s just, even if you’re a very imaginative person, you could think of a lot of strange behaviors that could exist. It’s just, the space is too large. You’re not going to be able to comprehend all of them.

Brian: Yeah. I mean, I would completely agree with all of that. I guess, what I wonder about is whether or not some teams are not even asking the most basic questions because they may not even have the right people in the room, and by right people I mean if they have a lot of the same kind of people, they’re asking a lot of the same kind of questions, and they’re not even knowing to say, “Oh, someone that doesn’t speak English,” or some context that you can’t get your head out of because you only speak English like everyone else does, or someone that doesn’t have a right arm, or someone that—name your thing.

Sean: To bring it back to the fire example: if you don’t have the people in the room that are breathing the smoke or the people that are going on hikes there, you don’t have all those constituencies present, and so you’re not going to be able to express that. This is a huge problem in AI right now. There are a lot of different sub-communities that are looking to solve different parts of this. There are groups coming up and developing processes around the umbrella term of responsible AI, there are audit processes that are likely to come online, and there’s regulation that is being discussed. Europe has been pushing very quickly on this, and they’re just trying to figure out how to address it.

But it’s a huge problem. You’re never going to have everyone in the room, but it’s always better to have more than one perspective in the room, but you will not get a hundred percent of the way there by saying we’re going to have a more representative development team; you’ll probably go from ten percent to seventy percent by having at least a few different viewpoints and communities represented on your engineering team, but a hundred percent requires a hundred percent of viewpoints. [laugh].

Brian: Yeah, yeah. With this AI incident situation, do you think we’re moving so fast that the incidents are going to outpace the corrective work that we’re doing, or are regulation and teams, in general, getting better at this, so that we’re going to slow down a little bit and put better-quality stuff out first? Who’s winning that race?

Sean: Right now, I would say that technology is moving faster than our ability to understand and adapt to it. And I don’t mean that just in the failure sense; I mean that in the social sense, in that social systems take time to develop; they move much more slowly. The social systems are a consensus of people and the way that they develop informally and formally. And every time we’ve had a great technological revolution, there is an interim period where we have to invent the new social norms, and in some cases social systems—formal systems—to work through those. And a good example of this would probably be the Agricultural or the Industrial Revolution.

All these things produced a huge number of negative consequences, but that was also on the way to solving things like infant mortality rates, which are far lower across the board than they were a century or more ago. So, we have so many benefits realized from technology, but we also have to develop things like the American Medical Association as an organization that decides who is a good and proper doctor, so you don’t have a person practicing frontier medicine who pulls up with a tent and decides to arbitrarily invent a new surgical procedure because they think it might work and do something.

Brian: [laugh]. Yeah. If you were going to advise a team, or a team wants to get better at this—maybe they don’t want to be in the AI Incident Database, or there’s just a better business value in doing it, quote, ‘the right way,’ is the answer that engineering and data science teams need to develop some other skills to do this work better, or we’re actually missing some key skills that need to be at the table that aren’t there? Like, the team is incomplete or the team just doesn’t have all the skills and we need to be learning something else? What would be your way of approaching that?

Sean: I’m trying to formulate an answer to this that doesn’t involve a multi-million-dollar assurance budget, which is not really a possibility for the vast majority of projects. In some cases, it’s a functional requirement baked in; autonomous driving is a good example of something where you can spend millions of dollars on that, and people say it’s a readily justifiable business expense to do so. A lot of what’s happening with the Incident Database is helping empower the person in the project that’s saying, “Maybe we should hire at least one person to look at the bias problem,” or something of that nature, because otherwise we’ll end up with an embarrassing news article written about it, and here are all the news articles previously written about it before. In terms of what the maximum-impact, lowest-effort thing is here, probably the best thing you can do—as a starting point—is having people representative of the deployment scenario for your models involved in the engineering process in some form.

I engineer systems, I train deep learning neural network models as a lot of my efforts at this point, and I’m so thankful that when we deploy models to India for instance, that we have people on our development team from India. But if I’m deploying a model to Madagascar, I have no idea what people’s accents for Madagascar sound like; I don’t even know if we can get samples out of there. So, it’s immensely powerful to have the population represented in your team, as a starting point. There’s a lot more you can do beyond that, though.

Brian: Any other final thoughts just about one of the dirty secrets, I think, with data products, you know, analytic solutions, and data science right now is that just a whole lot of the work is not getting used. It doesn’t ever get out to the people it’s for, it doesn’t create value. We’re building things but we’re not creating outcomes from the things. Any other suggestions from your experience on how teams can drive adoption?

Sean: I think having a complete briefing of what it is you plan on delivering at the end of an engagement and what you are able to do, and walking them through their user story in your mind, is probably one of the most powerful things you can do. A lot of times, leadership at a company will have tens or hundreds of projects under their purview, and they’ve read business case studies of other things and said, “Okay, there’s something going on here. There’s something of substance and we should have a team working on it.” And there’s a lot of distance in that, like, “All right, we’re going to throw a budget at this, this person is going to lead the effort, and they’re going to report back a product, and then they’re going to move on to the next thing.” There’s a lot of distance between that and impact on an organization and its problems.

And it’s worth the time and effort to work with the people that are going to be the proponents of the system in the organization—the ones that assure adoption—to kind of move them through the wireframes and examples and things that, at the end of the engineering effort, you believe are going to be possible. You don’t always know that, because sometimes you have to know the nature of the data and what inferences can be delivered on the basis of it, but really not jumping into the principal engineering effort until you adopt and agree to what the target is: that is incredibly important and very often overlooked.

Brian: I fully agree with that. And it’s good advice and it is not happening a lot, so I think, again, exercising that last mile experience as much as we can before we’ve committed too much is always a good way to go. So, any final thoughts you wanted to share with us, and I wanted to ask, too, how to get in touch with you but wanted to give you the last word first.

Sean: I think I should—I would be remiss to not thank and acknowledge the collaborators on things, and in particular, working with the Partnership on AI on the AI Incident Database, which is a collection of big tech companies and civil society organizations working to make sure AI benefits people in society. And I’ve been very thankful for the support and collaboration of the organization. And then I have a number of collaborators to mention in the fire work as well. I hope people will check out the research papers associated with it, and the set of collaborators there.

And just, I guess in wrap-up, say that the things that we’re working on in these technological spaces are incredibly impactful, and you are incredibly powerful in the way that you’re influencing the world in a way that has never, on an individual basis, been so true. And please take that responsibility seriously and make the world a better place through your efforts in the development of these systems. This is right at the crucible for that whole process.

Brian: Excellent. Sean, thanks. Where can people stay in touch with you and where should they go to find out more about your work?

Sean: My vanity website is probably a good place; it’s the nexus of all the stuff that I personally work on. So, if you search ‘Sean McGregor’ and then anything having to do with this conversation today, I think my SEO is high enough to pop up. The website is seanbmcgregor.com. If you want, you can also tweet at me or message me on there at @seanmcgregor.

Brian: Awesome. I will definitely put those up in the [show notes 00:34:04] and link to the AI Incident Database as well. So Sean, thanks again for coming on Experiencing Data, chatting with me.

Sean: Thank you, and thank you for having me.

Brian: Yeah. It’s my pleasure.
