Cennydd Bowles is a London-based digital product designer and futurist, with almost two decades of consulting experience working with some of the largest and most influential brands in the world. Cennydd has earned a reputation as a trusted guide, helping companies navigate complex issues related to design, technology, and ethics. He’s also the author of Future Ethics, a book which outlines key ethical principles and methods for constructing a fairer future.
In this episode, Cennydd and I explore the role that ethics plays in design and innovation, and why so many companies today—in Silicon Valley and beyond—are failing to recognize the human element of their technological pursuits. Cennydd offers his unique perspective, along with some practical tips that technologists can use to design with greater mindfulness and consideration for others.
In our chat, we covered topics from Cennydd’s book and expertise including:
- Why there is growing resentment towards the tech industry and the reason all companies and innovators need to pay attention to ethics
- The importance of framing so that teams look beyond the creation of an “ethical product / solution” and out towards a better society and future
- The role that diversity plays in ethics and the reason why homogenous teams working in isolation can be dangerous for an organization and society
- Cennydd’s “front-page test,” “designated dissenter,” and other actionable ethics tips that innovators and data product teams can apply starting today
- Navigating the gray areas of ethics and how large companies handle them
- The unfortunate consequences that arise when data product teams are complacent
- The fallacy that data is neutral—and why there is no such thing as “raw” data
- Why stakeholders must take part in ethics conversations
Resources and Links:
Future Ethics (book)
Quotes from Today’s Episode
“There ought to be a clearer relationship between innovation and its social impacts.” — Cennydd
“I wouldn't be doing this if I didn't think there was a strong upside to technology, or that it could advance the species.” — Cennydd
“I think as our power has grown, we have failed to use that power responsibly, and so it's absolutely fair that we be held to account for those mistakes.” — Cennydd
“I like to assume most creators and data people are trying to do good work. They're not trying to do ethically wrong things. They just lack the experience or tools and methods to design with intent.” — Brian
“Ethics is about discussion and it's about decisions; it's not about abstract theory.” — Cennydd
“I have seen many times diversity act as an ethical early warning system [when] people firmly believe the solution they're about to put out into the world is, if not flawless, pretty damn close.” — Cennydd
“The ethical questions around the misapplication or the abuse of data are strong and prominent, and actually have achieved maybe even more recognition than other forms of harm that I talk about.” — Cennydd
“There aren't a whole lot of ethical issues that are black and white.” — Cennydd
“When you never talk to a customer or user, it's really easy to make choices that can screw them at the benefit of increasing some KPI or business metric.” — Brian
“I think there's really talented people in the data space who actually understand bias really well, but when they think about bias, they're thinking more about ‘how is it going to skew the insight from the data?’ Not the human impact.” — Brian
“I think every business has almost a moral duty to take their consequences seriously.” — Cennydd
Brian: Hello everyone, welcome back to Experiencing Data. This is Brian T. O'Neill and today we're going to talk about ethics with Cennydd Bowles. Cennydd what's happening?
Cennydd: Hey, how’s it going?
Brian: It's good, hey. Now did I get your name correct?
Cennydd: You did, yeah it's basically the Welsh version of Kenneth more or less.
Brian: Excellent. So when people see this episode, they're going to see the name C-E-N-N-Y-D-D. It's a good Welsh name, but it's pronounced like Kenneth, and I wanted to get that right, so I'm glad I did. So give people a quick overview of what you do today with your work.
Cennydd: Yeah, sure. So I'm a designer by background, but for the last maybe five years or so I've been focusing on the ethics of design, the ethics of emerging technology, and generally the ethics of innovation, I suppose. So I wrote a book about that called Future Ethics, which came out round about a year ago, and since then I've been consulting and speaking and writing and workshopping, mostly around that topic. I still do a bit of hands-on design, particularly in the privacy space, and also a bit of futures and speculative design work as well, so that plays into that space quite nicely too.
Brian: Tell me about innovation ethics.
Cennydd: Essentially the reason I'm choosing that framing is because there ought to be a clearer relationship between innovation and its social impacts. We talk about ethical design or ethical technology, but the focus there is often on the design and the technology whereas I think innovation is possibly a bit of an easier sell, that innovation is something that changes the course of the world, in some cases. So it's really just a framing that I think is advantageous for certain audiences, essentially.
Brian: Got it. So about your book, you have this book called Future Ethics. I'm still reading it and it's great, there's so much depth in it which I really appreciated. I would say, if this is a fair thing, you sound a little bit ticked at the tech industry. Is that a fair read of what you think, you're a little bit disappointed in where things are at, are they getting better? Is that a misread on my part?
Cennydd: Yeah, I would say disappointed is the right framing.
Cennydd: There are some in the tech ethics sector who are angrier than I am. But I still love this field and I still love the potential impact technology can have. I wouldn't be doing this if I didn't think there was a strong upside to technology, or that it could advance the species, I suppose. Disappointed certainly, maybe a bit disillusioned as well. Because I've worked in Silicon Valley and tech firms and startups and various things, people maybe sometimes assume that there was a moment when I was just overcome by revulsion for what I saw, and I really wasn't. There wasn't any one thing; it was more an accumulation of decisions that I just felt like I couldn't support. And then obviously over maybe the last three, four, five years there's been a significant change, particularly in the press perspective on technology but also in the public attitude towards the tech industry, both of which have gotten pretty negative. And rightly so, to be honest. I think as our power has grown, we have failed to use that power responsibly, and so it's absolutely fair that we be held accountable for those mistakes.
Brian: So on this show, my assumption is that the audience listening to Experiencing Data splits a little bit. Because it's targeted at people working in analytics and data science and technical product management, there are probably a good number of people here who may not identify as being part of the tech industry because, say, they might be working at an insurance company, doing analytics and data work in a non-digital [environment] where the product is not necessarily technology. Do you feel like your disillusionment extends to that crowd as well, the ones mostly doing internal B2B applications of data and data products, or is it all-encompassing?
Cennydd: Yeah, yeah. They don't get to wriggle off the hook, I'm afraid, as far as I'm concerned. A lot of the ethical harms that we have seen, that may not be part of the tech industry but have at least involved technological decisions and data decisions and algorithmic or pseudo-algorithmic decisions, have still come from that sector, and they've potentially, in fact some cases had some of the largest impacts. The ethical questions around the misapplication or the abuse of data are strong and prominent, and actually have achieved maybe even more recognition than other forms of harm that I talk about. So yeah, I would say my message hopefully is relevant to folks who even may not identify individually as a technologist.
Brian: No, I understand. I've worked with a lot of startups as well, and we have some similar background here, and I wonder if part of this is also the prevalence of younger people. At least in my experience, a lot of the startups are pretty homogenous looking: younger, white, male-dominated, and age-wise not a lot of experience, which means you don't necessarily understand the deeper ramifications of some of the choices being made. And the VC backing often applies this super-aggressive, “show revenue as fast as possible, at all costs, and if you're going to take risks, now is the time, just plow forward” kind of pressure, and I felt like your book is trying to slow that down a bit and put it in perspective. And this goes for regular design too. When we talk about human-centered design for individuals, it's so easy to make decisions when you never talk to a customer or user; it's really easy to make choices that can screw them at the benefit of increasing some KPI or business metric. Do you think that homogeneity of the people working in tech over the last 10 or 20 years (I'm sure it's probably diversifying now, I don't have any data about that) has something to do with where we are?
Cennydd: Yeah, okay, there's a whole lot to unpack here. With respect to something you said just toward the end of that section, I'd like to take that a little bit further even, and say that in user-centered design, you can definitely make mistakes where harm befalls the user if you don't consider them properly. But even that's not enough, because if we focus just on the user, then there still may be all sorts of damage that happens to people who aren't users. This is something economists call externalities: costs or harms that fall on people who aren't part of the system. Passive smoking is the classic example. And we've seen a lot of user-centered products, things like Airbnb, which are great for the users, in this case the renter and the lessor of the premises, while all the harms and all the costs fall on the local communities rather than on those people. But as to whether it's to do with homogeneity, it certainly doesn't help. You're absolutely right, of course: the tech industry still skews white, it still skews male, it still skews young. At least in the US and Western Europe that's the case, and as far as I'm concerned, diversity, although it's a political issue to an extent, I make no apologies for saying it matters for ethics, because I have seen many times diversity act as an ethical early warning system. In the sense that you have people who firmly believe the solution they're about to put out into the world is, if not flawless, pretty damn close. And lots of times I've seen, in critique sessions or whatever it might be, someone who's dissimilar to that group in some way say, "Well hang on, you do realize, right, that folks from my communities or the people that I'm closest to, we would never use this," or, "We would use it in a completely different way," or even worse, "This system could be used against us, by harassers, by abusers, by the state, by whoever."
And the number of times I've seen that happen, and the rest of the folks in the room go, "Oh, actually we had no idea...." And we've still got so far to go on diversity, and I'm hopeful there are some small, small green shoots. I really hope that trend continues because it can only serve to make our ethical challenges that little bit easier.
Brian: So we jumped into some of the tactical things, and I want to get into this, because what I was really hoping for today is to get some of your wisdom and experience and give people some things they can walk away with, particularly thinking about someone who's managing a data science or analytics team or, again, a technical product manager. You talked about this front-page news test, and as I read through all the different schools of ethics that are out there, understanding that this isn't a checklist mentality, that front-page news test stuck out to me as an easy starting point. So tell us about the front-page news test as a way to check yourself on where you are. Let's assume everyone here is trying to do good work; they're not trying to do ethically wrong things. It's more of an ignorance and a lack of experience and “I don't know what I don't know” situation. Tell me about this method.
Cennydd: Sure. So this is one of four ethical tests that I put forward, which I derived from the main three pillars of modern ethics, which I shan't go into in detail here. But this last test essentially asks: Would you be happy for whatever it is you're about to do to be front-page news? If someone saw this and published it tomorrow, would you reflect on that with pride? Would you buy copies of the newspaper for all your friends? And also, what would people infer about your values, your virtues, your character from you having taken that decision? This is what's called virtue ethics. It's all about the moral character that we aspire to as individuals, and then, are we actually taking decisions that show that, that actually evidence that? A similar angle might be: would you be happy for that decision to be sent as a push notification to all your friends and family? And I'm reminded sometimes of, I think it was the design of, I don't know if it was the first or the second Apple Mac, I think it may have been the second, but the team was so proud of that device that they actually molded all of their signatures into the plastic of the machine. They put them on the inside, and partly that's probably an aesthetic choice, but I also think that's a bold, almost an ethical choice, because the only time you're going to be delving into the inside of a computer of that era is usually when something's gone wrong. But they were still so proud of what they'd done, they said, "We are the people responsible for this product, for the decisions that have gone into the product." There's a sort of undercurrent of, "If you've got a problem, take it up with us. We're happy to own it." And it's really the same thing: would you really be happy to own that decision? There are so many ethical corners that get cut because there's no light shone on them. That feeling of, "Oh, I can get away with this one, no one's really looking. Yeah, go on, we'll sneak that one through," whatever it may be. So it's just a tool, I suppose, to help us reflect on whether that's really the most healthy ethical approach.
Brian: So if this is one of the four, could you give us a summary of the other three?
Cennydd: Yeah, sure. So, in the order that I present them in my book: the first two are actually based off... and this is about as theoretical as I'm going to get, don't worry... they're based off Immanuel Kant's work back in the late 1700s, I think. And the first one is more or less: What would happen if everyone did what I'm about to do? It's sort of similar to the virtue ethics test, but it's really asking us to universalize our thinking. To say, if this was a common rule of behavior, would that world be a better place, or a worse place? Related to that is another Kantian idea, which is: Am I treating people as means or as ends? To unpack that: am I treating people fairly, with their own goals as important, or am I really just using them as means for me to achieve whatever I want to get done in the world? And I think I see this quite a lot in companies that are data-driven, that rely heavily on experimentation, A/B and multivariate tests. A lot of the time in those organizations, at scale, people do become means to an end. Companies start talking not about individual users but about masses of users. So the conversation shifts to: what behavior can I try to manipulate in my users for me to hit my OKRs for the quarter? So that's a question I like to ask in those situations. And then the third test of the four, the last one I haven't mentioned, is the cornerstone of what's known as utilitarian philosophy, or utilitarianism, and it's essentially: Am I maximizing happiness for the greatest number of people? This is a fairly well-established idea, and then by extension, am I minimizing harm or pain or suffering? So there's almost a calculus that the proponents of that way of thinking suggest. You could almost plug in happiness values, and in fact there are some people looking at what they call scientific morality in an attempt to do precisely that, though I'm skeptical of taking it to that level.
But essentially those are four, maybe slightly reductive but at least reduced, ethical entry points I suppose, to those different ways of moral thinking.
Brian: So that's super helpful; I'm glad you summarized those. My next follow-up is thinking about someone who's rather analytically minded. I can see someone thinking, for example, about "am I maximizing happiness for the greatest number of people?" Well, that's a scale, right? If it was a lever, you could turn the dial to the right or turn it all the way to the left. And in the game of business, where these things are probably not binary, is this about our awareness and moving the dial more towards the direction it needs to go ethically, as opposed to it being binary, where you're either maximizing it or you're not? Everything happens in the gray area, right? How does an analytically minded team or a data team think about that gray area?
Cennydd: More or less. You're absolutely right that this does represent a gray area. There aren't a whole lot of ethical issues that are black and white. I mean, yeah, I'm sure some are, but then they mostly cease to be ethical issues; if everyone agrees that something is right or wrong, then there's not a whole lot of ethical debate to be had. You're right, you could look at it almost mathematically and say, essentially, I've got some function and I need to find the local maximum, right? Where I'm plugging in variables to represent the actors in the system: the stakeholders, users, the planet. One nice thing about that way of thinking is you have to consider, in the words of Henry Sidgwick, "the point of view of the universe," which I think is a really interesting way of looking at it. You could essentially say, well, if I weigh my happiness this much, and the happiness we get from living in a healthy planet and environment that much, then my job is essentially to tweak the parameters of that function and maximize it. I'm not a huge fan of that way of thinking, because it becomes a bit overly analytical, and then you're always fudging numbers, right? So for me it's much more about recognizing the trade-offs, the compromises: if I choose to extract more value from the system, if I choose to try and squeeze more dollars and cents out of my customers, then there's a chance that might make them less happy, or it might reduce the trust people have in my organization, or it might reduce the trust people have in the entire tech industry. And then you have to at least talk about those competing claims: I want to do this, you want to do this, it would disadvantage these people but help these people. At least then you have the basis for that discussion to happen, and that's the critical thing for me. Ethics is about discussion and it's about decisions; it's not about abstract theory.
It's about evaluating, well what are our options and what actually should we do? And that really only comes from taking these issues seriously and talking them through at potentially quite some length with your peers and the folks who can actually enable that change.
Brian: Yeah. I talk sometimes about how you may not be a titled designer, but I encourage people who are makers, and the community that I'm really interested in helping out is this data science and analyst community: if you're making products and making software, you're a maker, and I think the design thing is about being intentional. This feels like another area where a lot of it is about having the conversation and making intentional choices, as opposed to, “oh, it's just where we landed and no one talked about it.” And this is how you fall into ethics [problems]: you're a great person, but you still participated in something that had a really negative outcome. Would you agree that a lot of this is about the intentionality of these decisions?
Cennydd: Yeah, absolutely. Intent isn't going to guarantee you the right outcome; you could still have all these conversations and still screw up, of course. But I would hope it certainly reduces the chance of you getting it wrong. So much unethical design happens not through intent to harm but with no intent whatsoever, just through negligence, essentially through carelessness. Or through assumption: well, we're the good guys. We're all good people, we're empathetic, we're analytical, and frankly we're smart, so we're probably doing the right thing anyway, right? We're not evil people, and so on. And that complacency, I suppose, when it comes to ethics, that's often when you see companies and individuals make some of the most harmful choices. So yeah, being alert to those questions is the first step, and that's what a lot of my work is, or has been until recently: essentially saying, "Well, here are the questions that you need to start asking yourselves. If you're not asking them already, here's a primer and here's a baseline; then we can take that forward and develop it. Okay, once we've done that, how do we have proper arguments to evaluate these trade-offs properly?"
Brian: Talk to me about your take on the concept of using red teams. Say you're producing your first machine-learning model, or you're in an analytics group and you want to put some of this into play. I've read about red teams, about having an ombudsman on projects, or people who are responsible for taking concerns to an outside team. Do you see these just as different tool sets? Go ahead, I'll let you expand on these different tools.
Cennydd: That approach, I think, has a lot of value, and you see a lot of fairly similar concepts bundled up together with this. You sometimes hear of what's called a designated dissenter. This is an idea I first came across in Eric Meyer and Sarah Wachter-Boettcher's book, “Design for Real Life.” Essentially this is someone who role-plays as, more or less, a constructive pain in the ass in the product development process, and they throw challenges at the team's assumptions, like "Why are you asking people this?," "What if I don't want to?," or "What if I'm from a vulnerable group?," for example. So there can be a lot of value in having someone act as, not necessarily the antagonist, but someone just to lob in the occasional grenade of defiance. And you see this a lot in security. Security teams are already well trained to think about, well, “What could a bad actor do in this situation? How might they twist this to their own ends?” So it's almost bringing that to other aspects of human-computer interaction, product design, product management, and so on. There is a potential limitation: if you're just role-playing with someone who's challenging those assumptions, you may not be covering the right ones. So one thing I'm keen to get teams to do is to be broader, I suppose, in the input that they accept, to reach out to people who are more likely to suffer from the harms of the decisions we take. And because technology is ultimately a very human thing, sadly those people are often from the most vulnerable sections of society already; it might be amplifying harm onto those people. So we should be listening to them, we should be reaching out to them and saying, "Here's what we're working on, here's what we're considering, but can you foresee any challenges, any problems in what we're doing?"
So this idea of participatory design essentially, or participatory development of products and systems, I think is a really important way to, not fully inoculate but to at least reduce the risk of us going off the rails.
Brian: Can I pull in my colleagues at my company, for example, if I'm an employee, or is that too naive? Even if they're in a different department, I'm sure it's a good start. Maybe it's not ideal, but thinking in terms of an MVP effort to get going with putting more of an ethical practice into place, is that a good start? Walk down the hall and talk at the water cooler, grab some people you don't ever work with but see every day?
Cennydd: As with a lot of things in ethics and in design, the answer is: it depends. I would say most of the time, that's probably going to be an okay step. It's very likely better than nothing. There is the chance that it's worse than nothing, but probably a slim chance, and I'd say it's mostly only going to be worse than nothing if you then say, "Well, job done." Let's say, although I don't want to make it just about certain physical aspects, let's say I'm a 30-year-old white guy. If you go and ask a bunch of other 30-year-old white guys and then say, "Well, that's my ethical research complete," you're probably missing a whole bunch, and so if it forecloses on a deeper discussion, then that's where it might be harmful. But so long as you're mindful of trying to be a bit more open-minded and drawing in people with different personal contexts as well as different personal traits, then it might well be a first step. But ...
Brian: What's the right step? You sound hesitant. Tell us your starter recipe. You're just learning to bake bread; you're not going to get crazy with rye flour and whole wheat flour and double rises and all this kind of stuff, you're just going for simple bread to get going. So what's the recipe you start them out with?
Cennydd: Well, I'm a designer by background, as I say, and so for me the easiest point of leverage is critique sessions. Designers already have critique as part of their process. And that's the closest a lot of organizations get to talking about ethics, because sure, some of it's around, “Well, is this actually the right language?,” or “Is this producing the right input?,” or “Is this button in the right place?” But inevitably there's a conversation about, “Who's this even for, anyway? Is this really what we should be making?” And that's pretty close to a lot of ethical discussion, so for designers I tend to lead them that way. Now, I don't want to come across as too negative; maybe this is an encouraging step. At least you're talking to other people in your organization, saying, "Hey, do you think this might cause some problems?" But I guess my reticence comes from the fact that I think this industry is still very insular, and we have this infuriating habit of almost thinking that we're so smart that we can solve everything from first principles. So what that means is that when we see technological problems, we think, "Well, we can solve those. These ethical problems? That's for us too." So we'll just talk among ourselves and we'll get it fixed, and in the meantime you've got philosophers and sociologists and people in science and technology studies who've been studying these phenomena for literally decades. They're on the sidelines shouting and banging on the glass, saying, "Why aren't you listening to us?" Right? So yeah, by all means we should be conducting our own first-principles work and talking among ourselves, but we should be learning from these people. There is a whole field that exists around ethical technology. And so in my work, I'm very careful to claim as little of that as my own as I can. I say, "Well, these are the people you should be reading, here are their ideas.” And obviously I hope I've done those ideas justice.
But getting out of the building, I think, is important, both literally and cognitively, I suppose; mentally getting outside of the building as well. So that's why I'm “umming” and “ahhing” a little. Yeah, sure, by all means, go down the hall and talk to a different team, but really we need to be talking wider than that.
Brian: Yeah, I'm trying to make it black and white, and again thinking about someone who's on the side of the business where they're not necessarily a tech company, so their product is not software; it may be internal software or business intelligence, or producing a model that's going to change company workflow. Business processes are going to change. Roles, jobs might change. And the intention is, we don't want to do harm, but that's not what our training [covers]. It's people with advanced math degrees and particle physics [backgrounds]; the range runs the gamut, especially in the data science field. And my general [belief] from talking to some of these people is that they're really good people, they mean to do well. They're just not trained on the liberal arts side of things, so they don't know what they don't know, but they're eager to learn.
Cennydd: So if I may offer some advice to those people: I think the number one step, or the number one realization, that folks in that position need to come to is that data and technology are not neutral. And this is an old idea, but it's been recently revived. I really like the framing from, I think it was Professor Geoffrey Bowker, who says that raw data is an oxymoron. There is no such thing as raw data. All data is steeped in, marinated in, polluted by the social contexts that surround it. The methods by which the data is collected, analyzed, and displayed all contain at least the potential for bias. So I think sometimes when I speak to folks in that domain, there is a belief in the purity of data and the purity of the analytics and statistical methods. But being conscious of the potential for bias to creep in, even into something that seems ideologically neutral, I think that's the really important first step. And once you've done that, then you have the chance to ask, "Well, how might I be causing damage?," or, "How might I be coming to the wrong conclusions?" And once you get past that ideological block, I think these conversations get a lot easier.
Brian: Yeah, I like that you say that, because I've felt this thing too, this purity of the data. Let me step back for a second. I think there's really talented people in the data space who actually understand bias really well, but when they think about bias, they're thinking more about how it's going to skew the insight from the data, not the human impact. So they already have some of the key principles in mind about bias, but not so much the human impact. They're thinking not even so much about the business impact, just about the scientific accuracy of the insight, which I think tends to get prioritized over all else. So maybe it's just more training on the human element here that they need, to dial that in or something, I don't know.
Cennydd: Yeah, I think so, and I think that's obviously a trend now within a lot of data science. I think a lot of folks in that space are getting more aware and more literate in these issues, but we still have a long way to go. The COMPAS crime prediction algorithm is a famous example; I'm sure your listeners are familiar. Essentially it took arrest data and conviction data and used that to predict the risk of re-offense of a suspect. And, in retrospect not terribly surprising, it was found to be biased particularly against black defendants, because that training data, that initial data, is not neutral; it is encrusted with the bias of the justice system, of the police systems, of the courts, and so on. So it's the awareness that it's not just that your algorithm may be biased, but that your training data is so much a product of history, and thereby of humanity, and factoring that into your thinking. Really looking at all the possible contexts where that bias might occur, and then taking steps to reduce (because you can never eliminate it, I think) that bias all across the board. I think that's really important.
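Sidebar: the kind of disparity Cennydd describes can be surfaced with a very small audit. The sketch below uses invented toy data, not anything from the real COMPAS system; it computes the per-group false positive rate, the measure at the heart of ProPublica's COMPAS reporting (people who did not re-offend but were still flagged high risk).

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute the false positive rate per group.

    Each record is (group, predicted_high_risk, actually_reoffended).
    FPR = fraction of people who did NOT re-offend but were still
    flagged high risk by the model.
    """
    flagged = defaultdict(int)    # non-reoffenders flagged high risk
    negatives = defaultdict(int)  # all non-reoffenders
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

# Invented toy data: both groups behave identically (no one re-offends),
# yet the hypothetical model flags one group far more often.
records = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, False), ("group_b", False, False),
]
rates = false_positive_rates(records)
# group_a is wrongly flagged 1 time in 4; group_b, 3 times in 4.
```

An audit like this says nothing about *why* the gap exists; as Cennydd notes, that conversation has to happen with people, not just with the data.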
Brian: Yeah, I'm glad you said that, because it's one thing I thought about when reading some of the big headlines in the space: "Oh, look what machine learning and AI has done," when all it did was expose what was already recorded in historical data. That past exists whether or not it's in the data; it's not that the model was bad. It says something, but I feel like something new popped up, and I don't know if it's because the technology is new, but it simply shines a light on these biases more than the data at rest does. It's harder to see until you start applying it in this way.
Cennydd: Yeah, perhaps, and it's also tricky for the people who are doing this work, because it's not your fault that arrest data going back to the 1950s is racially encoded and racially biased. But it's still your problem. That's the thing. And that's one of the punishing things about working on ethics: it's still your responsibility to address. You can't fix it yourself, of course, but you still have to account for it within your system and try to mitigate it. So it's hard and punishing and occasionally thankless work, but hey, this is what we signed up for. This is why we're, hopefully, sought-after individuals: because we've got the skills to try and tackle these problems.
Brian: Sure. You just made me think of something else too, which is: if you're highly technical, you can turn that into a math problem. Maybe there's a way the model can compensate for the bias that's in the data. It does take some ethical discussion to realize that there's bias in the data, but maybe there's something that can be done about it at the modeling phase to move things forward ethically. I don't know.
Cennydd: Yeah. Kate Crawford has a fantastic talk—I think it's called "The Trouble with Bias," or something like that—where she outlines what she calls fairness forensics. These are mostly statistical techniques: looking at the quality of your data, looking for any gaps within it, taking statistical steps to audit that data, or testing it with a wide array of inputs against expected outcomes, and things like that. So there are technical mitigation strategies, but of course any technological intervention carries the risk of creating unintended consequences of its own. That's why you also need the discussion and the anticipation to handle the woollier, more human side of ethics. You'd never want a purely mathematical minimization approach; you need something a little more well-rounded.
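One of the "gaps in your data" checks Cennydd mentions can be as simple as comparing each group's share of the dataset against an expected reference share (say, from a census). A toy sketch—the group names, expected shares, and tolerance threshold here are illustrative assumptions, not part of any standard fairness toolkit:

```python
# Toy representation audit: flag groups whose share of the dataset
# deviates from an expected reference share. All numbers are invented.

from collections import Counter

def representation_gaps(samples, expected_shares, tolerance=0.05):
    """Return groups whose observed share differs from the expected
    share by more than `tolerance`."""
    counts = Counter(s["group"] for s in samples)
    total = sum(counts.values())
    gaps = {}
    for group, expected in expected_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = {"expected": expected, "actual": round(actual, 3)}
    return gaps

# 90% urban / 10% rural sample, against hypothetical 70/30 reference shares.
samples = [{"group": "urban"}] * 90 + [{"group": "rural"}] * 10
expected = {"urban": 0.7, "rural": 0.3}

gaps = representation_gaps(samples, expected)
# Both groups are flagged: rural is underrepresented (0.1 vs 0.3)
# and urban correspondingly overrepresented (0.9 vs 0.7).
```

A check like this is cheap to run before modeling, which is the point of doing the forensics early: a model trained on this sample would see rural cases a third as often as the reference population suggests it should.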
Brian: I was on a webinar yesterday with a large analytics advisory firm. They were talking about their 2020 outlook and trends, and they had predicted that ethics was going to be big in 2019 and 2020. One of the things I caught was that while ethics is important, there was even a statement that not every company necessarily needs a chief ethics officer. There aren't a ton of high-profile examples of harm, and the news stories we do see have made it sound like this is a rampant issue all over the place, when in reality it hasn't been a huge problem—and again, this is probably talking more to the non-tech crowd, more about internal business data products and business intelligence and things like that. Would you agree with that? I suspect you probably wouldn't agree with that perception.
Cennydd: No, I would, actually, I would. There are obviously some sectors and some companies that will inherently carry more ethical risk than others. If you're building models for military drone targeting, then, hell, you'd better have your ethical game really sharp. You've got to be able to tell friend from foe, to tell a pebble from a grenade—all sorts of deeply important moral decisions rest on your shoulders. They probably don't if you're building models for where to construct a warehouse, for example, for distribution logistics or something like that. But that doesn't mean, of course, that some people should bother and some shouldn't. I think every business has almost a moral duty to take its consequences seriously. They may have those conversations, do a bit of ethical interrogation, and conclude, "You know what? I think we're in the clear. If there is some harm we do, it's going to be minor, we've mitigated it the best we can, and so be it." So yes, in fact I don't think many companies do need a chief ethics officer, but certainly military companies should, and your Googles and Facebooks—given the scale they operate at, it makes sense to have high-profile, very capable, very skilled people in those conversations. What I've never liked in ethics is the idea of a one-size-fits-all strategy: blanket prohibitions or commands saying, "You must do it this way," or, "Thou shalt never do it another way." That, for me, is almost the opposite of ethics; it shuts down the discussion that I think is so important. So I'm all for companies and teams taking the approach that best suits their own circumstances.
Brian: I'd like you to close out with some next steps, for someone who wants to walk away thinking, "I want to put some changes into my team"—again, thinking about someone working in analytics and data science. I'd like you to think about those, but before you answer that question, I'm curious what you make of the fact that the largest tech companies in the world have really large design organizations as well. These design organizations sometimes publish guides on using the latest technology—here's how to do human-centered artificial intelligence if you're going to build AI products—and at the same time, the companies are in the news for high-profile things. Are these groups not talking to each other? Is it just a result of the scale of the company? Do you find it hypocritical? What's your take on that?
Cennydd: Yeah, I think it mostly is a factor of scale. These companies are so big that they contain multitudes. A lot of those organizations will have spun up an ethics team, or an ethics and AI team, or whatever it may be. And then in another wing, in another building or something like that, there's a team—maybe a growth team—talking about all the dark patterns and manipulative interactions they can possibly put in the product to achieve their own targets. Now, that team isn't directly intending to negate the work of the ethics team, but the two will certainly be pulling in different directions. I know it's frustrating for those folks as well.
Brian: I can imagine there are some designers not happy about certain things: "Why are we doing it this way?" I remember some of those times when I was an employee—you get frustrated with the business decisions, you know something's wrong or you have a really strong aversion to certain choices, and product management wins, or the business wins. You talked about some of those in your own book; I recall the one about the app scanner, where the company you were working with wanted to install a listener on the phone, basically, to detect what other apps were installed, and it backfired.
Cennydd: It did, yes. So I have firsthand experience of what it's like to be in those conversations. It is tricky. Sometimes the company can try to exert a bit of codification or a policy approach, like having stronger core values that are actually adhered to across the organization, or having design principles. Google, for example, has some fairly strong principles around how they will build ethical AI. A lot of that came out of the Maven debacle, where they were essentially building Pentagon projects for drones and there was mass rebellion among the ranks. And so someone said, "Well, we need to actually codify how we take decisions about the projects we take on, those we don't, etc." So you can have a bit more top-down standardization, I suppose, which reduces some of that left-hand-not-talking-to-the-right-hand problem. But we know what big companies are like: that's a cultural shift as much as anything else, and it takes a long time and real top-down senior support. Some companies get that, and those are the ones that are making the change. Some companies say they get it, but are really just paying lip service.
Brian: So we've been talking to Cennydd Bowles, the author of Future Ethics and a product designer. It's been fun—you're actually the first product designer I've had on the show, so this has been fun to nerd out with you. I don't really know what a dark pattern is, and there's probably some design-nerd talk in here, but I'd like you to leave us with some actionable things a data science or analytics team or a technical product manager could do. First, let me summarize those four areas you talked about: the front-page news test; asking yourselves, "What if everyone did what I did?"; "Am I treating people as a means or an end?"; and, from the utilitarianism school, "Am I maximizing happiness for the greatest number of people?" Having those conversations sounds like a really good start. Are there a couple of other things people can walk out of here with?
Cennydd: I would say, early in your project, whatever your project may be, think broadly about stakeholders. If you open a lot of MBA textbooks, they say stakeholders are people who can affect what you're doing—usually people in suits or in chinos, regulators, sometimes partners, things like that. Don't forget there's usually a larger group who are also stakeholders: the people who can be affected by our work. We've trained ourselves to focus so much on user needs or business needs that we overlook those folks far too often. So get them into the conversation—as I was saying at the start, either by talking about them or, ideally, by actually bringing some of them into the conversation. That's one of the most fundamental shifts you can make. The other axis along which I think we need to enlarge or stretch our thinking is the longer-term implications of our decisions. It may well be that a decision is quite safe in a month but becomes quite dangerous in five years' time. For example, Facebook has the face data of 2.1 billion individuals. At the moment, facial recognition systems are still relatively expensive, but they won't be for long, so at some point we'll probably have mobile devices with full facial recognition capability on-device. That completely changes the threat model and the privacy implications of having access to all that data. So we need to think about not just "Is this a safe decision now?" but "What would it take for it to be unsafe in the future? Is that our problem, and can we do something about it?"
Brian: Great, this has been such a good conversation, thank you for summarizing that and obviously your book, it's Futurethics.com, correct?
Cennydd: Future-Ethics.com, yeah.
Brian: Future-Ethics.com—where else can people follow you? LinkedIn, Twitter, where do you hang out online?
Cennydd: Well I'm very easy to find, because my name is spelled so unusually, so I'm @cennydd on Twitter, that's @C-E-N-N-Y-D-D, that's where I'm probably most active. My website, cennydd.com, my email address you can probably guess, so yeah Google me, I'm fairly findable.
Brian: Awesome. Well, this has been great, Cennydd—a really great conversation. I'm glad you shared some of these insights with my listeners, and it's been great to have you on Experiencing Data.
Cennydd: Sure thing, thanks for the invitation.