117 – Phil Harvey, Co-Author of “Data: A Guide to Humans,” on the Non-Technical Skills Needed to Produce Valuable AI Solutions

Experiencing Data with Brian O'Neill (Designing for Analytics)

Today I’m chatting with Phil Harvey, co-author of Data: A Guide to Humans and a technology professional with 23 years of experience working with AI and startups. In his book, Phil describes his philosophy of how empathy leads to more successful outcomes in data product development and the journey he took to arrive at this perspective. But what does empathy mean, and how do you measure its success? Brian and Phil dig into those questions, and Phil explains why he feels cognitive empathy is a learnable skill that one can develop and apply. Phil describes some leading indicators that empathy is needed on a data team, as well as leading indicators that a more empathetic approach to product development is working. While I use the term “design” or “UX” to describe a lot of what Phil is talking about, Phil actually has some strong opinions about UX and shares those on this episode. Phil also reveals why he decided to write Data: A Guide to Humans and some of the experiences that helped shape the book’s philosophy. 

Highlights/ Skip to:

  • Phil introduces himself and explains how he landed on the name for his book (00:54) 
  • How Phil met his co-author, Noelia Jimenez Martinez, and the reason they started writing Data: A Guide to Humans (02:31)
  • Phil unpacks how he defines empathy, why it leads to success on AI projects, and what success means to him (03:54)
  • Phil walks through a couple of scenarios where empathy for users and stakeholders was lacking and the impact it had (07:53)
  • The work Phil has done internally to get comfortable doing the non-technical work required to make ML/AI/data products successful  (13:45)
  • Phil describes some indicators that data teams can look for to know their design strategy is working (17:10)
  • How Phil sees the methodology in his book relating to the world of UX (user experience) design (21:49)
  • Phil walks through what an abstract concept like “empathy” means to him in his work and how it can be learned and applied as a practical skill (29:00)

Quotes from Today’s Episode

  • “If you take success in itself, this is about achieving your intended outcomes. And if you do that with empathy, your outcomes will be aligned to the needs of the people the outcomes are for. Your outcomes will be accepted by stakeholders because they’ll understand them.” – Phil Harvey (05:05)
  • “Where there’s people not discussing and not considering the needs and feelings of others, you start to get this breakdown, data quality issues, all that.” – Phil Harvey (11:10) 
  • “I wanted to write code; I didn’t want to deal with people. And you feel when you can do technical things, whether it’s machine-learning or these things, you end up with the ‘I’ve got a hammer and now everything looks like a nail problem.’ But you also have the [attitude] that my programming will solve everything.” – Phil Harvey (14:48) 
  • “This is what startup-land really taught me—you can’t do everything. It’s very easy to think that you can and then burn yourself out. You need a team of people.” – Phil Harvey (15:09) 
  • “Let’s listen to the users. Let’s bring that perspective in as opposed to thinking about aligning the two perspectives. Because any product is a change. You don’t ride a horse then jump in a car and expect the car to work like the horse.” – Phil Harvey (22:41) 
  • “Let’s say you’re a leader in this space. … Listen out carefully for who’s complaining about who’s not listening to them. That’s a first early signal that there’s work to be done from an empathy perspective.” – Phil Harvey (25:00) 
  • “The perspective of the book that Noelia and I have written is that empathy—and cognitive empathy particularly—is also a learnable skill. There are concrete and real things you can practice and do to improve in those skills.” – Phil Harvey (29:09)



Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill. Today, I have Phil Harvey on the line with me from the UK. He’s the co-author of Data: A Guide to Humans, which sounds like the title is backwards, almost. Wait, is it supposed to be Humans: A Guide to Data? [laugh]. So, welcome to the show, Phil.

Phil: Hi, Brian. Good to be here. And the title, it was actually the chief publishing officer at the publisher who insisted on that title because of its ambiguity.

Brian: I love it because it’s like a little double take because you’re like, oh, a book on making data for humans. It’s like [makes buzzing noise] [laugh]. Let’s jump into that, the title here. I mean, I know titles are always interesting with publishers and all of that. Tell me a little bit more about how you arrived at that, and does it feel like it encompasses what this book is about for you?

Phil: Yeah, I think the simple way that it plays out to my head, the book is called Data: A Guide to Humans, so it’s kind of like, “You’re interested in data. Let’s talk about people,” that kind of way around. And there’s been talk of all kinds of, you know, what else can we put before the colon?

Brian: And maybe even before we jump to the book, give my listeners a little bit of background just on you, briefly. What kind of work are you doing right now? And kind of where are you going in the future with your work?

Phil: Sure. So, I’ll start at the beginning and work up to now. So, I have a strange history in that I can say I’ve been in artificial intelligence for 23 years because I started with a Bachelor of Arts in artificial intelligence back at the turn of the millennium, so 23 years ago. Since then, I’ve worked across architecture, rendering, advertising, data, being the CTO and technical founder of a data technology startup, and most recently, I’ve done six years at Microsoft in various roles, including cloud solution architect for data and artificial intelligence and being a sort of applied AI architect for incubations.

Brian: Where did this book come from? And you have your co-author, Noelia Jimenez Martinez, she looks like she has a pretty deep technical background. How did you guys meet and what was the evolution of arriving at this book?

Phil: So, between my startup days and my Microsoft days, I met with and started to lecture at something called S2DS, which is a science-to-data-science conversion course. So, they pick up PhDs who no longer want to travel the academic route and help transition them, through paid projects in industry, into data science roles. And I met Noelia first when she was a student on that course, coming out of astrophysics and a very high-flying career there, and she actually worked with our publisher for a bit as a data scientist. And we stayed in touch around that and decided to write the book together.

It’s kind of because of the course. So, I had been lecturing the content for five years. Noelia and I realized we have two very differing perspectives on the world, and that was really important to us. So, Noelia is from Argentina and I’m from the southwest of the UK; I’m a bearded white dude in tech and Noelia is a highly academic woman from Argentina. And so, these two perspectives came together and resulted in the text that people can read themselves.

Brian: On the book website, there’s this quote; it says, “In the world of data, empathy is a powerful tool that will unlock and amplify success.” Success for who? And what does that mean? Can you unpack this? Right now, I’m just going to say this, maybe less so for the people listening to this show, but I think this empathy stuff sounds very hand-wavy to the technical crowd that’s out there. It sounds very ambiguous. It sounds like the first thing to go when we need to make a cut somewhere, because nobody can measure that stuff. Counter everything I just said [laugh].

Phil: Perfect. It was actually very early on in teaching the course that this challenge came up from the students, in terms of this wooly thing. The reason for the book is that I was, sort of, a programmer for 15 years of my career and I was often told, “Have more empathy.” I was hit with that particular stick. And for me, it was like, “What do you mean by that?”

People who said it would get flustered and say, well, it’s obvious, right? Everybody knows what that is. And when I dug in—and we can go into this in more detail—I developed this idea that empathy will make you more successful, but that statement needs to be broken down, just as you’ve asked. If you take success in itself, this is about achieving your intended outcomes. And if you do that with empathy, your outcomes will be aligned to the needs of the people the outcomes are for. Your outcomes will be accepted by stakeholders because they’ll understand them.

So, if you’re doing a piece of data work, for example, or you’re working with a data team, success needs to be accepted. You can’t just shout, “We’re successful,” run away, and leave people confused. And that means you can start to add value in a measurable way. Because the metrics of success that you’ve agreed upon and work to will be aligned to the needs and feelings of those stakeholders. So, empathy—I’m talking specifically about cognitive empathy here, which is a rational act that can be learned—means that your success is amplified by your alignment to the paradigms of the people that you’re working with.

Brian: Why should I care? Again, putting my—mainly my listener hat on—why should I really care what the feelings of my stakeholders are when building a propensity model for the sales team to forecast a sale? I don’t know. Why do the feelings of the head of sales matter to me? Isn’t it about, “Let’s nail some sales. Who should I call? Tell my team who to call.”

Phil: It’s really interesting that you state it like that because, for me, the needs and the feelings of the stakeholders are wrapped up in the way that you tried to reject empathy just then, right? Because you have a core need, right? You have a core need of a sales team making your sales. But, and I say this with a smile on my face in this current AI situation, humans are not robots. We wrap that need in layers and layers of many different parts of our own personal paradigm, and that is led in large part by feelings.

So, for example, “Give me a propensity model. Help me increase sales.” If it is that transactional, then maybe that will work. Maybe your sales team are hyper-focused and they’ve got a great relationship with the data science team and it just works like that. Never have I seen it work like that in my professional experience and there are layers of, “What do these numbers mean? You give me a propensity model, but what do they mean? I’m sad, I’m confused, I feel a little bit like you don’t understand me, so I’m not going to use it. I’m going to go out and do what I did anyway.”

And the data scientist is sad and confused—or the machine-learning engineer—they said, “But it’s a good model. You get this level of statistical accuracy and we’ve used the latest research. Why don’t you care about my work?” And then they end up sitting in a room, an enforced coffee chat between them to try and sort out their differences and all these kinds of things. To deny that those needs are wrapped in feelings in and of itself is part of the problem.

Brian: Can you play back a scenario—you know, maybe one of your AI projects at Microsoft, you’re welcome to abstract it out a little bit or anonymize it—but is there a game you can replay for us where you went through this process? And how would the old Phil have done it and how did the new Phil do it and what was the result?

Phil: I’ll keep it abstract for the moment, but I’ll give you two flavors of that story, which I always remember very sharply. I wouldn’t say where they were placed in, to protect the guilty, innocent, whoever.

Brian: [laugh].

Phil: So, let’s talk about our sales team again. Sales teams quite often are asked to use tools, but they end up in spreadsheets. The tools don’t quite do what they need, and so they track stuff in a spreadsheet and then they try and put it into the data system for people to use, where required fields and all that stuff get filled up with random letters off the keyboard because it’s not relevant. So, these sales spreadsheets are really useful data assets, right, to a business; they contain what’s really going on.

And if you want to understand changes to the CRM system or the sales system, you really should study those spreadsheets. In one particular role, I was sent a collection of these spreadsheets to try and work out the change to the system that needed to happen. Days of the week had colors, because the day of the week is really important when you’re closing sales. So, it’s like, Thursday morning: great closing time.

So, they were color-coded, and orange meant Tuesday. And I was meant to pick that up in code as a data professional: the color of a spreadsheet cell. That is stupendously difficult. So, at the time, pre-empathy Phil had a bit of rage: why didn’t you create a new column for that? You could give it a score of one out of five for the, you know, usefulness of the data. But no, orange meant Tuesday.

The sales team loads their spreadsheet, they see exactly what’s going on, and they know exactly what to do. They get that visual input. Afterwards, you’ve got to realize the reason they’re not using the sales system. The reason they’re using this spreadsheet is because they don’t get to be that intuitive with the data tools that they’re using. Their job isn’t to be better at data; their job is to close those sales.

And there’s a great paper on de-skilling and this kind of resentment of data science—and I’ll use this before I transition into the next story—it’s a paper that studies medical applications of data and data science, and the volume of complaints that data scientists have made about medical professionals being lazy, data-illiterate, and all of these things. When you dig into it—and I can [share the link 00:10:30] with you to share with the listeners; I think the paper is up on arXiv—you find out that nurses and doctors were told to record this data. They weren’t incentivized; they were told it was part of their job. And so, treating a patient, they now had to fill in ten more records.

So, they copied them down, they filled in all of that stuff, but the data wasn’t good because it wasn’t explained properly; it was just made part of their jobs. So, they were now more busy, doing more work for the same amount of money. To the data scientists who receive that data, it’s poor-quality data: the doctors are lazy and useless, and, you know, they don’t really understand what’s going on. So, that’s similar to the sales team.

Where there’s people not discussing and not considering the needs and feelings of others, you start to get this breakdown, data quality issues, all that. The second example I wanted to give is one of my favorites because it’s still relevant today even though it sounds like it should have been solved, which is PDFs.

Brian: [laugh].

Phil: PDFs, Portable Document Format files. That’s what PDF stands for. So, it says document; it doesn’t say data. At a certain point in my career, I was of the opinion that data should never be in a PDF. If you’ve sent a PDF, you’ve sent me the wrong thing.

I needed to be educated on the fact that in certain industries—finance, for example—PDFs are considered to be a legally closed format. So, if you email somebody a PDF, it’s got some lineage to it; the PDF has recorded when it was generated and all that kind of stuff, so you can trace back through the particular files that people have seen and when they opened them. So, it is a valid format for certain kinds of data. But PDF as a format itself is insanity-inducing when you try and parse it. Even the proper parsing software for it doesn’t give out useful data.

So, what ended up happening is new machine-learning tools came in, computer vision tools, and now people do a form of computer-vision, ML-based OCR on essentially screen grabs of each page of the PDF to try and draw the table out. And you think about that, and you think about the huge stack of technology that’s sitting on top of the need for this file format and all the different people who’ve been involved in that, and you think through the person from the law side of the house, who said, “We need some legal traceability here because we’re talking about finance, and money, and culpability, and all those things.” And as opposed to really digging into that need and working out what to do, you end up with emails of PDFs and all that kind of stuff that end up in, “We have 100,000 PDF documents with all the data from the last ten years. Data scientist, off you go and process our data, please.”

So hopefully, there are a couple of—well, three now—examples that are somewhat useful in explaining what I’m talking about. And what I want to identify there is, it’s all people through that chain. You can discuss it from a technology perspective, but you won’t get very far. You need to work out why people have done these things. Why does orange mean Tuesday? Why is it an emailed PDF? Don’t just solve a technical problem. And that was something I tried to do at each stage: we’ll solve it with technology. Don’t put me in a room with people.

Brian: Sorry, that’s a loaded question. I’m making the assumption that you might not have wanted to go understand the why and instead maybe jumped into the technical part. But did you have to personally get over some discomfort to do this? I guess what I want to hear is your, quote, “new process”—like, if you were dealing with that sales team again, how would you approach that today? Were there personal things you had to get over to get comfortable digging into the kind of non-technical work that’s required here? Tell me a little bit about that.

Phil: I think it’s easy, when you know the technical arts, when you’re a programmer, a data scientist, or a statistician, to be comfortable with your tools. And this is the same for designers and marketers and salespeople. You end up with a sort of world of tooling around you. Having to explain your tooling sounds like a kind of interrogation: why are you using it like that? You don’t understand me. Why does my data have to fit your process?

And that gets uncomfortable. It can make people resentful; I felt resentful. You know, I wanted to write code; I didn’t want to deal with people. And when you can do technical things, whether it’s machine learning or these things, you end up with the “I’ve got a hammer and now everything looks like a nail” problem. But you also have the aggressiveness around your tooling, which was, “My programming will solve everything.”

The evolution of that is to start to unpack—and this is what startup-land really taught me—you can’t do everything. It’s very easy to think that you can and then burn yourself out. You need a team of people. As soon as you have a team of people, you need a diverse set of perspectives in that team because if everybody’s just heading in the same direction, you’re not going to get any change. And empathy and diversity have this wonderful interdependent relationship and as soon as you start to practice those empathy skills, you start to feed on a wider and wider group of people to fulfill that need for learning.

So, it’s like as a programmer, I used to learn a new programming language every six months for fun because that learning need was fulfilled. As soon as you start to practice your empathy skills, you need to learn that, and you realize that’s when people and the team is so important. And I’ll tell you just sort of a concrete example. When I was writing the book, we offered workshops as a form of funding. So, we’d go in and do an empathy workshop that the company would pay for, and that will go towards the funding of the book.

One company, we did a day’s workshop with them, and they kindly invited me back a year later to show me what they’d done. They were proud of the changes, and it was all about allowing people to move in their roles away from the technology and towards people, where it suited them. So, they went from a team of data scientists to having data architects whose job, half the time, was to find out what the people needed, to go and listen, to go and explore in that way. Because the empathy work of it is not for everybody. So, as soon as you’ve got a team in there, you start to find the people who can listen, who want to listen, who can develop and practice these skills.

So, that’s the real change: to kind of drag yourself away from the safe zone, the comfortable tooling of “my hammer will bash in every nail,” to going, “We need a full toolbox. I can’t carry it all. We need a team.” That kind of feeling.

Brian: If you don’t… have a big team, and maybe you’re, I’ll call it, design-curious—and I want to talk about design in a second, because we’re not using that word, but we’re largely talking about a discipline that I associate 90% of what you’re describing with design—putting that aside for a second: if you’re on a small team and you’ve recognized some of the challenges of being technology- and data-first in your perspective rather than customer-, user-, and stakeholder-first, and you start making this change, how do you know what’s working? What are the signals that would let you know? Because I’m sure everyone’s like, “I know if my code compiles and it doesn’t error out. The feedback is so clear.” And this is the world of gray; it’s never black and white, it’s feelings, it’s all these kinds of things. So, do you have a sense of how to detect, or how to teach a team, the signals that you’re, quote, “doing it right?”

Phil: That goes back to your first question about how empathy is useful as a learned skill. Every project needs its own metrics. You can’t judge building a bridge, building a dating app, or building a AAA video game on the same metrics. It doesn’t work like that. Now, if you do try and do that, I would say that’s a less empathetic way to go about it.

So, when you start to measure and you’re looking for success, what you find is that the metrics are better: you can measure success in a way that everybody understands, and it means each of your projects can either be stopped sooner—which is valid; account for the learnings, this was a bad idea, we should stop—or there is now measurable and continued success. For example, say you’re going to change from one monolithic relational database to a data lake, a delta lake, a data mesh, one of the modern data technologies. Usage is still a metric that you care about, but the kind of usage will change. It won’t be transaction count on the database, it won’t be activity on your views. It will be different layers of tooling interacting with different pieces. You’ll work out what your cost per use is based on the technical architecture.

And the same is true of a project team. We go back to those doctors. Doctors providing quality data, which means that machine-learning models can look at the spread of disease more effectively across a wide area of the world, is a great measurable outcome. You can look at the volume of data coming in, you can look at the errors, you can look at all of those things. They are good metrics to trace, and if the whole team is looking at them and thinking about the people who are involved in that process, you don’t have this “I hate my data provider” problem: “They always give me problems to solve and it’s all about me and my test pass. Why isn’t the world caring about that?”

Brian: If I can summarize that, what I think I heard you say is: if I’m trying to be more empathetic and deploy more human-centered design skills in the data work that I’m doing, it’s not so much about measuring the activity, how I’m changing the work that I’m doing, how I’m doing research, or how I’m doing needs analysis, et cetera. You’re really focused on the project or the product’s success criteria, and if those are going up, then you’re, quote, “doing it right” in terms of your approach. Is that kind of what you’re getting at? Don’t measure the activity; measure the results as a way to know if the activity is working.

Phil: On the one hand, yes, very much. When we do transition to talk about user experience design and those things, you also need to look at how your team is made up: who’s meeting, who’s talking, who’s doing the design work. Because let’s take a dual view of it. Somebody is designing the dashboard, and they care about user experience and all of these things, and somebody’s designing the database over there. You could do a huge amount of research, you can measure all that activity, you can say how much the users enjoy the process and how much they’re looking forward to it, you know; that will give you the equivalent of a net promoter score for the dashboard excitement they’ve got. You can measure all of that.

Nobody would measure dashboard disappointment at the end of the day. And as soon as the user comes back and says, “It doesn’t work like I want,” there’ll be a lot of finger-pointing. “Oh, but the database is slower than we want it to be,” and this, that, and the other. So, that outcome, that metric, if the team shares it, and you can see them interacting and discussing it as a shared goal, then you’re going to see more success.

Brian: Have you had exposure to design and user experience professionals on your teams? Did they have something to do with this transformation you went through that led to a book? I mean, you’re talking the language of design very much here. And you actually had a comment, I think, about how the language of user experience isn’t comfortable. I think that was something I quoted from our intake call. Can you kind of unpack the relationship between your book, this method you’re talking about, and the world of user experience design?

Phil: The difference in language that you’ve raised there is incredibly important. For example, I learned at university about neural networks and Java programming. Nobody taught me user experience. User experience people, when I first met them, were… irritating.

Brian: [laugh].

Phil: They didn’t seem to want to listen to half the people in the room. Let’s listen to the users. Let’s bring that perspective in, as opposed to thinking about aligning the two perspectives. Because any product is a change. You don’t ride a horse, then jump in a car and expect the car to work like the horse.

So, the designer of the car is thinking about that user need, but I think about changing the user need and educating them on this new tooling. To do that, you have to listen to both sides and bring them together, not punish the people, the engineering folks, for not knowing how to listen to the customer need in the right way.

Brian: Unpack that for me. Like, you used the word punish. That’s pretty harsh. If you feel like sharing this one incident, multiple incidents, an entire job’s history worth of incidents. I mean, that sounds unfortunate and that’s not what I typically think about with designers. But tell me more about that.

Phil: Most people don’t think about designers like that because designers are the human-facing, soft and kind people who listen. Yes, they do, to the target of their work, right? So, I worked, for example, in a company that was over 60% artists. They were not kind and understanding to IT professionals. It’s the hidden punishments in there. For example, “Oh, I don’t understand what you’re talking about, Phil,” is a punishing statement. “Oh, you’re so technical.” It can be wrapped up in these very sweet-sounding statements. “Oh, you’re so clever. You’re saying all this stuff that I don’t understand.”

There’s no movement in that. And this was one of the—this is exactly it which is, “Have more empathy, Phil.” That’s the stick I was hit with is the way I put it. Sounds like a nice thing, but then you say, “What is this empathy?” “Oh, of course. Everybody knows about that. I’m not going to explain it to you.”

But when you dig into it, it’s those interactions which can be really formative in a team. You’re like, “Oh, we want to change that dashboard. We’d love to give the user that feature, but the database administrators just won’t action that ticket. They’re always off doing something else. They say something technical and then they never really listen to us as designers.” And I think that’s one of the core parts of the book is giving people structured ways to listen because you’ll hear those statements.

Let’s say you’re a leader in this space. You know, you’re Head of Analytics or you’re a Chief Data Officer or you’re leading a project. Listen out carefully for who’s complaining about who’s not listening to them. That’s a first early signal that there’s work to be done from an empathy perspective.

Brian: It sounds like you kind of had a bad experience working with design, and the irony here is this book is so wrapped up in so many of the principles that design professionals talk about. So, there’s an interesting rub there. Tell me more about this.

Phil: I think about three perspectives on that. The first one is we talk about a paper called “Implementing Grutter’s Diversity Rationale” in the book, and this is based on a court case in the States about diversity quotas for law schools. Because law schools don’t just put lawyers into society; many people who do law become leaders, and what they found through deep study is that leaders trained in a diverse environment are better leaders. So yes, that kind of diversity of perspective is important. You don’t get that in the kind of technical training that people who follow that path do.

So yeah, whether I had a bad experience or people that I’ve talked to in the industry have had a bad experience, there is a lot of bad experience happening. I was giving a presentation on the book at a conference, and somebody came up to me from the autistic community—I’m not sure the correct way to say it, but somebody quite high up the spectrum somewhere—and they said, “This is great. This is very much what we get taught in the autistic community about interacting with people who are not autistic: these kinds of reason-based, learning-based listening tools, for when you don’t pick up on those emotional signals.” And if you start to think about that, yeah, this kind of conflict happens because people are trained in different things. You’re obviously steeped in the design methodologies. Great.

Are those designers steeped in technical methodologies so they’re able to have empathy for the people implementing what they design? Is everybody working out how to bridge between these worlds? I don’t think they are. And I think the conflict happens, the trauma happens, and certain people like me from the technical side have to work out what it means from our perspective to start to build those bridges. Me going on a design course isn’t the same as me learning it in my own paradigm and starting to move myself towards the perspective of a designer.

Brian: It’s interesting you say that. I think learning by doing is a big part of the training that I do. I think there’s plenty of information out there to go read, but you learn a lot of this by trial and error and actually going out and doing the work. I will totally agree with you that there are definitely designers out there that are so relentlessly focused on the U in the word UX—which is actually part of the reason I don’t use that terminology too much—I tend to use human-centered design because it encompasses people who are affected by the products that we’re making that maybe aren’t even in the room. We’re talking about the stakeholders who are paying for it. We’re talking about the end-users, of course, as well, but it’s a broader umbrella of humanity that we’re thinking about.

And a great way to develop that is, for example, learning how to code, because then you understand what you don’t know about technology. You can start to understand how big of an ask you’re making when you want this dashboard changed so that it updates in real time. You may not know how to do it, but if you’ve done any work with data, or basic coding, or whatever, you start to understand the scope and the level of difficulty there, as well as being able to telegraph some of those kinds of requirements to a technical person. And so, now you’re starting to connect and you’re building a bridge between your technical counterpart and your quote, “Agenda,” for the user that you have in mind. The more we can all touch each other’s worlds and get involved, the more you start to really understand what you don’t know [laugh], you know?

Phil: That’s beautifully put. And I love the way you frame technical skills as learnable skills, right? Go and learn some programming. I’d say the perspective of the book that Noelia and I have written is that empathy—and cognitive empathy particularly—is also a learnable skill. There are concrete, real things you can practice and do to improve those skills.

Somebody from the design world who I know well, a good friend of mine called Lisa Talia Moretti, is a digital sociologist. So, she’s very interested in studying these points of conflict: the use of signs and symbols, the sociology of people who interact with that digital space, for example. But bringing it back to the book, we give three high-level strategies for that kind of learning process. The first is listening. That means not just waiting your turn to speak.

Then we break down what it is to listen. What kind of words you’re looking for. You talked about not liking user experience and thinking about human-centered design. That’s a language difference. That’s important to you and why you’re doing it.

It could be that we talk about dashboards, it could be talking about reports. We could be talking about departments versus teams, we could be talking about APIs versus real-time tools. All these words have meaning and they matter to people in different ways. So, plugging that into the model of how people learn how they do things and how they relate to the world is part of that structured listening.

Second one is questioning. When it is your time to speak, it’s better, often, to ask a question than it is to make a statement. When you start to listen and use the right words, your questions change in structure. “What do you mean when you say human-centered design? Why are you making that change?” “When you say dashboard, what does that really mean to you? Wouldn’t you rather have a report because it’s static in this way?” Those kinds of questions.

And then the third one, I think, really relates strongly to the point you’re making about coding. Failure is a tool. You need to conduct experiments in understanding. If you have trust, if I was asking you about human-centered design, I would say things that are wrong and you would correct me. And quite often, that is a much greater source of learning than asking questions where you’re trying to be right.

And particularly in technical fields, you get a whole bunch more out of people if you say the wrong thing, if you fail in front of them, because they have a desire to correct and help. People with technical skills are like that because it’s that kind of craftsperson life: “Oh, no, you don’t cut it like that. You cut it like this.” When you go into a kitchen, if you’re cutting vegetables wrong, you will be corrected, because it’s safer, it’s the better way to do it. If you’re doing some engineering, if you’re working on a vehicle or you’re doing some programming, people will correct you. So, doing the wrong thing can be a massively valuable source of information. But you need to have trust to be able to fail, to get that information.

Brian: I want to jump back to something you just said. You used this good cooking analogy, which is really good: you’re talking about knife skills. The final soup we’re making, the dish and the enjoyment of that dish, is kind of the downstream success metric. And I think you said, if I’m trying to adopt these methods in my work and I ask, “How do we know if I’m doing it right?” the answer was: does the soup taste good? Do people enjoy the soup that you’ve created?

In this case, you said we can actually evaluate at the chopping stage because there are things we can see there. So, I guess, in the design phase, if I’m starting to use the techniques that are in the book, who’s going to judge my chopping in that way? Or do I have to wait for the soup, and then will I be able to connect the dots back to, like, “Well, the soup was very salty and people, I don’t know, they just didn’t—they said it was just kind of missing something.” Nice, vague, gray design feedback. “The dashboard, just… I wish it had better visualizations, you know?” You get that kind of feedback.

It’s like, well, where in my process did I go wrong? Can you help me fill that in? Is there anything we can do? Because you said there’s practical ways to practice this, so that suggests to me that one could identify areas of improvement while you’re doing the work. Can you help me know those, or see those?

Phil: If we’re talking about a meal as a commercial exercise, you’re talking about a restaurant or a cafe or something like that. So, everybody’s got front-of-house and back-of-house, and better customer happiness means more revenue. Revenue goes up. Good. And your question of how you connect that back to cutting skills is really indicative of the kind of work that’s involved in cognitive empathy.

Because you have to work back and say, “Okay, the soup. Customers liked the soup this week and they didn’t like the soup last week, because we made more money this week.” Okay, different chefs were on: Scary Chef and Friendly Chef. Let’s say that Friendly Chef makes better soup than Scary Chef. Why is that? What is it that I need to get into in that kitchen to understand?

And it could be that Friendly Chef cuts the onions smaller, and there’s a technique for doing that, whereas Scary Chef doesn’t think the onions matter so much and cuts them bigger. And you find out that the size of the onions matters, and that means the way that they’re cut matters. But you’re tracing through the people, not just looking at firing one of the chefs. Because it could be Scary Chef goes, “Oh no, I can cut like that. I didn’t think it mattered. Show me where it matters.” “Well, we made more money when the onions were cut like this.”

Quite often, that would be too simple an example. There’s usually a whole bunch of factors, but the fact that we can trace back from the flavor of the soup to the way that the onions are cut is important, because people made those choices in the way they applied their skills. Let me jump to a more technical example. Working with artists, I had to manage a printer. They would work on a screen and they would print things out.

And the wonderful gray feedback, such as this, was, “It was way more of an aubergine purple on the screen. It’s looking a lot more plummy now.” So, me as a technical person: aubergine versus plum? Not sure what I can do with that feedback, thank you. But then you realize, okay, what’s the technical thing that’s going on here? A screen is projected light; the light shines out at you. Once it’s printed, it’s reflected light. Entirely different ways of dealing with light.

So, the reason the printer didn’t work on certain colors was the lighting, the review space, all of those kinds of things. They had invested in a very high-quality printer, but hadn’t invested in a document review space where the lighting suited that different experience of color. So, there are some highly technical things you can do there with lighting and all those kinds of things, but also, the artist has to understand that they can change what they do with the projected light on the screen to get a different outcome in the printing. It might not look the same, but there’s this huge assumption that it always just is: “Well, it’s an aubergine purple.”

And I think whether you’re talking about cutting onions, or printing, or the experience of a set of data to help drive sales decisions, that kind of empathetic process, where you think through what you’ve described as human-centered design and those design techniques, and what I talk about as the cognitive empathy skills of unpacking the paradigm and listening, that’s the set of skills we’re talking about practicing. Which is why in the book, we recommend teams go to art museums and discuss pictures in relation to their technical work or their data work, just as much as designers learn programming and programmers learn a bit about design.
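[Editor’s note: Phil’s projected-versus-reflected-light point has a concrete technical footing. Screens mix emitted red, green, and blue light (additive RGB), while printers mix inks that absorb light (subtractive CMYK), so the two are different descriptions of a color. A minimal, hypothetical Python sketch of the textbook naive RGB-to-CMYK conversion, not anything from the episode or the book, shows the translation between the two models:]

```python
def rgb_to_cmyk(r, g, b):
    """Naive conversion from additive RGB (0-255, emitted light)
    to subtractive CMYK (0-1, ink coverage). Real print workflows
    use ICC color profiles instead; this is only illustrative."""
    if (r, g, b) == (0, 0, 0):
        # Pure black on screen maps to pure black ink (K only).
        return (0.0, 0.0, 0.0, 1.0)
    # Invert each channel: ink absorbs the light a screen would emit.
    c = 1 - r / 255
    m = 1 - g / 255
    y = 1 - b / 255
    # Pull the common darkness out into the black (K) channel.
    k = min(c, m, y)
    return tuple(round((x - k) / (1 - k), 3) for x in (c, m, y)) + (round(k, 3),)

# A dark "aubergine" purple on screen becomes a magenta-leaning ink mix.
print(rgb_to_cmyk(97, 64, 81))
```

Even this simple inversion makes Phil’s point: the ink mix that approximates an on-screen purple depends on subtracting light rather than adding it, and real-world factors like paper and review-room lighting shift the result further.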

Brian: The book’s called Data: A Guide to Humans. Phil, it’s been great to talk with you. I want to know something. Are you a beard model? You have—I’m looking—I know our audience is only listening on audio—you have this magnificent beard. Tell me about this beard. Because I saw it says “model” and you have the word “music” in your Twitter profile. Are you a beard model?

Phil: I have done some modeling, but I say that with a slightly amused face, because I’ve had a beard for a long time, and I modeled rosettes for a Japanese fashion brand. So, it was my beard as context for a rosette, which was having a particular boom in Japan at one point. But it’s a Covid beard that kind of stuck, because looking like a weird wizard is sometimes helpful. Especially when you make strange electronic music. So [laugh]—

Brian: Excellent. Excellent.

Phil: —people can find my music online if they so wish [laugh].

Brian: The handle is @codebeard, am I correct? Just as it sounds?

Phil: That’s correct.

Brian: C-O-D-E-B-E-A-R-D on Twitter. Where’s the best place to get the book that helps you the most? Where can people get that?

Phil: It’s available in all good bookshops and the evil ones.

Brian: What’s a bookshop?

Phil: I, I—like a website, but they have the stock right there that you can look at and touch.

Brian: [laugh]. Well, I appreciate that.

Phil: “What’s a bookshop?” I mean [laugh]—

Brian: [laugh].

Phil: Haven’t you noticed that ebooks didn’t kill physical books?

Brian: Yeah [laugh].

Phil: I mean now [unintelligible 00:38:17] physical books. You should go to a bookshop, right? It’s a wonderful experience. What you should do, you should go and find weird ’60s science fiction. That’s my mission to you. Go and find a science fiction book from the ’60s that you’ll buy for—you’re in the States, right, so, like, 50 cents—

Brian: Yeah.

Phil: Pulp science fiction from the ’60s.

Brian: Okay.

Phil: It will give you a whole different perspective on the world.

Brian: You don’t know anything about my music, but we’ll maybe talk about that another time because you m—

Phil: [laugh].

Brian: But, we can probably go on for a long time. But so, this has been a wonderful conversation. I really appreciate you coming on here. Again, Phil Harvey, please check out his book Data: A Guide to Humans. Is there someplace people can get in touch? Is Twitter the best place if they wanted to reach out?

Phil: Twitter is good. LinkedIn is good. I think I’m on Mastodon now, too, @codebeard again.

Brian: Excellent. We’ll put those links in the show notes. And thank you again for coming on and talking about this topic with me.

Phil: It’s been fun. Thank you.
