044 – The Roles of Product and Design when “Competing in the Age of AI” with HBS Professor and Author Karim Lakhani


If there’s one thing that strikes fear into the heart of every business executive, it’s having your company become the next Blockbuster or Neiman Marcus — that is, ignoring change, and getting wiped out by digital competitors.

 

In this episode, I dove into the changing business landscape with Karim Lakhani, a professor at Harvard Business School and co-author of the new book Competing in the Age of AI: When Algorithms and Networks Run the World, which he wrote with his friend and colleague at HBS, Marco Iansiti.

 

We discuss how AI, machine learning, and digital operating models are changing business architecture and disrupting traditional business models. I also pressed Karim to go a bit deeper on how, and whether, he thinks product mindset and design factor into the success of AI in today’s businesses. We also go off on a fun tangent about the music industry, which just might have to be a future episode! In any case, I highly recommend the book. It’s particularly practical for those of you working in organizations that are not digital natives and want to hear how the featured companies in the book are setting themselves apart by leveraging data and AI in customer-facing products and in internal applications and operations.

 

Our conversation covers:

  • Karim’s new book, Competing in the Age of AI: When Algorithms and Networks Run the World, co-authored with Marco Iansiti.
  • How digital operating models are colliding with traditional product-oriented businesses, and the impact this is having on today’s organizations.
  • The critical role of data product management that is frequently missing when companies try to leverage AI.
  • Karim’s thoughts on ethics in AI and machine learning systems, and how they need to be baked into business and engineering.
  • The similarity Karim sees between COVID-19 and AI.
  • The role of design, particularly in human-in-the-loop systems, and how companies need to consider the human experience in applications of AI that augment decision-making vs. automate it.
  • How Karim sees the ability to adapt as critical to business survival in the age of AI.

 

Resources and Links

 

 

Quotes from Today’s Episode

 

“Our thesis in the book is that a new type of an organization is emerging, which has eliminated bottlenecks in old processes.” – Karim

 

“Digital operating models have exponential scaling properties, in terms of the value they generate, versus traditional companies that have value curves that basically flatten out, and have fixed capacity. Over time, these digital operating models collide with these traditional product models, win over customers, and gather huge amounts of market share….” – Karim

 

“This whole question about human-in-the-loop is important, and it’s not going to go away, but we need to start thinking about, well, how good are the humans, anyway?” – Karim

 

“Somebody once said, ‘Ethics defines the boundaries of what you care about.’ And I think that’s a really important question…” – Brian

 

“Non-digital natives worry about these tech companies coming around and eating them up, and I can’t help but wonder ‘why aren’t you also copying the way they design and build software?’” – Brian

 

“…These established companies have a tough time with the change process.” – Karim

 

 

 

Transcript

Brian: Welcome back, everybody. This is Brian O’Neill. This is Experiencing Data and I’m happy to have Karim Lakhani here from Harvard Business School, and you have a new book. Tell us about your book.

 

Karim: Yeah, great to be here, Brian. Yeah, the book is called Competing in the Age of AI: When Algorithms and Networks Run the World. This is co-authored with my good friend and colleague here at HBS, Marco Iansiti. And the book is really trying to make accessible this AI revolution that is going on in the tech world—but, as we argue in the book, it really applies to all types of companies. And the book sort of argues that relatively simple AI—or weak AI—is radically changing business architecture.

 

So, both business models, how companies create value and how they capture value; and operating models, how they achieve scale, how they serve lots of customers, how they achieve scope—which is how they provide more and more services and products to their customers—and how they learn and improve. And our thesis in the book is that a new type of an organization is emerging, which has eliminated bottlenecks in the old processes, and that the future of many organizations is to emulate what many of the AI-[first] companies are doing today.

 

Brian: Yeah, thank you for that. And just for listeners who may not know Karim’s work: in addition to teaching at Harvard Business School, you also co-founded the Harvard Business Analytics Program, so very relevant to this audience. And while I would characterize this as more of a business book—it’s not a technical book on data science—you know your stuff, and you’re able to really relate the technical aspects to business. That was one thing I got out of the book. I do want to ask you about analytics a little later, but jumping into the book portion here: I was curious, what has surprised you since you put the book out there? Have you had any unexpected readers, or unexpected comments, or anything you might feel like sharing?

 

Karim: Oh, yeah, sure. So, two things. One, I put the book out there, and I let my form—I’ve been at HBS for 15 years. So, I let all of my former students in the MBA program know, “Hey, this book is coming out. You might want to buy it and read it. [laughs]. I can’t assign it to you anymore, but you should definitely still read it.”

 

And one of them wrote back—he’s in digital at a large consumer company—and he goes, in this cryptic email, the subject line says, “About your book—” I’m like, “Oh, shoot. This guy doesn’t like the book.” And he goes, “It looks like you wrote the book just for me, and the challenges, and issues I’m facing today in my company.” And I was like, “Okay. Mission accomplished. At least my former students found it relevant.” The second thing in the book—and I’m sure we’ll talk about this—we make this argument that digital operating models are colliding with traditional product businesses.

 

And the digital operating models have exponential scaling properties in terms of the value they generate, versus traditional companies, which have basically what we call a concave function. They have value curves that flatten out, with fixed capacity. And over time, these digital operating models collide with these traditional product models, win over customers, and gather huge amounts of market share and market value. This notion of collision has come alive in today’s crazy COVID world, too.

 

So, the argument we make in the book is, “Look, traditional companies don’t see this happening, respond too late, and are really suffering a ton because of these collisions.” And guess what? The same thing happened with COVID. COVID is a disease with an exponential growth process, and it’s colliding with hospital capacity, healthcare capacity, that is fixed. When we talk about flattening the curve, it’s all about trying to avoid this collision. The whole world basically faced an exponentially rising system that we didn’t understand, we ignored, we wished away—especially in Europe and in the Americas—and now we’re paying the price for that. The same thing that we talked about in our book about digital systems is happening with COVID. So, in many ways, that was an unwelcome realization: what we saw happening in the business world happened to the rest of the world as well.
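Karim’s collision argument can be sketched with toy numbers (an illustration of the shape of the curves only; none of these figures come from the book): an incumbent’s concave value curve flattens toward a fixed capacity, while a digital model compounds from a small base until the curves cross.

```python
# Toy sketch of the "collision" between a concave (traditional) value
# curve and an exponential (digital) one. All parameters are made up.
import math

def traditional_value(t, base=10.0, capacity=100.0, rate=0.5):
    # Incumbent starts with real value but flattens toward fixed capacity.
    return base + capacity * (1 - math.exp(-rate * t))

def digital_value(t, start=1.0, growth=0.35):
    # Digital model: tiny at first, compounding a small growth rate.
    return start * math.exp(growth * t)

# Find the first period where the digital model overtakes the incumbent.
for t in range(30):
    if digital_value(t) > traditional_value(t):
        print(f"collision at t={t}")  # → collision at t=14
        break
```

The exact crossover point depends entirely on the made-up parameters; the point is the shape: the incumbent’s curve is far ahead for years, then gets passed abruptly, which is why, as Karim says, traditional companies tend to respond too late.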

 

Brian: Yeah, yeah. It’s challenging times, for sure. It reminds me a little bit of what you talked about in the book with the echo chamber and the reinforcement of certain types of political messaging. We talked about the anti-vax movement and how the network system amplifies a certain message over and over to certain groups of people, so they start hearing the same types of messages again and again. What you just said there about COVID-19 and exponential growth reminded me of that part of your book.

 

Karim: Yeah, absolutely. In the book—actually, the toughest chapter to write was the ethics chapter, because we’re business school professors. Of course, we want to be ethical, but we don’t study ethics as an object of scholarship. There are other experts in the philosophy department, or in the law school, and so forth, who think about that. But as we were writing the book, we felt it really important to address these worries about how we think about the ethics of digital scale, scope, and learning.

 

Like, if you build an operating model that is going to scale exponentially, then how do you actually think through the ethical questions as well? And that you can’t have an ethics department after the fact, right? You need ethics to be built into both the engineering teams and the management teams, and how we design our algorithms, how our systems work, have to be ethically designed. I teach in the technology and operations management course here at HBS for MBAs, and we have a case on the Toyota Production System and one of the questions you ask is, “Where is the quality department in Toyota?” Well, guess what? There is no quality department in Toyota because quality is built right in. And I think the same thing is going to be needed around AI and machine learning systems in companies, too. That ethics will have to be an engineering consideration and a business consideration from day one; it can’t be ex post at all.

 

Brian: So, this ethics—I’m glad it’s there. As I was reading the book, I was waiting for something about this to be stated. My practice is human-centered design, and trying to help the data community use human-centered design, which I feel is a more micro approach to ethics because we’re putting people at the center of our work, and it forces us to think about the human experience with these technologies. And so I’m curious—I actually contributed an essay to a new book coming out on ethics and AI, a bunch of essays from different writers—and even though I’m a designer, I’m also not an ethics scholar or anything along those lines. Did you feel a little bit of imposter syndrome writing about that? I totally agree with you that this belongs to everyone working on these systems—it isn’t a department; it’s not a checklist item as you go through your delivery sequence. But did you feel a little bit out of place writing it? You said it was difficult, and I felt a little bit that way.

 

Karim: Same here. That was a real difficulty because, again, we take scholarship seriously. You want to defer to the experts in that domain. But I think a well-rounded data science education—for executives, leaders, managers, engineers, and technologists—will have to include ethics head-on. We can no longer shy away from it; you have to address it upfront. The example I often give in these situations is about bias.

 

All human beings are flawed, and we have biases. And in the analog systems, the traditional systems, those biases can be limited because we limit our footprint naturally. So, if you have a bank manager who’s biased and doesn’t give loans to a particular class of people, that’s bad, but it’s still limited. If we now train off many of these bank managers, and they’re all biased, then all of a sudden our data are biased and our algorithms are biased, and we’re going to be discriminating at scale. And this is exactly the problem Apple faced when the Apple Card got released: they realized that the algorithms they and Goldman Sachs were working with were discriminating against women, giving them less credit in the same household.

 

And they said, “Well, we don’t have gender in our models.” Well, duh—you don’t need gender in your models when you have other variables that are highly correlated with gender. So, you’ve got to understand both the science behind this and what the consequences of these things are going to be. So, we finally got over our reluctance, and we said, “Look, we’re just going to lay out the issues.” And we give a ton of mini case examples in that chapter, walking through issues around transparency, bias, selective amplification, and echo chambers, and say, “Look, the data scientists, the managers, the executives can’t just say, ‘Oh, we didn’t realize what we were doing.’” It’s like, “No. You have to really think through what these implications are.” And so I’m still uncomfortable about it, and I read a ton around it, but I think this is no longer something that should be left to the specialists. We have to make this mainstream in this community.

 

Brian: Yeah. If there are any ethicists listening to this, you also just dropped probably an amazing book title: “Discrimination at Scale.” I just want to throw that out there if you’re looking for a good title for your ethics book. That’s a good one. [laughs]. I’m probably going to ask you a little bit more about the ethics later, but my next question: you co-wrote this book with Marco, and I’m curious, where did you two disagree during the book? Did you ever have competing opinions about what would finally go into the text?

 

Karim: Yeah. So, you know, look, I’ve known Marco a long time—Marco was a friend, we’ve authored a bunch of articles together before, and he’s actually the one who recruited me to come to HBS. As I was graduating from MIT with my Ph.D., I was actually headed west to Berkeley, and Marco convinced me to stay in Boston. So, we have a fairly tight relationship. And we’ve been exploring these topics together in a course we started seven or eight years ago now, on digital innovation and transformation. We’ve written a bunch of cases—so our perspective was already pre-aligned. It wasn’t as if this was the first time we were co-authoring.

 

But then there’s still this iterative process where somebody puts pen to paper and gives it to the other person to react to, and then one of us throws up [laughs] on it. And then it’s this mutual throw-up process until it gets better. But the trick to our book was really our editor, Amy Bernstein. Amy Bernstein is editor of HBR, and she’s edited a bunch of our papers in HBR. And we went with HBR Press so we could get Amy to work with us.

 

And Amy really is the midwife of the book. She would be the skeptical reader saying, “Really, guys? Is that what you really mean? Give me a better exam—” and the iterations between Marco, myself, and Amy were critical to this book coming together. So, yeah, the mutual throw-up process was definitely part of it. And you learn to enjoy that process, and you build a thick skin. I mean, as scholars, we get so used to peer review, and peer review typically is, “I hate all your ideas, I hate your writing. Why are you even trying this?” on the first go-around—and most stuff gets rejected on the first go-around anyway. But you build a thick skin, and the iterations on writing, smoothing things out, and really getting an outside perspective, as Amy provided to us, were critical to the book coming together.

 

Brian: Nice. Nice. Yeah, editors, thumbs up—

 

Karim: Yeah. Absolutely. Absolutely.

 

Brian:—[laughs] for their contributions. One of the things, when I was reading the text, especially at the beginning—I’m not going to say you were blessing full automation, but you were looking at AI as a tool for complete automation. At least in the circles I hang out in, there’s this human-in-the-loop scenario, where we’re using AI to augment human decision-making, versus full automation. You cite Netflix and talk heavily about Netflix there, and I couldn’t help but think, Netflix’s risk is really low—

 

Karim: Yes.

 

Brian:—so I think they can automate everything because recommending a bad movie isn’t really going to hurt anybody. Do you have a position on this? And was there an intention to kind of—were you really trying to show the aspirational place that companies could get to with full automation? Because I feel like that’s just—there’s a lot of companies struggling to just get any model out the door, let alone one that’s fully automated. It sounded like such a reach, a little bit, from just what’s happening on the ground. I don’t know. Can you talk a little bit about this?

 

Karim: Absolutely. Absolutely, and I think about this a lot. So, I’m going to sort of start with this: many of us today use products and services that are already fully automated. When you do a search on Google, there aren’t a million dwarfs in the mines of Silicon Valley finding the search results for us and magically giving us those answers. There are no search goblins that Google has access to—or people.

 

This is literally in a fully automated way. There’s no way Google can provide its services to us today, with people doing those services. The people create the software and the algorithms, but the algorithms do all the work. And that’s both in the search side, but also in the ad side. They have salespeople that drive contracts between Google and the media organizations, or advertisers, or whatever, but the actual creation of the match between your search term and the ad is all fully automated.

 

So, Google is one example. And same thing with Facebook, right? And then one of the examples we have in the book is Ant Financial, a Chinese-based juggernaut, a financial services company spun out of Alibaba. They serve 1.2 billion customers, and I think by now they maybe have 20,000 employees or so. At the time when we were writing the book, they had 15,000 employees—or 10,000 employees—with maybe 7 or 50 million customers or something like that. And they have a 3-1-0 process for opening up a new account: it takes you three minutes to fill out the application, one second for approval, and zero human intervention. There’s no way you can run a company with 1.2 billion customers and have humans do much of the activities.

 

Again, it has to be algorithms; it can’t be people. So, we pushed our thinking to say, what does a fully automated company look like? There’s room for lots of people, but the people aren’t the bottlenecks. These aren’t manual processes driving it; the work is actually done in a fully automated way. But I think you raise a good question. I think this human-in-the-loop conversation is an important one for us.

 

So, like autonomous driving. Do you want a human in the loop of autonomous driving? I don’t know, because it’s kind of like, I’m going to be checked out most of the time until there’s an emergency, and then all of a sudden my reactions have to be superhumanly fast to be able to avoid that accident. I don’t know. So, even there, I think the human-in-the-loop processes could, in fact, be worse than otherwise.

 

But more generally, when I come to this conversation with people, take, for example, radiology, or X-ray diagnosis. When would you want a robo-doctor to give a diagnosis versus a human doctor? You can think through various ways as to why this would be good or bad, and so forth, but what I want to remind people is that we should go back to the confusion matrix, which is standard in machine learning. It’s basically a two-by-two: on one axis you have the detected state—is the prediction, the decision, positive or negative—and on the other, the actual state, positive or negative. And then you start to construct an analysis around false positives and false negatives.

 

So, imagine the bank loan situation, or a cancer diagnosis situation: false positives and false negatives. And then you want to have a conversation that says, what is the cost—the real cost, not just dollars and cents—of the machines making a false positive or a false negative? We can extract that. From there, we can get into sensitivity, and specificity, and so forth, but let’s just stick with false positives and false negatives. And then you go, “Okay, what’s the cost of doing that? What’s the actual ratio of false positives and false negatives for the machines?” And then you build something similar for the humans, for the experts. You want the counterfactual analysis, and then you can make a decision: in what situations do we need the machines to be better than humans? In what situations do we want machines to be equivalent to humans? And in what situations are we okay when the machines are worse than humans?

 

So, in your example, if Netflix gives me a movie recommendation, it’s okay that it’s a crappy one. Now, if it continuously does that to me, I might get sick of Netflix and not go there anymore, but in the moment, it’s okay. And so if the false positive rates are high in Netflix, that might be okay. But if you are, let’s say, diagnosing COVID, you don’t want a high false-negative rate because that gives you a false impression that you don’t have the disease and that you’re not infectious. And then you’re going to go out there and infect, like, 100 people because you’re going to go hang out at the beach and meet, and party, and that kind of stuff. So, we have to be contextual.

 

So, this whole question about human-in-the-loop I think is important, and it’s not going to go away, but we need to start thinking about, “Well, how good are the humans, anyway?” So, imagine pricing, or even fashion assortments. How good are the humans in terms of false-positive and false-negative rates, and what are the consequences when those errors happen? And do we have the systems in place? That’s how we teach this to our executives in our analytics program. We go through it—look, you need to have a proper conversation about the risks of false positives and false negatives in context. Build a counterfactual of what the existing systems do with people, and then have the discussion about how the machines might help you along the way.
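The analysis Karim describes—weigh false positives and false negatives by their real-world costs, then compare machine and human on the same footing—can be sketched in a few lines. All the rates and costs below are hypothetical, purely to show the shape of the comparison:

```python
# Compare a machine and a human expert on the same decision task by the
# expected cost of their errors. Every number here is made up.

def expected_cost(fp_rate, fn_rate, cost_fp, cost_fn, pos_prev=0.5):
    """Expected cost per decision, given error rates and per-error costs.

    False negatives can only occur on actual positives (prevalence
    pos_prev); false positives only on actual negatives.
    """
    return pos_prev * fn_rate * cost_fn + (1 - pos_prev) * fp_rate * cost_fp

# Low-stakes case (movie recommendation): both error types are cheap.
machine = expected_cost(fp_rate=0.20, fn_rate=0.10, cost_fp=1, cost_fn=1)
human   = expected_cost(fp_rate=0.10, fn_rate=0.05, cost_fp=1, cost_fn=1)

# High-stakes case (disease screening): a miss is far costlier than a
# false alarm, and actual positives are rare.
machine_dx = expected_cost(0.08, 0.02, cost_fp=10, cost_fn=1000, pos_prev=0.05)
human_dx   = expected_cost(0.03, 0.10, cost_fp=10, cost_fn=1000, pos_prev=0.05)

print(machine_dx < human_dx)  # here the machine's lower miss rate wins
```

With these made-up numbers, the human beats the machine in the low-stakes case, yet the machine wins the screening case despite a worse false-positive rate, because misses dominate the cost. That is Karim’s point about being contextual: the same error rates can be acceptable or disastrous depending on the costs you plug in.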

 

Brian: Yeah, I mean, I agree with you. It’s not even desirable for search, or Netflix, or whatever. I don’t want to wait 15 minutes for a human to say, “Try these movies.” No, those suck, too. That’s not a great—

 

Karim: But, Brian, this was so interesting because that’s exactly why Blockbuster died because they felt like everybody wanted to go to the movie store, buy popcorn, and talk to their local video nut who would tell you what movies to watch.

 

Brian: I think there’s two sides to that, though. I mean, my other career is as a musician, and I can tell you that a lot of really diehard music fans miss record stores. With Spotify providing automated recommendations, there’s something about that experience that is not the same as when you learn to trust a source. So, I guess I’m thinking of this more on community lines. Maybe another example: take a minority, or someone on the fringe—something about their profile or demographics goes into the loan decision. Is every bank going to be fully automated in its loan decision-making down the road, including a small local community bank? Maybe the community has 10,000 or 20,000 people in it, and the person who needs the loan is well known there—people can see what this person is trying to do—but oh, they had a felony, or they fell on bad times, and they’re kind of an exception. There’s a human element there that would say the bank wants to support this person, even if the risk is high. And I just wonder if we get to a place where there’s no recourse for that. Maybe the issue here is really escalation to level two: you get denied, and now you can appeal, and it goes through a human process or something like that. And maybe that’s a bad example. But—

 

Karim: Yeah, I mean, I think banks are notorious for doing the opposite. But that’s beside the point—

 

Brian: Well, yeah, in a sense.

 

Karim:—redlining in banks and that kind of stuff [crosstalk].

 

Brian: Yeah. So, what’s the training data, right? Because you need the right—

 

Karim: Yes. No, 100 percent. And that’s why we’re back to the ethics conversation. Those things matter, but at least we can be accountable and measure things, right? And look, I agree. I think the thing is, in all cases where technology comes in, there is an existential crisis that goes on. My favorite example is when photography got invented. When photography got invented, the European art scene was in crisis, because still life was the measure of how good an artist you were. And now, guess what? Any Joe could be a good still-life artist, because you could capture still life with a camera. And guess what? That crisis led to modernism and gave birth to Picasso. And I think the difference now is that this existential crisis is happening across fields: in music, in banking, in the arts, you name it. We start our book with the Next Rembrandt example as well.

 

And I think the thing is, how does curation now show up differently on Spotify, or through other media? Now, I love music as well—I’m not a musician—but I now have to resort to a bunch of blogs and a bunch of different people to figure out what new music to listen to, so that the algorithms in Spotify don’t bring me back to A Tribe Called Quest over and over again, whom I love. And so there are these alternatives. Now, is it the same as the guy you knew at the record store? Well, maybe he was a good guy for you, but maybe not for me. Maybe he was always recommending death metal over and over again. So, I think this creates opportunities for new forms of engagement, but those things take time. There’s a transition period while society figures out how we do this in a different way, or in a new way. And I think that’s important.

 

Look, one of my favorite examples of this is Peloton as a company. Think about the fitness experience and what SoulCycle did for spin, and how Peloton has a brand-new architecture: taking the same components that have always been out there—a studio, an instructor, digital—bringing a bike into your home, and then scaling it in a very different way. I tell you, I feel like I have a personal relationship with my instructors on Peloton. Maybe it’s because I follow them on Instagram, but I know these guys inside out and I love doing my classes with them. And it’s a community of people from all around the world that I interact with. I do this particular type of training on Peloton called Power Zone Training, and now all of a sudden I’ve got fellow athletes—I’d be overstating it, calling myself an athlete—fellow workouters [laughs] from around the country doing the same type of training with me, and we train together, and we encourage each other, and so forth. Before, I couldn’t even get my butt into the gym that’s across the street from me. So, I think there is a real opportunity for designers, for engineers, for entrepreneurs to take what these technologies offer you and do something brand new. Will it be the same as what was there before? Of course not. It’s going to change, but we want to take advantage of the new things and see how those experiences translate over.

 

Brian: Well, I did want to let you know, there is a soccer game every day of the week right on the Cumnock Turf Fields, right behind you. So, you have no excuse. [laughs]—

 

Karim: I know, but that’s the problem. My gym, the gym at HBS—Shad, our beautiful gym—literally is the next building over, and I’ve been a faculty member for 15 years, and I’ve been there maybe 50 times in 15 years. Occasionally I get inspired: “Okay, I’ve got to go work out.” But having the Peloton in my basement at home, and just having that structure, has been life-changing for me. But again, the whole point is that as these technologies get infused into our lives, there will be an adjustment period, and we hope that this adjustment period leads to better experiences. Will we miss the old things? Of course we will. That’s how human beings are. But I think we also think about what else it opens up for us. I think we can be positive about it.

 

Brian: Yeah, I’m not casting an opinion. I mean, I work in this space as well, but I feel like—I don’t know who said this, some ethicist said, “Ethics defines the boundaries of what you care about.” And I think that’s a really important question, and then sometimes I wonder whether or not that’s going to become, “Oh, well, we’ll do that in phase two,” because companies are struggling just to even get the basic technology out the door, and I can see it easily being like, “We’ll get to that later,” unless we get a New York Times front-page article, and then we’ll definitely spend some time. You know, it’s just—so—and maybe that also means, yeah, now there’s room for local record stores again because everything else is fully automated, and so now there’s an opportunity for those people.

 

Karim: I think what I would say, also, is that look, if you don’t adapt, you’re going to get killed. And that’s the thing people get stuck on. That’s what happened to Neiman Marcus. That’s what happened to Blockbuster, as an example. And the thing is, also, that consumers aren’t sentimental. I get the demise of the record store. I do. But a majority of customers don’t care. Or they’re happy with the new experience on Spotify.

 

Brian: I would agree.

 

Karim: Right? And also, even worse—I remember when Spotify came out, I think Bette Midler had complained about how low her royalties were from Spotify, even though there were millions of streams. And it’s like, guess what? Bette Midler fans didn’t care. They didn’t start writing out bigger checks or sending her money directly. They said, “This is the new world. Get used to it. And too bad for you that you didn’t negotiate better contracts with the record companies, who were keeping all those profits—but that’s a separate conversation.” But think about the shift away from music as a capital good, as personal capital expenses that we made: we bought LPs, we bought 8-tracks, we bought cassettes, we bought CDs, then we bought iTunes downloads, and then it shifted to streaming. In that model, we went from acquiring music goods to subscribing to them. I think consumers didn’t care one bit.

 

Brian: Yeah, you’re right, this is a longer conversation. I didn’t mean to make this into a music discussion, so—

 

Karim: No, no, no. But I love it. Music—I mean, just think about the impact platforms and AI and [unintelligible] are having on music. It’s exactly the right conversation for us to have because that’s radically changed the business.

 

Brian: Sure, I mean—but again, I think then you have to ask what you care about. There are arguments about this—Spotify does not have a great reputation with a lot of the music community, despite the fact that consumers really like it. The model of how royalties are paid out, for example. You said you like A Tribe Called Quest, is that what you said?

 

Karim: Yeah.

 

Brian: So, do you pay for Spotify?

 

Karim: Yeah.

 

Brian: So, you pay 10 bucks a month, or whatever?

 

Karim: Oh, we have a family subscription.

 

Brian: Okay, so whatever—let’s just pretend it was just you, 10 bucks a month. So, if you listen to A Tribe Called Quest 80 percent of the time, should they get $8 per month of your money, or should it be spread across the biggest song plays, the Taylor Swifts of the world or whoever it may be, even though you don’t like Taylor Swift? So—

 

Karim: I like Taylor Swift, too, but yes.

 

Brian: Right, but you see my point.

 

Karim: Yes.

 

Brian: It’s not weighted towards the artists that you listen to. And so now you’ve got a touring market that’s harder, you can’t make any money with streaming anymore, and direct-to-fan actually is a thing. The diehard fans do support artists through direct-to-fan purchases and stuff. But I think, again, what do you care about as a business if you’re going to move into full automation? And some of this isn’t really about AI so much, but, you know, anyhow—

 

Karim: No, no—

 

Brian:—so that’s my take on all that stuff.

 

Karim: No, let’s have another podcast about music.

 

Brian: Yeah, let’s do that. That would be fun.

 

Karim: There’s actually a great performer [unintelligible] Sherry, who has done a lot of thinking about it. I don’t know if you’ve seen her.

 

Brian: She’s awesome.

 

Karim: Yeah. So, yeah, I would—you should bring her on.

 

Brian: [laughs]. Totally.

 

Karim: And bring me on it because I love music; I have no skill at it—

 

Brian: That’s okay.

 

Karim:—but we can chat.

 

Brian: I want to jump back to something probably a little bit more relevant to our current audience, and I love that you brought this up. I think this is super missing right now. So, in my own work, I kind of have these two camps. I have my core, which is my old tech world: software companies that build commercial software products.

 

And then you have all the non-digital-native companies that are trying to work on their digital strategy. They’re at different places, whether it’s AI or otherwise. The product manager—this role does not exist in a lot of places. I call it the data product manager. There’s no hub, there’s no central person in a lot of these places, and I wonder if this is why study after study continually shows that the failure rate for analytics, BI, and AI is super high. And I feel like they don’t always know what problem they want to solve. No one is thinking about the end-to-end experience. The tools aren’t designed around what people want to do; they’re pushed onto employees. Talk about this product role—I was not expecting to see someone talk about that. I was like, “Yes, finally.” Where did this come from, that you think product management in a non-software company is important?

 

Karim: Yeah, look, I think… I mean, even this notion of a non-software company or a non-digital company—that’s like, which company is not?

 

Brian: I know what you—

 

Karim: I mean, again, in the current times, the Toast app is everywhere. And so now, all of a sudden, in the last few months, all the restaurants’ menus are online, we don’t have these crazy PDFs to look through, and I can now purchase stuff—restaurants that I couldn’t get into before, for whatever reason, I can now access through the Toast app or whatever other app, and so forth. But look, the argument in the book is, we can no longer relegate technology to the IT group, where it just sits in the basement. Technology is infused now in everything that we do, and companies will need to make choices about their strategies around this. And certainly, the generations that are coming forth—my daughter, who’s 16, and her friends, and the generations just before and after her—are natively digital.

 

They’re not going to put up with non-digital processes to interact with companies. And so, this notion of the product manager role, somebody who actually understands both the technology and the business, I think is crucially important, and who can take the end-to-end view. What does the consumer experience look like, and what do we need to do internally to be able to deliver on that? That role is not well understood outside of the tech business, and it’s crucial. And that requires, again, this intersection of technical skills and business skills.

 

And I think that’s why there are so many failed digital transformation efforts and so many failed AI efforts, or they just stay at the pilot stage. Because if you truly understand the customer experience, if you truly understand what customers want and how your digitally transformed services and products will deliver that, then you’re going to drive a ton of change inside the organization as well. And part of why so many companies fail at AI projects, data transformation projects, or digital transformation projects is because they just think, “I’m going to do Hadoop,” or, “I’m going to do Snowflake,” or whatever the latest buzzword is. And all of a sudden, I checked that box and it’s great. The reality is, that’s neither necessary nor sufficient for you to be successful in this space.

 

And you need human capital, you need people who can intersect both sides, but who have a customer orientation first and foremost, and who can galvanize the tech teams and the operations teams to be able to deliver on this new experience. But that requires organizational change, and I think the reason why all these pilots fail to go beyond [pods] is because the companies aren’t ready to do the organizational change needed to deliver on these things. The technology part, in many ways, is the easier part. It’s the org change part which is the harder part.

 

Brian: Are you tired of building data products, analytics solutions, or decision support applications that don’t get used or are undervalued? Do customers come to you with vague requests for AI and machine learning, only to change their mind about what they want after you show them your user interface, visualization, or application? Hey, it’s Brian here. If you’re a leader in data product management, data science, or analytics, and you’re tasked with creating simple, useful, and valuable data products and solutions, I’ve got something new for you and your staff. I’ve recently taken my instructor-led seminar and created a new self-guided video course out of that curriculum called Designing Human-Centered Data Products. If you’re a self-directed learner, you’ll quickly learn very applicable human-centered design techniques that you can start to use today to create useful, usable, and indispensable data products your customers will value. Each module in the curriculum also provides specific step-by-step instructions as well as user-experience strategies specific to data products that leverage machine learning and AI. This is not just another database or design thinking course. Instead, you’ll be learning how to go beyond the ink to tap into what your customers really need and want in the last mile, so that you can turn those needs into actionable user interfaces and compelling user experiences that they’ll actually use. To download the first module totally free, just visit designingforanalytics.com/thecourse.

 

Brian: One thing that always kind of surprises me is, you sometimes hear the non-digital natives worrying about a tech startup coming around and eating them up. And I wonder, “Well, why aren’t you copying the way they build digital services?” Typically, there’s a trifecta of product management, product design, and a technical lead: a software architect, an engineer, a data scientist, someone who represents what is possible. You need all three of those things to deliver good digital services, and if you only have one or two of them, the risk goes up of building something no one will use, whether it’s your employees who refuse to adopt this application or tool that you’re pushing on them, or it’s the end customers’ experience. And it kind of baffles me that when we hear about these failure rates, it’s like, “Well, why aren’t you copying how they’re doing it, and not just the technology stack?”

 

Because an API endpoint—“Oh, we’re using Snowflake and Hadoop or whatever”—it’s like, the customer doesn’t care whether it’s in the cloud, or whether your storage is on-prem or not. That doesn’t have anything to do with how they interface with the service, or whether they choose to use the recommendation from your model or go against it. I’ve heard this repeatedly about salespeople who don’t want to use the numbers provided by the model in their contracts: “I will negotiate the price myself.” And it’s because they weren’t involved in the process, the tools were not designed around the way salespeople want to do their job, and they weren’t at the table when it was made. These skills are not the same as building the model and deploying the DevOps and all the other technical stuff. I don’t know. Do you hear that, too? That’s—

 

Karim: Totally. I hear that all the time. And I think that’s because these established companies and organizations have a tough time with the change process. And so they don’t include the designers, the product managers, and the technical folks together. You have to basically build a different way, deploy a different way, and drive adoption a different way than what you’re used to doing. And sadly, I don’t think many companies, sort of, realize that, and then they struggle.

 

Brian: Yeah, I liked that you mentioned Azure Cloud, and I think they dogfooded it. The new leader came in and said, “I want you guys to use this tool,” and they realized just how hard it was to adopt it internally, and so they made it a mission to work on the usability of the service in order to drive adoption. Can you recount that a little bit better than I just did?

 

Karim: Yeah, more broadly, there is this notion of doing something for yourself first, and then creating a service around that. The classic example, of course, is AWS. Amazon had to build a scalable infrastructure for their data centers. They figured that out, and they said, “Oh, now that we’ve built this, can we offer it to other people as well?” So, your company as customer zero is really important. And I’ve seen the same thing happen with Adobe as they built out their Marketing Cloud and their Experience Cloud. We talk about this thing called the AI factory in the book, where we think that every company is going to need [this] AI factory. They had to build the AI factory for themselves and be the first customer, customer zero first. It’s like, “Can we do our own marketing this way? Have we perfected the process?” And once we’ve done that, then we can offer it to others.

 

And I think it’s the same thing in the Azure story. And this is old wisdom from Microsoft way back when: eat your own dog food. Michael Cusumano and David Yoffie discovered this in their work in the 90s about how Microsoft competed. Eating your own dog food is really important because if you’re not your own users, you’re not going to be able to satisfy customer needs. And HP talked about this when they were inventing oscilloscopes. They called it “the next bench”: your customer is the next person over, so make sure that they can actually use it. Again, I think the problem is that we don’t embrace the customer journey properly, and we take it for granted. We think we understand our customers, but we don’t really. And I think this Azure example of them basically saying, “If we can’t use the same services at scale in our own operations, then they’re not worth putting out there,” is the main lesson that all of us should be following. Put yourself in as the user and use it. Don’t go to focus groups. Don’t show me PowerPoint decks. Show me your user experience, then we can talk about it.

 

Brian: Yep. Yeah, I’m totally with you there. In the design community, sometimes we talk about how the ultimate design maturity is an organization that’s willing to hold back a release because the experience is not good enough. And it takes a long time to get to the point where someone says, “Yeah, the tech is there, the QA passed, but it’s way too hard to use. No one can get to the end of it. We can’t achieve the outcome we want. We built an output; we didn’t create an outcome yet, and so we need to go back and get that right.” I think that’s rare. [laughs].

 

Karim: Yeah, I agree.

 

Brian: I’ve been talking with Karim Lakhani here. He just wrote a great text—I guess not just wrote it—what, six months ago? Competing in the Age of AI. Is that [crosstalk]?

 

Karim: Yeah, came out January 7th, so actually it got written last summer. [laughs].

 

Brian: Yeah, yeah. [laughs].

 

Karim: Publishing still takes a long time, apparently. [laughs].

 

Brian: Yeah, yeah. [laughs]. I did want to transition to one last question here as we wrap up. You’re also the founder of the Harvard Business Analytics Program, and I kind of touched on this earlier: there’s this heavy failure rate in the analytics world with the tools, and applications, and dashboards, and data that organizations put out. I’m just curious, are you addressing that in the curriculum for the students? What change is being made for the people coming into the field so that we don’t continue this 10-year trend? From the studies I’ve read, at least, the numbers don’t really seem to get any better. Are you addressing that in a certain way?

 

Karim: Yeah. So, first, shameless plug: analytics.hbs.edu. That’s where you find out more about our program. And these aren’t youngsters who are taking our course. This is a fully online program. We call it the three shields program at Harvard: it’s the Harvard Business School, the Harvard School of Engineering and Applied Sciences—our engineering school—and also the Faculty of Arts and Sciences and the stats department. Now, a little bit of inside track for you guys about Harvard: shields are jealously guarded. Each of the schools at Harvard doesn’t want to give out its shield at all. I think only the president, Larry Bacow, has full access to all the shields; everybody else has to stick to their own shield. So, for us to be able to have a three shields program, where I [unintelligible] my own certificates that I gave out at graduation, is a pretty awesome accomplishment. In our marketing materials and in our PowerPoints we have the three shields. And I remind all of my Harvard colleagues that mine is a three shields program; there are a few two-shield programs, but most are one-shield programs.

 

Brian: [laughs], like a Michelin star for Harvard?

 

Karim: Exactly, exactly. So, how did this come about? What we realized is that there was a hunger amongst executives to learn both the technical stack—understand statistics, systems, and programming, and machine learning and AI; understand what the toolkit does and how it works—and also its implications for business. And I’ve always been, in my professional career, at the intersection of technology and business, and have just been lucky to have been on both sides.

 

But many people, both technical folks and business folks, were finding themselves in this thing—like our discussion about, duh, which company is not a software company anymore? That doesn’t make any sense. And so I think many folks have felt this need. And the alternatives were, okay, you’re going to go do a master’s degree in data science. Okay, that’s going to make you a data scientist; it won’t give you the business side. Or you can get an MBA, but an MBA gives you a general curriculum. You learn finance, and accounting, and HR, and all that kind of stuff, and then there’ll be a sprinkling of data science, and so forth.

 

So, we said we’re going to have this new offering aimed at mid-career executives, those who are seeing the analytics revolution show up at their doorsteps and are grappling with, “How do I do this?” on both the technical side and the business side. And we’re going to offer it as an online course where you get access to the best thinking and the best knowledge in those spaces, but you simultaneously get access to the tools and learn about them, both what they enable and where they’re limited. But then there are the business applications as well. So, the business stack includes strategy, and marketing, and operations, and leadership, and so we’ve really tried to build a very highly curated curriculum. It’s not a degree program; it’s a certificate program.

 

We do all of our instruction through both asynchronous sessions on our video platform and live teaching on Zoom, and the program has been active for two years. We’ve had about 1,000 people in the program now. Some people finish it in nine months, and some people finish it in 18 months or two years, depending on what’s going on in their lives. And surprisingly, we cover a broad portion of the economy. So, we have people coming from technology businesses, but also from healthcare, from consulting, from consumer products, from manufacturing, you name it.

 

And even for the technical folks who might have specialized a while back, the technical refreshers are good, and then they start to get the business side of things. Similarly, the business folks get the business side of things, and modern thinking, but also the technical side of things as well. And that’s the program, and it’s been one of my favorite things to have started here at Harvard—co-started here. David Parkes, who’s a former head of computer science at the engineering school, is a co-director and co-founder of the program with me. And it was fun to sit down with him and then design it with the rest of our faculty, to say, “What would a highly curated curriculum look like that addresses those issues and brings them together?”

 

Brian: Cool, I will definitely [link that up], as well as the book. Is Amazon the best place to grab the text?

 

Karim: Yeah, absolutely. Yeah.

 

Brian: Cool. Yeah, I’ll [crosstalk] that.

 

Karim: And we’re always hungry. I’ve become a five-star person. I care a lot about my reviews on Amazon now because, of course, their algorithms depend on the number of stars you get, so we’re always looking for reviews of the book on Amazon, and however many stars you want to give us.

 

Brian: Awesome. So, type in Competing in the Age of AI into Amazon, you’ll find that text there. Karim, how can people follow you? Are you on social media or LinkedIn? Where do you publish?

 

Karim: For sure. Yeah, I’m on LinkedIn as well as on Twitter, and I go back and forth between those two platforms. On Twitter, I’m @klakhani.

 

Brian: @klakhani, okay. All right. And I’ll find your LinkedIn and pop that in the [show notes] as well. So, thank you so much for coming on Experiencing Data and talking about your book, and AI, and digital.

 

Karim: Yeah. Thanks so much, Brian. I really enjoyed the conversation, and hope to do it again, around music. That’d be fun.

 

Brian: Well, let’s do it.

 

