Today I’m chatting with returning guest Tom Davenport, who is a Distinguished Professor at Babson College, a Visiting Professor at Oxford, a Research Fellow at MIT, and a Senior Advisor to Deloitte’s AI practice. He is also the author of three new books (!) on AI and in this episode, we’re discussing the role of product orientation in enterprise data science teams, the skills required, what he’s seeing in the wild in terms of teams adopting this approach, and the value it can create. Back in episode 26, Tom was a guest on my show and he gave the data science/analytics industry an approximate “2 out of 10” rating in terms of its ability to generate value with data. So, naturally, I asked him for an update on that rating, and he kindly obliged. How are you all doing? Listen in to find out!
Highlights / Skip to:
- Tom provides an updated rating (between 1-10) as to how well he thinks data science and analytics teams are doing these days at creating economic value (00:44)
- Why Tom believes that “motivation is not enough for data science work” (03:06)
- Tom provides his definition of what data products are and some opinions on other industry definitions (04:22)
- How Tom views the rise of taking a product approach to data roles and why data products must be tied to value (07:55)
- Tom explains why he feels top down executive support is needed to drive a product orientation (11:51)
- Brian and Tom discuss how they feel companies should prioritize true data products versus more informal AI efforts (16:26)
- The trends Tom sees in the companies and teams that are implementing a data product orientation (19:18)
- Brian and Tom discuss the models they typically see for data teams and their key components (23:18)
- Tom explains the value and necessity of data product management (34:49)
- Tom describes his three new books (39:00)
Quotes from Today’s Episode
- “Data science in general, I think has been focused heavily on motivation to fit lines and curves to data points, and that particular motivation certainly isn’t enough in that even if you create a good model that fits the data, it doesn’t mean at all that is going to produce any economic value.” – Tom Davenport (03:05)
- “If data scientists don’t worry about deployment, then they’re not going to be in their jobs for terribly long because they’re not providing any value to their organizations.” – Tom Davenport (13:25)
- “Product also means you got to market this thing if it’s going to be successful. You just can’t assume because it’s a brilliant algorithm with capturing a lot of area under the curve that it’s somehow going to be great for your company.” – Tom Davenport (19:04)
- “[PM is] a hard thing, even for people in non-technical roles, because product management has always been a sort of ‘minister without portfolio’ sort of job, and you know, influence without formal authority, where you are responsible for a lot of things happening, but the people don’t report to you, generally.” – Tom Davenport (22:03)
- “This collaboration between a human being making a decision and an AI system that might in some cases come up with a different decision but can’t explain itself, that’s a really tough thing to do [well].” – Tom Davenport (28:04)
- “This idea that we’re going to use externally-sourced systems for ML is not likely to succeed in many cases because, you know, those vendors didn’t work closely with everybody in your organization” – Tom Davenport (30:21)
- “I think it’s unlikely that [organizational gaps] are going to be successfully addressed by merging everybody together in one organization. I think that’s what product managers do is they try to address those gaps in the organization and develop a process that makes coordination at least possible, if not true, all the time.” – Tom Davenport (36:49)
Links Referenced
- Tom’s LinkedIn: https://www.linkedin.com/in/davenporttom/
- Tom’s Twitter: https://twitter.com/tdav
- All-in On AI by Thomas Davenport & Nitin Mittal, 2023
- Working With AI by Thomas Davenport & Stephen Miller, 2022
- Advanced Introduction to AI in Healthcare by Thomas Davenport, John Glaser, & Elizabeth Gardner, 2022
- Competing On Analytics by Thomas Davenport & Jeanne G. Harris, 2007
Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill and I have the man that needs no introduction. Tom Davenport’s back again. How are you, Tom?
Tom: Great. Happy to be here. Thanks for having me on a return engagement.
Brian: Yeah, yeah. It’s been a couple of years. I think it was, like, November 2019. I went back and was looking at that and it reminded me of something I wanted to ask you. So, if you remember on that episode, I asked you to score how well the data industry was doing at actually delivering value with analytics and data science, and I said, you know, “On a scale of one to ten, where, say, a decade earlier, we were at a one, where were we then at in 2019?”
And so, I’m going to ask you today, where are we now assuming a perfect ten score means data is a natural part of business decision-making, the tools and solutions, and models, like, naturally fit into the course of people’s work and what they’re doing, it’s just normal. Where are we at now do you think?
Tom: Remind me what I said before. I know it was controversially low.
Brian: I will not bias your answer by telling you. I’ll tell you after, though [laugh].
Tom: Maybe… a four. I think we’re getting slightly better, but got a long way to go.
Brian: That’s a pretty big jump because it was actually—you said two, maybe two-and-a-half, was your answer. So, that’s a pretty large jump actually, right, relatively [laugh]?
Tom: Well, you know, I think now we’re becoming aware of the fact that analytics—and AI—require a number of things to be successful. And I don’t know how pervasive the right practices, the good practices, are, but I think there’s certainly increasing awareness of those issues now. I think it’s all your podcast, Brian, that’s done it.
Brian: I’m sure it’s having a massive impact on what organizations are doing [laugh]. I mean, I don’t know, I’m just trying to make some noise over in my little corner here, I have the vuvuzela. I’m the only one that has one in the stadium, in the analytics stadium, so.
Brian: [laugh]. But I wanted to talk to you about this whole product orientation for analytics and AI—this conversation about data products, which does seem like it’s growing to me, data product management in particular. The first thing I wanted to ask you about is something you said when we had our little screening call ahead of this: you said, you know, motivation is not enough for data science work. What did you mean by that?
Tom: Data science in general, I think, has been focused heavily on motivation to fit lines and curves to data points, and that particular motivation certainly isn’t enough, in that even if, you know, you create a good model that fits the data, it doesn’t mean at all that it’s going to produce any economic value, and many, many more steps are necessary to do that. And I think for most—you hate to stereotype, but for most data scientists, I think that wasn’t why they got into data science, to do all those other things. They like fitting lines and curves to data and they’re good at it, and they think that should be enough. And if indeed some other things are necessary, that should be somebody else’s job.
Brian: I completely understand that. And so, I want to talk about this product orientation, as we mentioned earlier, but before we do that—I have this hobby of collecting definitions of what data products are. So, I was curious if you wanted to start out with that, just to frame this conversation. I know you have a different take on that than, like, McKinsey does. And I think you had mentioned you’d heard my recent definition, on episode 105. So, I want to ground this in what yours is before we talk about it. So, what are these data products? And is this a new thing, an old thing? What is this?
Tom: You know, I’ve been writing about them for several years now, so it’s a little bit old, and you’ve been talking about them for quite a while. But I think the derivation is these digital native companies. I remember going out to Silicon Valley, maybe five or seven or eight years ago, to places like LinkedIn and PayPal and so on, and talking to them about data products, and they knew exactly what they were and they were quite interested in them. And I still remember, to write this article—I knew the guy at Google who was kind of the chief statistician, Hal Varian is his name—and I sent him an email: “Hal, how many data products does Google have?” And he said, “I’m not sure exactly, but I’ll tell you how to find out the answer. Go to the Wikipedia page for Google, and on that page it has a list of products. Count them, and the number you come up with is equal to the number of data products.”
So, it’s kind of a smartass answer, but I think it was true that those kinds of companies out there realized that what they were creating was a set of products that relied on data and also had some capabilities to make sense of the data to make it useful. So, for me, data products are a set of data assets or datasets, combined with the analytics or AI tools to make them comprehensible, useful, understandable, valuable, in some way or other. And I think the product orientation means you know, we’re creating something that is usable, as is, more or less. I mean, you don’t have to do a lot of work to get the answer that you need from the data.
Brian: It seems to imply there’s a higher degree of usability and utility, there’s obvious value that comes in it. As I was reading your article, too, I got the sense that these are either designed for larger audiences or they are more mission-critical, they have a larger significance than, say, a self-service reporting application or something like this. There seemed to be a sense of scale associated with this. Would you say that’s fair as well?
Tom: Yeah, I think that’s true. I don’t really know exactly where the spreadsheet stops and the data product starts, but in the companies that I work with, data products are a semi-formal designation: we’re going to pool all the resources and capabilities necessary to create this thing, and we really care that we succeed. I’ve seen companies that say, “Once we designate something as a data product, we get it done and it becomes something that is used.” And that is, as you know, not the case for many kinds of data science efforts. The proportion of those that actually get deployed varies a lot, but I’ve, you know, seen it go from zero percent to close to a hundred percent. But that zero is closer to the average than the hundred is.
Brian: [laugh]. What is driving this? It’s not like somebody just wakes up and decides, “Oh, I should hire some data product managers. We should have a product approach to what we’re doing.” Is this, like, digital native people bleeding their way into legacy enterprises and dragging along, like, what I would call normal digital software management practices into that work? Or, “Oh, look at the way digital companies are doing it. Maybe we should start copying the methodologies there and not just the technology stacks and things like this.” Like, where’s it coming from? What’s driving it?
Tom: I think there’s some of that from the digital native companies. And there’s also—probably a little more common in the organizations I’ve spoken with lately—people coming out of the software industry. I think you had Manav Misra on your podcast, right, and he came, in part at least, out of the software industry, and software product management has been around for a while. And so, I think a number of people came from that space. And then there are a few [laugh] data science types who’ve seen the light and said, “We’re not deploying frequently enough. That means we’re not creating much economic value, and so we need something different.”
And I won’t say the person’s name because it’s supposed to be anonymous—an anonymous review; I’ll probably tell him—but this person had written a well-received and popular book on analytics, and he’s now writing a book on what I consider data products. He calls it something entirely different—I won’t give away the name, but it involves, sort of, the business of machine learning. And I said in my review, “Look, it’s great that you’re doing this, but ‘data product’ seems to be a more common moniker for this stuff than anything else, so you might want to consider that.” But you know, the article you referred to implicitly—a Harvard Business Review piece about why your company needs data product managers, or something like that—I wrote that with Randy Bean and a guy named Shail Jain from Accenture. And Accenture, apparently—I didn’t realize this until I talked with them—essentially thinks of data products the way McKinsey did in that other HBR article: that it’s just data. And they call the things that involve analytics, analytics products. I don’t think it necessarily matters all that much, except you have to be clear in your definition within an organization of, you know, what’s a data product, what’s an analytics product, what’s an AI product, et cetera, so you know you’re going after the right thing.
Brian: The thing that I’m hoping will stick is this idea that if it’s a product, then it intrinsically means it must have value. I mean, I guess you can have a bad one—a poor-selling product is a thing—but the idea that it’s a product suggests that there’s inherent value, that someone would exchange something of value for it. And to me, that’s the thing I want this audience to really hone in on: it has to be valuable, it has to be deployed, it has to be useful, and it has to be usable. And if it doesn’t meet those criteria, it’s probably not a product; it’s just an output.
Tom: Data by itself, I think, rarely would meet those criteria.
Brian: And this might be maybe biased by the fact that you’re primarily talking to executive leadership, but I’m curious if this is usually a top-down orientation change for large, non-digital enterprises, or is this something that’s kind of simmering in the trenches, where middle management’s kind of like, “We need to reorient this”? It needs a lot of support to do it. I mean, it’s hard enough to do product management work even in digital companies, especially large enterprise software companies. It’s a tough job because you don’t own those resources a lot of the time, so it’s a lot of corralling—I need design resources, I need engineering resources, and you need stakeholders, and you need to talk to sales and marketing—it’s very much corralling all this stuff. Tell me about that. Like, do you think this is coming top-down, and it needs to come that way, or is this something coming bottom-up in terms of orgs trying this work?
Tom: When it’s been successful, it’s mostly coming top-down because as you say, you can’t marshal all of those different capabilities and resources, unless you have some clout. I still don’t think there’s widespread awareness of the need for this from the data science community. I think since our last podcast—maybe, maybe I’d done this already, but I was column editor for something called the Harvard Data Science Review. It’s a new journal that Harvard has established to try to be more like MIT, I guess [laugh]. Interestingly, it’s even published by MIT Press.
But in any case, I was supposed to be the column editor for industrial and active learning or something like that—kind of the use of data science in business and industry. And so, I wrote a piece about deployment and the need for deployment, and I called it a critical aspect of data science. And I’m no longer an editor, I think in part because I kept complaining about this. Columns usually don’t get peer-reviewed, but these columns were peer-reviewed, and two out of the three peer reviewers, who come from a data science background, said, “What are you talking about? That’s not the responsibility of data scientists; leave them alone. They’re just supposed to create great models. And if it’s anybody’s job to worry about deployment, it’s not theirs.”
But I was trying to argue that, you know, if data scientists don’t worry about deployment, then they’re not going to be in their jobs for terribly [laugh] long because they’re not providing any value to their organizations. I was talking recently with a guy who heads an analytics program at a school in the Midwest, and he said, you know, “A related trend is this idea of translators—that you need translators in between business people and data science types.” He was thinking about creating a translator degree program. And I said, “You know, I don’t think that many companies are going to hire people just for the sake of translation. Data product management is a much broader objective and something that is more critical, and I think it can encompass the translation function.” So, I don’t know if you have thought about that issue or whether you agree or not.
Brian: I love the work the analytics translators do—at least the mission, as I understand what most of them are doing. I love the work, and I despise the title. I think it’s not good for them, and it’s not good for the org, because it sounds like something you do after the fact: it sounds like I take this one language and translate it into the language of data so the team can build something, and my role is to translate stuff. And that’s not really what a product manager is doing. The product manager has this ultimate responsibility for value delivery.
You’re not there just to translate—“They want an AI model, so I have to translate that into some machine-learning spec and what datasets are needed, and blah, blah, blah”—regardless of whether or not it’s good for them [laugh]. And I know that’s an oversimplification and that’s not what they’re all doing, but it sounds that way, and I feel like this title is just not helping anybody, even though I think there’s some valid work happening by people that are doing analytics translation work. So, I try not to say the title because I don’t want to further it; I don’t want it coming up in SEO searches. I just don’t want to encourage that title anymore. I’m really hoping this product management thing—which is a defined skill set, a role, with a specific kind of value associated with it—will catch on more.
Tom: I think I persuaded this guy who headed the analytics program at the university to switch to data product management. And translator—I sort of agree with you that it’s a noble concept, but it’s not nearly broad enough to encompass what needs to be done. And I don’t think it’s really caught on in that many companies. I did find one bank in Asia that said, “Okay, for every two data scientists, we’ll have one translator.” But there are just so many other things that need to be done as well to make data products successful.
Brian: Maybe this is selection bias because you only talk to the people that want to associate successes with it, but are you hearing anything about this not working well? I want to be open to the fact that this can be hard and maybe it’s not right for some organizations, I don’t know. But, “Yeah, we tried that, and here’s what happened. It didn’t work.” Or have you heard any negative stories associated with trying this or difficulties in implementing this culture or mindset?
Tom: The one issue that every company that embraces this concept has to deal with is what we were talking about before: what do you do with the more, sort of, informal efforts? You have a lot of analytics and AI groups who’ve sort of been, in part, service bureaus, taking on all comers. And you can’t really do that with data products, I think. You know, you have to be serious about what’s a data product and what isn’t, and you can only do so many of them at one time. And so, I think you still need to encourage people to do informal data science efforts—maybe citizen data science is what encompasses that.
And not every AI effort has to be a data product, but the ones that are important for the mission of the organization, I think, do need to be. And I haven’t heard anybody fail. I have heard of people who did not have a data product orientation but still were kind of intrinsically doing a lot of the things to succeed with it, and as such had a pretty high successful deployment percentage.
Brian: If it’s working and you’re creating value and delight, and your stakeholders are happy and users are happy, then you’re doing it right and it doesn’t matter what you call it, you know [laugh].
Tom: Yeah, I think I agree with you in that the product orientation is a great way to think about it, because that means you realize somebody is going to be using that product—you’re going to have a customer for it—and it has to have a good design. Once you put it out into the marketplace, you have to kind of monitor whether it’s being used or not. And if not, why not? And it has a life beyond the completion of the project; it needs to evolve over time if it’s going to be successful. So, I think there are a lot of nice attributes of the product concept.
Brian: You made a good point too, which is, you know, for software product teams, right—especially if you’re, like, at a startup—day one of the project is actually the day that it goes live, but it feels like that’s, like, the end of it if you’re doing the work. And it’s actually the beginning of it. And this orientation means you have to realize everything till now is, like, a negative day. And then day zero hits when it goes live. Now, start counting whether or not you did any work of value, which means you do have to pay attention, you probably have to iterate, you need to be getting in front of customers—the climate changes, whatever it may be. That’s actually the start; it’s not the finish. And I think the way a lot of these teams are set up, it’s framed as the end. Go to the next project now. And that’s a project orientation, right?
Tom: Yeah, exactly. Product also means you got to market this thing if it’s going to be successful. You just can’t assume because it’s a brilliant algorithm with capturing a lot of area under the curve that it’s somehow going to be great for your company.
Brian: Are there any trends, either in the leaders that you talk to who are implementing this or—if you’re aware of their teams on a more intimate level—in terms of what they feel like, what they look like? Like, “Oh, I can kind of smell that; that team smells like this because they’re doing this.” Any attributes or just characteristics you’re seeing with teams that are doing this?
Tom: I think one—you probably talked about it on your podcast—but I really liked the way that Manav Misra thinks about this at Regions Bank, with the partnership orientation. He calls them data product partners, and they are in very close alignment with business stakeholders. And so, I think that’s one important attribute. Another that we talked about in this Harvard Business Review article is what are the right skills. And generally, I think it’s fair to say, data scientists are not the right people for this sort of role.
There can be some exceptions, certainly, but—like, I always hesitate to generalize, but you know—analysts and AI people like numbers, librarians like books [laugh], pilots like airplanes. So, to get them to do something outside of that is not always easy to do. Now, obviously, there are some librarians who end up managing lots of people in libraries, and so on. So, there are some people who are going to have the traits—they ensure economic value, and they’re good at marketing and they’re good at design and they’re good at adhering to schedules and figuring out how to create financial value—but not many. It’s really hard to be a data scientist; it’s hard to know all the things you need to know to be a good one. So, why assume that all those other things are going to be packed into one individual? There are no unicorns out there.
Brian: For me, a lot of it, beyond being a generally smart person, is desire to do this kind of work, which is a not-well-defined type of work. I find, no matter what space—because in the software world, a lot of PMs actually come out of engineering, or they come out of design; now more and more of them are coming out of the user experience design field as well, or they’ve come out of completely different areas, sales or whatever. You can kind of come at it from anywhere, but part of it is learning to take that hat off about where I came from. So, if you’re going to come out of data science, you might need to let go of the implementation details about which model or algorithm is right, the right way to do it, and you’re now delegating that work to someone else who’s going to own it, but you need to focus on the trajectory and the big picture, and not on all that implementation detail. That can be really hard—to be wearing these hats and know when to take one off and just say, I can’t run that part. I’ve got to let go of that [laugh] for the bigger cause.
Tom: It’s a hard thing, even for people in non-technical roles because product management has always been a sort of minister without portfolio sort of job, and you know, influence without formal authority, where you are responsible for a lot of things happening, but the people don’t report to you, generally. So, you have to be really good at coordination and persuasion and so on, not just by virtue of the fact that they report to you they’re going to do exactly what you want them to.
Brian: It’s a leadership role, not necessarily a management role, despite the product management title. But you don’t need permission to lead; I think this is a skill that you can kind of choose to develop on your own. I mean, it helps to have, you know, executive support for this, but leadership and management aren’t the same thing and to me, you have to have good leadership skills to do this. But it’s not zero or one. It’s not binary, you can develop that skill over time.
Tom: Yeah. And you know, I think the good news is that business schools now for a while have been educating students who are interested in this sort of thing about product management. And then it sort of extended to software product management. And I think it’s probably going to extend into data product management before long.
Brian: Talk to me a little bit about the teams. You mentioned design at one point. My kind of model for this, at l—you know, [unintelligible 00:23:22] my model. In the software world, kind of the trifecta usually is: you have your product management leadership, or just the, quote, “product role”; you have engineering; and you have design—or user experience design, if you want to call it that. To me, this three-legged stool is a four-legged stool in the data product world. That fourth leg is usually a data scientist or an analytics person, or someone with that relevant data specialization that goes beyond what an engineering person would be doing.
Is that also a model you see? And I’m talking about really that core digital technology part of it, not the requisite business sponsors, the subject matter experts, and all the extended circle, but kind of that inner circle? Is that a model that you see happening as well, or not necessarily?
Tom: It can get fairly complex and overlapping, but there’s data science, and then there’s data engineering to sort of wrangle the data in the needed way. And a lot of data scientists have spent a lot of time on that, but that’s becoming a profession on its own. I think Capital One tried to hire or train five thousand data engineers—no, they were machine-learning engineers, which is yet another thing. They wanted five thousand machine-learning engineers last year [laugh], in one year; I don’t think they were totally successful. But these data engineers and machine-learning engineers tend to be oriented to: how do I take this model that’s been developed by a data scientist and scale it, integrate it with the existing technology environment? And that’s becoming a field on its own.
So, you know, there are a lot of subspecialties that are getting developed. And then you have all the, kind of, organizational change management and stakeholder management and so on—the kinds of things that you put into the outer circle. I’m not sure I’d put them in the outer circle, because they’re just as important as the other things. But companies are more likely to realize all those technical specialties—Capital One, I think they do a good job in general, and they have all these technical specialties. I’m not sure they’ve developed, sort of, organizational change and stakeholder management specialties yet. At least they didn’t talk to me about them a year ago or so.
Brian: I guess the framing for this that I have is that there’s an economic delivery component—ensuring that value is created. There is a user component, because the users are not always the stakeholders, right? Sometimes you’re delivering it for a department head, but the people using it are all the individual contributors; they’re not the same person, right? So, there’s that experience part.
Tom: Right. Or the external customers may—
Brian: Yeah. And then you’ve got, obviously, the technical part, which, you know, I just broadly put under engineering and data science—there are all these subsets of that: front-end and back-end and machine-learning engineering and data engineering and all those subsets. But it seems to me that combining the data science part and the engineering part into one lump isn’t sufficient. There’s a distinctly different thing about those. And that doesn’t necessarily mean you have to have four bodies on every project, a minimum of four bodies. It’s just that those skills have to be represented, regardless of whether there’s an individual body for each one.
And I think a lot of times when I talk to teams, it’s like, there’s a skill gap here. Whether it’s a person or not, no one’s paying attention to x [laugh]. It’s usually the customer experience—the user experience part—that I see missing; no one’s focused on that. They’re trying to manage the business stakeholders, give them what they asked for, and then build the technical part, but then it fails at the adoption piece. When the tech hits the people, that’s when it dives, the nosedive hits [laugh].
Tom: And with that, you know, I was talking about this a lot for the past few days. I’ve been doing a fair amount of research and writing on AI in healthcare, and somebody sent me a good paper by some ethnographers who looked at AI in radiology—AI use, or the lack thereof, in radiology. And you know, all those AI models for radiology are based on deep learning and they don’t explain themselves at all, generally; they just say, “Yep, that looks like cancer,” or, “Nope, doesn’t look like cancer.” In this particular article, they were looking at three different types of radiologists in a large hospital—one was lung cancer, one was breast cancer, and one was bone age, which I guess you need for kids and human growth hormone kinds of treatments—and only the lung cancer radiologists were using it.
You know, this collaboration between a human being making a decision and an AI system that might in some cases come up with a different decision but can’t explain itself—that’s a really tough thing to do. I mean, the surprising thing to me is that even the lung cancer people were only paying some attention to what the AI system does; most of the others blew it off when it disagreed with their diagnosis. So, it’s really hard, I think, to do that, particularly if the systems don’t explain their logic for coming up with a decision.
Brian: The first thing I want to know is whether or not the medical staff on the lung side were included in the creation of this tool.
Tom: Yeah, probably not. I know they weren’t in the creation of the bone age tool because I think that came maybe from Mass General. I know Mass General was working on one; the article says it came from a research-oriented hospital, so you know, it was externally supplied, and maybe easier to ignore if they didn’t have anything to do with it. But you’re right that when we’re building AI systems, it’s an age-old idea that you should involve the users in the creation. It’s kind of sociotechnical systems 101, but it doesn’t happen that often.
Brian: Yeah, I think one thing that I like to reiterate on this show as much as possible is this idea that we should be designing the solutions with them and not for them. And this idea is not just because, “Oh, if we design it with them, then we’ll know what they want and it reduces risk.” The other thing that’s happening there is that they’re starting to feel a vested interest in it because they put their stamp on it. They were part of making it. And so, if you get medical staff like that who are routinely, like, checking out the interfaces and saying, “Hey, this model came back with these wrong things. What will we do here? We need your advice. What does your expertise tell you here?”
Over time, they’re starting to feel like, “I helped make that thing,” and we all have a shared vision there. And so, that sense of ownership is also the reason why we do it; it’s not just to de-risk it, but to build a collaborative sense that we’re all in the same problem space together, as opposed to we just throw it over the wall and then it gets thrown back, and then it’s like, “Where did that come from?” And then the first time it’s wrong, it’s like, “Oh, my God, I can’t trust this thing.” [laugh], you know?
Tom: Which tells you—I hadn’t really thought about this, but it tells you that this idea that we’re going to use externally-sourced systems is not likely to succeed in many cases because, you know, those vendors didn’t work closely with everybody in your organization. And, you know, one of the simplest systems I see a lot that people buy externally is propensity models for lead scoring for salespeople. Now, you can develop that yourself, but it’s far easier to pay Salesforce a few extra bucks and get it through your Salesforce system. Einstein Lead Scoring, I think it’s still called. I wonder—I haven’t ever seen any comparison—but I wonder if people are much less likely to make effective use of those ranked leads than they would be if their organization had developed them and had worked pretty closely with the sales force to see what they like and what they would trust and so on.
Brian: There’s always going to be idiosyncrasies and exceptions in every organization. For us, it’s like, “I’ve done this 15 years and this attribute is, like, a leading indicator for us and it’s not in there. In fact, the number’s low on all these results you showed me. Like, these people aren’t even in the top cohort. How can this be right?”
And so, I think it’s an uphill battle to do it that way. If there’s no explainability in the model, no interpretability in the model, and you’re buying it off the shelf, I think you’re right. It’s the adoption thing. And if I were buying that, I would want to figure out what is going to stop this from getting used before buying it, to really think about the adoption question and what our blockers would be. Because the tech part is probably fine. It probably works really well, but it doesn’t matter. If it’s technically right but effectively wrong, it doesn’t matter [laugh].
Tom: Yeah, you know, it’s interesting, though. So, I just wrote this book with a guy, Steve Miller, from Singapore Management University called Working With AI and it’s 29 case studies. We had 30, but our editor didn’t like the 30th, so now it’s 29. Twenty-nine case studies of people who work with AI pretty closely day-to-day, and not all of them but most of them had some external vendor component, and I must say, we didn’t find too many cases—we tried to really talk to frontline people in every case to see, you know, were they using it? Was it valuable?
There weren’t too many cases of people saying, “Eh, I don’t find this terribly useful.” The one exception was with hamburger flipping, the archetypal low-level service job that some people think robots are going to take over. There’s a system called Flippy, and I kept going to different stores in California of this company CaliBurger (I think a subsidiary of theirs developed it), and I couldn’t find any Flippy stores that were open. Finally, I found one and did a phone interview with a CaliBurger in Florida; I’m not sure what that’s about. And the guy said, “Eh, it doesn’t really work that well for flipping burgers, but it’s pretty good for frying french fries, so we use it for that.”
For the most part, things worked pretty well, and several companies made usage voluntary. For example, at Morgan Stanley, they have this next-best action system that recommends investment ideas to clients, mediated through the financial advisors. The financial advisors don’t have to use it, but the ones who do use it generally have more success. And I think that’s, in a way, a component of product management: seeing whether the people who use this system do better than the ones who don’t. And maybe making it voluntary. You know, it’s not quite as good as involving the users in the development, but it’s better than saying, you know, you must use this system.
Brian: I’m kind of asking a question on behalf of a prospective client I’ve been talking to, which I don’t think is probably very unusual—
Tom: So, you’re going to share your consulting revenues with me if I give you a good answer, or? [laugh].
Brian: [laugh]. They’re currently zero with this client, so sure, how much would you like? [laugh]. It’s this idea that, oh, the state of the art is machine learning and AI and we have to have this and it’s going to be so great. So, this is a medical insurance company. And the problem is, the data science team is completely in a separate organization from the digital team, which really owns the product roadmap, the quote, “product roadmap,” which is all the digital services that this company puts out.
And I have heard this before, where this AI machine learning team actually wants to be thinking about business value and delivering actual outcomes for the organization, and so they need to have a certain foot in the door there, but they don’t control all the resources to do it. And they can be seen as a tax on the digital team, right? It’s, “Oh, that stuff takes forever to do and it’s really complicated.” And, “Oh, you want to build a recommendation engine for this, like, plan selection tool?” Or—I’m just making something up.
But the point is, it’s a significant technology lift to do this work, and they don’t own that. And I’m curious if you see these data science teams eventually merging with digital somehow, or how that’s going to work. Because at some point, you’re not going to get value out of these relatively expensive data science arms if they’re not able to get their work out there because of internal issues: not so much that the solution wasn’t good, but just the politics of getting it prioritized properly and the rest of the organization understanding what it’s like to work with machine learning and AI, and how it’s different. I’m curious if you have thoughts on that.
Tom: That’s one of the reasons why you need this sort of overarching product management focus. If the product management focus is only in the, sort of, digital product teams, then clearly that’s not going to work terribly well. But I’ve seen a number of cases. I was doing some work with a vendor of these MLOps systems, which, I think, in a way, if done really well, could be the sort of platform that a data product manager uses to see how things are going and to coordinate things well. But I talked to a number of companies who said the IT people who are responsible for implementing and then monitoring how systems are working don’t want anything to do with the data science stuff. They don’t understand it, and they know the data science people don’t understand the, you know, production technology they’re using.
So, there are, I think, these organizational gaps. I think it’s unlikely that they’re going to be successfully addressed by merging everybody together in one organization. I think that’s what product managers do: they try to address those gaps in the organization and develop a process that makes coordination at least possible, if not seamless all the time.
Brian: Part of it is getting a sense of shared ownership of the problem across all the bodies involved in delivering it, right? And then here are the outcome and the success metrics for this initiative we’re doing, and getting everybody to feel a sense of ownership of the problem and not just the delivery component. That’s a fairly mature product orientation, even in software companies. A lot of them are still feature factories, you know, where they think it’s the next thing on the backlog; you just keep pushing out the sprints and building stuff.
Tom: Yeah. I mean, that was Microsoft in the old days, and maybe even still. What percentage of the features in Microsoft Office do people use? I think it was on average, like, 2% or something.
Brian: Yeah, yeah [laugh]. Just one last question, and then I want to ask you about your books: do you think this product orientation that’s happening is a natural evolution, or is it something where there has to be a very active, conscious, deliberate choice to do things this way? Enterprise organizations, legacy enterprise organizations, are not going to gently slip into this. Or will it happen?
Tom: You know, it seems to be pretty… common already. I mean, if you believe surveys, which I sometimes think exaggerate things. I just did a survey, with AWS sponsorship, of Chief Data Officers and Chief Analytics Officers and so on, that kind of combination of jobs, and we asked how many of you are using data products and data product management approaches, and 39% of them said that they were. It was one of the largest surveys, I think, of Chief Data Officers. Who knows if they really are, but I think it’s a no-brainer to say, well, if we’re not getting the value that these technologies can offer us, why not do something different? Product management is not a new idea. Why not apply it here?
Brian: I don’t know how you do it, but you have three new books [laugh]. How do—are you [laugh]—
Tom: Yeah. Well—
Brian: A robot. Did you clone yourself or is this GPT-3 style? Like, what [laugh]—
Tom: That’s for later; my next book will be written by GPT-3 [laugh].
Tom: I did just write an article in Harvard Business Review about these generative AI systems, and the first paragraph was written by GPT-3. I did reveal that it was. And it wasn’t perfect. It needed editing. It’s been a while since my last book, which I think was 2019, maybe, so three years or so. So, it’s not that great a production rate. They just all ended up coming out at more or less the same time.
The one is Working With AI, which I was talking about. Another one is not out yet; it’ll be out in a month or so, called All-in On AI, about companies that are really aggressive in their use of AI to change their businesses. And then the third one I mentioned briefly, Advanced Introduction to AI in Healthcare, is mostly for, I think, you know, healthcare provider organizations and executives who are trying to figure out what to do about this technology.
Brian: So, it sounds like that one’s for healthcare and the first two books for the same audience? Or do you have a specific reader in mind for the—I’ll just read those, again. It was All-in On AI: How Smart Companies Win Big With AI, and then Working With AI: Real Stories of Human-Machine Collaboration. The same audience for those?
Tom: I don’t know who the audience is for the Working With AI book because what we really need, I think, is some group of people who are responsible for human and machine resources, the digital workforce and the human workforce, and we don’t really have that in most companies. So, I don’t know who’s reading it. I hear the publisher, MIT Press, has said, “Yeah, it’s selling really well.” So, I don’t know who’s reading it exactly. And it’s funny because it’s a collection of case studies with some chapters at the end saying, you know, what we think it all means, so that’s an unusual format.
I think a lot of people just like it because they like, you know, reading about all the things you can do with AI. The All-in On AI book is intended for executives to say, “Okay, what happens if I really engage in this in a big way?” I wrote a book in 2007, and then an updated version in 2017, called Competing On Analytics, and this is sort of “Competing On AI,” effectively. It’s co-authored with the head of AI at Deloitte, Nitin Mittal.
I think even companies that don’t want to make a strong commitment kind of want to see what it would be like if they did, and it’s oriented to, you know, doing something different with AI: supporting a new business model or a new strategy or an operational transformation, not just, sort of, incremental change.
Brian: Well, Tom, thanks for coming on the show. It’s been great to talk to you again. Any closing thoughts you want to share about the whole data product space, or any last words?
Tom: It’s a pretty easy algorithm. If your percentage deployment is less than, I don’t know, what do you think, 50%, 70%, or whatever, you need a data product orientation. So, that would mean that data products are going to become a lot more popular, because we know the percentage deployment of data science models is much lower than that in most organizations.
Brian: Great to talk with you again. Thanks for coming on and sharing your wisdom with us.
Tom: Thank you. My pleasure.