081 – The Cultural and $ Benefits of Human-Centered AI in the Enterprise: Digging Into BCG/MIT Sloan’s AI Research w/ François Candelon

Experiencing Data with Brian T. O'Neill

Episode Description

The relationship between humans and artificial intelligence has been an intricate topic of conversation across many industries. François Candelon, Global Director at Boston Consulting Group Henderson Institute, has been a significant contributor to that conversation, most notably through an annual research initiative that BCG and MIT Sloan Management Review have been conducting about AI in the enterprise. In this episode, we’re digging particularly into the findings of the 2020 and 2021 studies that were just published at the time of this recording.

Through these yearly findings, the study has shown that organizations with the most competitive advantage are the ones that are focused on effectively designing AI-driven applications around the humans in the loop. As these organizations continue to generate value with AI, the gap between them and companies that do not embrace AI has only increased. To close this gap, companies will have to learn to design trustworthy AI applications that actually get used, produce value, and are designed around mutual learning between the technology and users. François claims that a “human plus AI” approach—what former Experiencing Data guest Ben Shneiderman calls HCAI (see Ep. 062)—can create organizational learning, trust, and improved productivity.

In this episode, we cover:

  • How the Henderson Institute is conducting its multi-year study with MIT Sloan Management Review. (00:43)
  • The core findings of the 2020 study, what the 10/20/70 rule is, how François uses it to gauge how successfully a company has deployed AI, and specific examples of what leading companies are doing in terms of user experience around AI. (03:08)
  • The core findings of the 2021 study, and how mutual learning between human and machine (i.e. the experience of learning from and contributing to ML applications) increases the success rate of AI deployments. (07:53)
  • The AI driving license for CxOs: A discussion about the gap between C-suite and data scientists and why it’s critical for teams to be agile and integrate both capabilities. (14:44)
  • Why companies should embed AI as the core of their operating process. (22:07)
  • François’ perspective on leveraging AI and why it is meant to solve problems and impact cultural change. (29:28)

Quotes from Today’s Episode

  • “What makes the real difference is when you have what we call organizational learning, which means that at the same time you learn from AI as an individual, as a human, AI will learn from you. And this is relatively easy to understand because as we’re in a world, which is always more uncertain, the rate of learning, the ability for an organization to learn, is one of the most important competitive advantages.”- François Candelon (04:58)
  • “When there is an additional effectiveness linked to AI, people will feel more comfortable, will feel augmented, not replaced, and then they will trust AI. As they trust, they are ready to have additional use cases implemented and therefore you are entering into a virtuous cycle.”- François Candelon (08:06)
  • “If you try to optimize human plus AI and build on their respective capabilities—humans are much better at dealing with ambiguity and AI deals with large amounts of data. If you’re able to combine both, then you’re in a situation to be ready to create a source of competitive advantage.”- François Candelon (09:36)
  • “I think that’s largely the point of my show and what I’m trying to focus on is to talk to the people who do want to go beyond the technical work. Building technically right, effectively wrong solutions is something nobody needs, and at some point, not only is it not good for your career, but you might find it more rewarding to work on things that actually matter, that get used, that go into the world, that produce value. It’s more personally gratifying, not just for the business, but yourself.”- Brian T. O’Neill (@rhythmspice) (20:55)
  • “Making sure that AI becomes the core of your operating process and your operating model [is] very important. I think that very often companies ask themselves, ‘how could AI help me optimize my process?’ I believe that they should now move—or at least the most advanced—are now moving to, ‘how should I make sure that I redesign my process to get the full potential of AI, to bring AI at the core of my operating model?’”- François Candelon (24:40)
  • “AI is a way to solve problems, not an objective in itself. So, this is why when I used to say we are an AI-enabled or an AI-powered company, it shows a capability. It shows a way of thinking and the ability to deal with the foundational capabilities of AI. It’s not something else. And this is why—for the data scientists that will be open to better understanding business—they will learn a lot, and it will be very enlightening to be able to solve these issues and to solve these problems.”- François Candelon (30:51)
  • “The human in the loops matter, folks. For now at least, we’re still here. It’s not all machines running machines. So, you have to figure out the human-machine interaction. It’s not going away, and so when you’re ready, it’s time to face that we need to design for the human in the loop, and we need to think about the last mile, and we need to think about change, adoption, and all the human factors that go into the solution, as well as the technologies.”- Brian T. O’Neill (@rhythmspice) (35:35)

Links

Transcript

Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill. Today, I’ve got one of the leading managing partners from Boston Consulting Group, François Candelon. You run the Henderson Institute. Tell us what this is and what it has to do with AI.

François: Actually, BCG Henderson Institute works on many fronts, and so on, but I’m focusing my own research on artificial intelligence and its impact on corporations and society. As I believe that as we’re entering this new industrial revolution, this new era, it’s probably one, if not the most important topic that we need to deal with.

Brian: Yes, I agree with that. And just for listener context, why are we talking today? I saw a presentation you’d given, François, about a report—it’s actually a multi-year study they’ve been doing with MIT Sloan about AI in the enterprise. And the initial article that caught my eye was about last year’s findings on human-machine interaction and, essentially, the companies that are leading in this area and seeing the best value from AI: what are they doing that’s unique, and what were some of the patterns that you saw in that research? So, that was the initial impetus when I reached out to François, and then he told me, “We have a new report coming out, and the new one is about the cultural impacts on organizations around artificial intelligence.” So, we’re going to kind of cover both of these today. And I wanted to first give the mic to you, though: tell me a little bit about the methodology behind these reports. And then I’ll jump in with specific questions, but I wanted to give you, kind of, first chance to set the stage for this whole study.

François: The study is—as you said, every year we have a report that we go through with MIT Sloan, and I think it’s a very interesting approach. Every year with Sloan Management Review, we survey around 3,000 executives across the world, across industries, and we try to better understand the impact of AI on the enterprise, or what happens. We have, let’s say, long-term time series, but we are as well always trying to find an [angle 00:02:39]. And last year’s report was really about, okay, what happens? Many companies invest in AI, but we can see that only 11% of them find that they have significant financial impact from their AI investments. So, about 55% of companies see some financial impact, but only 11% see strong financial impact. And we were trying to understand why these companies were actually getting it.

Brian: So, one of the findings last year—I’m going to try to briefly summarize this, and then I want you to go ahead and, obviously either correct me or add the color to it—was talking a lot about how companies are learning that in order for this to be successful—AI and what I think is primarily machine learning initiatives—they need to be designed in the last mile with the customers and users that are using them for humans to learn from the machines, for machines to learn from the humans, effectively, we have to design for this interactive counterpart mentality and assistive aid mentality, as opposed to we’re trying to fully automate or replace. And there’s a time and a place certain kinds of models, et cetera, are appropriate for that type of work, but that was the thing that caught my eye was that these companies that are very problem and need-focused, and had designed these solutions around this human-machine interaction. So, could you tell us a little bit more about this, if you’d like to share some specific examples that you heard about in your study? That I’m curious to hear.

François: Yeah. So, I think that what was interesting in that, we tried to understand what would make companies successful in using AI? And what we find is that the likelihood for you to be successful if you have, let’s say, already hired data scientists, you have good, let’s say, infrastructure, you have, let’s say, AI strategy, actually, you have less than—you’re around 20% likely to get significant financial impact. If you are going to the next level, let’s say, using both being balanced between production and consumption, if you try to use AI, not only for reducing cost, optimizing cost, but as well to try to go into and get additional revenues, through personalization or [unintelligible 00:04:55] that’s much better, but you’re around 40%. What makes the real difference is when you have what we call organizational learning, which means that at the same time you learn from AI as an individual, as a human, and AI will learn from you.

And this is relatively easy to understand because as we’re in a world, which is always more uncertain, the rate of learning, the ability for an organization to learn, is one of the most important competitive advantage. And if you do that, you are above 75% likely to get significant financial impact. And—but I would say that the financial impact is almost a byproduct. What is important is this notion of mutual learning, this ability to learn and therefore to be more agile in this uncertain world.

Brian: Yeah. So, you talk about learning, and learning, particularly if we’re talking about the point at which humans and our machine learning—and effectively we’re talking about software applications, some type of interface between human and the software is occurring—that assumes that we have some level of trust and some level of adoption because if people aren’t even willing to pay attention, not even to open up the application or look at the spreadsheet that has the new price forecasting editor, or the next best action report, or whatever the heck it is, you’ve already lost the game. And I find that with my audience, and this could be—maybe this is selection bias based on the clients that I work with, adoption is still a challenge.

So, building the technology is not the challenge for a lot of the data science and analytics teams that I work with who are largely responsible for these initiatives; it’s getting buy-in and trust from the users that this new way is a better way. This is good for you. Eat your vegetables, please. That’s not always happening. I’m curious, did your study learn something about this, about breaking the trust barrier and how we figure out how do we make this stuff trustworthy, usable, useful, and valuable to the people it’s for?

François: First of all, I think that you’re perfectly right. What we used to say at BCG is that we have the 10/20/70 rule. So, what does it mean? It means that it’s the effort we believe is required if we want to actually implement AI and be successful.

And 10% is around the algo. Of course, it’s important but—and you can get better algos and it’s critical, but it’s not that difficult, right? 20% is around the infrastructure with data and so on. What is really critical is about change management. And this is a 70%.

And to give you some examples on how to create trust, of course you need to have humans working with you and be involved when you [unintelligible 00:07:52] stuff, but what we found—it was more in the second report—is that when companies are able to achieve a better quality of decision by deploying AI—so when there is an additional effectiveness linked to AI—then people will feel more comfortable, will feel augmented, not replaced, and then they will trust AI. And as they trust, they are ready to have additional use cases implemented and therefore you are entering into a virtuous cycle. And I’ll give you one example, with Pernod Ricard, the French company that’s a global leader in spirits—

Brian: Yeah, was that on purpose? I noticed the—

François: Yeah, of course—

Brian: —French liqueur example—

François: Of course.

Brian: —in there, François. [laugh].

François: Yes. You know, you know… this is my own bias. You know, actually, they developed a great product, great algo to support the salespeople, to help them identify which customer they should visit. And they were a little bit concerned there would be some backlash, and it was the other way around because as I said, these people felt augmented and not replaced. And by doing that, none of them, even the veteran, they don’t want to stop and go back and live without AI anymore.

So, I think this is a very important element, and in my opinion, especially with [narrow 00:09:18] AI, what we need as a company to optimize is the system, human plus AI. I believe that if you are trying just to optimize, let’s say, AI [with 00:09:31] all what it can do, then you will be back to the Modern Times of Charlie Chaplin. But if you try to optimize human plus AI, if you build on the respective capabilities of each of them—humans are much better at dealing with ambiguity and so on; AI deals with large amounts of data—if you’re able to combine both, then you’re in a situation to be ready to create a source of competitive advantage.

Brian: Yeah. You actually segued right into this question because I think literally pulled out a—Pierre-Yves Calloc’h, I think is the Chief Digital Officer of Pernod Ricard, if I recall correctly, and I wanted to ask you if this example, if I recall reading this correctly, effectively they engaged the salespeople in the process of designing this tool, which was to help them decide, who should I call? Next, which stores are more likely to purchase a product, or whatever it may be, it’s to reduce the effort of them wasting time calling people or contacting people who don’t want to buy something right now, is this process of involving the users in the design of the solution, was this a repeating trend that you heard from these companies?

François: Yes, I think that when you develop these algos, you need to work in an agile mode, and so to have your own MVP. And you need these people to come back to you—because if you add their judgment to your, let’s say, training set and so on, both on [whether 00:10:59] you will use it and on including their judgment, that’s absolutely fantastic. And I’ll give you one more example, another French company, [Rexel 00:11:10], and they’ve been able to do that. They were trying to develop, let’s say, an algo that was giving the next best action to their sales force.

And they have a very complex [unintelligible 00:11:22]—more than 1 million SKUs, so it’s very difficult for everyone to know it. And they were looking at that and understanding that the young generation was using it while the veterans were a little bit more reluctant. But in the end, they were able to [embark 00:11:40] the veterans, because the veterans were saying, “Okay, it’s great that we have this, because thanks to it, the young generation doesn’t ask us so many questions.” And it was funny to see that the veterans were more willing to support and train the algo than to train their colleagues. But in the end, it was great and it became a training opportunity for the newcomers.

So, it’s really something that we can see because we should never forget that the worst day of the AI algo is day one. So, we need, of course, to make sure that we embark in this process, not saying, “Okay, this is just the perfect solution at the moment,” but, “With your support, collectively, we learn more with each other.” And one more example may be on this notion of mutual learning. I went the other day with the head of a trading floor in a bank, and the guy was telling me, you know—and it was not a French bank—

Brian: [laugh].

François: —it’s great to see how my traders are learning from AI and AI is learning from the traders. Because of course at first, AI was learning from the traders—they put some sensors to see which data traders were looking at to make decisions, and so on, and to avoid noise—so the AI was learning from the traders. Then AI was learning on its own, gathering millions of decisions, and so on. But then the traders were learning from the AI, a little bit like human Go players were learning from AlphaGo, which is that the AI algo was opening new paths, new ideas, new recommendations, and based on that, it was opening the mind of the traders. And then they were taking decisions in a new way. And you went back to the cycle—the virtuous cycle—where AI was learning from them, and so on. So, I think it’s very important for everyone to understand what’s happening there, this notion of mutual learning.

Brian: Yeah. I’ve heard of this happening in other domains including medicine as well, you know, where a machine learning model finds a connection between two disparate things that seem highly correlated to some outcome or prediction there, and it might be something that no one’s actually studied, like, “Oh, brushing my teeth is related to my ankles swelling.” And it’s like, “Why is this correlation here?” And there was actually an example on this show—it wasn’t bad; I forget what the specific thing was, but when they went out to the medical community to talk about what the model had found, they found out that there were actually some researchers working on this, not from a machine learning standpoint, but this hypothetical connection there, and now they had some data to actually say, machines are finding this and human researchers are finding this. So, I think that point on idea generation as something that could be a positive outcome from this is a really interesting concept.

So, one thing I’ll say as I was reading the current report—and maybe this is survivorship bias—it’s skewed rather rosy a little bit in terms of some of the stuff I’ve heard, compared to the overall climate that I hear about from a lot of people that I talk to. And I wanted to ask you about a particular example—not pick on it, but ask you about the CBS example. So, I think it’s Radha Subramanyam: this leader asked their team to take 50 years of KPIs and validate the KPIs that executive leadership was using to measure success, or measure forward momentum—she asked the data team, are we actually tracking the right measurements here? And I found that to be a very forward-looking perspective, which is: my job is to captain the ship, but I’m not sure if I’m looking at the right dials or not. It takes a very forward-looking leader to push that problem down to the people below them and say, “Could you figure this out for me?”

I don’t hear that a lot. I think that’s the kind of work data teams would love to do because it’s rewarding work, it’s strategic work, it’s meaningful work. Is that an exception? Are you hearing more of this kind of like, turn the keys over? [laugh].

François: I would say, to be frank, that it’s more an exception. I think it means that you have a confident leader, someone who is ready to say, okay, AI is supporting him; it’s not a danger of making him obsolete. And I’ll give you an alternative—let’s say, an opposite element. We were working for a telco, using a very good, let’s say, approach that we have, with very good algos that help you have the next best [unintelligible 00:16:27], reduce churn, increase, let’s say, upsell, cross-sell—the traditional stuff. And with this—telcos are usually working with campaigns, okay? They push it.

But with this algo, basically, you can personalize the offer. So, you’re in a continuous campaign. Your offer the—you propose the right offer at the right time to any customer. And we had fantastic pilots. And the guy, the Chief Marketing Officer, told me, “Yes, but you know, with this, I don’t want to implement it because I know how to run a campaign. But then if I do that with it, what will I do? I’m becoming obsolete.”

And it is true that you then have a very different way of organizing your business, because everything is done by the algo. But the role for marketers is to become very creative, to have new ideas, to check new things, to do the continuous training—to create a new training set that will continuously improve your algos—and to test new things, and so on. And the guy was feeling that he was obsolete. And I believe this is one of the key issues, or key topics, I’m trying to work on at the moment, and it’s not an easy one: what I call the AI driving license for CxOs. Because CxOs are now in positions where they don’t know about AI. They don’t understand what the full potential of AI can be, and they feel that they are obsolete, so they don’t know how to deal with these elements.

And I call that an AI driving license because it’s a little bit like what you do when you have your driving license. You can drive a car, but you don’t know what happens in the engine, or at least I don’t know. And I think this is, let’s say, an analogy I try to use, or a metaphor, because this is probably the most important element for CxOs. How, despite all limitations, to understand enough about AI to, on one hand, build an AI-driven or an AI-powered company; make sure that I understand the changes that I have with AI, let’s say, for instance because of the risk-mapping—we all know about responsible AI with its reputational risk and so on; how to design my organization to make sure that I have human plus AI working together, and to what extent does it change my work; and then to make sure that you are maybe augmented yourself, as the leader from CBS did—to make sure that they’re augmented and they leverage AI to take better decisions.

Brian: Yeah, well, props to her. Radha if you’re listening to the show, I think it’s awesome. And maybe I’ll reach out to her and have a conversation because I think that’s a very, very forward looking and confident leader to do something like that. In terms of the AI knowledge of leadership, one thing that I’ve—and I picture this as a tennis ball going back and forth, or just a basic rally, it’s we would like a machine learning model to make better pricing. And you get the data team, or the data science team, often receives a somewhat vague request, and so they try to ask them questions about it, and it basically becomes, “Well, what do you want? We can probably build that but I don’t know what you want.” “Well, what can the AI, what can you guys do?” It’s kind of like, “I need a menu so I can order something.” “Well, I don’t know what you really—what do you like to eat?”

And it’s like, they don’t know what’s possible and the data team wants the problem handed to them neatly so that they can work on the solution. And so the issue there that I see is, whose job is it to figure out the problem space there? Is this a CxO leadership lack of knowledge about how—and it’s not that they need to become experts in AI or they’re going to become data scientists, but the capability is there to even know how do I ask a good data science question to my data team such that they can focus on the implementation and the creation, and not so much on figuring out what is it that we need? What’s our strategy? There’s a gap here, and I’m curious if you’re seeing this, and it feels like this tennis ball going back and forth. Like, “Tell me what you want.” “Well, tell me what you guys can do.” “Well, I don’t know what your problem is.” “Well, let me know what’s possible.” [laugh]. It’s just, like, back and forth. Where are they supposed to meet? Like, who owns the problem definition? [laugh].

François: I would say this is largely the reason why BCG and—or AI [arm of 00:21:00] BCG GAMMA, this is one of the reasons why we are actually developing extremely well. Because this is exactly what you said, is this notion of integration and ability to have people who were at the same time able to deal—I would not say people; to have teams that are at the same time able to deal with business issues and with business tech issues. And this is where the notion of being—to having an agile team with both characteristics becomes critical. I don’t think that you can do it otherwise because you don’t understand each other and you won’t. But it’s true that this is one of the reasons why I believe it’s important for data scientists to try to put themselves in the shoes of the business people, to try to reach a certain level of business understanding of their industry. They won’t have to be an expert, but to have a kind of an overlap to be able to have a dialog between CxOs or, let’s say, business leaders on one hand and technical teams, is absolutely critical.

Brian: Yeah, I agree.

François: [crosstalk 00:22:06] that.

Brian: Yes, I fully agree with that. One thing I wanted to ask you in terms of—and I don’t know if your study actually looked at this or got any insights from this but I’m curious, did you see any correlation between the types of teams working on the solutions and the level of impact, either culturally or financially, there? And I’m specifically asking about, say, the IT slash data team, chief data office, something like that, versus the digital side, if the enterprise has a digital division? Was there any correlation between who’s, kind of, running these efforts to deploy AI in the enterprise. Or companies that had a digital department tended to show better results, or whether they formally did or not? Any correlations here? I’m just curious how the digital arm and the data organization, if there is one, are playing together and impacting these results.

François: I think that what is important is to make sure—so first of all, I believe that today the notion of digital teams without AI is something I don’t fully understand because I think that it’s now really every—this is why when I use AI, it includes all digital stuff because you have always algos that can help improve and so on, so AI, you can tell me yes, but AI is a tiny part of digital, or you can say basically, you have an AI layer all around. So, but I think that, and I’m not sure I will answer your question specifically, but—

Brian: You’re a consultant; you’re not supposed to. No, just kidding. [laugh]. You’re speaking my language here.


François: But I’m trying to—what we try to do is to make sure that, let’s say, AI teams are really at the core of the business. So I think the real element is a dialog between the AI team and the business teams, and to have these AI teams—because they understand what AI is about and what can be done with it—try to give a culture of AI to the AI-unskilled. So, there is a need to upskill people. And what I’ve seen in several companies was that you had, let’s say, the digital teams that were using big data—data warehouses, data lakes, and so on—closer to the business because of what had happened over time, and then relying on an AI team that was more advanced, and so on. And I think that the transition, and making sure that AI becomes the core of your operating process and your operating model, becomes very important. I think that very often companies ask themselves, how could AI help me optimize my process? I believe that they should now move—or at least the most advanced are now moving—to how should I make sure that I redesign my process to get the full potential of AI, to bring AI to the core of my operating model?

And of course, we cannot move to the second pass without having done some use cases, brought some use cases at scale and so on, but this is a movement we see. And I have the feeling that we are at a moment when the difference, the gaps between the company that are embracing AI versus the ones who are just playing with it, or the ones that are saying, “Okay, it’s not for me,” is really expanding. We’re maybe at the moment when the S curve is really developing.

Brian: Yeah, it’s interesting what you said about digital not including AI as part of its fabric or its fiber, its being—I think that’s an interesting perspective. There’s just a common, I feel like I’ve noticed that a lot of the—particularly in very specialized domain areas—data teams see themselves as very distinct from a digital arm. Like, “We’re not software, we’re not digital. We’re, like, this hyper-specialized area.” And kind of like, “That’s someone else’s responsibility.”

And sometimes it’s like, there is no one else out there, so if you don’t figure out how to help the business with this work, it probably will die. Op—and I don’t use the word operationalizing machine learning models because it sounds like something you do after you make something instead of designing it in from the start to be properly operationalized, but someone has to own that or the initiative will die. You can’t design without the end-users in mind and how they’re going to bring this into their decision-making and usage. So, I’m curious as an identification—self-identification of the teams like, “Someone else’s job. Not my job.” Just kind of a reflection there, I guess, because I think digital teams tend to see themselves as very horizontal and spread across the business. [laugh].

François: I think, and maybe the people listening to us won’t be happy with what I will say, but I’ve seen maybe, maybe many data scientists that were more interested and attracted by the complexity, the beauty of the problem they were solving, instead of saying, “Okay, how can it be useful to my company?”

Brian: Yes.

François: And I’m not blaming them for that. I’m just saying that this is what they do and maybe they need to have in their company a chunk of their time dedicated to that, but at the same time, they need for a significant part of it to make sure that they are creating impact for the company.

Brian: I’m one hundred percent with you. I mean, I think that’s largely the point of my show and what I’m trying to focus on is to talk to the people who do want to go beyond the technical work, because I call it “building technically right, effectively wrong solutions.” Nobody needs that, and at some point, not only is it not good for your career, but you might find it more rewarding to work on things that actually matter, that get used, that go into the world, that produce value. It’s more personally gratifying, not just for the business, but yourself. [laugh].

François: Yeah, and not just because you will have great impact, and so on, but because you will learn from others.

Brian: Exactly.

François: And you will become even stronger in what you do. And I’ve seen many companies—and I’m referring here to a telco again, and sorry because I work a lot with telcos—and I don’t apologize for that. It’s just because—

Brian: [laugh]. Just the French ones, though, right?

François: —[crosstalk 00:28:29] will be telcos.

Brian: [laugh].

François: Not only French companies; this one is an Indian telco. And really, because of the great opportunities they were proposing to their people—saying, “Okay, when you redesign the network using AI, you can not only optimize cost and maximize revenue, but you will have a societal impact as well”—they were becoming the best alternative for data scientists who wanted to stay in India. I have some other examples in Southeast Asia, and in China as well: for the ones who wanted to stay there, they were really the best option to continue to develop data science and to work there.

Brian: Yeah, yeah. I think when there’s plenty of work available, you have your pick, and things like mission, and value, and things that go beyond getting a paycheck start to become more relevant. And I’m hearing the same thing: people do want to have an impact that goes beyond just getting the model 78% accurate, as if that’s the end-all be-all of their work. I am seeing a change there, too, which I think is really good.

Earlier, we were talking about this kind of problem-space, owning the problem, and there was an anecdote that I remembered in last year’s report. I don’t remember the name—I think it was a chief data officer of a mining company in Africa—and he talked about, “We don’t have AI teams. We have problem-solving teams.” And I love that bit, and this is very much a product and design type of mindset where ownership of the problem is pushed down to a team. And it’s like, “We own retention,” or, “We own”—whatever it may be—I don’t even know what the context was, but I loved that. I was curious if you have any insights on that story, if you can expand on that a little bit, this idea of owning the problem, whether or not it needs AI, or maybe there’s a regression model or a traditional dashboard, or whatever. Like, “We don’t know what tools are needed yet, but we’re owning this problem and we will help you solve it.”

François: Yeah.

Brian: I love that.

François: You know, I’ve been at BCG for a while; it’s 30 years. When I started, we didn’t have laptops. But what I can see is that the problems are always the same. At the beginning we did [unintelligible 00:30:44] or, let’s say, calculators, then we had Excel spreadsheets, then we had SQL, and now we use AI. And that’s great, but in the end, the problems we’re facing in the business are always the same.

So, AI is a way to solve problems, not an objective in itself. This is why, when companies say, “We are an AI-enabled or an AI-powered company,” what that really shows is a capability. It shows a way of thinking and the ability to deal with the foundational capabilities of AI. It’s not something else. And this is why, for the data scientists who are open to better understanding the business, they will learn a lot, and it will be very enlightening to be able to solve these issues and these problems.

Brian: Yeah. I think with the marketing gloss of saying, “We’re an AI-powered telco,” there’s a lot of eye-rolling that happens within the professional data science community. It’s just like, “You want to say that, but then, when it comes down to the wire, you’re not ready to accept some of this stuff.” And I think the person that’s able to close the gap between a stakeholder saying, “We want to use AI for pricing, we want to use AI to optimize our network, we want to use AI to improve our supply chain,” is the person that steps in and helps figure out what that means here. Like, what are we actually trying to do?

And then maybe the recipe is a machine learning model, or maybe it’s this other thing, but it’s about helping unpack that, because there may be an opportunity to use AI there, but someone needs to play that negotiation game. It’s not really a game, but helping to surface the unarticulated needs, surface the problem, then talk about different solutions and all that—I think that’s a role that just continues.

François: And because the question you get is, “Okay, I want to have the most advanced options to solve my problem.” And I’ll give you one example, with Repsol, because I think another point that is important to have in mind is that it’s not just for, let’s say, AI-native companies. Even traditional companies can make it, if they have the right leaders—the leaders that embrace what you just said, which is, “I want to have the most advanced way of solving problems.” And Repsol is the Spanish oil and gas incumbent; therefore, it’s a traditional company, in a traditional industry, in a traditional country—I have a Spanish grandfather, so I can say it—

Brian: [laugh].

François: —and so, what you see with them is that they’ve launched more than 200 AI- and digital-related programs, and in three years, they were able to improve production time by a factor of two. They were using hundreds of millions of sensors and data for predictive maintenance—[unintelligible 00:33:34] it’s becoming too hard, so you should move that way, and so on—down to having hundreds of thousands of personalized offers in their gas stations. That had a significant impact: it was the equivalent of adding 5% more gas stations, which is huge for them given their constraints as the incumbent. So, I think the Chief Digital Officer of [Valera 00:34:00] was telling us, “Okay, now AI is part of the business. AI is the cornerstone of our transformation programs.”

And this is something that we found in the last report: for companies asking, “Okay, how bold should I be?”—being bold and embarking on a large number of initiatives at the same time is actually very positive. So, instead of saying, “Okay, I should do just one pilot,” or, “I should do one use case end-to-end”—that’s maybe a way to start—once you have people, at least at the executive committee level, who are convinced that they need to go with it, go for it.

Brian: Yeah.

François: And this is the best way for you to leverage AI and to capture not only the financial impact, but also the cultural change that we can see and that we detail in our report with MIT SMR this year.

Brian: Yeah, yeah. François, this has been a great conversation, and I do want to ask you where people can get in touch with you, but as kind of a last question, is there a question I didn’t ask that I should have or a final thought that you’d like to share with our audience about being successful here with AI?

François: I believe that one of the key issues for AI is the word intelligence, because it is something that scares people. They have the feeling that they will get replaced. I think that AI is a great opportunity to get augmented, and that, as leaders, we need to think about the system of human plus AI.

Brian: Yeah.

François: I would say that, for me, if we’re able to think in terms of productivity, in terms of all of that, and leveraging this great system, then, let’s say, progress will improve.

Brian: Amen. I’m fully with you there. The humans in the loop matter, folks. [laugh] For now at least, we’re still here. It’s not all machines running machines. So, you have to figure out the—

François: And it will [crosstalk 00:35:51].

Brian: —human-machine interaction. Yeah. Exactly. It’s not going away, so when you’re ready, it’s time to face that we need to design for the human in the loop, and we need to think about the last mile, and we need to think about change, and adoption, and all these human factors that go into the solution, as well as the technologies. So, amen to that.

François, great talking to you. Where can people be in touch? What’s the best way? Is it LinkedIn? Or tell me how to get in touch with you.

François: LinkedIn is perfect.

Brian: LinkedIn is perfect. Okay, we’ll drop that in the show notes. Thank you again for coming on, and for doing this work and publishing it so we can all check it out.

François: Thank you for your invitation.

Brian: Yeah, it’s been great talking to you, so thank you again.
