
$1M spent on a predictive model/data science w/ $0 value and no user engagement?

I recently connected with a data scientist at a reputable consumer brand who used to work at a very large ecommerce company. I asked him why he connected with me, and why one of my mantras, "outcomes over outputs," resonated with him. He replied, "A lot of companies (including) don't know what to do with their ML models and results. And communicating those to non-technical audience who make most of the budgeting decisions."

I asked him if there was a particular example/incident that was on his mind. He went on:

At [past ecommerce company], we were able to predict fraudsters/abusers when they placed an order, but the business wouldn't take any action because they needed evidence of a prior fraud. For example, a brand new account (customer) ordering a phone from the US website using a Taiwanese IP address would be flagged by our model, but the business would honor the order anyways.

It would come back as fraud 2 weeks later.

They could have done graduated enforcements or added friction.

****

This project was 6-8 months long.

What happened here?

Well, for one, they likely spent millions on this. Let's say seven months of work, with roughly 10 employees involved at annual salaries of $150k USD each. I have no idea if that's actually right, but it feels reasonable for this company.

Right there, you're at $875,000 (10 people × $150,000/year × 7/12 of a year) in data science and engineering expenditure for this project, not including any costs beyond salaries.

There's a lot to unpack here, but one thing I noticed immediately: a prototype of this system could easily have been designed, without building any model, to help ascertain what actions the business unit/sponsor of this project would actually take.
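To make that concrete, here's a minimal sketch of what such a prototype could look like: a plain, rules-only "graduated enforcement" policy that maps a few risk signals to actions a stakeholder could react to. Everything in it (the Order fields, the tiers, the actions) is hypothetical and mine, not anything this team built; the point is that you can put something this simple in front of the fraud department and ask which of these actions they would actually sign off on, before any model exists.

```python
# Hypothetical sketch (not the team's actual system): a rules-only
# "graduated enforcement" prototype mapping risk signals to actions.
# No ML model is required to have this conversation with stakeholders.

from dataclasses import dataclass

@dataclass
class Order:
    account_age_days: int
    ship_country: str      # country of the storefront / shipping address
    ip_country: str        # country inferred from the buyer's IP
    order_value_usd: float

def risk_tier(order: Order) -> str:
    """Stand-in for a future model score: crude, explainable signals."""
    signals = 0
    if order.account_age_days < 7:
        signals += 1
    if order.ip_country != order.ship_country:
        signals += 1
    if order.order_value_usd > 500:
        signals += 1
    return ("low", "medium", "high", "high")[signals]

# Graduated enforcement: each tier adds friction instead of a hard block.
ACTIONS = {
    "low": "honor the order as usual",
    "medium": "add friction: re-verify the payment method or address",
    "high": "hold shipment pending a quick manual review",
}

if __name__ == "__main__":
    # The scenario from the quote: brand-new account, US store, Taiwanese IP.
    order = Order(account_age_days=1, ship_country="US",
                  ip_country="TW", order_value_usd=900.0)
    tier = risk_tier(order)
    print(f"{tier} risk -> {ACTIONS[tier]}")
```

Walking a stakeholder through a fake order like this, and asking which of those actions they would actually take, would have surfaced the "we need evidence of prior fraud first" objection long before seven months of modeling work.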

While I don't know why this happened, I do wonder whether they talked to the department/stakeholders in the fraud area to find out:

  • Did the team who made this understand "Who is the recipient of this? How will the department take a fraud flag and react to it?"
  • "What information would make you choose to react, or ignore, a fraud claim?"
  • "Why do you care about finding fraud?" (what's the incentive in place to "catch" fraud? is it about finding it, getting the funds back, or what?)
  • "What is hard about dealing with fraud claims in your job the way you do it today? What would make it easier/better for you?"
  • "What concerns about a machine-driven system to detect fraud?"
  • "Can you show me, using this example UI we made, what you'd do next?" [UI shows a fraud/alert]
  • "What types of fraud would be most important to catch? High-dollar items or high-volume, low dollar ones? Is this question even relevant Ms. Stakeholder?"
  • "If this fraud detection system hit a home-run in its first release, what would that look like to you? How could we measure that?"

Empathy -> Probing Questions -> Prototype -> Evaluate -> Learn -> Build/Redesign/Tweak

You need very little "data science" to do any of this. It's not stats, modeling, or analytics work.

It is mostly design, design thinking, and empathy.

It saddens me to hear stories like this. Not only does it waste money, but you also end up with more technical debt, an analytics/data science team that likely isn't thrilled their work "didn't matter" or never got used, and a leader/director who may have created a great technical output but did not deliver any value or outcome to the business.

A successful design here would likely have surfaced the failure points and "delighters" early, so that they could be worked into the solution. Focusing on the experience would have shown that this solution required more than just making the model. Ultimately, the model has to be deployed into an interface and/or user experience that is useful, usable, and meaningful to the end audience. It seems to me that the human elements were the linchpin of this project's success. Not once did this person mention to me that anything was wrong with the model/tech.

You have to align people + user interfaces + engineering + data + modeling + incentives. 

UX and good human-centered design help us glue all of these variables together so that we deliver a meaningful outcome, and not just an output, model, or UI. When a data product goes unused, ask:

  • Was it not desirable?
  • Was it hard to use?
  • Were the incentives not aligned?
  • Did the team fail to routinely involve the right cross-functional team/stakeholders and miss that priorities/needs had changed?

All of these questions are part of making the machine learning model and data science work successful. A human-centered design approach would glue all of this together, or even reveal, early enough to matter, that the project may not be worth doing at all.

If you don't want to spend 6-8 months on a project with this type of result, I teach a bi-annual Seminar and just released a self-guided video course version of it called Designing Human-Centered Data Products. You can now download the video and written supplement for Module #1, free. It was designed especially for leaders in data science, analytics, and technical product management (people like you) who are tasked with making valuable data products, and I've made the curriculum as easy as possible, immediately applicable, and hopefully fun. I also guarantee it will help you create more useful, usable, and indispensable data products that turn 🤷‍♀️🤔 into 👍🙏🙆🏻‍♂️.

 

Photo by Dev Benjamin on Unsplash

