
Humans – The Weak Link in your ML / AI Strategy?

AI promises many things.

One of these things is the potential to reduce the level of effort (LOE) required by humans to do certain types of work, such as analysis. In fact, it doesn't just reduce that effort; it can go well beyond it, achieving types of analysis that no human could do.

If your team could just find those statistical patterns, you could optimize spend, save on operational costs, detect anomalies, inject intelligence, know who to market the latest widget to, and the like. Stats. Data. Analysis. Tooling. It takes big budgets, lots of data wrangling, and infrastructure to do this.

Whack the mole.

The thing is, you can get all of the tech right (assuming you don't simply codify your data's or org's existing biases and problems into the model) and still fail if the technology requires humans to make good decisions with it and to integrate it into the business.

Many places aren't ready to fully automate and remove humans from all the decision-making. Even the Central Intelligence Agency is focused on augmenting decision-making with machine learning right now, not on automating everything with ML.

So we're back to the humans being a variable—a big one—in the success of the technology. AI or otherwise.

Just because you can model it does not mean you should. And even if you do, there's no guarantee the organization will unlock the promised value, because there's still the human engagement part.

This part is likely not in your tech stack, but perhaps it should be—if you're going to look at the human factors as part of an IT investment.

Design asks us to consider: will the humans use it? Trust it? Value it? Understand it? Are they motivated? What's the cost to them personally (or to external communities they care about) if it's wrong?

The last mile—where humans get involved—may be where your AI will sink or swim.

So, how much of the $5M, $50M, or $500M in your AI strategy will be spent on ensuring the human part of the system doesn't break down, even if the tech parts are just fine?

There is no magic data viz or package on GitHub that will do that for you, but good design might help.

***

If you're a data science, analytics, or technical product leader looking for a step-by-step process for creating useful, usable decision support applications with your data, my Seminar—Designing Human-Centered Data Products—will teach you skills you can immediately apply to your work, so that your data outputs start to produce business outcomes.


Photo by Thomas Stephan on Unsplash

