John Cutler is a Product Evangelist at Amplitude, an analytics platform that helps companies better understand user behavior so they can grow their businesses. John focuses on user experience and evidence-driven product development, mixing and matching various methodologies to help teams deliver lasting outcomes for their customers. As a former UX researcher at AppFolio, a product manager at Zendesk, Pendo.io, AdKeeper, and RichFX, a startup founder, and a product team coach, John has a perspective that spans individual roles, domains, and products.
In today’s episode, John and I discuss how productizing storytelling in analytics applications can be a powerful tool for moving analytics beyond vanity metrics. We also covered the importance of understanding customers’ jobs/tasks, involving cross-disciplinary teams when creating a product/service, and:
- John and Amplitude’s North Star strategy and the three measurements they care about when tracking their own customers’ success
- Why John loves the concept of analytics “notebooks” (also a particular feature of Amplitude’s product) vs. the standard dashboard method
- Understanding relationships between metrics through “weekly learning users” who share digestible content
- John’s opinions on involving domain experts and cross-discipline teams to enable products focused on outcomes over features
- Recognizing whether your product/app is about explanatory or exploratory analytics
- How jazz relates to business, and how you don’t know what you don’t know yet
Resources and Links:
Quotes from Today’s Episode
“You know in your heart you should pair with domain experts and people who know the human problem out there and understand the decisions being made. I think there's a lot of organizational inertia that discourages that, unfortunately, and so you need to fight for it. My advice is to fight for it, because you know that it's important, and you know that this is not just a pure data science problem or a pure analytics problem. There's probably a lot of surrounding information that you need to understand to be able to actually help the business.” - John
“We definitely ‘dogfood’ our product and we also ‘dogfood’ the advice we give our customers.” - John
“It’s very easy to create assets and create code and things that look like progress. They mask themselves as progress and improvement, and they may not actually return any business value or customer value explicitly. We have to consciously know what the outcomes are that we want.” - Brian
“We got to get the right bodies in the room that know the right questions to ask. I can smell when the right questions aren't being asked, and it's so powerful” - Brian
“Instead of thinking about what are all the right stats to consider, [I sometimes suggest teams] write in plain English, in prose format, what would be the value that we could possibly show in the data. Maybe it can't even technically be achieved today. But express the analytics in words: ‘You should change this knob to seven instead of nine because we found out X, Y, and Z happened. We also think blah, blah, blah, and here is how we know that, and there's your recommendation.’ This method is highly prescriptive, but it's an exercise in thinking about the customer's experience.” - Brian
Brian: My guest today on Experiencing Data is John Cutler, who is a product evangelist at Amplitude Software. I have been really enjoying John's commentary on Twitter and some of his articles on Medium about designing better decision support tools. If you're in this space and you're trying to figure out, "How do I get into the heads of what our customers need? What types of data are actually important to track?" Especially if you're looking at longer-term outcomes that you want to be able to measure and provide insight on, I think you're going to enjoy my conversation with John. Without further ado, here's my chat with John Cutler. All right, we're back to Experiencing Data, and today we've got the cutlefish, as your Twitter handle is known. Right? Is it cute-l-fish or cutlefish?
John: We're going to go with cutlefish, not cute-l.
Brian: That's what I thought. John Cutler is here from Amplitude Software, which is a product analytics company, and I wanted to have John on today, not because he is cute necessarily, but because I've really been enjoying what you're espousing about customer experience and, particularly, product management. For some of our listeners who are not working in tech companies, there's not really a product management role, explicitly by title. But as you can probably attest, the overlap between design, user experience, and product is sometimes a gray area. I think some of the things you're talking about are important in the context of building analytics tools. Welcome to the show; fill in and make corrections on what I just said about what you're doing. You're a product evangelist at Amplitude, so what does that mean and what are you up to over there?
John: Well, we're still trying to figure out the evangelist part, because I don't necessarily sell or evangelize our product. I think our product is great, and I like to say it sort of sells itself. What I'm really focusing on is helping to up-level teams. That could be our internal teams or our customers, but largely it's prospects and teams that have never even heard of Amplitude. What we're really looking to do with this role is workshops and content; I do these one-hour coaching sessions with just random teams. But generally I'm trying to fill in the blanks. A lot of times people think, "Well, I'm just going to purchase this analytics tool or this product analytics tool," and suddenly it's going to answer all our questions and everything's going to be fine. What they don't quite realize is that you really have to tweak a lot of things about how you work as a product development team to really make use of the great tools that are available.
There are amazing tools available. I believe Amplitude is one of them, but there are so many good software-as-a-service products to help product teams. Really, at the end of the day, it's about the team also being aligned and things like that. I try to take a broad view of what it will take to help people make better products with this role.
Brian: Yeah. Can you give an example? I think I know where you're going with this, but give an example of where someone had to change their expectations: you need to change the way you're working, or let's figure out what's important to measure instead of just expecting it. I think you're alluding to, "Oh, buy our tool, we know what the important analytics and measurement points are that you should care about, and we will unveil them." Instead it's, "Well, what's important to track? Does time on the site matter? Does engagement in the application matter? Does sharing matter? What matters?" Can you talk about where there was a learning experience?
John: Oh, absolutely. Maybe a good way to describe this is that a lot of the learning, a lot of the questions, begin way before the team is unwrapping the problem, unraveling the problem. I'm not sure this answers your question exactly, but I think we can lead into something more specific. Imagine you're a team and someone says, "It's the second half of 2018, what's going to be on your roadmap?" You think about it, and you know what you know, and you've heard customers tell you things, and the CEO of the company has subtly but not so subtly hinted that he or she would really like to see X. You put together this roadmap, and at that point, once you've got people thinking that those solutions are the right solutions and you force that level of convergence, measurement will not save you. You've already committed to deliver those things in that particular setting.
One example of a practice that might change to further or amplify the use of measurement would be committing to missions: committing to move particular metrics that the company believes are associated with mid- to long-term growth of the company, instead of committing to build features. A real-world example, maybe what you were getting at before: do they shift from "time on site is important" to something else? For a lot of these teams, it's shifting from "build feature X" to something like shortening the time it takes for a team to be able to complete a workflow. That's the big shift. It's nothing-to-something, not necessarily even something-to-something.
Brian: One of the things we talk about on the show is designing for outcomes instead of designing for outputs. Because it's very easy to create assets and create code and things that look like progress. They mask themselves as progress and improvement, and they may not actually return any business value or customer value explicitly. We have to consciously know what the outcomes are that we want, let alone measure them. Do you run into the problem, when you're coaching someone and getting them into this mindset of designing around an outcome and building your sprint, or maybe even a strategy for the next six to 12 months, around outcomes, that the important things to measure are not quantifiable in the tool? Do you work yourself out of a customer sometimes because the tool can't actually measure what's important? Does that ever happen?
John: That's a great question. I do a fun exercise with people called "let's predict the success of a relationship." We start with this activity where we forget about what we think is possible to measure and just start mapping our beliefs. The team will say something like, "Well, I think they shouldn't have arguments." Then someone will say, "Well, yeah, but it's not just that," and maybe they're talking about their own life: "Well, we argue a lot, but we resolve our arguments pretty well; we become stronger once we have the arguments." Then the team will sit there and go, "Huh, okay." It's not just about the number of arguments, it's the ability to resolve your arguments.
John: We keep playing this game and we map out our beliefs about what predicts these things; some of them we have more confidence about, and some we don't. Some of these things we strike, and we get this big messy network of nodes and edges on the wall, and that's what we start working with. What's really interesting is that there's almost always some percentage of these things that customers can instrument using our product. We would much rather our customers map the universe of things and acknowledge that some things might be difficult to measure, or are just beliefs at the moment that they haven't figured out how to measure. Because really, what Amplitude is very powerful at is doing behavioral analytics about these long-standing customer journeys through products.
Anyone who's done a 15-table join, tried to communicate it to other people in their company, and then tweaked it and had people collaborate on it knows how painful that is. That's the type of pain that we solve. But back to the particular question: all the coaching really centers around mapping all the beliefs, and we're usually confident that there are ways to measure some percentage of those things using our product, and that's fine by us.
Brian: There's almost like a meta-question, right?
John: I like, I'm meta, yeah, I got it. I'm there with you.
Brian: You're an analytics product, and you talk to your clients about what's important for them to measure. But then at some point, you have to know what's important to measure to know that your customers are getting the value. Are you interested in what they're setting up to measure, and then that becomes your measurement? Do you piggyback off that, or how do you justify that the sprint or the epic you worked on last quarter provided business value?
John: Yeah, that's amazing. We definitely dogfood our product, and we also dogfood the advice we give people, usually first. To give you an example, in 2018 we had this North Star Metric called "Weekly Querying Users," WQUs. That seemed about right, and we did some analysis and it looked like, "Well, increasing WQUs is probably going to mean this and this, and it's going to be some early indicator that our monthly recurring revenue is going to keep going up," etc. But there were obvious problems with that, and we saw them. As 2018 went along, we started to look at it more, and for any SaaS company, there's a point at which expansion within existing accounts starts to be really important in terms of percentage of revenue. We thought, "Can you hand WQUs to any new team member and say, 'Move that, or move something that you think moves that,' and be 100% confident they're going to make good decisions?" It broke down at that point.
What we did is we shifted to Weekly Learning Users. A weekly learning user is not just someone querying, because anyone who uses one of these tools knows you could sit there and query all day and not get an answer. In fact, querying more might indicate that you are not getting an answer, not doing anything with it. A weekly learning user is someone who shares some piece of digestible content, whether it's a notebook, a dashboard, or a chart. We have this North Star, Weekly Learning Users, and we believe three inputs drive it: activated accounts (they need to know what they're doing), broadcasted learnings (the user's attempts to broadcast some number of learnings), and a consumption-of-learning metric (the broad consumption across the organization of that particular piece of learning).
This all sounds really heady. Why would we go to all these lengths, when Weekly Querying Users sounded good? Because to us this really encapsulates a strategy. I think a lot of people from pure analytics backgrounds, or who are used to sitting with a queue of questions and answering those questions, are maybe not used to the idea of moving towards a cohesive strategy as expressed by a number of metrics and the relationships between those metrics. That's something we really encourage our customers to do. It's not data snacking; it's not, "Oh, I got this itch today so I'm going to answer this question." It took a lot of work to come up with that, and we're confident about the relationships between those things.
But more importantly, it helps any new team member. All you need to do is show a skilled product manager or a skilled designer or even a developer and say, "This is our current mental model as described by the relationships between these things. Where do you want to slot in? What do you have in mind?" That's really, really powerful. That's a roundabout way of saying we take this really seriously.
Brian: If I can sum this up, and I'll need you to repeat part of it: you had weekly querying users. What I take that to be is that I, the customer paying for the Amplitude software, went in and looked for content, or literally used a search interface to look up an analytic or some stat. You moved away from the number of people doing that, and how often they're doing it, as a measurement of your company's success, and toward this three-part thing that I heard included sharing some knowledge. Can you repeat what those three inputs were?
John: Oh, yeah, sure. The North Star is what we call "Weekly Learning Users," WLUs. Those are users performing the behavior of interest, which is sharing or distributing some piece of content. Then we believe there are three inputs that explain that metric, or three inputs that we really focus on. One is that the accounts are activated, meaning: does this account have at least a minimum number of people doing that? The next one is broadcasted learnings, which is the initial attempt to broadcast the learning. Then consumption is the actual long-tail consumption of that particular learning. Say it's a story like this: I sign up with Amplitude, and no one's really using it at all because we haven't onboarded, we haven't instrumented, we haven't done any of that stuff. Okay, then we get that done, so we've activated: we have at least a certain number of users learning some amount.
I'm in the tool, in a notebook that I'm putting together that tells a story with data, very relevant to the mission I'm working on. I attempt to invite people to that notebook or get them involved; that's the broadcast. Then, finally, the consumption of learning would be everyone's accumulated interactions with the notebook. If that sounds too complex...
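For readers who want to make the structure of this North Star concrete, here is a rough sketch of how a metric tree like this might be computed from a raw event log. The event names, the activation threshold, and the data shape are my own illustrative assumptions, not Amplitude's actual definitions:

```python
from collections import defaultdict

# Hypothetical weekly event log as (account, user, event) tuples.
# A "share" event stands in for broadcasting a notebook, dashboard, or chart.
events = [
    ("acme", "ana", "query"),
    ("acme", "ana", "share"),       # Ana broadcasts a learning
    ("acme", "bob", "view_share"),  # Bob consumes Ana's learning
    ("acme", "cal", "view_share"),
    ("zorp", "dee", "query"),       # querying alone isn't "learning"
]

ACTIVATION_THRESHOLD = 2  # minimum active users for an "activated" account

def north_star(events):
    active_users = defaultdict(set)  # account -> set of active users
    sharers = set()                  # unique (account, user) pairs who shared
    broadcasts = 0                   # total share attempts (input 2)
    consumption = 0                  # total views of shared content (input 3)
    for account, user, event in events:
        active_users[account].add(user)
        if event == "share":
            sharers.add((account, user))
            broadcasts += 1
        elif event == "view_share":
            consumption += 1
    return {
        # North Star: users who shared digestible content this week
        "weekly_learning_users": len(sharers),
        # Input 1: accounts with enough active users
        "activated_accounts": sum(
            1 for users in active_users.values()
            if len(users) >= ACTIVATION_THRESHOLD
        ),
        "broadcasted_learnings": broadcasts,
        "consumption_of_learning": consumption,
    }

print(north_star(events))
```

The point of the sketch is the shape John describes, a single North Star explained by named inputs, so a new team member can ask, "Which of these four numbers does my work move?"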
Brian: Got it. I don't know, I-
John: But the whole idea, for people listening, and I think especially designers and other folks, is that their experience with analytics might be something very simple, like "What percentage of people used a feature?" What they're not getting is the context and the relationships. For what I'm describing here, there are amazing belief networks, causal relationship diagrams, or just simple stickies and string on the wall, whatever you want to call them. We're describing our beliefs as they relate to the data, and I think that's really important. For some background, too: I'm not a data scientist. I've been a product manager and a UX researcher, and that's been my focus for a long time. It's not like I'm a pro at this stuff, but even for me, it grounds me in what I'm working with and makes my analysis a lot easier.
Brian: I imagine you may have some, not resistance, but when you're working with, quote, data people or analytics people or data scientists on your staff at Amplitude, are there routine things that you wish they would hear, that would sink in, or problems that maybe they're not aware of where you think, "We need to look at the problem differently"? Maybe you encapsulated that, and that's why you have this three-part thing as a reaction to the data-snacking mentality, which is, "What data do we provide? Great, they have it, now they can eat it." Is that the reaction to that, or are there other things? I'm thinking of our listeners; we do have data scientists and analytics-type people, and I'm wondering, if you were to work with them, what would you say? "Here are the things that I want you to think about, to get our heads a little bit out of the tech for a second and into the decision-support mentality." What would you espouse or advocate?
John: That's a great question. I think I can answer it a little bit with a story. I was the PM for search and relevance at Zendesk, the support software. My background is not in information retrieval or the guts of search, but very early on I was working on a team with very talented people: data scientists, really data engineers at the end of the day. One thing that I very much advocated for is that we needed to get everyone in the same room. We needed the people who were experts in what I would call the actors: the support agents, the support managers, or the person trying to get help on their Uber app. There are domain experts in that. There are also people who are experts in the surface area, the interface. There are people who are really good at searching or finding information on mobile, and people who are very good at finding information in, in our case, the support agent's view in their web browser.
Then you had our people who were really smart at creating data as it related to search, and they were great at data engineering, etc. The main thing I noticed was that there was a siloing, and the people on my team were just craving to be sitting next to someone who understood these other things really well. I think a lot of listeners know that from a first-principles angle; you're like, "Well, I know that there's a bigger picture here." In our case with search, we knew about raising the mean reciprocal rank of a search term: when someone searches, where does the person click? Do they click on the second item, the fifth item? In theory, raising that would make a difference. But when we looked more broadly, it really didn't relate to deflection of tickets and things like that.
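For listeners unfamiliar with the metric John mentions here: mean reciprocal rank (MRR) scores a search system by the position of the first result a user clicks, averaging 1/rank across queries. A minimal sketch; the input shape is a simplification for illustration, not Zendesk's actual instrumentation:

```python
def mean_reciprocal_rank(first_click_ranks):
    """Average of 1/rank across queries, where rank is the 1-based
    position of the first result the user clicked. None means the
    user clicked nothing, contributing 0 to the average."""
    if not first_click_ranks:
        return 0.0
    return sum(1.0 / r for r in first_click_ranks if r) / len(first_click_ranks)

# Three searches: users clicked the 1st, 2nd, and 5th result.
print(round(mean_reciprocal_rank([1, 2, 5]), 3))  # 0.567
```

Raising MRR means users find a clickable result higher up; John's point is that this locally better number did not necessarily translate into fewer support tickets.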
Our traditional metrics, the way we were measuring success, were locally related to search. When we broadened our horizon to what makes a difference for the human beings out there who need their support tickets resolved, or for the support agents, that perspective was so helpful. What I would say to the folks listening: you know in your heart you should pair with domain experts and people who know the human problem out there and understand the decisions being made. There's a lot of organizational inertia that discourages that, unfortunately, and so you need to fight for it. I guess my advice is, fight for it, because you know that it's important, and you know that this is not just a pure data science problem or a pure analytics problem. There's probably a lot of surrounding information that you need to understand to be able to actually help the business.
Brian: Sure, and you're echoing a sentiment from a data scientist from the Broad Institute I had on the show. He was mentioning how much more powerful his work is when he has a great domain expert with him who really knows the space. We met over music; I'm a musician as well, and he was trying to explore creativity in the context of jazz. He's an enthusiast in terms of music, not a musician, so he understood some of it but didn't have the lingo. It's just interesting to watch someone working in that space trying to answer a question like, "How does creativity work in jazz?" without all that domain lingo. Being the domain expert for a change was fascinating for me, because usually I'm the one advocating for the domain expert; even though I'm not a data scientist, as a designer and a consultant we deal with this all the time.
We've got to get the right bodies in the room who know the right questions to ask. I can smell when the right questions aren't being asked, and it's so powerful. So I totally agree with you on the need to provide that bigger context sometimes, so you don't just-
John: Jazz is just a mistake played more than once, right?
Brian: Yeah. Oh, there's tons of them, there's no wrong notes, just bad choices.
John: It's very easy for them to create the model for that: you're just making a mistake and playing it more than once. Then you go back to the top.
Brian: Exactly. Well, even that: play the head again. Well, what's a head? Oh, okay, it's just one form of the tune, and they cycle through it and play a chorus. Well, what's a chorus? Okay, shit. But even having that... You can imagine that on the business side. This was a fun side project he was working on, but you can imagine it in a business context where you don't even know what you don't know about it yet. I hear this happening, especially in the non-tech company space, the more traditional companies: "Oh, we have 100 years of data. Let's go buy some data scientists and throw them at this pile of data, and then magic will come out the other end."
John: Oh, I think that happens in tech companies, too. I think of the number of data scientist friends who've been hired in as part of some large effort, and then they're like, "Yeah, data engineering was the actual problem. We spent our first year just going around in circles solving that." The number of friends I have who've been frustrated by that dynamic, even in tech companies; I think it's more common everywhere than we would think.
Brian: Tell me a little bit about... We've been talking about the analytical part of all this, the quantifiable parts largely, but you have a UX research background as well. On this show, we talk about empathy, about the need to go talk to people, to ask good questions, to ladder up, all of that. How does that fit in? When you're working on an analytics tool, can you fill us in on your approach to qualitative research and the softer, mushier stuff that UX people deal with?
John: Yeah, and it's interesting. For context, I'm not a UX researcher at Amplitude, but I've done that in prior environments that required the chops. In talking to teams and doing it, I think so many of the basics apply, in the sense that, not to overuse the jobs-to-be-done stuff, you're really trying to understand what decision this person is hoping to make, what impact that decision has on the rest of the organization, and who is involved in it. Anyone who's done this knows that even as a UX researcher, if I do a co-design activity with customers related to anything analytics-oriented, it's, "Oh, we're going to do an Excel mock-up," you know. Anytime you get customers involved with that, on either side, and I've been on both sides of this, it's so easy to forget what you were trying to do.
I think that has a lot to do with the exploratory aspect of data in general: we have a gut instinct that if we just saw this stuff organized like this, then it would somehow be valuable for something we have to do. I don't know if this answers the question, but I think it requires the same chops, plus understanding that users have a hard time talking about what they are looking at and what they're hoping to get out of the data when they're looking at it over and over. You really have to use all the tools in the toolshed. To give you an example, and I don't know if you've done these things too, I'll do these exercises where it's like, "Okay, we're revamping the app. It's just going to be this mobile browser with three numbers on it."
That's it. We're not going to have all these fancy charts, we're not going to have all this stuff. Just three numbers and then one piece of narrative advice, like, "Consider this," or, "Do this." I love activities like that from a UX research standpoint, because they really force people to get out of their own heads to think about it. That's a common trick, and you probably have a lot more. I don't know if I answered the question, but it's a lot of the same tools. It's a jobs-to-be-done environment: they're making decisions, they're hiring these analytics to do a job. But there's this added layer that people find it incredibly difficult to talk about the numbers that they're looking for.
Brian: When you say it's difficult for them to talk about it, are you talking about their digestion of what's on the screen, or their expression of what's important for them to actually find out, as in, "What do I actually want to learn about?" Or...
John: Both, really, and that's what makes it doubly hard. If you show them something, and I think we can all relate to this, any of us who have been shown some mock or prototype of information on a screen, you can see their gears turning. They're having to process it: Where did this come from? Can I trust it? What is it? We see that all the time in Amplitude. When you actually ask people to talk through what they're thinking as they move through some of these querying screens we have, it's so complex. Data trust, where is this stuff coming from, data over time, their challenges with certain visualization techniques, even if it's "the right technique," like, "Well, I just need a radar chart." No, you don't, really. But that's how they've been anchored, or whatever. It's just complex. I don't have a fancy answer; it's just complex.
Brian: What you just told me reminds me of the exercise you mentioned, and I'm wondering if it's the same exercise I've done with analytics tools, especially in the context of monitoring applications. There's some system that's monitoring stuff, and it's supposed to advise you on what to do next or on what happens with something like this. Instead of thinking about what all the right stats are, it's: write in plain English, in prose format, what would be the value that we could possibly show, and maybe we can't even technically do it today. Express the analytics in words, like, "You should change this knob to seven instead of nine because we found out X, Y, and Z happened. We also think blah, blah, blah, and this is how we know that, and there's your recommendation." It's highly prescriptive, but it's an exercise in thinking about the customer's experience.
How close to that can we get, where I don't have to infer from charts or whatever format the data is in? How close could we get to something that prescriptive, and then try to work backwards from that? We probably can't get right to that full prose. Is it something like that, where you jump to this conclusion, like a value conclusion or something?
John: Yeah, and I do a couple like that. One is, if I have an Alexa, or if I have a tube of crackers or whatever, I'm like, "This is the interface now." You can ask Alexa; that's your interface. This is a beautiful future world where you just have your smart assistant to do these things. It's a similar type of thing. I think what it does is create just enough dissonance to snap people out of immediately trying to unravel the visualization, which I think all of us do. That's our instinct whenever we look at something like that.
Brian: The default next question is, "How should we visualize this data that we've captured?" That's the itch that we may not be the one to scratch?
John: Yeah, but I think that's also the point we can test at. When we've got that need to fill, that's when we can try multiple approaches. That's my experience; there is a point at which you need to go back to the drawing board. Although I would say that, depending on the subject, the user, or the person you're working with, some people are really good at the co-design aspect. I don't know your experience with that, but it seems to have a lot to do with what the people do each day and how they think about visualization. I've done co-design sessions with people where the next step was, "Well, let's start thinking, let's start drawing, let's start doing some other things." It depends a lot on the background of the people you're working with.
John: That's interesting. I wish Spencer and Jeffrey were here to answer because they're the founders of the company. It's funny how products have their histories. Amplitude, for example, was a Y Combinator company, but the founders didn't go to Y Combinator to build analytics. They had this fancy voice app they were working on, and the analytics was just a side effort. They were like, "Well, we have this app," and they surveyed what was available and said, "We really need this thing, and it's a little different. It's an event-based measurement tool. We really want to instrument this app and know whether people are using it or not." That was the founding story; it wasn't their key thing. A lot of the early customers were folks from Zynga or Facebook or other places who had moved on to other startups, and they wanted something that helped with the 90% of product questions they had around retention, engagement, and complex behavior patterns. Does this behavior predict that? Is there a relationship between these things?
That's the founding story: discerning teams that had a fair amount of autonomy, were tasked with working in those environments, and wanted a product they could do that with. When I think about what I would change as the newcomer to the company, now maybe five, six, or seven years on, I think it's what they're starting to do now, which is interesting. This notebooks feature, to me, is just so, so good, and it gets away from the traditional dashboard. A notebook is very similar to a data science notebook: you can weave a story and a narrative, you can make the charts live, you can communicate it. As a product manager, that is pure gold to me, and we've just started to do those things. I think the answer would be more of what they're really digging into now, which is this learning-user concept: how do you create stories with the data to motivate your team and keep everyone aligned?
I think if it hadn't existed and I had joined a year ago, I would have said, "Oh, you're missing this little element, the part that actually integrates it into day-to-day product development." But they've just started doing that now, so they stole my answer.
Brian: Nice. Just for people who don't know, and tell me if I have this right, notebooks, for people who aren't data scientists, are effectively a collection of both quant data, maybe charts or tables, stats, data you've put into visualized widgets or insights, plus qualitative stuff like commentary on it: "Why do we care about this? Well, design is currently tracking these metrics because we're running a study on dah, dah, dah, and we think we can move this up, and that's a proxy for this other thing." You can provide all this context in that storytelling mentality, so that when someone new comes in and asks, "Why do I care about time on site, or whatever the metric is?", the answer is there.
John: Exactly, and that's the huge thing. One thing that we learned, being in this business of teams getting going, is that it's so easy to get to the point where you've instrumented your products and any new person joining your company can't make heads or tails of anything. You've got all these events; are these duplicate events? We've invested a lot of time in this taxonomy feature, which helps manage your taxonomy. When people try to build this stuff in-house, they just forget about all of that. "Oh, it's just events, it's semi-structured information, we're going to put it here and then run queries on it." But all of that is really, really important. So back to the notebooks thing: one of the biggest use cases we've seen is people using notebooks to onboard people and orient them to all the available analytics and metrics that are being recorded. That's actually a really good testament to that need.
Brian: They use it to actually show how they use Amplitude at the-
John: Right, it's pretty meta.
Brian: Wow, that's awesome.
John: Yeah, we see them do that, or some even use it for training: "Okay, let's start with this idea that we've got this whole universe of users. How would we segment those? Well, here are the key ways that we segment." Then you've gone down one layer, and I think that's kind of cool. But, yeah, for people who don't know about these data science notebooks, it's a mix of qualitative and quantitative: you can embed charts that are live or point-in-time, you can make comments, and you can do various things. A lot of people who don't do this for a living get intimidated, and a lot of the stuff is not rocket science, but it's just annoying to have to go to someone in your company and say, "Hey, can you spend three or four hours just explaining our information to us?"
That's really hard to do, so these notebooks help with that particular thing. I think that type of stuff is really the future of moving away from very, very staid dashboards and things like that.
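The notebook idea John describes, an ordered mix of qualitative narrative and quantitative metric blocks, can be sketched in a few lines. This is a hypothetical illustration of the concept, not Amplitude's actual data model; all class and field names here are invented.

```python
# Minimal sketch of a storytelling "notebook": interleaved narrative notes
# and metric blocks, kept in order so the story reads top to bottom.
# Hypothetical model for illustration -- not Amplitude's real API.

class Notebook:
    def __init__(self, title):
        self.title = title
        self.blocks = []  # ordered list preserves the storytelling flow

    def add_note(self, text):
        # Qualitative context: the "why do we care?" commentary.
        self.blocks.append(("note", text))

    def add_metric(self, name, value, context):
        # Quantitative block, annotated so a raw number reads as a story
        # a new teammate can follow.
        self.blocks.append(("metric", f"{name} = {value} ({context})"))

    def render(self):
        return "\n".join(f"[{kind}] {body}" for kind, body in self.blocks)

nb = Notebook("Onboarding: our key metrics")
nb.add_note("Design is tracking activation while we run the pricing study.")
nb.add_metric("weekly_learning_users", 1240, "proxy for durable engagement")
print(nb.render())
```

The onboarding use case John mentions falls out naturally: a new hire reads the rendered blocks in order instead of reverse-engineering a dashboard.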
Brian: Right. I don't know if there's much in terms of predictive or prescriptive intelligence in the tool. Does it provide that as well, or is it mostly rear-view-mirror analytics?
John: It's interesting you say that. We have this new feature called Impact Analysis, where you can go from day zero of a particular use of a particular feature and then see the impact it has on another set of things. We give some statistics and some other values in there. So we're middle of the road, moving toward more and more complex questions. But one thing our team realizes is that, to prevent people from making bad decisions or poor statements, you need to be so, so careful about presenting what you're actually showing, whether there's a correlation between things, or even implying causation without doing the background on it. We're not completely rear view, we're in this middle ground, but we're also not going to go on record and say we're predicting what this value's going to be in six months.
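The core of what John describes, comparing an outcome between users who did and didn't adopt a feature from "day zero", can be sketched as a simple split. This is an illustration of the idea only, not Amplitude's Impact Analysis algorithm; the user records and event names are made up.

```python
# Hypothetical "impact" comparison: split users by whether they ever used
# a feature, then compare the rate of a follow-on outcome event between
# the two groups. Illustrative sketch, not Amplitude's actual method.

def impact_split(users, feature, outcome_event):
    """Return (outcome rate for feature users, rate for non-users)."""
    with_f = [u for u in users if feature in u["events"]]
    without_f = [u for u in users if feature not in u["events"]]

    def rate(group):
        if not group:
            return 0.0
        return sum(outcome_event in u["events"] for u in group) / len(group)

    # Caveat John raises: this is a correlation between groups, not proof
    # of causation -- present it carefully.
    return rate(with_f), rate(without_f)

users = [
    {"id": 1, "events": {"notebook_created", "weekly_return"}},
    {"id": 2, "events": {"notebook_created"}},
    {"id": 3, "events": {"chart_viewed", "weekly_return"}},
    {"id": 4, "events": {"chart_viewed"}},
]
print(impact_split(users, "notebook_created", "weekly_return"))  # (0.5, 0.5)
```

Even this toy version shows why careful framing matters: identical rates here say nothing about whether the feature caused anything.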
Brian: Right, and my reason for asking wasn't just the hype of machine learning, blah, blah, blah. It was going to lead into my next question, which is: do you struggle at all with expressing, in the tool, the evidence that backs up the conclusions you're showing? Do your customers care how you arrived at a number?
John: They absolutely care. We spent a lot of time on this in Amplitude: for any data point you see, usually if you hover over it there's a message that says, "Click to inspect," or you can create a cohort off of that, or you can see the paths to that particular thing. We really made that effort, and that's exactly right. Having worked at two analytics companies now, Pendo and now Amplitude, data trust, people being able to unravel what a number means in a way that makes sense to them, seems like one of the massive limiters. Even the best-laid plans run into the entropy that builds up in these tools as people use them more and more. It just gets messier; a bunch of hands, a bunch of people playing around. At least with Amplitude, we make a really big effort so that if you want to understand why a number is there and what is behind it, that's really easy.
John: But we could always do better, because in my mind this is the number one difference between the data-snacking approach, "that number kind of looks interesting," and something you can really pin your business on, which is the dream of all this. But once people start to ask really good questions, it really challenges the tool.
Brian: John, this has been fantastic chatting with you, I really appreciate you sharing this with our listeners. Do you have any parting wisdom for people who are working more on the tech side or the data side of things and events, and who are thinking, "I want to produce more useful stuff, whether it's reports or actual software applications. We're trying to provide better stuff, more engaging, more useful..." Any closing advice you might give to someone like that?
John: Going back to what we were talking about from the UX research angle: in this area, for any one of us who've done this, there's a constant push and pull between customizability and the promise of preemptive insights, the smart, intelligent system doing these things. So how prescriptive are you? Is what you're presenting actually helping someone do their job? This is probably reflective of my learning at Amplitude: really go back to human-centered design, really think about whether the person is able to effectively do their job and answer the questions they're asking. What happens is all of us want that, but then we hit this wall, we start to get conflicting information from users, and we start to...
Then we're like, "Well, okay, we're just going to let them find what they want to find." I think that exploratory type of research should be something that's possible in these tools. In fact, I think it leads to some of the best questions when users can do that. But I would really hope people stay patient before they throw their hands up in the air and say, "Well, we'll just make a query builder and that's it." Really see if that thing can solve the problem. I don't know if that makes sense, but it's something that's been on my mind a lot lately.
Brian: Yeah, I sometimes talk with clients and people in this space about knowing whether you're producing an explanatory product or an exploratory product. It doesn't mean you can't have some of both, but there's a big difference in the value. In your case, I'm guessing a lot of these people really want explanations about what they can do to make their software better. They're not there for fun, but they might run across things they didn't know were possible, which begins the questioning. If you put all the effort on them, you're just shifting the tool effort over to the customer. You're making it much harder for them to get the value out, at which point they may abandon or quit. It's not just knowing "are we explanatory or exploratory" overall; it's knowing it for this feature, this outcome, this goal we're working on in the sprint. Just being aware of that, I think, is part of the challenge.
Like, should they be able to walk away with... "I should be in the six-to-19 apples range," whatever that means? Should I be able to walk away with that level of clarity or not? I don't know.
John: That's interesting you said that, because of a lot of features we're experimenting with. One thing Amplitude does: we built an undo feature, so we try to make it really easy to go really deep and then back out really gracefully. Every version of a chart as you work on it is saved, so you can back out of it. There are features like Save As, or you can go to someone else's chart, and if you have some idea of where you want to take it, you can edit it. But you're not editing their chart, you're editing a copy, and you can think about it. Back to that point: there are many things you can do to juggle those needs concurrently, having definitive things and also encouraging exploration. We've found that with our product as we experiment more.
I just told you about one: the ability to telescope into a metric and then do more exploration around it. That didn't exist before, and then we thought, "How about when you hover over any data point, we let them inspect or explore it?" I would say there are ways to accommodate both, at least from our perspective and what we've learned.
Brian: Right, and I think there's always some of both, and I don't think most people are going to take everything at face value. But I hear what you're saying. One of the things I've been recently working on is a UX framework for this called the CED framework: conclusion, evidence, and data. It's not necessarily a literal expression of where the screens go or what goes on every screen, but the concept is that, when possible, the tool provides conclusions, with the second tier being the evidence by which the tool or application arrived at the conclusion. Level three might be really getting into the raw data: "What are the queries? What was the SQL that actually ran?" Or whatever it may be. There are times when that data is necessary early in a customer journey; it may just be that we need to build trust around this stuff. We can't be totally black box, but we don't actually expect people to spend a lot of time at the D level.
We really want them to work at the C level, but that'll take time, and evidence is required sometimes. Especially if I've got to go to the boss, I can't just tell him, "It's 18, we should be at 18, not 12." He'll ask, "Well, how did you arrive at that?"
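Brian's CED idea can be sketched as one result object carrying all three tiers, so a UI can lead with the conclusion and let skeptical users drill down. This is a hypothetical sketch of the framework as described, not an existing API; all names and the example values are invented.

```python
# Sketch of the CED (conclusion, evidence, data) framework: three tiers
# of detail on one result, surfaced on demand. Hypothetical illustration.

class CEDResult:
    def __init__(self, conclusion, evidence, data):
        self.conclusion = conclusion  # tier C: what should I do?
        self.evidence = evidence      # tier E: how did we arrive at it?
        self.data = data              # tier D: raw queries / rows

    def explain(self, depth="C"):
        """Return progressively more detail as depth moves C -> E -> D."""
        if depth == "C":
            return self.conclusion
        if depth == "E":
            return f"{self.conclusion}\nBecause: {self.evidence}"
        return (f"{self.conclusion}\nBecause: {self.evidence}"
                f"\nRaw: {self.data}")

r = CEDResult(
    conclusion="Raise the target from 12 to 18",
    evidence="cohorts retained 2x when the metric stayed above 18",
    data="SELECT user_id, metric FROM events WHERE ...",
)
print(r.explain("E"))
```

The "go to the boss" scenario maps to `explain("E")`: the conclusion plus its justification, with the raw SQL held in reserve for the trust-building D tier.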
John: What we find a lot is that instrumentation rigor is one of our big problems to solve. There are products on the market that just try to record everything, and there's a lot of entropy there and a lot of issues; they're very fragile in some ways. So we as a company definitely believe in explicitly instrumenting these events. But at the same time, you'd be amazed how many product teams... There's this thing called a user story: you write a story from the user's perspective, what are you trying to do? You would think teams would tack onto the acceptance criteria for any story, "You'll use a noun and a verb, and you'll capture these properties." I mean integrating instrumentation at the product level, not "we're instrumenting how our servers are working," but just, "What did the user do?"
That's still relatively new. For people who've worked in environments where that's second nature, okay, they're in another world, but we find that companies often need to change their approach. You mentioned your CED thing; what's interesting is that it extends to the UX of instrumenting. You're the user trying to draw some conclusion, you're doing these things. It's almost like service design, in some sense, because you need to design the approach to even instrumenting this stuff. It makes your head hurt sometimes.
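The "noun and verb, plus these properties" discipline John describes amounts to validating events against an explicit taxonomy at tracking time, instead of auto-capturing everything. A minimal sketch of that acceptance-criteria check, with an invented taxonomy and `track` function (not a real SDK's API):

```python
# Sketch of explicit instrumentation: events must match an agreed taxonomy
# (noun_verb name + required properties) or be rejected. Hypothetical API,
# not Amplitude's SDK.

TAXONOMY = {
    "notebook_created": {"required": {"source", "template"}},
    "chart_shared":     {"required": {"channel"}},
}

def track(event_name, properties):
    """Validate an event against the taxonomy before sending it."""
    spec = TAXONOMY.get(event_name)
    if spec is None:
        raise ValueError(f"unknown event: {event_name}")
    missing = spec["required"] - properties.keys()
    if missing:
        raise ValueError(f"{event_name} missing: {sorted(missing)}")
    return {"event": event_name, **properties}  # hand off to the pipeline

print(track("notebook_created", {"source": "onboarding", "template": "kpi"}))
```

Rejecting unknown or under-specified events at the call site is what keeps the data legible for the newcomers John mentions; the taxonomy file doubles as documentation.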
Brian: Yeah, all this stuff makes my head hurt. But that's why we have conversations; hopefully we're knowledge sharing, and these are giant aspirin conversations or something, I don't know. But I found this super useful, thanks for coming on the show. Where can people follow you? I know I found you on Twitter, I forget how, but what's your [crosstalk 00:47:23]-
John: Twitter is good. I've installed the Stay Focused app to keep myself under 20 minutes a day on Twitter, but you will find me there eventually. I write a fair amount on Medium, and it's pretty easy to find me there.
John: If you just type in "John Cutler product", I have about 400+ posts on Medium. Some are better than others but-
John: ... yeah, that's the best way for right now.
Brian: Awesome. Well, I will definitely link both of those, your Medium page and your Twitter up in the show links. Man, John, it has been really fun to chat with you here. Thanks for coming on the show.
John: Cool. Yeah, thanks for having me. Yeah, awesome.
Brian: Yeah, super. All right, well, cheers.
John: Cheers, bye-bye.