Today I’m chatting with Cole Swain, VP of Product at Tomorrow.io. Tomorrow.io is an untraditional weather company that creates data products to deliver relevant business insights to their customers. Together, Cole and I explore the challenges and opportunities that come with building an untraditional data product. Cole describes some of the practical strategies he’s developed for collecting and implementing qualitative data from customers, as well as why he feels rapport-building with users is a critical skill for product managers. Cole also reveals how scientists are brought into the fold when developing products at Tomorrow.io, and the impact that their product has on decision-making across multiple industries.
Highlights / Skip to:
- Cole describes what Tomorrow.io does (00:56)
- The types of companies that purchase Tomorrow.io and how they’re using the products (03:45)
- Cole explains how Tomorrow.io developed practical strategies for helping customers get the insights they need from their products (06:10)
- The challenges Cole has encountered trying to design a good user experience for an untraditional data product (11:08)
- Cole describes a time when a Tomorrow.io product didn’t get adopted, and how he and the team pivoted successfully (13:01)
- The impacts and outcomes of decisions made by customers using products from Tomorrow.io (15:16)
- Cole describes the value of understanding your active users and what skills and attributes he feels make a great product manager (20:11)
- Cole explains the challenges of being horizontally positioned rather than operating within an [industry] vertical (23:53)
- The different functions that are involved in developing Tomorrow.io (28:08)
- What keeps Cole up at night as the VP of Product for Tomorrow.io (33:47)
- Cole explains what he would do differently if he could come into his role from the beginning all over again (36:14)
Quotes from Today’s Episode
- “[Customers aren't] just going to listen to that objective summary and go do the action. It really has to be supplied with a tremendous amount of information around it in a concise way. ... The assumption upfront was just, if we give you a recommendation, you’ll be able to go ahead and go do that. But it’s just not the case.” – Cole Swain (13:40)
- “The first challenge is designing this product in a way that you can communicate that value really fast. Because everybody who signs up for [a] new product, they’re very lazy at the beginning. You have to motivate them to be able to realize that, hey, this is something that you can actually harness to change the way that you operate around the weather.” – Cole Swain (11:46)
- “People kind of overestimate at times the validity of even just real-time data. So, how do you create an experience that’s intuitive enough to be decision support and create confidence that this tool is different for them, while still having the empathy with the user, that this is still just a forecast in itself; you have to make your own decisions around it.” – Cole Swain (12:43)
- “What we often find in weather is that the bigger decisions aren’t made in silos. People don’t feel confident to make it on their own and they require a team to be able to come in because they know the unpredictability of the scenarios and they feel that they need to be able to have partners or comrades in the situation that are in it together with them.” – Cole Swain (17:24)
- “To me, there’s two super key capabilities or strengths in being a successful product manager. It’s pattern recognition and it’s the ability to create fast rapport with a customer: in your first conversation with a customer, within five minutes of talking with them, connect with them.” – Cole Swain (22:06)
- “[It’s] not about ‘how can we deliver the best value singularly to a particular client,’ but ‘how can we recognize the patterns that rise the tide for all of our customers?’ And it might sound obvious that that’s something that you need to do, but it’s so easy to teeter into the direction of building something unique for a particular vertical.” – Cole Swain (25:41)
- “Our sales team is just always finding new use cases. And we have to continue to say no and we have to continue to be disciplined in this arena. But I’d be lying to tell you if that didn’t keep me up at night when I hear about this opportunity of this solution we could build, and I know it can be done in a matter of X amount of time. But the risk of doing that is just too high, sometimes.” – Cole Swain (35:42)
Links Referenced:
- Company website: https://Tomorrow.io
- Twitter: https://twitter.com/colemswain
Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill and today I’ve got Cole Swain on the line from Tomorrow.io. You guys call it ‘Tomorrow IO,’ right?
Cole: Yeah, Tomorrow IO. [Tomorrow dot io 00:00:43]. It’s all based on whoever’s talking.
Brian: Yeah, exactly [laugh]. So, you’re the—again, you’re VP of Product over here. Tell me a little bit about the work you’re doing. Give people some context for what it is. And then I’ve got some questions for you. We’ll jump in.
Cole: Yeah, Brian. So, you can think of Tomorrow.io as it’s an untraditional weather company. The easiest answer—a lot of people ask me and I typically will just say, “Oh, we’re a weather company.” But it goes a lot deeper than that.
You’d think of us, kind of, reinventing the space in three key categories. The first is different ways of observing weather around the world. The second is new ways of modeling weather in the private industry today. And then third is, how do we deliver insights that actually communicate the forecast in a way that’s meaningful to our customers, that’s contextualized relative to what they need in terms of business insights, instead of just a raw forecast of numerical data that’s kind of the traditional forecast you’re used to today.
Brian: I think that last facet that you talked about is probably the most interesting one for us to talk about today. A business or an organization thinks that they want weather, right? But actually, they don’t probably really just want numbers, humidity, rain, precipitation. That’s not what they probably really want, right? It’s always this outcome that’s downstream. So, walk me through that process of getting behind what someone thinks they want to what they actually want. I know you guys had a little journey here.
Cole: Yes. You know, we use the term insights; it’s a ubiquitous term nowadays in industry, but that’s effectively what we’re aiming to achieve with our customers. And you’re spot-on in terms of what they’re used to. And even deeper than that, they’re used to being actually told by meteorologists in kind of a summary report of what’s expected to happen in a really broad swath, on a regional basis.
But you have to remember that these customers have very large networks of assets that they have to care about. They’re either dealing with road networks or big large areas or a bunch of different point locations all across the world that in essence, altogether, as a dispatcher or a network operator, you need to be able to canvass everything that’s going on to drive a single signal of what it is you have to pay attention to. And often for our customers, you know, weather is a chaos factor. It’s something that changes the operation for them. It’s not something that makes it easier for them.
And typically, without weather, things are kind of going hunky-dory, but once that gets introduced, it’s how do we react? And most people that we see today are very reactive in this framework. And so, the way that we’ve approached it is working with our customers to better help them define what it is that they care about, what is it they’re watching for, what it is that they need to do in response to a particular weather phenomenon, and giving them tools to equip them to be able to program that into an application to then monitor for that impact across that large array of assets and network to be able to signal to them in a variety of different visualizations and mediums of distribution for them to be able to actually take action and be proactive.
Brian: Can you give a concrete example of a type of customer? You don’t have to use their name, but I’m thinking like farms, I don’t know, delivery supply chain—like, delivery networks, anything that’s operating on the road, but I don’t know if that’s right. Give me an example of a company that buys this service and what are some of the really practical use cases that they’re doing with Tomorrow.io?
Cole: Yeah, so our customers are pretty wide-ranging across many different industries. I’d say in the aviation space, we have Delta, JetBlue, United are using our products. We have Uber using our products. We have National Grid using our products. Even the US Air Force is using our products. And a variety of different automotive companies and logistics and supply chain companies that in essence, what we’re doing is we’re creating awareness for them.
Now, we’re not going to talk a lot about—today—the kind of observational side of the constellation of satellites that we’re launching, due in a few weeks, or the modeling side of our business, but for the most part, when you think about it, where we’re going to be talking about, that ability to drive insights from our product, it’s an awareness factor. And many of our customers come to us asking about accuracy, but the reality is, if we can tell you that something’s going to be on the table three days in advance, that’s just as valuable.
A use case that’s kind of personal to me that drove some of the identification of what we’ve built so far is actually I grew up with a father who runs a pavement maintenance company up in New England. And obviously, you know, pavement maintenance gets washed out based on precipitation. If they’re doing a job, it’s a sunk cost; all the materials get washed away. He has to reschedule the job, but it’s a seasonal business, so he has to reschedule for the next season because he’s already booked up for that entire summer. On top of that, he cares about the speed at which his pavement dries, which is a derivative of temperature, dew point, humidity, wind speed, solar radiance, cloud cover; it’s like six different parameters.
On a given day, he’s got, like, 20 to 30 different job sites that he’s running jobs at, and it’s hard for him to even predict how fast his pavement is going to dry because he’d have to look at this confluence of weather factors that together gives him that answer. And he would care about this because say he’s doing, like, a Walmart parking lot and he tells them it’s going to take four hours for the parking lot to finish, but instead, it takes nine hours for the parking lot to finish because it was just slower than anticipated, that’s five hours that customers couldn’t otherwise park in that parking lot, and the customer is not too happy. So, we give him the ability—and him representative of any kind of operator—to come into our tool, define what it is that he cares about around pavement drying time or simple insights around just precipitation, and automate alerts outward to his crews to understand what they need to be able to take to that job site, to understand how to communicate to the customer what to expect, and basically operationalize weather in a way that they otherwise couldn’t prior.
Brian: Thinking about this tool from a practical standpoint, that in this case, I think it was your—was it your dad that runs this company? His domain is concrete. And someone else’s domain could be something—I don’t know, growing crops; someone else could be driving trucks. Is this something where you had to factor in a way for them to input some data about their domain such that the alerts and the experience they’re getting is relevant to their domain? Because I can see this being very abstracted, which is like, well, our tool can handle every possible domain situa—or maybe it can.
Tell me about that whole experience so that they’re not having to do the mental math of taking six parameters, and then translating that to, like, “All I care is dry time.” Really, ultimately, it’s this, will it dry fast enough or not? But I have to interpolate it, like so do they teach something? Do they input data into the tool and then you create a model around, like, concrete for them? And for the drivers it’s around, I don’t know, fleet speed? I don’t know, time-to-distance or something?
Cole: It’s such a great starting question. Because, you know, as we came up with these ideas, it wasn’t what we anticipated to be one of the largest friction points, but quickly became it once we started to see customers really interested in it. But they would say, “Yeah, well, what do we do now? Help us get there.” Right?
And so, you know, Shimon, our CEO, came up with some really great tactics on a templatization standpoint for us to be able to prioritize at the beginning, in terms of creating a database and a framework that allows us to learn from each customer that we’re building with and really use that as leverage to be able to teach other customers within the industry of what they’re seeing, to help them come to that conclusion. And we were lucky with our first few customers in that they had protocols defined internally—it perhaps was why they were drawn to us otherwise—and it was more of a material way for us to digitize those protocols. But some of our customers, it takes a decent amount of time with them to learn about their historical datasets to bring it internally. We have an ML ops team that’s actually building insights.
Take, for example, something like a power outage forecast that’s not so simple as just being able to create rules and that’s a lot more training in terms of taking historical business event data. And so, there’s different layers of the insights that we’re delivering for our customers. And it’s all built into how engaged they are with us and how much they want to grow with us to be able to actually get to that level of sophistication.
Brian: On a practical level, does that mean, if you’re serving either one, let’s take the airline model, right—you tell me if I’m wrong—is this about routing? Is that primarily what they’re interested is routing? Or is it about supply-demand, which is like we’re going to need another plane over here because they’re not going to be able to take off here so we need to make sure we have equipment and staffing over here instead to handle that load? And then once they have that, does that mean the second airline that comes in, you’ve already built in something which is the plane routing model, and so you just plug right into that? You’re getting alerts that are already specific to that; you don’t have to teach the software that ultimately I just care about routing, putting my equipment and my staff at the right place at the right time. That’s what the ops person, I’m guessing, wants to do. Is that how that works? Is it—or are you, like, custom build it each time a customer comes in with their concrete domain or agriculture, or—is that how that works?
Cole: You know, I’d love to say it was 80/20. I’d love to say 80% of the time, it’s seamless and it’s always super repetitive. I’d say it’s more, like, the domain of 70. But it’s really the decisions of our business where we’re targeting. So, take that aviation use case, for instance.
There are those scenarios where they want to be able to make better decisions around routing and actually build, maybe, something that’s unique to them because, you know, at the level of aviation, we’re talking about something that’s a very regulated and safety-oriented industry, that they’re not necessarily going to listen to an enterprise kind of solution and what they think is best for their organization. And they’re going to do their diligence and QA to make sure that the rules fit the bill that they’ve set forth for their business. But as you get deeper into the aviation arena, think about just the icing protocols, which is, for those who might not be aware, it’s just if it’s very light freezing rain or any type of precipitation other than rain in an area of an airport, you have to apply a particular material to an aircraft to avoid it from developing ice on its wings, et cetera. And there’s a ton of different nuance to the weather that’s happening that’s very repeatable in that arena, that we do find our ability to kind of push it from customer to customer. And so, it’s just all about the quality that the customer is prepared for and the level of risk that they’re willing to take on by adopting our templates versus blending their own insights and intelligence into our platform. But we give them that capability from the start.
Brian: Oh, got it. So, for example, there’s a deicing template, it’s literally a deicing alert that says, you know, the following planes will need to be deiced on this schedule based on this forecast, something like that, that they’re not having to interpret it like, “Here’s an alert. The temperature is going to be this.” And then they have to imply, “Let’s see, is that at the number for deicing? Yes or no? Oh, it is. Okay.”
Brian: I see. Okay.
Cole: And then just full capability to kind of customize it on your side—
Cole: If you want to. Yeah, exactly.
Brian: What’s the hardest thing about designing a good experience within that? Designing a valuable product someone wants to pay money for in that space?
Cole: It’s untraditional. It’s not what people are used to when dealing with the weather. And what we often find, too, is that many organizations aren’t actually prioritizing kind of this concept of climate security or weather intelligence where they have kind of just accepted that weather is what it is. And it’s not necessarily something that’s going to change, or there are tools out there that can help us actually kind of take control of it and get ahead of it and actually use it as an advantage to our business.
And so, the first challenge is designing this product in a way that you can communicate that value really fast. Because everybody who signs up for a new product, they’re very lazy at the beginning. You have to motivate them to be able to realize that, hey, this is something that you can actually harness to change the way that you operate around the weather. And in terms of small nuance, like, user experience, and things of that nature, the product is predicting the future. That’s its essence, that is effectively what we’re doing.
And creating a means of blending, you know, this is a prediction, it’s not a guarantee, creating a mechanism that allows them to distill down and come to their own conclusions by signaling to them what we’re expecting to occur and supplying it with things like confidence and the ability to drill down into that information. And even just in the real-time data, people often underestimate the world’s ability to understand what’s happening right now as it relates to weather. In the middle of the Atlantic Ocean right now, very few people have any idea what the temperature is because there’s nothing out there to tell you that. And so, people kind of overestimate at times the validity of even just real-time data. So, how do you create an experience that’s intuitive enough to be decision support and create confidence that this tool is different for them, while still having the empathy with the user, that this is still just a forecast in itself; you have to make your own decisions around it.
Brian: Are there any big things you’ve had to change, assumptions that were wrong? I’m always curious about the learning part, like what—maybe you thought, “Oh, they’re going to love this.” It’s like, “Nope.” [laugh].
Cole: Yeah. Yeah, yeah. So, like, the insights domain, I think when we launched it, we thought we could just put the name of the insight at the front of the alert, or that we have like this kind of Gantt Chart-style visualization that we show that shows, like, all of the expected events that you’ve programmed that you want to be able to see coming up. And, you know, I think we wrongfully assumed that just by telling somebody, “Secure equipment between this time range,” that that’s going to be enough intelligence for them to be able to say, “All right, great. Yep, I’m going to go ahead and go do that.”
And it wasn’t the case, upfront. They’re not just going to listen to that kind of objective summary and go do the action. It really has to be supplied with a tremendous amount of information around it in a concise way—that’s arguably the hardest part of the experience—and be able to blend it into, like, a ramp-up to the situation with all of the possible context. And that’s what’s taken us the time. And the assumption upfront was just, if we give you a recommendation, you’ll be able to go ahead and go do that. But it’s just not the case.
Brian: Help me paint the picture of this. So, I’ve got a Gantt Chart in my head. You did some forecasting stuff. I get this alert on, I guess, my dashboard and it says, “From 11 to 1pm, I should be securing my equipment.” That was the old version. What’s the new version, the new experience? What was the change that you made to that?
Cole: Being able to drill down more into the information. And knowing that yes, we were trying to abstract away the core numerical weather data that comprises that insight, but more specifically, that it’s never enough and everybody needs to explore in their own right to come to their own decisions. And so, that becomes—I mentioned it earlier—but a signaling mechanism as opposed to an action-oriented mechanism. It’s something that carries with it a package of data that you click into and have layers below to be able to understand the situation. And spending more time on our side on the modeling side of things, to be able to create mechanisms that actually profile our confidence in the scenario. And that’s where it’s really started to change the way that we think about the product. And we have some really exciting releases that are coming down the pipe that further amplifies that kind of mindset.
Brian: It’s very interesting. So, I have a framework that I put out called the CED; it stands for ‘Conclusions, Evidence, and Data,’ with conclusions in your case being something like, “You need to go secure equipment.” But there’s two other facets. There’s the evidence part. And what a lot of teams get wrong is they’re shoveling evidence at people and there’s no conclusion, there’s no prediction, there’s no action; it’s just starting with evidence.
This is where things often begin to go wrong or there alre—they’re just wrong from the start. Because you’re putting all the interpolation and the inference about what should I do about this on the human brain instead of on a computer, right? So, I liked that you’re talking about this and it’s interesting to me that they want to go into this evidence and understand that. Do you think that’s because it’s expensive, or it’s time-consuming to go secure equipment and I really want to make sure that’s the right thing before I go do it? Or is it about, well, we’re not really sure how accurate your thing—you know, the prediction is? Like, tell me what’s behind the desire there? Is it, if I get this wrong, like, my ass is on the line [laugh]? Like—
Cole: [laugh]. Yeah, yeah.
Brian: What—tell me about some of those motivations behind wanting to look at the data behind the evidence.
Cole: You’re spot on, Brian. There’s different sensitivities to different workflows that somebody has to deal with. In one case, perhaps it’s something having to do more along the lines of safety, and actually, it’s something around life and death, perhaps. And so, being more open to receiving alerts anytime a threshold trips in a scenario like that because it’s just so sensitive, you have to know anytime there’s even a potential for it to occur, versus something that’s more along the lines of what you just described, like, just securing equipment, which of course, has safety implications, but it’s a bit more haphazard and not necessarily something you have to hear about every single time that condition trips. And so, these sensitivities are part of the dance that we do with our users, and understanding for each one of these insights or events that are coming their way, what does it mean to you?
And it’s taken us time to be able to create mechanisms that can parse that through the product. And in some cases, it’s more conversational with our customers, in terms of really getting to a point where they can understand and take care of it themselves. But it’s absolutely blended within the product. And the ones that are more sensitive, certainly require more drill down. And time. Time to make these decisions.
What we often find in weather is that the bigger decisions aren’t made in silos. People don’t feel confident to make it on their own and they require a team to be able to come in because they know the unpredictability of the scenarios and they feel that they need to be able to have partners or comrades in the situation that are in it together with them. And some of these big decisions really can be about a four-hour discussion that’s happening across the organization as it’s evolving on what we’re going to do. How many flights are we going to delay? How many flights are we going to cancel, which impacts you, Brian, it impacts me, I’m trying to get to where we have to be. And they know it’s not small decisions. So, they have to have as much equipment at their disposal to understand the full context of the situation. And we’d like to think that we can equip them with that.
Brian: I’ve done a lot of work in this kind of alerting space and stuff, so I’m always thinking about things like, “Oh, the predictions changed. Oh, all of a sudden, it’s disappeared. It doesn’t say ‘secure equipment.’ We’re in the middle of a three-hour meeting about securing thousands of pieces of equipment and the dashboard doesn’t show anything now, what the F?” And someone’s in a—you know, like, the [crosstalk 00:18:26] job it is to, like, do this.
You’re laughing. I’m guessing you’ve had some experience with this about, well, how long does that persist and then what if it changes? This is temporal, right? This has chan—it’s a dynamic thing that’s changing. Talk to me about that whole experience. Like, how did you think about how long does it last? And what if it goes away? What if it gets better [laugh]?
Cole: What if it gets better? Yeah—
Cole: It’s—yeah, it’s a topic that comes up often. And we have mechanisms to mitigate it, but in the end, you’re actually manipulating the data in a way by being able to control that situation. Because if you’re trying to solve for the user experience at the dilution of the true data that you’re trying to give, there’s a balance between that. And there’s a lot of mechanisms that allow customers to be able to come to their own conclusions, but in the end, everything there—and you know, I’ve had this conversation with customers so many times, and it just comes down to, at least in our domain, an empathy for the reality of the data that you’re operating around.
And if something’s going to disappear—let me backtrack for a moment. It’s actually rare that something as significant, like a large-scale weather event that has a lot of impact, will just go away. And if that does happen, there’s something fundamentally wrong with the models that we have to explore because these are large-scale synoptic events that are moving through space. It’s the smaller things: “Hey, you told me to do action XYZ, and all of a sudden you told me not to do it and it was an event that was supposed to last an hour.” Well, we have mechanisms that allow you to apply things like duration to the insight that will only show it to you if it’s expected to last for three hours and it’s been occurring over the course of the past few forecasts, for instance. But it’s definitely not something that was intuitive right off the bat or we expected. We’ve heard quite a bit from customers.
Brian: [laugh]. Yeah, I can imagine. You said something on our first call that we had about qualitative feedback, either really validating it or not, or something. And I don’t know if that was in contrast to quant feedback from users, but can you unpack what you were saying there?
Cole: It might be an unpopular opinion out there. I’ve been doing this for a while. And I’ve seen all of the great fruits that can come from having an incredibly tailored understanding of what becomes an active user, and when are they most expected to churn. And in the B2C world, it’s really easy to get to—not easy, but it’s more opportunistic to come to those conclusions. In the enterprise world, it’s not as easy, especially when you have a product that’s really wide-ranging across a variety of different verticals and many different personas and your intent from the beginning has been to be able to create a very horizontal solution like we’ve done.
I see, kind of, with the remote world, what’s changed around us, a product manager is able to be in so many different places and leverage so many more insights than ever before, with the ability to have, like, a simple solution, we use Gong, which is recording a tremendous amount of sales calls that we have going on and transcripts that allow us to connect feature requests to our backlog. And our team is, we don’t necessarily have to be on every sales call; we can spend our time at the end of the day, with an hour, watching maybe two, three sales calls from that day or customer conversations and put it at a faster playback speed, or our ability to kind of create a Loom video that records questions that we want to send to a customer with a diagram that’s just a few mock-ups. And instead of making it perfect so we have to wait for a meeting to have with them, we can send a two-minute Loom video asking them questions that overviews the concept, and they get back to us or comment on the Loom video. Or what we see with ChatGPT right now, right, to be able to, on the fly, construct a survey by use of ChatGPT and create something really valuable in a short amount of time, or take a transcript and parse out insights that you’re getting from that transcript, and start to build a bigger composition of data that you otherwise would have missed.
And to me, there’s two super key capabilities or strengths in being a successful product manager. It’s pattern recognition and it’s the ability to create fast rapport with a customer: in your first conversation with a customer, within five minutes of talking with them, connect with them and, much like you do Brian, be able to prompt them along and keep the conversation going. And maybe not even ask questions, just make some statements that just trigger them to be able to actually unpack their problems. This information is so much more valuable than the stuff that we’re able to get from so many of these kinds of product-oriented tools that are putting a canvas on top of our product telling us that we need to be able to understand user cohort A is behaving a particular way B and therefore you need to be able to get them to do action X by the time that they’ve activated from the time that they signed up. There’s different use cases for both, but this more former domain is where we’re finding most of our success in roadmap building.
Brian: Sounds like you’re saying you’re actually sending out sketches and ideas, low-fidelity designs over Loom. You send over, get qualitative feedback, attitudes, opinions, that kind of thing. Is that what you’re saying?
Brian: I love that.
Cole: Yeah. And just making sure that we’re on customer calls in the onboarding process so that they know who we are. It’s worth our time to build out those conversations, even as we’ve scaled—you know, we can’t do it for every customer, but key [unintelligible 00:23:24] accounts—so that, you know, when we have an idea, Dave from customer A, we can just send a Loom video. Perhaps we have a Slack connect channel set up with them and we can just have an interaction with them there.
And it's building this community that we can draw on that's really empowered our innovation. It's just a different tack than what I've seen. You know, I think a lot of people talk about doing this, but we really take it seriously here. Our goal is to have more qualitative, conversational insights and more people we can put stuff in front of more often.
Brian: What’s the biggest challenge that you have here? I’m curious broadly, but I also wanted to ask about the fact that you’re horizontally positioned, right, you don’t have a vertical industry specialization, it sounds like, with the product. Talk to me about what’s challenging about that.
Cole: I remember when I started product management, I would go to a customer and I would sit in front of them and I would ask, “Hey, what do you want? What’s interesting to you?” And you know, “Give me a list of features you need and”—
Brian: AI man. They want AI.
Cole: [laugh]. [crosstalk 00:24:20].
Brian: Because everyone else has it.
Cole: That's—[laugh]. Yeah. They're hearing about it. And you know, back then, a really common theme was just screen real estate and how to centralize my products. And I wasn't aware of this. I wasn't aware that that was a common theme. People were just telling me features from other products that they thought were great, and asking how you could give them that and this all together in your product.
And I was like, "Great, fantastic. I have a list now." And I'd go to another customer and say, "Hey, what do you think about this list?" I wouldn't even ask them questions. I'd say, "Which ones do you like the best?" And all of a sudden, you're building in a direction that's super vertically integrated to the exact type of customer profile that you've actually built a rapport with.
And what you didn't do is recognize the patterns, because you're fixated on what you learned at the beginning and forcing those learnings onto the rest of your customer clientele. And so, what I think has evolved, even at Tomorrow.io—we really dabbled in this arena at the beginning because our customers were very aviation-focused. Our founders have a strong aviation background, being pilots. In the arena of talking with these airlines, we almost went down that path. And I remember it like the back of my hand, the conversation we had with our executive group and our founding team addressing this dilemma and where we were going.
And really, we took cues from a variety of other solutions, you know, Microsoft Excel, Airtable, things of this nature that had done it tremendously well. From there, it became not about how we can deliver the best value singularly to a particular client, but how we can recognize the patterns that raise the tide for all of our customers. It might sound obvious that that's something you need to do, but it's so easy to teeter in the direction of building something unique for a particular vertical, which we're doing more of now. We've built a canvas that's for the most part horizontal, and more or less in the past year, we've started to create vertical teams to build on top of that platform. But it's not an easy dance. I think our biggest challenge has not been finding opportunities; it's been choosing the right ones throughout our tenure as a business. Not sure I answered your question completely, but I hope it provided some insight.
Brian: Designing a great platform that’s so good, someone wants to pay to use it, particularly when it’s [abstraction 00:26:26], right? It’s not an airline weather product. It’s a weather product which can handle airlines, but also concrete. And I understand the idea of, like, maybe the airlines never thought about the concrete guy’s needs, but actually there’s relevant business things in the airline world that the concrete stuff is actually relevant to; they just haven’t thought of it that way. I get all of that.
There can also be friction, though, when you've had to abstract all of these concepts, right? Because now, out of the box, it's no longer great for airlines; theoretically, it's great for everyone. From a sales strategy standpoint—I think it was Bob Mason who was just on; he's a venture capitalist, and his whole thing is, for enterprise plays, watch out for the platform play, at least from a go-to-market standpoint. You actually do want to double down in a vertical area and nail that first, and then over time you might start genericizing the platform, abstracting it out, and then you can go after other industries.
You know, I’m curious, like, what your take is on that, having—it sounds like you started airline-centric, and then did you broaden out? Or were you broad from the start and you just had to de-airline everything [laugh] as a product, take that bias off of every single feature? Like, was it broad from the start or did you actually go after those airlines first and then begin to kind of abstract the product?
Cole: Yeah, it's really that. We started in the aviation arena, and it allowed us to create a strong MVP that put a footing in the ground. I kind of made it sound like we took the horizontal approach from the beginning, but really, there was about a year and a half of core aviation feature development, and it wasn't until about the second half of that period that we started to realize it, to homogenize a lot of what we had built, and to seek out different use cases, because we were finding that this thing is repeatable.
Brian: The typical software trio, right—this isn't unfamiliar to you, I'm sure—product management, engineering, design/user experience, right? Pretty common. I have this kind of model in my head for data products, which is that there's a fourth leg of the stool: data science, or some analyst, or somebody with a domain or data specialization. I'm curious, in terms of your teams, the bands that come together and work on product features and functionality, are all four representatives always there? Or is it just the three, or maybe something totally different? What does that formation look like?
Cole: We are a science company, a hardware company, and a software company. We're launching a constellation of low-Earth-orbit satellites. In a matter of weeks, the Pathfinders will be going up, providing space-based radar coverage around the world at an hourly refresh rate, which is just mind-blowing to the industry as a whole. We're also launching new forecasting models in the private arena that aren't just the traditional government models that a lot of other companies are repackaging. The nuances of those pieces of our business are as much of a value-add, arguably more of a value-add, than a lot of what we've talked about so far in terms of driving product and insights. It takes a really strict mindset to make sure, in how we build, that we're prioritizing the science of our product: continuing to deliver the most accurate and capable product, and continuing to communicate the brand that is who we are, that we are effectively reinventing the core weather space. That's what brings the scientists into the conversation.
I would agree with that trio of design, product, and engineering, absolutely, but about 50% of the time we also have a scientist in the fold. I would say we could use more time spent with data scientists internally; as a business, we just have some improvements to make there. We also bring in our customer team and our solutions team, when it's right, at different cycles of the product development process. And more recently, over the past year and a half, we've done a lot of work to create lines of communication from the engineering management group and the product management group upward to the go-to-market groups, so that the product team isn't just the sole centerpiece it had been for so long. Discussions can involve any piece of the puzzle, and those groups know each other in a way that lets them foster conversation independent of a product manager, learn from each other, and help prioritize the right way to get things done. It's something we've found really successful more recently that we just didn't focus on before.
Brian: Is all of the modeling—the science and the really deep weather domain stuff—happening after the traditional trio has figured out what it is they want to design, and then you bring in the modeling? Because the modeling can be very technical; these can be lengthy projects—and maybe your case is different, I don't know. Or are they starting it, where the risk of starting it is, "Oh, that's really nice, but I can't unpack this prediction the way I need to. They need that evidence to believe it. They're not going to secure equipment if they don't know where the dew point number came from." "Oh, well, I didn't know that the model needed to do that. We used this deep learning thing, and it's a black box and doesn't tell you how it came up with that." "Well, okay, that's nice, but no one's going to use this feature now because they don't understand how you came up with that number."
Talk to me about that dance between what the data scientists need to do—and that work is often experimental, right? You don't know if it's going to work; you don't know if you can make that prediction until you jump in and do the work. What comes first?
Cole: The nuance for us is that the team of scientists building these weather models are people who have lived weather for such a long time, and they know this industry so well. The uniqueness of weather is that we're innovating in a space that has a lot of roots: a traditional way of doing things, a traditional way of visualizing data, and a traditional way of communicating data. You have to be careful about how far you part ways with that, because there are sophisticated users who are used to it, and it would almost be, for lack of a better term, insulting to do it a different way and make it seem like you're better than every other way. A lot of time has been spent coming up with these conclusions about how to communicate this information, because it's so important, right, to people around the world.
And also, there's nuance in the data products that we deliver. You'd be surprised how hard it is to detect the difference between precipitation intensity versus precipitation accumulation versus precipitation probability, right? Three different ways of interpreting a single data product. And that's just one example; we have so many different parameters that we're delivering. And it's often the scientists who understand what the data product is actually able to deliver to the end user we might be prioritizing. So, as we bring the use case of the problem to the table, they're not only building the models, but embedding themselves in understanding what's the right data product to build for that end use case.
Because it's different to predict the number of lightning flashes—excuse me, lightning strikes—that will happen within an X-mile radius versus predicting a single lightning strike. That's a fundamentally different solution. So, we insert them in different places in the workflow depending on the solution we're building. And as we've built a really horizontal platform, now more of a pipeline where we can ingest new types of data products and leverage the tooling we've created on top of it, it's not such a thorny process to get through anymore. They're becoming much more critical in that process because we can deliver new data products faster, if that makes sense.
Brian: What's the one thing that keeps you up at night, your biggest challenge here, working on designing a data product like this that stands out from everything else? Or just being VP of Product?
Cole: I'd say it actually plays on some of the stuff we've been talking about so far, what I mentioned earlier about being super qualitative and customer-conversation-oriented. I see so much feedback and so much insight from the world around us about product-led growth topics, which we're absolutely doing on the API side of our business, and a lot of other data-oriented product mechanisms, but our product is not so action-oriented right now. It's an observational product: you come in, you see something, you extract an insight, and then you leave and go do something with that information. And it's really hard for us to capture what our users are doing with that information. So, it makes it hard for us to quantify the ROI, in some cases, for particular businesses, and to understand the level of influence our product is having on their operations.
And so, what keeps me up at night is this black box of information that we don't necessarily have about our users. We're working right now on creating mechanisms to obtain it. But that's also part of why I said earlier that we're not so quantitative about the way we define active users and things of that nature: we're missing a piece of the puzzle for our customers, exactly what that next step is. There's a lot of conversation we have with customers, and we have a good hunch around what it is, et cetera.
And so, that's one answer to the question. The other is: when is it too much to become too horizontal versus not becoming verticalized enough? And how do we focus on building for new markets? Because there's just such a tremendous opportunity around us in weather. How deep do we go into new market development versus customer engagement?
Obviously, the priority is the customer, but our sales team is always finding new use cases, and we have to continue to say no and continue to be disciplined in this arena. But I'd be lying if I told you that didn't keep me up at night, when I hear about the opportunity of a solution we could build and I know it can be done in X amount of time. The risk of doing that is just too high, sometimes.
Brian: I mean, in a way, it’s a good problem to have, right, when it’s [laugh]—
Cole: Yeah. Yeah.
Brian: Too many possible use cases and buyers out there that are potentially interested. So, that could be a good thing, too. I wanted to give you the last word here in just a second, but I kind of have one last question here, which is, is there something you would have done differently if you were starting your role over, when you came in the door here? Anything that you would have changed?
Cole: In the theme of this podcast, I think it's the right answer, too: respecting the importance of clean data from the beginning, and organizing it with intent from the beginning. It was clear what questions we would be asking about our product; it was clear what insights we'd want to derive. If I could do anything differently, it would have been slowing down and prioritizing implementing analytics the right way from the start. Because we've paid for that a little bit and had to go back and work through a lot of technical debt and product debt.
And we're getting there right now, but I don't think I personally respected the power it would have on our business. We didn't prioritize it early because we were just running through the motions to find product-market fit, to figure out how we could actually create a solution that can scale. And everything back then seemed like a small problem relative to the bigger problem we were trying to solve.
Brian: Did you ever think about just asking people, like, "Did you secure the equipment?" Does the product take feedback in from users so you can track its predictive accuracy as it translates to action taken in the real world? Is there anything like that in the product?
Cole: So, when you receive an alert, we have a feedback mechanism for you to come into that alert and acknowledge that you received it and that whatever it predicted actually occurred, et cetera. We have paths to further build that out, though. So, the roadmap's exciting in that arena.
Brian: Nice. Where can people learn more about your work, follow you? Are you active anywhere?
Cole: We do a lot of content on our website, and I expect to be building a lot more content from my arena moving forward. But right now, just track Tomorrow.io. I think you'll learn a lot about me through what we're building at Tomorrow.io, because that's taking up the majority of my time. So, I guess that's the best mechanism to see where we're going.
Brian: And if people want to reach out to you, is LinkedIn the best place? Or what’s the best way to get in touch with you?
Cole: Yeah, I'd say Twitter, actually. I'm getting more active on Twitter recently. @colemswain is the handle. I've been posting a lot on there, so that might be the best spot to go.
Brian: Awesome. @colemswain on Twitter. VP of Product at Tomorrow.io. Tomorrow dot io if you guys want to check out the website. Thank you so much for coming on Experiencing Data. This has been great.
Cole: Yeah, Brian. Have a great day. Thank you.