Bill Bither, CEO and Co-Founder of MachineMetrics, is a serial software entrepreneur and a manufacturing technology leader. He founded and bootstrapped Atalasoft to image-enable web applications, which led to a successful exit in 2011 to Kofax. In 2014, he co-founded MachineMetrics to bring visibility and predictability to the manufacturing floor with an Industrial IoT analytics platform that collects data from machines. This data is used to benchmark performance, drive efficiency, improve equipment uptime, and enable automation.
Today, join us as we discuss the various opportunities and challenges in the complex world of industrial IoT and manufacturing. Bill and I discuss the importance of visualizations and their relationship to improving efficiency in manufacturing, how talking to machine operators helps add context to analytics data and even informs UI/UX decisions, as well as how MachineMetrics goes about making the telemetry from these machines useful to the operators.
We also covered:
- How improving a customer’s visibility into CNC machines helped reveal accurate utilization rates and improved efficiency
- How simple visualizations make a tangible difference in operational performance
- Bill’s model for the 4 different phases of analytics
- Mistakes Bill learned early on about product dev in the IIoT analytics space
- What Bill learned from talking to customers that ended up identifying a major design flaw his team wasn’t aware of
- The value you can glean from talking to customers
- MachineMetrics’ challenges with finding their market fit and aligning their product around customers’ needs
- How MachineMetrics has learned to simplify the customer’s analytics experience
Resources and Links
Quotes from Today’s Episode
“We have so much data, but the piece that really adds enormous value is human feedback.” — Bill
"Simplicity is really hard. It takes time because it requires empathy and it requires going in and really getting into the head or the life of the person that's gonna use your tool. You have to understand what's it like being on a shop floor running eight different CNC machines. If you've never talked to someone, it's really hard to empathize with them." — Brian
“In all the work that we do, in adding more intelligence to the product, it's just making the experience simpler and simpler.” — Bill
“You don't have to go in and do great research; you can go in and just start doing research and learn on the way. It's like going to the gym. They always tell you, ‘It doesn't matter what exercise you do, just go and start.’ ...then you can always get better at making your workout optimal.” — Brian
“It's really valuable to have routine visits with customers, because you just don't know what else might be going on.” — Brian
“The real value of the research is asking ‘why’ and ‘how,’ and getting to the root problem. That's the insight you want. Customers may have some good design ideas, but most customers aren't designers. ... Our job is to give people what they need.” — Brian
Brian: On my chat today, with Bill Bither, the CEO of MachineMetrics, we talked about UX in the world of industrial IoT. MachineMetrics has a great platform for monitoring the run rate, cycle times and status of large CNC and industrial manufacturing equipment. This is a challenging space for user experience, and Bill's going to talk a little bit about how MachineMetrics goes about making the analytics and data coming off of these machines useful to the operators and people that need this information. So, here's my chat with Bill.
Brian: Hello, and welcome back to Experiencing Data everyone. Today I have Bill Bither on the line from MachineMetrics. Bill, welcome to the show. You're the co-founder and CEO. Is that correct?
Bill: Yes, I am.
Brian: That's awesome. You're in the IoT space, right? And you're going to talk to us about user experience around monitoring CNC machines primarily, is that right? Can you tell us the types of equipment that MachineMetrics is monitoring, and a little bit more about your background and business?
Bill: Sure. Yeah. We are a machine monitoring, manufacturing analytics platform, and we're really focused on CNC equipment. These are lathes, mills, grinding machines. Basically, the machines that make metal parts, in industries like aerospace, automotive, medical device, and others. My background is, I have a degree in mechanical engineering, but I'm also a serial entrepreneur. I started my career in aerospace manufacturing, and MachineMetrics is actually my third startup. This one is really focused on data from machines, and leveraging that data to help companies drive production and efficiency through the machine data that we collect.
Brian: Sure, and I'm curious if I could translate for maybe some of the people listening that aren't as familiar with this. If I understood correctly, the primary value here is in things like predictive maintenance: knowing how efficiently these machines are working such that they can be tuned or addressed, plus longevity, cost savings, quality, output, those kinds of things. Is that correct?
Bill: Yeah. I mean, to be honest, it's even simpler than that. Just to frame a problem a little bit, if you walk through a shop, a factory, you'll probably notice that there is a lot of expensive equipment, machines, they could be custom equipment or CNC machines, like plastic injection molding. These machines, they can cost hundreds of thousands of dollars, sometimes millions of dollars. They collect a lot of data, but this data is not really being leveraged to make decisions.
Bill: Often, the operators that are on those machines have a limited view into what those machines are doing, and the data isn't really being collected anywhere. These machines aren't really networked. There's a lot of opportunity to use that data to, one, improve visibility on the shop floor. One of the questions I like to ask people in manufacturing is, “What do you think the average utilization rate is of a CNC machine?” I'll ask you that, Brian. What do you think the average utilization rate is of a CNC machine?
Brian: Do you mean in terms of, for every business hour, how many hours is it actually doing work?
Bill: Yeah. It's 24/7. I would say, last year, for a typical machine that might cost $300,000, what would be the utilization rate that you'd expect that machine to run? It's a pretty big investment; you have to pay thousands of dollars a month in financing that machine. What are you going to do?
Brian: I would assume that they would want something that expensive running as much as possible to keep it going. I would say maybe they're shooting for 80-90% utilization, just so it's producing value? I don't know. What is it?
Bill: It's 29.7%.
Bill: There's a huge opportunity for improvement. The reason for that is, there's not a lot of visibility into when those machines are operating, and also often you'll see machines that are down for maintenance due to some issue, or they're not being stopped properly. Having that data in real time on the shop floor helps keep those machines running and meeting their goals. What we've seen is, just by having a real-time dashboard that shows the production of those machines, we're improving efficiency by more than 20%. That's just the first step. The very simple, low-hanging fruit in manufacturing is just improving visibility.
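To make the math behind those dashboard tiles concrete, here is a minimal sketch of computing utilization and a goal-based tile color in Python. All names, thresholds, and numbers are hypothetical illustrations, not MachineMetrics code:

```python
from dataclasses import dataclass

@dataclass
class MachineStatus:
    name: str
    minutes_in_cycle: float   # time the machine spent actually cutting
    minutes_scheduled: float  # time it was scheduled to run
    goal: float               # target utilization, e.g. 0.75

    @property
    def utilization(self) -> float:
        # Fraction of scheduled time the machine was actually in cycle.
        if self.minutes_scheduled == 0:
            return 0.0
        return self.minutes_in_cycle / self.minutes_scheduled

    def tile_color(self) -> str:
        # Green if meeting its goal, yellow if close, red otherwise
        # (the 80%-of-goal cutoff is an invented example threshold).
        u = self.utilization
        if u >= self.goal:
            return "green"
        if u >= 0.8 * self.goal:
            return "yellow"
        return "red"

m = MachineStatus("CNC-7", minutes_in_cycle=285, minutes_scheduled=480, goal=0.75)
print(f"{m.name}: {m.utilization:.1%} -> {m.tile_color()}")
```

A wall display would simply render one such tile per machine in the cell, refreshing as machine data streams in.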
Brian: Wow, that sounds like such a major gap in the core product itself. Although I can see, right? This is a great example of thinking about the UX, from the perspective of an entire business and not just machine number seven, right? Because, even if you stuck a screen on machine number seven, and it gave a readout of what the current status is, "Is it running? It's 89% done with the current job." That doesn't help you if you're 400 feet away in a different part of the factory floor, right? So, that summary is just as important ... You can't really solve that with just plopping a screen onto every CNC machine that comes along, you need some kind of roll up for the business to see overall what's happening. Am I correct?
Bill: Yeah. This is why I like ... What our customers like to do is they'll install big-screen TVs that are hanging from the ceiling, and it will show the entire cell of machines. Each machine is represented by a tile, and the color of that tile indicates, not just the status of the machine, but if that machine is meeting its goal or not. What we'll see is we'll see a production manager that might be responsible for a group of machines. At the end of the day, they'll look up at that screen, and they'll see all green. Some of our customers will take a picture and post it to Facebook and say, "Hey, this is a good day. All my machines are green."
Bill: Those simple visualizations can really make a very tangible difference in performance. That's where we started with MachineMetrics, but you also mentioned, "What if they're not even on the shop floor?" If you have a VP of manufacturing, they're really trying to get a sense of what's happening, where, historically, have those machines performed compared to where they are today? By collecting that data, we give full descriptive analytics solutions that can identify trends and problems in their manufacturing, and they can make better decisions.
Bill: What we actually provide is ... I look at manufacturing analytics as four steps. There's descriptive analytics, which is really focused on what's happening now and what's happened in the past. There's diagnostic analytics, which is really going very deep into the detail, in our case the detailed machine conditions, and understanding why a problem is occurring. Then you have predictive analytics, which is really focused on identifying when a problem might happen in the near future. Then prescriptive analytics, which is not only identifying what might happen, but, "This is what you need to do to prevent that problem from happening."
Bill: What we've done is that while the low-hanging fruit has been in the descriptive side and just improving the visualization, we're leveraging that data. We're applying machine learning, and our customers themselves can configure triggers and rules to notify the right person at the right time to take action, to make a change or prevent a problem from happening. That's where we get more into the predictive analytics and predictive maintenance that you referred to earlier.
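The configurable triggers and rules Bill mentions could be sketched roughly like this. It's a toy illustration; the rule shape, metric names, and roles are assumptions, not the actual MachineMetrics rule engine:

```python
# Each rule matches a machine metric against a threshold and names who
# should be notified when the rule fires.

def evaluate_rules(event, rules):
    """Return the list of people to notify for a machine event."""
    notify = []
    for rule in rules:
        if rule["metric"] == event["metric"] and event["value"] >= rule["threshold"]:
            notify.append(rule["notify"])
    return notify

# Hypothetical customer-configured rules.
rules = [
    {"metric": "spindle_load_pct", "threshold": 90, "notify": "maintenance_tech"},
    {"metric": "downtime_minutes", "threshold": 15, "notify": "production_manager"},
]

event = {"machine": "CNC-7", "metric": "downtime_minutes", "value": 22}
print(evaluate_rules(event, rules))  # -> ['production_manager']
```

The point is less the mechanics than the UX: the customer, not the vendor, decides which conditions matter and who hears about them.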
Brian: From a design perspective then, do you think of the process of designing these experiences, for example, in the last one you talked about, and this is typical, right? There's a recommendation being generated from the data, but there could be a life cycle involved with that, such as, "Did I acknowledge the alert? Okay, yes, I did. Did I schedule some maintenance on the machine? Do I need to tell your system that I did do maintenance on the machine so that your data adapts to that information and says, 'Okay, something happened here, so when we correlate this again in the future, we should know that was a service date, that's not an anomaly.'" That's the difference between an experience and thinking about, "What's the UI? Oh, it sends out an email, an alert with an icon that says, 'Your machine is gonna fail in the next twelve days.'" The whole experience is bigger. So can you talk to us about how you go about designing those experiences, particularly when there are multiple steps, whether it's prescriptive or not as fancy as that, just that end-to-end experience? How do you guys think about that and approach it?
Bill: It's actually really hard. UX is challenging. One example would be, we rolled out a machine-learning algorithm that detects anomalies, and it's for a particular segment, a particular type of production, a higher-production environment, a particular type of machine. We rolled this out, and we're trying to figure out, "Is this really providing value for our customers?" This is running on thousands of pieces of equipment. We had a lot of trouble because we'd have to go through every one and look through the data: was this a real problem or not? We ended up calling the customers and found out that we'd actually saved one customer from causing thousands of dollars of damage to their parts by detecting a tool breakage. One of the pieces of the UI that we didn't have is that feedback mechanism. We can send an alert to say, "Hey, there's an anomaly", but we never really built in the feedback loop to ask, "Hey, did this actually prevent a problem?" This is actually a recent addition to our roadmap, so I can't say that we've actually built this out yet. It's the whole mechanism to provide that feedback loop, so it actually trains our model.
Bill: Now, not to get too technical, but this is an unsupervised machine learning algorithm. In that case, we don't need to train it, but there are other opportunities for supervised algorithms. What's really interesting about our data set is we have so much data, but the piece that really adds enormous value is that human feedback. Knowing that a failure actually occurred, sometimes we can't get that from the machine data itself; we need the human to tell us that a problem occurred. That's one example of a multi-step. I'm not sure if that's exactly what you're asking about, but that's what came to mind.
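The feedback loop Bill describes, pairing anomaly alerts with human confirmations to build labels for a future supervised model, might look something like this in miniature. All field names are illustrative assumptions:

```python
def label_alerts(alerts, feedback):
    """Join alerts with operator feedback; unanswered alerts stay unlabeled (None)."""
    labeled = []
    for alert in alerts:
        # True = operator confirmed a real problem, False = false positive,
        # None = no feedback collected yet.
        verdict = feedback.get(alert["id"])
        labeled.append({**alert, "label": verdict})
    return labeled

alerts = [
    {"id": 1, "machine": "CNC-3", "kind": "anomaly"},
    {"id": 2, "machine": "CNC-7", "kind": "anomaly"},
]
feedback = {1: True}  # operator confirmed alert 1 caught a real tool breakage

rows = label_alerts(alerts, feedback)
for row in rows:
    print(row)
```

The labeled rows are exactly the training data a supervised model would need, which is why Bill calls the human feedback the piece that adds enormous value.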
Brian: How did you know it was multi-step? Is it because you got feedback that something was wrong or you talked to the customer about, "What would you do with the alert?", and then they gave you some feedback and you guys adjusted the product to that? How do you inform that process of getting it right, so to speak? Again, from that customer experience, right? Because ultimately, they probably just care about, "Do I need to shut this thing down and call the repair guy or can I keep it running?" I'm guessing a lot of their decisions come down to that. Is it that simple or not quite?
Bill: Yeah, it's not quite that simple. Often with the messages that we send out, we don't really know for sure whether we've really detected that there's a problem; we have to learn from that. We're constantly calling our customers and trying to figure out if we have to tweak the algorithm, and in that case it's a manual process for us to pick up the phone and determine if this is enough or if we need to take it a step further in our product. Today, we're a really young start-up. We basically hadn't brought on professional product managers until very recently. It's a lot of, sort of, my CTO and myself; we're the ones that are really tied in to the customer and asking them these questions. Since then, we've been building out a product management team, and it'll be a little easier for us to spend that time talking to our customers and getting more immediate feedback.
Brian: Sure. I'm curious, in my experience working in this space, sometimes you get into issues with ... especially if you're doing predictive maintenance, everything tends to be based on historical data and looking at patterns, and sometimes you get into issues around seasonality, where there's a life cycle or something's on a 42-day average cadence and then something changes. Do you guys have to deal with anything like that? Or is it more like, if they do maintenance on the machine, you reset it, score it zero or maybe 100%, and you can only go down, so you don't really have to think about calendar cycles or business cycles or anything like that? Is seasonality related to any of the work you do? Is that a challenge or not so much?
Bill: I think the biggest challenge for us isn't so much seasonality, but it's the fact that most of our customers are changing over their jobs, the parts that they manufacture all the time. We have to reset every time that they're making a new part.
Bill: So it's particularly the case for our discrete manufacturing equipment. Sometimes you get these custom manufacturing lines where they just make one thing, that's it. It's a little easier to build out a predictive maintenance algorithm on that because nothing ever changes. Maybe then you're looking at seasonality or humidity and things like that to get that quality.
Bill: For us, it's a little more challenging, just because, like I said, the biggest variable is the actual part that you're making. It could be a different material, a completely different geometry. In a lot of cases, a part will have been made three months ago and be made again last week, and we have to compare the differences. Sometimes you see that the anomaly might be that something has changed on the machine itself over the course of the last few months. Looking at the data, you might be able to provide just enough information to the operator of the machine, or to the technician, to identify a particular problem. We might not be able to understand that as well as our customer, but by giving them the information, "Hey, your cycle time has changed significantly over the course of the last few months," that could be enough information to identify a problem.
Brian: Got it. Thinking about this from an experience standpoint, even the... Okay, today I'm making aluminum cylinders and then tomorrow I'm making, I don't know, a strut for a car or something, right? So the mold's different or the programming is different for the machine, etc, etc. Do you let them reset that as part of the experience? Does the system, for example, maybe it starts sending out crazy anomalies the first hour that it's using a new pattern and then they go in and give feedback to the system and say, "Oh, no, no, no, no, I'm not doing cylinders anymore, I'm doing struts", then through the tool, they reset that? Or is that all work you have to do behind the scenes? They have to tell you, "Hey, we're doing struts starting in April. Can you get ready?" Can you talk to us about that experience?
Bill: For modern machines, we typically get the program that's running. We know as soon as the program changes that there's something different being manufactured, so that makes it easier; we can reset it. For older equipment, where we're just using sensors, it can be more challenging because you see that pattern change. We actually have algorithms in the background that can detect that: "Oh, okay." We call it regime change. We're now doing something very different. We're not relying in that case on the operator to tell us that it's changed. We do have an interface that can optionally be installed right next to the machine to provide that human feedback, but we try not to rely on that manual feedback if we don't need it.
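The regime-change idea can be illustrated with a toy detector that flags a sharp shift between a recent window of sensor readings and the long-run baseline. This is a stand-in sketch, not the actual MachineMetrics algorithm; the window size and ratio threshold are invented:

```python
def regime_changed(signal, window=5, ratio=1.5):
    """Flag when the recent window's mean departs sharply from the prior mean."""
    if len(signal) < 2 * window:
        return False  # not enough history to compare
    recent = sum(signal[-window:]) / window
    prior = sum(signal[:-window]) / (len(signal) - window)
    if prior == 0:
        return recent != 0
    # A large shift in either direction counts as a new regime.
    return recent / prior > ratio or prior / recent > ratio

# Stable cycle times, then a new part starts running with much longer cycles.
cycle_times = [30, 31, 29, 30, 30, 31, 30, 29, 75, 74, 76, 75, 74]
print(regime_changed(cycle_times))  # True
```

A production system would use something more robust to noise and gradual drift, but the principle is the same: infer the changeover from the data rather than asking the operator.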
Brian: Got it. It's surprising, in my work with analytics and a little bit of IoT as well, you rarely see just the simple ability to drop a note, or an annotation somewhere to inform ... Especially if you're in a team situation. Can you speak to that? Maybe that's not even the right tactic, but I've seen scenarios where there's just ... Some of the information is in the heads of the people, and it's not in the data. It's qualitative in nature, like, "I looked at this, the cycle time was just fine, I don't know why it's reporting this. So for me this is false positive." They want to annotate that somewhere in there so the next guy doesn't start at step one when he comes and looks at it. Do you think about any of that? Is that relevant in your space or not so much?
Bill: It actually is very relevant. So, as I was alluding to before, we have an operator interface, which is a simple touch screen. The way that's used is, for example, if the machine is down, we don't always know why the machine is down, so we rely on the operator to tell us why. As opposed to having the operator just thinking, "I need to inform the system that this is what happened to the machine", we'll ask the operator, "The machine is down, can you tell us more about it?" They're able to type in what the problem was, and we also have the ability to categorize that downtime. Each of our customers can create their own categories so that they know what their most common reason for downtime actually is.
Bill: This actually leads into the user experience. By doing that, one of the things that we found out is that by giving our customers the ability to create their own categories, the number one reason for downtime ends up being, "No operator available." The problem with that is it doesn't really answer the question of why that machine is down; it just means that the machine was down for a particular reason. There's typically one operator for four or five machines, so by the time the operator comes around to that machine and it's down, well, "Okay, I wasn't there to fix it." That's not really the root cause.
Bill: What we're really working on from a UX point of view is really just to ask the question as a yes or no. Leveraging the data, and we're working pretty hard at this, we can gain understanding. We think that the machine's down because there was a tool change, but we don't know 100% for sure, so we ask the operator just yes or no: "Hey, was this a tool change? Was that why this was down for the last 30 minutes?" We want to do that because it'll make it easier to operate; all they have to do is tap "yes" or "no". They don't have to think, "What category? How do I categorize this?", or default to, "Oh, I wasn't there at the machine, so obviously it was 'operator not available'." We can get better information as to why that machine was down because we rely less on the operator having to think about, "Well, how do I categorize this?" or, "What annotations, what notes do I include so that my manager can really understand?"
Brian: Sure, sure. In this case are you saying that the telemetry coming from the hardware doesn't indicate whether there's a fault or whether or not the machine is not in use and it can't distinguish between those and so you require that human feedback? Maybe I misunderstood.
Bill: Because we have thousands of different types of equipment and different faults, we know what the fault is; the machine will report that. But we don't often know why that machine was in fault. That's where the operator can really help us add more context. We don't want to give them free rein, as in just, "Hey, tell us what happened." We want to basically carve out only the reasons that it could be. If it's a fault that seems to be related to a tooling issue, then we just ask them about the tooling and that's it. We frame the reason into a minimal set of options.
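Framing downtime into a minimal set of options, as Bill describes, might be sketched like this. The fault codes, reason mapping, and fallback categories are invented for illustration:

```python
# Map known fault codes to the single most likely downtime reason, so the
# operator only has to confirm yes/no instead of browsing categories.
SUGGESTED_REASON = {
    "TOOL_LIFE_EXPIRED": "tool change",
    "SPINDLE_OVERLOAD": "tooling issue",
    "E_STOP": "operator stop",
}

def prompt_for_downtime(fault_code):
    """Return (question, options) for the operator's touch screen."""
    reason = SUGGESTED_REASON.get(fault_code)
    if reason is None:
        # Unknown fault: fall back to a short category list, never free text.
        return "Why was the machine down?", ["setup", "maintenance", "no material"]
    return f"Was this downtime a {reason}? (yes/no)", ["yes", "no"]

question, options = prompt_for_downtime("TOOL_LIFE_EXPIRED")
print(question, options)
```

The design choice here is the one Bill argues for: use what the data already suggests to shrink the operator's decision to a single tap.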
Brian: Got it. Because some of it's, "Well, that's nice to know, but that's never going to affect the quality of the product." It just becomes noise.
Bill: No, I think it's just that if you provide too many choices, then at the end of the day, it's not going to be used. It needs to be super, super simple. That's really what we've learned by having this operator feedback. You need to simplify it as much as possible. In all the work that we do, in adding more intelligence to the product, it's just making the experience simpler and simpler.
Brian: A lot of companies, they ... I'm not saying you do ... they pay lip service to simple. To me, simplicity is really hard. It's not even that it's hard, it's that it takes time, because it requires empathy, and it requires going in and really getting into the head or the life of the person that's gonna use the tool, to understand what it's like being on a shop floor running eight different CNC machines. If you've never talked to someone, it's really hard to empathize with them. I'm curious, from your experience talking to these operators and all of that, is there a design change or a UX change that you guys made based on that feedback that was particularly memorable that you could share? Like, "Man, we never would have thought to do it that way, but then we found out this guy has seven screens open at once and he can't do 'X' and he can't hear the ding, so we changed it to a notification." Something like that. Do you have any stories that you could just-
Bill: Sure. There are so many. I mean, it's clear that you have to talk to a customer to really understand how they're really using your product. We're a lot of software developers, and we can't begin to understand how it's actually being used on the shop floor until we actually go there. One in particular: we actually supply tablets for our customers, Samsung Galaxy tablets. And they went through a change; the aspect ratio actually changed on the tablet. We had no idea. We never really took a close look at the impact that had. So we started deploying these tablets that were widescreen versus not as wide.
Bill: There was an area of the product that was used a lot, which basically would indicate how far along in their work order they were, how many parts they had completed against their goal. And it became so small that, especially the older operators, they couldn't see it. And the operators don't really have a direct connection to us; usually it's the manufacturing engineer or the production supervisor. It was only when we went and actually visited, and you could see the operator sort of squinting to read how far along they were, that we understood that we never really tested that new aspect ratio with a customer. So we had to redesign the UI just to support, essentially, an older workforce that couldn't see as well as the young engineers could.
Brian: I'm curious, was that part of your routine visit? Or were you there for a different reason and noticed this? How did you... ?
Bill: It was just ... I think in this case, it was one of our customer success managers that noticed it. Typically, we'll actually onboard our customers remotely, but sometimes if it's convenient, or if it's a larger customer, we'll go on site, and that's when we discovered this.
Brian: We talk about this a lot on my list, about going and talking to people, partly because you don't know what ... you don't even know what to ask about. You probably wouldn't even ask them about resolution on the screen, because it's just not even in your head space. And that's part of the reason why it's really valuable to have routine visits and talks with customers, because you just don't know what else might be going on. Even if you go in there with a script, right, and a bunch of questions, there are always going to be surprises. And sometimes there's really great information to be had to inform your design just by doing some observation. So I think that's great that you guys are doing it.
Bill: That's why I'm so excited that we just brought on a new VP of Product, and we're really focused more on product management. It'll be interesting, six to 12 months from now, after having an actual product management team, to see how much more impactful that'll be on our product itself, by having a whole team that's focused on going in, talking to customers, and getting feedback, and having that, on a daily basis, iterated back into the product itself.
Brian: Sure, sure. Yeah. Now, I'm sure you'll probably see an impact on that just with increased velocity. And that's one of the things: you don't have to go in and do great research, you can go in and just start doing research and learn on the way. It's not binary, right? You can just start doing it, and you'll get better at doing it over time. A lot of it's just watching and listening. And so, I'm always an advocate for doing it and not worrying about doing it well; just get it started. That's the hardest part. It's like going to the gym, right? They always tell you, "It doesn't matter what exercise you do, just go and start." That's usually the biggest hang-up, and then you can always get better at making your workout optimal, so to speak.
Bill: I agree 100%. Sometimes it's hard as you're starting out; there were just three of us to start, and now we're almost 50. At some point during that growth from three to 50 people, when do you really start going to the customer on a regular basis, right? We did it right at the beginning, when we built the product, and we might have gone back to them a couple of times, but you really need to build that into your workflow, into your product development workflow. And I can tell you that for a while there, we didn't do that as well as we could have. So, if I had it to do over, even without a product manager, I would have made sure that somebody was responsible for going to the customer. Even though it feels like it's a lot of extra work, it's definitely worth it.
Brian: What was the pain that made you decide, "I'm not doing it that way again"?
Bill: Oh, you mean, why didn't we go to the customer enough? What was the pain?
Brian: No, no. What I heard from you is, "I'm not going to maybe start up a new product that way, again, without going and doing that earlier."
Brian: What pain did you go through to learn that? Was it like, "Oh my God, we just spent five months of engineering on the wrong thing"?
Bill: Yeah. We'd get a feature request from a customer, and then we'd spend a few months building it. And then when we actually put it in front of them, we realized that what they wanted was something different. We didn't spend enough time really going through it with them, showing them wireframes and asking, "Is this going to be useful?" And we wasted a lot of engineering time on that.
Bill: I'm sure that if we'd really done this properly, we would probably be six months ahead of where we are right now.
Brian: Wow, six months? That's a long time. In startup world, that's expensive, but it's the learning, right? You got to go through that process to see it and, yeah.
Bill: I think it's common, right? That's often why companies pivot, right? They realize that they don't have it quite right, and you're iterating, you're getting the product right, the product-market fit. And those that can do it faster end up further ahead.
Brian: I think part of this is ... and I'd be curious about your thoughts on this. It's a lot faster these days to build a product and get going; the software engineering life cycles are so much faster. In a way, I feel like that actually does not drive people to go do more research, because there's this perspective of, "Oh, we'll just get something out there and then we'll change it." For example, you don't need to build an authentication or login system anymore. You can go to GitHub and check out some code, and you've got a full-blown authentication system in a day or whatever. But when you can do it really fast, it feels like there's always going to be time to adjust.
Brian: And most of the time, what I see with engineering companies is, they're really just doing more and more incremental development. They're adding the next feature on; they rarely go back and actually do the iteration until there's a major catastrophe, or there's a serious revenue impact, or there's some kind of crisis, at which point you have all that technical debt that you have to carry along or address, and it's really hard to build the right new features on top of that. A lot of it is just that it's cheaper to do it on paper, without going into code too quickly. I don't know, what do you think?
Bill: I think that might be their perspective, and that's probably the perspective that we had up until recently. But then you look at what features are actually being used. There's another example I can think of, where a customer had a very specific requirement: they wanted a dashboard laid out a certain way. And we thought, "Hey, that sounds actually pretty straightforward. It sounds like others would want this." But instead of going to a bunch of other customers and asking them, we just said, "Let's build it."
Bill: So we went and built it, and it looked nice, took a month or so of development. And then we launched it and realized that nobody else was using it. We didn't talk to enough customers; we only talked to one. And that month could have gone into building a higher-priority feature that would actually be used by most or all of our customers. So by not going through that process, and this is why I say all this adds up, I think, to six months, it's not that difficult to see how something like this can really cause you to go down the wrong path. You need that customer feedback continuously.
Brian: Sure, sure. And one thing I would say to people who are listening, from a research perspective, well again, I applaud getting in there and starting to have the conversations. One of the key things I think is really important to getting the design right is to not go in there thinking your job is to find out what people want, or to get the specs for what they're asking for written down so you can go off and provide that. The real value of the research is asking why and how, getting to the root problem, and defining that problem really well. That's the insight you want to get. They may have some good design ideas, but most customers aren't designers. So you have to know how to interpret it when they say, "On the dashboard, on the left side I really want to see cycle time, and on the right side I want to see ..." It sounds like they're giving you the answer, the recipe, but you can end up creating all of that stuff they asked for and then have no engagement with it, because they're not really aware of what the right design is for them. They're just aware of their own world. So my recommendation is to look at that as a learning experience on what the problem is. It's all about informing that, not necessarily giving people what they asked for. Our job as designers is to give people what they need. And most of the time, if you get that right and it's valuable, they will love it. It may not be what they expected, but they will probably love it if you're making their life better. That's my little rant on that.
Bill: That's 100% agreed.
Brian: Jumping to a little different topic here, I'm curious: is there a gap between the data and telemetry coming off of these machines, and the domain and language of the floor operator or the person overseeing operations? Is it kind of like two worlds? Or do they really speak that language, where for every different data point and metric they know what everything is, and they know these machines inside and out? Take a DSLR camera, for example. A lot of people don't know what half of those settings are; they don't know what an f-stop is, and they don't know how to interpret some of that stuff. They just want to take good pictures. So I'm curious, can you talk a little bit about the data that's coming off these machines? Have you had to design an easier system on top of it because the telemetry is difficult? Or is it pretty straightforward, the types of data you get and the operator's understanding of that domain?
Bill: I guess it depends on who we're talking to. For an operator, there are certain data items that they fully understand; they make sense and they help their business. But out of the 108 items that we're collecting, it might be only five or six that are interesting to them. A maintenance manager might care about another set of items. And the executive doesn't care about anything on the machine itself; they just want to know the performance of the machine. So you really need to develop a system that provides useful information to the right person. And that can be tricky if you're trying to build one product user experience. That's where you have different aspects of the product, different jobs that you're helping to automate, segmented in your application. An example of that is, we help machine builders improve the service of their customers' equipment.
Bill: So with MachineMetrics, we're actually streaming this data right from their machines, so that when a machine has a problem, the customer will call up their machine builder or the service provider. And we've specifically added these very detailed diagnostic parameters that I have no idea what they mean. They're literally all just numbers. But the service manager knows exactly: "Okay, if this machine is down for this alarm, and diagnostic 101 has a bit set to zero, that means it's the solenoid, so we'll send out a solenoid." That's the type of information that's really hard to get to without talking to the actual user, which would be the service provider, and that information also wouldn't be useful to anybody else.
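The kind of packed diagnostic word Bill describes can be sketched as a simple bit lookup. To be clear, the bit positions and fault names below are invented for illustration; only the idea that a cleared bit of "diagnostic 101" indicated a failed solenoid comes from the conversation.

```python
# Hypothetical decoder for a packed CNC diagnostic word. In the anecdote,
# a particular bit of "diagnostic 101" reading zero meant a failed solenoid.
# The bit positions and fault names below are invented for illustration.

# Map: bit position -> fault indicated when that bit reads zero.
FAULT_WHEN_ZERO = {0: "solenoid", 3: "spindle drive", 5: "coolant pump"}

def faults(diagnostic_word: int) -> list:
    """Return fault names whose corresponding bit is cleared (zero)."""
    return [name for bit, name in FAULT_WHEN_ZERO.items()
            if (diagnostic_word >> bit) & 1 == 0]
```

A service manager's rule like "this alarm plus that diagnostic bit means ship a solenoid" then becomes a table lookup instead of tribal knowledge.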
Brian: Got it. And I'm curious, are you tracking ... so if there's the ... who's a maker of a CNC machine? I don't know who makes the CNCs.
Bill: So like, Mazak for example, they're a maker of a CNC machine.
Brian: Got it. So if there's, say, a Mazak Alpha One machine, are you looking at all of the Alpha Ones across all of MachineMetrics and starting to learn about the telemetry off those machines and their maintenance cycles? Using the world's data that you're collecting on those machines to inform one customer? Or does it only look at that one environment, that one factory, without knowing about all the other Mazaks? I think that's what you said it's called?
Bill: Yeah, yeah. You just hit on what makes MachineMetrics so valuable. We are connected to thousands of machines across hundreds of different companies. And we're able to learn from that same Mazak machine installed at many different companies; we can aggregate that data, learn from it, and understand what the failure modes are. Normally, if you were just at one plant, you wouldn't have enough data to really build enough information to, let's say, develop a machine learning model. But because we're connected to hundreds of these machines across different companies, we're able to learn so much more and then provide that back to our customers with real value.
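The fleet-wide aggregation Bill describes, learning failure modes for one machine model from installations at many companies, could be sketched roughly like this. The record fields, company names, and alarm strings are all assumptions for illustration, not MachineMetrics' actual schema.

```python
from collections import Counter, defaultdict

# Rough sketch of fleet-wide aggregation: count alarms per machine model
# across many companies. Records and field names are invented examples.
records = [
    {"company": "A", "model": "Mazak-X", "alarm": "spindle overload"},
    {"company": "B", "model": "Mazak-X", "alarm": "spindle overload"},
    {"company": "C", "model": "Mazak-X", "alarm": "coolant low"},
]

def failure_modes(records):
    """Group alarm counts by machine model, ignoring company boundaries."""
    by_model = defaultdict(Counter)
    for r in records:
        by_model[r["model"]][r["alarm"]] += 1
    return by_model
```

A single plant might contribute only a handful of rows per model; pooling across companies is what makes the counts (and any downstream model) statistically meaningful.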
Brian: I would think those manufacturers might be ... you might have a secondary product here with the information you're collecting. Or maybe not; do those machines have call-home technology inside them already?
Bill: No, no, they don't. The ones that do are architected in a very different way, where you need to VPN onto the network to get the information. In our case, we have an edge device that can connect over the company's WiFi or cellular, and it's just an encrypted SSL connection, an HTTPS connection. So you don't have to VPN in, and that information is sent to the MachineMetrics cloud. And we are working with machine builders, because this is information they don't have themselves, so we're able to help our partners build better equipment with the information we're collecting.
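The outbound-only pattern Bill contrasts with VPN access can be sketched as an edge agent that POSTs telemetry over HTTPS. The endpoint URL, payload fields, and auth header here are assumptions for illustration, not MachineMetrics' actual API.

```python
import json
import time
import urllib.request

# Hypothetical sketch of an edge agent pushing telemetry over HTTPS.
# Because the connection is initiated outbound from the shop floor, no
# inbound VPN into the plant network is required. The URL, field names,
# and bearer-token auth below are invented for illustration.
INGEST_URL = "https://ingest.example.com/v1/telemetry"

def build_payload(machine_id, readings):
    """Wrap raw readings in an envelope with a send timestamp."""
    return {
        "machine_id": machine_id,
        "sent_at": int(time.time()),
        "readings": readings,
    }

def send(payload, api_key):
    """POST the payload as JSON over TLS and return the HTTP status."""
    req = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```

The key design point is the direction of the connection: the device calls out through whatever WiFi or cellular link is available, so the cloud never needs a route into the plant.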
Brian: What's the experience when you buy this product? Sometimes we talk about onboarding, and as I've discussed with my subscribers and clients, there's also a phase I call the honeymoon phase. It's past the setup and getting going, but still early. In your case, it might be that you need to collect enough historical data in order to start giving the customer something beyond just the current state. Although, again, it sounds like just getting the current state of the machines was actually a big win. Can you talk to me about what that experience is like? Do you design MachineMetrics as a product and service that begins with unpacking the ... it looks like you have some kind of WiFi or edge device. Do they cable that up themselves, or do you guide them through it? Or is it all self-service, where they know all the machines to plug in and they've got to run Ethernet and WiFi? What does that whole onboarding-to-honeymoon period look like?
Bill: We'll start with the actual machine integration. As you might imagine, there are many different networks out there. Some of our customers haven't even wired up their machines, or their wireless is bad, or literally somebody's brother's son went and ran network cable and didn't really know what they were doing. Believe it or not, this happens at larger facilities than you might think; we come in and the network is a mess. Those are pretty difficult integrations, and that's one of the reasons we went with cellular and WiFi, so we can avoid a customer having to run network cable. Half of our customers can install MachineMetrics themselves.
Bill: We provide the instructions and the videos, and we encourage that. So they have this very simple device, but they do have to do some work: it's powered by the machine, so they have to open up the electrical cabinet and plug it into the machine with a screwdriver. It's not nothing. For the other half, we have a field integration team that will go on site to install the device, and some of the older, more complicated machines need an electrician to figure out. We're continuing to work on improving that, and we really want to get to a point where all of our customers can install it themselves. But that's really hard. Once the machines are actually reporting and streaming data, we'll onboard them. We have a customer success team, and we'll go through, usually, three one-hour sessions of onboarding to tie into their other systems, so they can actually use the product effectively. Usually they're really excited about that. They'll go a few months, and what we typically do is start with a sort of benchmarking phase. We'll just stream data for a while, and then once we actually visualize it, you can see the increase in machine utilization. Often it's a 10 to 20% increase just by showing the data.
Bill: But then, after a while, once they've already experienced that increase, you need to show them the next piece. So you feed them: "Okay, now let's get into setting up notifications so you can be more proactive when there's a problem." I think part of what makes a SaaS product so unique and so valuable is that we're continuously improving our product. Our customers are looking for that; they're paying a subscription every year, so they want to see something new every time they write that check, and that can keep them excited. But you're right, there is a honeymoon period, right? At first, they're all so excited. And sometimes, if you don't get the right team engaged, that can cause problems down the road, where you get disengagement.
Brian: From an experience standpoint, are there things that you've learned about where you went, "I had no idea we needed to spend time on that, but that's actually really important, and we're losing customers or seeing drop-off"? Maybe it wasn't even super technical, or didn't have anything to do with AI or machine learning, but it was revealed along the way. Did you learn anything like that?
Bill: Yeah. So, despite the fact that our customers can go into the reports and surface all this information themselves, and set up the alerts, they really need their hands held through that process. We need to continuously go back; that's why we have a customer success team. It's kind of like a free consultant, in a way, where they'll actually go through the data, identify problems and opportunities, and present that in something like a quarterly business review. And that has been very, very effective. That's where we need to be continuously engaged. If you don't do that, the customer can become disengaged, and occasionally we'll have a customer where, if we didn't have the right team involved and we did a bad job onboarding, when it came up for renewal, they churned.
Brian: Got it. Is the goal to take those findings that are currently being hand-delivered by a consultant or your customer success person, and get them into the product? Like a quarterly report, the way Nest sends out its "How did you do this month on your electricity usage?" Does that inform the product? Or do you see certain things as: we're always going to want a human to do this, and even though we could use software, we're still going to do it with a human?
Bill: We're always trying to productize. But the problem is that each customer is going to have such different problems that it's hard to do that across the board. And we're a product company, so we're not looking to make money in professional services. Despite that, we found that bringing our data science team into the sales process earlier on, so they can understand what some of the problems are and then surface the data in unique ways, actually ends up driving our product, because we start to see commonality across customers, and then we'll look at productizing. Before we were doing that, it was like, "Hey, it would be great to add this feature," and we'd put it on the roadmap and never get to it. But with the data science team able to surface that information more quickly, yes, it is work, it's manual, but it's extremely valuable. So we found that what we're learning from the data science team ends up being more of a product driver than anything.
Brian: So overall, as we get towards wrapping up here, is there a general thing that you find the most challenging, that you have to keep an eye on in terms of design and overall user experience for these floor operators and their managers? What's hard to get right?
Bill: Well, I think it's hard because we're a technical company. We're a young startup, and we're really good at designing really cool features, and some of our users are really engaged, younger workers. But trying to build a product that serves all of our users is really difficult. And I think that's where you sometimes have to go against your own gut feeling. "Okay, this is amazing. Go with it. Get this done as fast as you can." That was me all the time. But no. Pause, talk to the customer first. I find myself constantly arguing with myself: "No, let's do this right. Let's talk to the customer, even though it takes more time." It's a lot harder than it sounds.
Brian: It takes discipline, right? And over time, I bet you can start to feel like, "We know what's best for them. We eat and drink this stuff all day long." But there's still that check sometimes that you need to do, and a lot of it comes down to being empathetic, taking your assumption hat off, and asking questions that are not biased and leading, especially if you have an idea of how it should work. I always try to go in and ask the questions as neutrally as possible. If anything, now with my age and experience, I like to go in and try to find faults in my assumptions. So I'll ask questions to negate what I think the solution might be, like, "Do you ever have this problem?" And I'm almost hoping they'll say no, to remind me that we have to keep doing this. Sometimes they surprise you, and sometimes they don't, and they validate that you're on the right track, which is great. But I think you're right, you have to fight that urge, especially when you know the domain really well. You may know those machines or the data better than they do at some point, but having that open dialogue is critical.
Bill: You're biased. And that's where bringing on fresh eyes, which is what we just did, helps. I'm excited to see the results of that, because I think I've become too biased and too confident that what we're doing is right.
Brian: Yeah. Cool. This has been great. I'm curious, from your experience, this is your third startup, and now you're in the IoT and analytics space. Do you have any recommendations or advice for people working in this space, perhaps with monitoring tools, hardware, software, or analytics? Any parting words of wisdom?
Bill: Yeah, I would say that the industrial space provides the biggest opportunity for a startup company to go really big, but not really fast. It's a slower-moving industry, and one of the reasons there hasn't been a ton of investment in this space is the fear that it takes longer for a company to mature. We're seeing that: we started in 2014, and we raised our Series A last year, in 2018. But wow, we're really picking up steam now. I think one of the challenges is just having the fortitude to push through and get to that point where you can really gain momentum; it takes a little longer. And in IoT, there are a lot of components you have to get right, and you have an industry that's a little slower to adopt.
Brian: Just to close that thought out: is that because your MVP, your Minimum Viable Product, is so large, just to get to something of value? Or is it not so much that the product is hard, but that the initial sales are difficult because of the culture? What specifically was hard?
Bill: Well, I think it's more the industry itself. Manufacturing is the biggest industry in the country; it produces the most data but has the least digital penetration. And that's because, let's face it, it's the last industry to really adopt big data. So it really ends up being the industry itself. But the product is also very difficult. You have the machine connections, the hardware element, the software element; there are a lot of pieces you have to get right.
Brian: Well, great man. This has been really fun. And I hope our listeners enjoyed hearing about your journey here in the industrial IoT space. So where can people find out more about you? Are you on Twitter or LinkedIn, social media? Where can they find you?
Bill: Well, I'm not a very big Twitter user, but definitely LinkedIn. Bill Bither is my name; you can find me on LinkedIn. And of course, the website, MachineMetrics.com, spelled like it sounds. You can find more information there.
Brian: Cool. Well, I will put a link to both of those places in the show links. And yeah, thank you so much for coming on. And I wish you well with MachineMetrics going forward.
Bill: Well, thank you so much.
Brian: All right. Take care.