EP 090 - Optimizing Critical Machine Performance - Jeremy Frank, CEO, KCF Technologies
|May 27, 2021|
In this episode, we discuss the use of machine health platforms to solve critical issues with machines in industrial manufacturing, and the tradeoffs between the traditional rules based systems and machine learning approaches.
Jeremy is the Co-Founder and CEO of KCF Technologies. KCF Technologies is a technology development company bringing embedded intelligence, especially wireless condition monitoring sensors, into widespread use in industry. KCF’s core capability is to solve industrial problems with innovation, especially in condition monitoring of rotating machinery. https://kcftech.com/
Erik: Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today with your host, Erik Walenza.
Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. And our guest today is Jeremy Frank, CEO of KCF Technologies. KCF Technologies is a technology development company that brings embedded intelligence into widespread use in industry. In this talk, we discussed the use of machine health platforms to solve critical issues with machines in industrial manufacturing. And we also explored the tradeoff between traditional rules-based systems and modern machine learning approaches.
If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@IoTone.com. Thank you. Jeremy, thank you so much for joining us today.
Jeremy: Thank you, Erik. Glad to be here.
Erik: Jeremy, before we get into the business of KCF Technologies, we'd love to understand a little bit more about what motivated you to start this. If I'm looking at your CV, it's basically graduating with a PhD, working as a research assistant while you're doing your PhD, and then basically, you started the company right off the bat. Is that correct? Did you work in any other industries? Or did you basically do a PhD and say, hey, I've got an idea and I want to see where this can take me, and then just jump right into it?
Jeremy: It was that. I might go back just a half step before that. We actually started the company six months before I defended my PhD. And I started with my thesis advisor, Professor Gary Koopman. But I grew up in Pittsburgh, Pennsylvania, and my father had worked for Sikorsky Helicopters earlier in his career and then Grove Cranes, and ultimately went and started a forensic engineering consulting company with a couple of professors from Carnegie Mellon and Carnegie Tech at the time.
And so I grew up in a household of entrepreneurship. For me, that was sort of normal, and the idea of getting a job seemed abnormal. And so, we started the company about six months before I defended my PhD out of the Center for Acoustics and Vibration at Penn State University.
Erik: Who else did you start the company with?
Jeremy: Dr. White Chang Chen was really the guru that ran the laboratory that we worked in. We were doing specialty projects, mostly for the Department of Defense: vibration and acoustics device-type applications for specialty uses in the DoD.
Erik: So this was basically faculty, you had the idea, and then you started it together? I know that you're focused right now on condition monitoring. And I think for a lot of people, maybe it's not a new topic, but because of machine learning, it's getting a lot of attention right now. It's one of the primary applications there. But it sounds like you were already working on these things 20-odd years ago when you started the company. Was that really also the focus of the company from the get-go, before we had modern machine learning algorithms, but looking at simpler approaches to monitor and maybe predict breakdowns in [inaudible 03:41] condition of machines?
Jeremy: It's a really interesting question because there wasn't any singular big idea actually that we started the company around. Although looking back, we really started the company out of just the idea of getting these amazing technologies that we were working on out into more scalable applications, instead of just doing something that might show up on one submarine doing it on something that would show up on thousands and thousands of applications. That was the idea and just building technology that would enable people to solve problems. That was our big idea, but it wasn't any one singular thing.
And it's interesting, also looking back anybody that's in the AI machine learning world that's been in it for a while knows there was a big push. This was a hot topic in the early 2000s. My PhD actually was in nonlinear optimization routines. And we were also funded by the Department of Energy in the early 2000s. There was also a whole wave of industrial wireless sensing technology. IoT that we call it now was funded 20 years ago, and there was a wave of companies doing it.
What's changed is that it took the market a while to be ready for it. And also, smartphone technology has really found its way into these devices and has made the application scalable and easy to deploy. And now we've got the cloud infrastructure, which makes the whole thing much more scalable than it was 15-20 years ago. I think that's what's made this all possible.
Erik: There's this whole set of use cases that are conceptually fairly powerful, where you have the ability to imagine how this would look. I think it's the same issue with AR/VR: conceptually, it's quite easy for an entrepreneur to think of how this could be transformational. But until specific aspects of the technology have matured to the point where it meets UX requirements and reliability requirements, in this case, where there's maybe sufficient computing power on the edge and so forth, the use case doesn't get adopted beyond submarines and other high-end assets.
What was the focus when you first set up the company? Now you're covering really quite a wide range of industries, but were you focused on one particular asset class, or was it more, let's just throw out the net, see what we catch, and then evolve the business from there?
Jeremy: It was a little bit of both. It wasn't just throwing out a net because, again, we were a problem-solving company, and so we were setting out to solve problems. We weren't really a technology-focused company, even though we are a technology company. And so early on, we were funded by the government. So without getting into anything that would be sensitive, we were building remote monitoring capabilities for the chillers that make air conditioning possible on things like ships and submarines for the Navy.
And so one of the first early use cases was large industrial-scale and DoD-scale chillers. But all the ancillary equipment that exists on a ship or a submarine is also the same type of equipment that exists in basically any factory or any commercial infrastructure: fans, motors, blowers, lots of pumps. For the first five years or so, even before we got into the DoD work, we were doing consulting and specialty projects focused on optimizing the health and reducing the noise of those asset classes, but compressors especially were the early focus.
Erik: Okay. So you were starting from more of a consulting perspective, where you were able to develop technology, but operating a little bit like a system integrator, customizing solutions case by case. When did you start to productize? So now you have this smart diagnostics offering, which integrates the IoT platform, the analytics, and then the enterprise software solution. What was the trajectory to productize your domain expertise?
Jeremy: It's interesting because there are really two timelines: when we tried to productize, and when we began to actually have success with it. We deployed an energy-harvester-powered wireless sensor on a screw chiller at an industrial site that basically served both the Navy and commercial applications, and it was uploading the data, “to the cloud”, which at the time was an FTP server that we then pulled down onto some custom software. And that was in 2006.
So hypothetically, we could have started selling those units. And at the time, we thought we would rapidly turn that into a product. And we started attempting to do that. But because the technology wasn't terribly mature, the software certainly wasn't mature, but also, I think that the customer has a big part to play in that. The customer at that time, I don't think, was ready to accept that type of technology. And so, it really took until about 2010 or 2011 that we started having any measure of success selling those systems to monitor machinery in real industrial applications.
Erik: Yeah, especially this topic of moving data out of a very controlled industrial setting into the cloud. Well, it's becoming quite socialized now. But sitting here in China, at least, this is still a topic: whenever any factory is looking at the next steps in their evolution, the question is, to what extent can we put any information on the cloud? So in the US market, is it still a challenge convincing customers that this is a safe and intelligent way to manage data? Or is there basic acceptance that the cloud can be properly managed at this point?
Jeremy: It's a combination of both. I haven't thought about that application in a while. But there's a couple of examples. In the chiller example that I'm referring to, those compressors all have control units on them. They have a control panel. They have some measure of sensing and process control that's built in. And then secondarily, there's a building control system that already exists, which has some level of automation and monitoring.
And the real challenge is that there's vulnerabilities that come with interfacing with an existing control system because hypothetically, it could be hacked and someone could do something undesirable. But if you try to integrate just with the control panel, the system that's on the unit, that's just very costly and cumbersome because it wasn't designed for that. It wasn't designed to accept anything new.
And so what really was the game changer, at least from our experience, is that around 2010 or 2011, we found a way to basically leverage the cellular networks and also cloud data hosting to have our systems be completely separate. So they don't directly connect to the existing infrastructure. And that's been a key enabler over the past decade that's allowed us to actually get past some of those barriers.
There are definitely still issues, because increasingly, as these applications become mature, you do meet these points of convergence where it's advantageous to interface with the control systems and, ultimately, the maintenance CMMS systems. And you can do that. But what's really been a key enabler is that, to begin with, it's very easy to have those two systems coexisting in parallel without interfacing. And only once you've allowed the teams to have the proper amount of investigation and rigor can you integrate in a way that addresses all the security concerns.
Erik: I'd love to get more into the architecture, and also the technical challenges of doing this well. I think there are a lot of companies that are approaching this problem, but actually doing it well is still quite challenging. But before we get there, a little bit on the business side. I know you're working with a fairly broad range of companies right now: oil and gas, automotive, forestry, pharma, mining. What's the golden thread through these? They look like a lot of process-oriented, either high-volume or process manufacturing companies, all with very significant asset investments. What are your criteria for what makes a good customer? What type of company has problems that are big enough that it makes sense for them to invest in condition monitoring?
Jeremy: It's a couple of things, Erik. For us, we've really found our sweet spot. First of all, it's just the biggest companies. The biggest industrial operators, so power industry, oil and gas, mining, but then also manufacturing, auto, forest products, pharma. It's the biggest companies just because they have the biggest plants, they have the biggest machines, they have the highest throughput.
But it's not just continuous process; continuous process applications are the easiest. We also have a lot of depth in the discrete applications that are more challenging, so intermittently moving machinery. Picture the assembly line in an auto manufacturing plant, or a stamping plant; those are more challenging discrete applications. We're somewhat unique in that we have the breadth and depth of technology to cover those things at scale, but also to handle the complexity that's required by those applications.
And I think what you said is right. Some of these are the cheapest problems to tackle, and we have something that works effectively. And so we tend to focus most on the assets that are most critical. So for example, I'm sitting close to campus at Penn State University, and that was one of the first places we deployed this technology 10 years ago. But the reality is, if a chilled water pump in one of these buildings fails, the consequences are relatively minor. However, if you fail a critical exhaust fan at a pharmaceutical manufacturing site, or if you fail a critical conveyance piece of machinery in an auto assembly line, the economic consequences are severe. And so we've really found our sweet spot in those particularly economically intense applications.
Erik: And then who would you be working with? So I'm sitting here in Shanghai and, of course, we have the unique situation in China of being a country that has a very large industrial asset base, but most of the asset owners, at least the companies that we're working with, are headquartered outside. So there's always the discussion of to what extent the factory GM should be able to make technology deployment decisions versus some centrally-driven initiative, especially around IT. But also with sensors, there are other technologies where there tend to be global standards in some industries. So what does it look like for you? Are you typically approaching this from a corporate perspective or going directly to the GM?
Jeremy: Typically, the GM. The plant manager; that's really where the rubber meets the road. It's interesting that you describe it that way because our footprint has grown and is dominated by North America. And so most of our relationships are starting in North America, where a lot of these companies are both headquartered and continue to have substantial manufacturing footprints.
But one thing on that: we do find that most of the choices, when a company is making the choice to deploy technology to reduce these problems, safety issues, downtime issues, waste issues, are typically happening at the plant level. And sometimes it's even below the plant manager or GM; it's someone who is the reliability manager.
Companies that are more mature have a substantial function within their business where they're making a plan and implementing a plan. And then once it becomes mature, it becomes a corporate initiative, but it typically builds at the plant level. So we're in the process of going substantially global, really driven by the needs of our customers because a lot of times the headquarters are here and not exclusively, of course, but just for our customer base it's dominated that way. But they have manufacturing sites in Europe. They have manufacturing sites in South America, in Africa, Asia Pacific. And so we're getting pulled there, but by then it's more of a corporate initiative. So it's been sort of a flow, but it's definitely started at the plant level in our experience.
Erik: I wonder how this differs region by region, if plants in the headquarters region have a little bit more autonomy because they probably communicate directly with corporate headquarters a bit more frequently and talk in the same language. Maybe there's higher trust there. I guess this is still company by company to some extent, but there's probably some dynamic there. Maybe we can go into the architecture a bit here. So, you have comprehensive solutions that you're delivering to the market, so it'd be good, on the one hand, to understand what the architecture of a typical deployment looks like, but also interesting to understand where within that architecture you're building the technology yourself. Are there strategic partners that you're working with on particular aspects? And then what are you just buying off the shelf from the best-value supplier that meets the specs?
Jeremy: Yeah, I can walk through that. I'll tell you this, I'm a mechanical engineer, but I'm not deep on the design and software architecture side. So to really speak in depth, I tap on my team, but I can answer at the appropriate level of depth for that. The way it looks in a typical site is there are wireless sensors. One unit is a wireless sensor about the size and shape of an egg that has a wireless communication back to a gateway; we call it the base station.
One thing that we've done differently is we've built our own wireless protocol called Dark Wireless that goes between the sensor and the gateway. And it’s highly advantaged in terms of how much data can be sent on a very small power budget. So as a result, you can send lots of very high fidelity data but the batteries can still last years and years, 5-10 years depending on how you set the frequency of transmission. And so that's the infrastructure.
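As a rough illustration of the power-budget tradeoff Jeremy describes, battery life for a duty-cycled wireless sensor falls out of the time-weighted average current draw. All of the numbers below are hypothetical stand-ins, not Dark Wireless specifications:

```python
# Illustrative battery-life estimate for a duty-cycled wireless vibration sensor.
# Sleep current, burst current, and burst duration are made-up example values.

def battery_life_years(capacity_mah, sleep_ua, tx_ma, tx_seconds, tx_per_day):
    """Estimate battery life from a sleep/transmit duty cycle."""
    seconds_per_day = 86_400
    tx_time = tx_seconds * tx_per_day          # seconds transmitting per day
    sleep_time = seconds_per_day - tx_time     # seconds asleep per day
    # Average current in mA, weighted by time spent in each state
    avg_ma = (sleep_ua / 1000 * sleep_time + tx_ma * tx_time) / seconds_per_day
    hours = capacity_mah / avg_ma
    return hours / (24 * 365)

# A 2400 mAh lithium cell, 10 uA sleep current, 30 mA during a 3 s burst:
hourly = battery_life_years(2400, 10, 30, 3, tx_per_day=24)      # one burst per hour
every_10_min = battery_life_years(2400, 10, 30, 3, tx_per_day=144)
print(f"hourly bursts:       {hourly:.1f} years")
print(f"every-10-min bursts: {every_10_min:.1f} years")
```

The point of the sketch is the one Jeremy makes: for a given radio, the transmission frequency dominates the battery budget, which is why a more efficient protocol lets you send richer data while still hitting multi-year battery life.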
So typically, in a large factory, say in forest products, we have applications where there are thousands and thousands of these wireless sensors talking to dozens and dozens of gateways. Each one has a range of a couple hundred feet. And then that gateway is basically getting the data to the cloud one of three ways: either cellular, or WiFi, or Ethernet that's available through hookups in the plant.
And the part of it that we build: so again, the wireless protocol and all of the software is built by us. That's all our solution. But the actual chips, we're not custom-building our own chips, and even the sensing elements we're not building, which is interesting because that one sensor unit measures vibration, like full-fidelity vibration, so you're getting the full spectrum to assess the health of a complex machine, and it measures temperature. But there are a lot of other parameters that also need to be monitored to get the whole solution to address a system, an asset class.
So pressure, both static and transient, flow, temperature in a system, like RTD sensors for temperature, oil lubrication, ultrasonic; there are all these other sensor classes. And we team with other companies for those. We don't build those sensors. We basically enable our wireless connectivity on all these other classes of sensors. And then it leverages the infrastructure to the gateway, and of course, everything from there is a shared architecture, both in transmission of the data and in hosting.
On that side, where we draw the line is we build those gateways, we build the base station. We're using mostly off-the-shelf stuff; we're configuring off-the-shelf hardware in those base stations, and then going up to the cloud. So we're a heavy user of Amazon AWS once it's in the cloud.
Erik: That's pretty comprehensive. And then there are the algorithms and the data behind those. So for different asset classes, you need to develop different algorithms. Are you working with particular machine learning engines? And then the ownership of the algorithms, does that reside with you, or are there customers that say, we need to own this intellectual property, we want you to develop it? Is that part of the conversation in any cases?
Jeremy: So that's really the heart of it, and this is really my personal sweet spot. It's all about the asset classes and the solutions: having a comprehensive solution to address the health of these assets. And we look at it very differently. Historically, it's not like there's anything brand new about monitoring machines for vibration and these other parameters to do predictive maintenance or condition-based maintenance. These things have been relatively mature for decades.
But what we really build, we consider a comprehensive machine health platform. And what's different between a platform and just a predictive maintenance solution is that we're not just waiting for an inevitable failure of a bearing or coupling or some part. We're comprehensively bringing all of the necessary information into a single platform, and then enabling a solution that eradicates the problem. It doesn't wait for the symptom and the failure. It eradicates the root cause of the problem as proactively as possible.
And what that all depends on is having an understanding of the asset. There are many, many classes of assets that each have their own nuance. And those are installed as systems in a wide variety of applications across each industry. The key value is basically getting to the point where you understand the behaviors that matter, and the ones that don't, that drive the economic performance of that asset class. And our whole focus is that there's just so much improvement that's possible by comprehensively addressing the health of the machinery, which means the way that they're operated and the way that they're cared for, fundamentally.
Erik: Yeah, not just the breakdown, but also reducing maintenance, improving the quality of the output. So a lot of value resides on performance here. Let's look at this forestry products example. I imagine here, they have a bunch of different types of equipment, they probably have some equipment that is similar in purpose, but maybe from different brands or different models that have been installed over time. Some of this stuff is probably 30 years old, and some of it's 5 years old. To what extent does the selection of sensors and then also the development of the algorithms differ based on these variations, not even kind of looking at completely different asset classes, but within an asset class looking at different models, different brands, what's the level of complexity here?
Jeremy: In total, to fully eradicate all of the problems that exist, the level of complexity is probably nearly infinite. What's made this start to really take off, I mean, there's this whole revolution going on that we're a big part of, and it's not so much about the complexity as it is about the large addressable categories of issues initially.
And so, for example, in a big paper pulp mill, all of them have a lot of pumps that are moving the pulp, moving the chemicals throughout the process into the digester, and then ultimately into the paper machine. And those pumps as an asset class represent a huge part of the problem that exists within all industries, but in pulp and paper, that's a particularly dominant asset class category.
And just, for example, I've spent a lot of time in pump systems. I've been on the board of directors of the Hydraulic Institute, which is the industry organization for pump and motor manufacturers. And there's another group called Pump Systems Matter, whose board I'm currently on, which addresses outreach and education about how to properly care for pump systems. And the simple reality is there's a tremendous amount of low-hanging fruit to more effectively care for and operate pumps in a healthy way.
By operating on what's called the pump curve, at the best efficiency point, there's a correct way to run an industrial pump. Just as one example, pumps should have efficiencies in the neighborhood of 80%-85%. And across the applications that we see, the average is closer to 40%, and companies are often closer to 15% or 20%. And it's just simply a lack of knowledge.
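For context on those efficiency figures, a pump's hydraulic efficiency is the ratio of useful fluid power (density times gravity times flow times head) to shaft input power. A minimal sketch, with made-up operating values rather than anything from the episode:

```python
# Hydraulic efficiency of a pump: useful fluid power / shaft input power.
# P_hydraulic = rho * g * Q * H  (density * gravity * flow rate * head)

RHO_WATER = 998.0   # kg/m^3, water near room temperature
G = 9.81            # m/s^2

def pump_efficiency(flow_m3s, head_m, shaft_power_w):
    """Fraction of shaft power converted into hydraulic power."""
    hydraulic_power_w = RHO_WATER * G * flow_m3s * head_m
    return hydraulic_power_w / shaft_power_w

# Illustrative: a pump moving 0.05 m^3/s against 30 m of head,
# drawing 22 kW at the shaft.
eta = pump_efficiency(0.05, 30.0, 22_000)
print(f"efficiency: {eta:.0%}")
```

Operating away from the best efficiency point shows up directly in this ratio: the same shaft power buys less delivered fluid power.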
So when you ask about complexity across the couple hundred pumps that might be in a pulp mill, each one has layers of complexity that affect its operation. One might be variable speed. One might have a variable-consistency flow coming into it. One might be 30 years old. One might have just been installed, but they messed something up in the configuration. Those things all add up, and that's basically the body of work, to eradicate those issues. But to begin with, there are some basic things that can be done to improve the operating condition and the maintenance, the care, of all of those hundred pumps in a pulp mill.
Erik: So things are operating at 40-50% efficiency. Getting to 99% efficiency is going to require a tremendous amount of effort. But getting up to 60-70%, there's a lot of common problems that you could solve without going into the intricacies of each individual asset?
Jeremy: Yeah, that's right. And what's interesting on that example is that it's not possible to get to 99%. There's a limit to what's physically possible, which is in the 85-90% range. The largest consumer of electricity across all of industry is pump systems, by far. So just saving that energy is a massive deal. I mean, billions and billions of dollars are wasted. But that's actually just the tip of the iceberg.
Because what also happens for pump systems, again just as one asset class, is that when you operate a pump away from its best efficiency point, its vibration and its health are severely degraded as well. In fact, it's not linear, it's not proportional. You drop 20% in efficiency, but the damage goes up by three or four times. And so what you're doing is prematurely wearing out all of the rotating components in that pump system. And that's actually where the greatest economic consequences are, because then you're failing that asset. The mechanical seals, for example, wear out in a year or two instead of 5-10 years, and the couplings fail and the bearings fail. And that's really the root of the issue: it's energy, but it's also really about the unplanned failures, and frankly unnecessary failures.
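The energy side of that argument is simple arithmetic: delivering the same hydraulic duty at half the efficiency doubles the electricity bill. A back-of-the-envelope sketch with illustrative numbers, not figures from the episode:

```python
# Annual electricity cost of delivering the same hydraulic duty at two
# different pump efficiencies. All inputs are illustrative placeholders.

def annual_cost_usd(hydraulic_kw, efficiency, hours=8000, usd_per_kwh=0.08):
    """Electricity cost per year for a pump delivering hydraulic_kw of fluid power."""
    electrical_kw = hydraulic_kw / efficiency   # power drawn from the grid
    return electrical_kw * hours * usd_per_kwh

well_run = annual_cost_usd(50, 0.80)     # pump near its best efficiency point
poorly_run = annual_cost_usd(50, 0.40)   # same duty at half the efficiency
print(f"at 80% efficiency: ${well_run:,.0f}/yr")
print(f"at 40% efficiency: ${poorly_run:,.0f}/yr")
```

And as Jeremy notes, the wasted energy is only part of the cost; the off-curve operation also accelerates mechanical wear, which this simple model does not capture.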
Erik: If we differentiate between maybe predictive analytics and prescriptive, are you able to move beyond saying that there seems to be a problem and a maintenance person should check it? Are you able to prescribe which particular components are likely to be the issue and what the maintenance person needs to show up with in their toolbox to solve the problem? To what extent are you able to identify the problem and then prescribe a solution?
Jeremy: The way I look at it, from my point of view, we're beyond both of those, and certainly way beyond reactive. I think everyone can agree on that. Doing things reactively is still probably at least half of the way things are done in this country anyway, and it's mostly for a lack of knowing and a lack of infrastructure that makes it easy to have some level of predictive data or indication of the damage and lack of health.
There's a series of steps. What most organizations have, on the positive half, is preventive. There's a lot of preventive scheduled work to correct the issues that are out there. And the problem with that is, first of all, because it's not informed by data, a lot of that preventive work is done on assets that are actually perfectly healthy. Probably the majority of the work is done on assets that are healthy; in fact, in our experience, it's about half. And if that's not bad enough, about a third of the preventive work that's done is actually not done perfectly. Humans are imperfect, and it's difficult.
These are challenging environments. Often there's not enough time to address all the assets perfectly, so it's not done perfectly. And so a lot of that preventive work actually creates unhealthier conditions in total. And that's something that we're really working to educate the market about. We found that through our work in the plants over the last 10 years. But then what's often idealized is predictive. The ideal state is that you have sensors that notice the problem and flag it before the asset fails.
But I think when you think about what I just described, that's actually quite foolish. Even though it's much better than being reactive, treating the failure and the unnecessary or poorly performed preventive work as an inevitability is actually not even close to the right way to do it. The right way to do it is to have a comprehensive platform that is prescriptive, but it's not just prescriptive, it's proactively attacking anything that will degrade the health of that asset so that it can have its intended economic benefit.
Erik: So you're not even just focusing on saying, okay, we're looking at things that might break down and okay, this is going to break down, what do you do. But you're saying there's something that's not optimal, some metric is not optimal, what do we need to do to bring this back towards a higher level of performance? So that's going to prevent breakdowns, but it's also going to make sure that the machine is just running more efficiently from a quality perspective, or energy efficiency perspective, etc? Is that kind of what you're doing, you have an understanding of what an optimal machine looks like, and you're trying to aim for that target?
Jeremy: That's a good way to say it. That's correct. In fact, I can give an example from a real application, one where I've actually talked about the customer, that I think will help to explain it. Because, by the way, the technology does also monitor in real time and flag issues. Increasingly, we're deploying automated ML/AI-backed algorithms that are looking for indications of a pending immediate failure. So it's not that that's not valuable. The technology does do that. But that's not where the primary focus is for us and our customers. We really focus on eradicating the root cause of the issues.
For example, one application where we have a lot of coverage is in hydraulic fracturing in oil and gas, which has been a big thing here over the last 5-10 years. And the core asset that makes hydraulic fracturing possible during the actual fracturing process, the pressure pumping, is these large pumps. It's a totally different type of pump than in the pulp mill I was talking about. These are huge reciprocating piston pumps, maybe 15-20 of them, and they're pumping a slurry of water and sand, typically from about 100 psi up to 12,000-14,000 psi.
And until we came along about five or six years ago, companies didn't have any data on the health of those pumps, so they were almost completely reactive; they just dealt with the problems. When a pump failed, in those times, somebody actually went out there, which is very dangerous, and people got hurt in these environments. It's now not legal to go into this red zone. But regardless of whether somebody went there or not, they would react when a pump failed, try to shut it down and turn another one on, and continue to operate.
And so, we entered that application about seven years ago initially. We started finding a lot of low-hanging fruit. But basically, when you start to bring all this data off of the pumps, you realize that they have both the immediate failures and these chronic ongoing conditions on [inaudible 34:39], so they refer to it as cavitation in that industry. It's not really the same type of cavitation that people talk about on centrifugal pumps. Not that most people are talking about cavitation at all.
What's really happening on these big hydraulic fracturing pumps is a flow problem. It's a little bit different. But it's not too different from when you have water hammer in your house and the pipes all start to bang and make crazy noises. That's sort of a resonance dynamics issue. On these pumps, that's a symptom, but it's caused by improper flow coming into the pumps and running them at too high a speed for the available pressure and flow coming in.
And then on top of that, they have these chronic wear issues. They have packing that keeps the fluid in around the pistons, and they have these valves that move up and down to intermittently allow flow to go through the pump. All that stuff wears out, and it can be very costly, dangerous, and disruptive. Just to give you some context.
Five or six years ago, when we really got deep into this application and understood it, these pumps were typically only lasting about 300-350 hours of total operation. That would happen over maybe a three-month period. So what we did over time is work with these customers immersively, using the data both remotely and out in the field, to eradicate those issues. You improve the flow coming into the pump, and you monitor the pump in real time so you operate it at the right speed for what it can handle. You monitor the condition of the valves and the seats and the packing. And there's also a big gearbox called the power end driving the pump. You monitor all that stuff, because each pump wears out according to its own history.
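The approach described here, monitoring each pump in real time against rules tied to its inlet conditions and its own wear history, can be sketched as a simple rule-based health check. This is only an illustration of the general idea: the `PumpState`/`check_pump` names and all the thresholds (the speed-vs-inlet relationship, the 1,200-hour wear limit) are hypothetical, not KCF's actual rules.

```python
# Illustrative sketch of a rule-based machine health check: each pump is
# tracked against its own operating history, and the allowable speed depends
# on the available inlet pressure. All thresholds here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PumpState:
    hours_run: float = 0.0                       # cumulative operating hours
    alerts: list = field(default_factory=list)   # accumulated health alerts

def check_pump(state, speed_rpm, inlet_psi, rated_hours=1200.0, hours_delta=1.0):
    """Apply simple health rules for one monitoring interval."""
    state.hours_run += hours_delta
    # Rule 1: speed must match the available inlet pressure and flow.
    # (Hypothetical linear relationship, purely for illustration.)
    max_speed = 50.0 + 2.0 * inlet_psi
    if speed_rpm > max_speed:
        state.alerts.append("overspeed-for-inlet-conditions")
    # Rule 2: wear-based maintenance flag driven by this pump's own history.
    if state.hours_run >= rated_hours:
        state.alerts.append("wear-limit-reached")
    return state.alerts
```

In practice a system like this would pull live sensor data and cover many more failure modes (valves, seats, packing, the power end), but the core pattern is the same: per-asset state plus condition rules evaluated continuously, rather than waiting for a failure.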
And over time, we were able to help our customers. One in particular I can talk about is FTS International; they've talked about it quite publicly. Over time, they've improved the health of those pumps by three to four times, up to 1,200 hours of operation, and they've eliminated 75% of the downtime from these unexpected failures. And in that application, it's not so much about efficiency: the failures are real safety events. So they've decreased their safety incidents in the field by more than 20-fold compared to their peers.
Again, if you just put a sensor on a pump and wait for the failure to catch it at the last moment, you would never do those things. You would never achieve the comprehensive health optimization that has those kinds of outcomes, which have amounted to hundreds of millions of dollars of savings for that particular company.
Erik: You're really taking a systems perspective there. It's not just the pump, but the pump plus the inputs into the pump and the environment around it that you also have to monitor and understand. So you develop the solution for this client; what does the ownership of that look like? There's a set of algorithms and there's data that you're using to develop this. Does this become, for lack of a better word, an app that you can plug into other situations? Or does your client basically say, you've solved this for us and we own this solution? Because I guess each time you do this, you're solving a problem and you're creating somewhat of a blueprint for that solution, which I think is a very valuable thing to be doing.
Jeremy: Definitely. It's a very interesting question, because in addition to cybersecurity, this is one of the obstacles that held up the industrial Internet of Things for a number of years. Of course, there's a large, complex relationship between all the equipment providers and the [inaudible 39:02]. And so who owns the system knowledge is in this constant balance between the two.
The way we've found success with that, in a way that really enables a win-win for everyone but certainly for us, is that the customer owns the data. It's their data. We gather the data and we host it for them, and we get paid to host it for them, but it's their data and they own it, and that's critical. It's not so much that we're scary, but they don't want their data going back to an equipment provider who's selling to all of their competitors. That's one of the things that's been a holdup. But for us, we don't care: it's their data, and we host it for them.
But because we're working for many customers, we do that learning. We develop the knowledge, and we own the intellectual property associated with solving the problem in general. The one I just referred to, we refer to categorically as machine IQ. Machine IQ is basically the knowledge to comprehensively address machine health for an asset. And we're sequentially and systematically applying that methodology to all of the top asset classes in priority order. We've quantified it: we have 73,000 asset-class-years of data, because we've been doing this for a long time. I don't know of any company that has that much contextualized data in these real applications.
And so we're using that knowledge and applying it to these asset classes. But the interesting thing, which gets back to ownership, is that our customers benefit from that, and even the equipment providers benefit, because they have fewer maintenance issues and fewer liability issues from a pump causing a catastrophic problem. But our customer owns the implementation, and that's very important. Because when it becomes proprietary for them, most of these sites are different enough in their use cases that owning the implementation of those algorithms is actually what enables them to control the piece of it that they really wish to control.
Erik: That sounds like a reasonable compromise. It allows you to build up a scalable IP base, but it also gives them the security that they need from a data protection standpoint, and also from a business management standpoint. So you mentioned 73,000 asset-class-years of machine operational data. If you're looking at the different asset classes, and at the different modules that you've built up through your project work over the past 20 years, how many asset classes, or unique frameworks, or whatever terminology you use internally, do you think you've built up over this time?
Jeremy: Well, I don't know the number off the top of my head, but this is something we document and orchestrate comprehensively. In total, it's a couple hundred, but those couple hundred are grouped in categories. So, for example, there are many categories of pumps. The dominant category is centrifugal, rotodynamic pumps, but many of the most important critical pumps are specialty positive displacement pumps, and there are many different types and subtypes of those.
And so just in the category of pumps, there are dozens and dozens of asset categories that are different enough that they have to be analyzed and addressed differently. But they all share a common infrastructure approach. So in total, it's a couple hundred, but these things happen in waves, so it's really a matter of addressing the largest categorical similarities. The vast majority of all of these machines are driven by electric motors, so there's one simple category that is shared by almost all of these applications.
Erik: To take a bit of a business model perspective here, I'm curious about your thoughts on this system-of-systems concept. Your business has very deep domain expertise, and an IP base and database around this particular problem set. Then you have companies like Siemens MindSphere, GE Predix, PTC ThingWorx: these more horizontal platforms that may be trying to do more digital twin type solutions or connect your entire enterprise. Because of that, they're, on the one hand, very comprehensive, but they're often not as deep in particular areas.
I'm curious whether you have any ongoing partnerships with companies like this, or whether they've approached you, and whether you have or have not decided to form those partnerships. What's your thought on this? Because we talk about the Internet of Things, but often it ends up being a lot of separate solutions that might all be quite valuable in themselves, whereas ideally, years in the future, we'd like to see more of a real internet of connected solutions. I imagine a lot of these more horizontal platforms would find a lot of value in your solutions. Do you have any ongoing work in this area? Or is this something you're exploring, these kinds of technology partnerships?
Jeremy: Certainly, all the companies you mentioned, we interface with them regularly. We don't find it necessary to have partnerships with them. What we find is that our customers just need us to cooperate as a friendly ecosystem. That's really how the Internet of Things works, from my point of view. Our company is really a problem-solving company. And if you're able to solve the problem, it just depends on where the problem is, and what data needs to get where, in front of whom, in order for that problem to be solved.
And so certainly, some of those just purely horizontal high level systems, they have their place, and they attack problems that are different than the ones that we attack. And each manufacturer, they basically need solutions to address the problems that they have. But typically, that's a blend of high level things that are horizontal, and then domain-specific deep applications.
But our focus is on just being a problem-solving company at the asset level: comprehensive solutions to address the health of the machinery as a platform, and then enabling the data flows into PTC ThingWorx, or IBM Maximo, for example, or OSIsoft PI [inaudible 46:10], probably the CMMS systems and OSIsoft PI more than anything else, just because that's what's there. The data also sometimes flows to the emerging horizontal big data analytics and pure machine learning and AI type software companies. But generally speaking, it's around the use cases. So we stay quite focused on the use cases, and enable the data flows that allow you to solve the problems that you have.
Erik: That's a good perspective. I suppose strategic partnerships aren't really necessary, maybe useful in some cases, but as long as you're able to integrate, that probably gets the job done?
Jeremy: It's not that they're not necessary. We just haven't found them to be necessary yet, except where they are: we do have alliances with equipment manufacturers. In those cases, it's definitely necessary, especially on larger, more industry-specific applications like paper machines in the forest products industry, or stamping lines in the auto industry, where the domain expertise that exists within the manufacturers of those pieces of equipment is absolutely necessary. So having alliances and interfaces to those companies is definitely something that we do, and it's really vital to solving those types of problems. But it's at the direction of our customer.
Our whole focus is on just solving the problems of our customers. And again, because they own the data, they're in the position to dictate where the data goes. Our job is really to facilitate those data flows that happen outside of our circle, which is just comprehensively addressing the health of the machinery.
Erik: Great. So, just one last question here on the business model. You're selling hardware, software, and services integrated into a solution. But you're also in a position where you have quite good visibility into the status quo, so you can set a benchmark of the reality today and then measure the improvement very accurately, whether that's energy or uptime or whatever that might be. So, is your business model primarily pay-per-project or subscription services? Or is there any kind of at-risk model, where you say, we believe we can make this improvement and we will take 20% of the improvement that we realize for you, based on a benchmark that we agree upon in advance? What does that look like today?
Jeremy: I think in the future that will be prevalent, and it's certainly something we've done and are interested in. But in our experience, most companies so far aren't mature enough in their ability to measure the impact for that to be done effectively. Part of the reason, as in the stories I gave, is that there are typically a lot of root causes behind why these problems exist, and so agreeing contractually on a value share when those problems get solved is complex. But our model's really pretty straightforward.
First of all, the main focus is on software as a service. It's all about those asset-focused solutions that comprehensively address machine health as a platform. That's a software-as-a-service offering that's just an ongoing thing. It's not typically the largest part of the revenue stream, but it's the part that is really the core long-term value delivery. We also sell our hardware. We don't always charge for sensors, depending on the application, but we have highly advanced hardware, so typically selling and deploying the sensors is a big part of our revenue model.
And then the last part is that there's a wide range of maturity at customer sites in regard to their reliability programs, their maintenance teams, even their engineering, and so we're often delivering actual services. One of the things that makes us unique is that we have people who actually go to the factories, help with the installations, train the team, and work with the team. We have a whole thing called the KCF Academy, where we built a whole learning environment to educate reliability professionals, operators, and maintenance personnel about how to properly address all these issues. In that context, we're actually getting paid more for services, but the core is really the ongoing long-term SaaS platform.
Erik: Yeah. Thanks for that rundown. Jeremy, really appreciate the conversation today. I know you've got to run. So the last question from me would be: what's the best way for listeners who are interested in learning more about KCF to reach out to you or your team?
Jeremy: You can find us at kcftech.com, with all the information about the company and the product. But I also have a podcast of my own, where we talk about some of these stories in greater depth, and I actually interview our customers and other thought leaders. It's called the Industrial Transformation podcast. So I would invite listeners to check that out as well.
Erik: Very cool. Okay, I didn't know about that, but I'll have a look as well. Jeremy, we'll put these details into the podcast notes. But again, thanks for taking time out of your morning today.
Jeremy: Wonderful. Thanks so much. I really appreciate speaking with you, Erik.
Erik: Thanks for tuning in to another edition of the Industrial IoT Spotlight. Don't forget to follow us on Twitter at IotoneHQ, and to check out our database of case studies on IoTONE.com. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at email@example.com.