Kevin Clark is the VP of Marketing and Customer Success at Falkonry. A veteran of asset management, experienced as a practitioner and educated as an engineer, Kevin brings more than 30 years of experience to the fields of engineering, maintenance, and predictive analytics. As a longtime advocate in the industrial space, Kevin plays a key role in advancing manufacturing and encouraging new technologies as a thought leader, keynote speaker, and M&R expert. He has served through decades of leadership in the Society for Maintenance & Reliability Professionals (SMRP) and the International Society of Automation (ISA), and as a long-standing member of Purdue University’s Polytechnic Industry Advisory Board (IAB). Kevin recently spoke with Plant Services editor in chief Thomas Wilk about artificial intelligence's impact on the worlds of operations, maintenance, and reliability.
Listen to Kevin Clark on The Tool Belt Podcast
PS: Maybe we can start, for the few listeners who might not have run into you at conferences, with all your work with Fluke and Falkonry. Tell us a little bit about the job you're working in right now, and some of the projects you're working on.
KC: I came on board with Falkonry. I spent a number of years with Fortive, with Fluke and Accruent, and I spent most of that time on the strategy side and in product management / product marketing, taking those products to market, finding good, useful places for them, and making them as practical as we possibly could. So I've had a relationship with Falkonry for a while: Fortive invested in Falkonry about eight years ago, and I was the point of contact with Falkonry. Through those years we've done panel sessions together, we've done product collaboration together, we've done the “What Ifs?” of AI inside of our products at Fluke or Accruent, and so we've done a lot of things back and forth.
More recently, Nikunj Mehta, the founder of Falkonry, came back to me and asked me if I was finally ready to come over to Falkonry and get back into the startup space. And, of course, it sounded intriguing, and the longer we talked, the more it made sense. So I've come over, and I've taken over AI deployments and customer support, working on some of the innovations with our customers. I also lead the marketing group, and there's a big tie between customer success and marketing in organizations: in how they present themselves to the market and, at the same time, how they perform with their customers day in and day out.
PS: Thank you for covering both those sides of your current position, because I think you're really well positioned to talk about the practical applications of technology that you see in the field while also understanding the wider industry roadmap for these technologies. You and I caught up just a week ago now at the Reliable Plant show, and figured it was time for us to talk about what AI looks like in industry right now, and how companies are applying artificial intelligence. Ever since ChatGPT came along, you can't get away from AI in the news and in discussions.
KC: ChatGPT is one of those one-in-a-million kind of opportunities. For us it's a love-hate relationship. We love the fact that it gave AI exposure to the more common population, who really didn't know much about AI until ChatGPT introduced it to them and got them right into the middle of it. Now they understand the power of AI; they understand the power of what's been underneath the internet for decades now. And once they understood that, now they're talking to Falkonry and trying to understand, well, how does it work? They're much more knowledgeable today than they were just six months ago. They're asking harder questions and more interesting questions.
The problem is, in many cases, their expectations are super inflated, and bringing that back down to a more practical level has been kind of hard: helping them understand what it means inside of an asset management world, and what we actually do with the technology inside of asset management. While it's been great that ChatGPT brought so much exposure to AI, it's also been a bit challenging to calm the waters.
PS: That's an excellent point, in that generative AI like ChatGPT is good for certain applications, but we're not really talking about that flavor of AI when it comes to what's happening in asset management. And that was my first question for you: when you think through how you're seeing artificial intelligence applied to asset management and process monitoring right now, what are the one, two, three challenges or problems that you see AI helping plants solve right now in August 2023?
KC: Some of the things that I see out there, Tom, are things that we've taken for granted over the years. I personally have been in predictive for a long time, and I can claim it; a lot of the people who might hear this would probably say, “yeah, Kevin did fail at that.” But that's been the challenge over the last 20 to 30 years: how do we take RCM and TPM, the really sound methodologies that we utilize inside of asset management, and turn that into something digital? We have done a number of things that have made it better in the predictive maintenance world. But we've also done a lot of things that separated us.
One of the biggest challenges we have today is that our operations data is separate from our predictive data. And I see it everywhere I go, everywhere. We've done that, we separated it, because the technologies were somewhat separate, the business units were somewhat separate. But we didn't want to mix it in with the rules and regulations of operational data. (And in fact, operational data really didn't want our asset data, our condition data inside of their MES systems and process monitoring systems.)
The separation made sense because of evolution. But what doesn't make sense is that [asset] data is as important to operations as operations data is to asset data. So what we've been advocating for is that we begin to bring that operations data together with predictive data. We tend to look at data that's continuous, and that's mostly your operations data. Some of that continuous data is your predictive data; it might be coming directly from sensors, it might be temperature, it might be vibration, it could be some ultrasound. But sometimes it's just a moment, right? Like maybe it's a vibration test, but it is time series.
And so you know the time of it, and you know what the result was, and if you take that, and you drop it right into the middle of continuous process data, it's really interesting. I don't know if you've seen it before. But when you see those signals come together, and you see the performance, and then you see where the failures are in the AI data, and then you also see the predictive data coming in showing a very similar response to that potential failure, it gets really interesting. If I just have operational data, it's good. If I have operational and predictive data, it's awesome.
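[Editor's note: as a rough illustration of the overlay Kevin describes, the sketch below drops a time-stamped vibration test into a continuous operations stream so both land on one timeline. It uses pandas; the signal names, timestamps, and values are hypothetical, and this is not a description of Falkonry's own pipeline.]

```python
# Minimal sketch: aligning a discrete predictive reading (e.g., a periodic
# vibration test) with continuous operations data on a shared timeline.
import pandas as pd

# Continuous process data, sampled every minute (hypothetical values)
ops = pd.DataFrame({
    "timestamp": pd.date_range("2023-08-01", periods=6, freq="1min"),
    "motor_temp_c": [61.0, 61.2, 63.5, 67.9, 71.3, 74.8],
})

# Occasional vibration tests, recorded as time-stamped point readings
tests = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-08-01 00:02:30"]),
    "vibration_mm_s": [7.1],
})

# merge_asof lines each process sample up with the most recent test
# reading, so both signals can be viewed or modeled as one time series
combined = pd.merge_asof(ops, tests, on="timestamp", direction="backward")
print(combined)
```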
PS: Interesting. If I hear you right, what we're looking at is quicker anomaly detection, or quicker anomaly verification.
KC: I would go with detection. Obviously, verification is important, but I would go with anomaly detection, which is what we call it on a regular basis.
Anomaly detection is, to me, way more interesting than predictive data. Anomaly detection leads me to predictive faster and more accurately than what I would get from a single test from a vibration sensor. It's like, I only check my heartbeat once in a while, versus I check my heartbeat all the time, I'm connected all the time. That's the difference, right? So if I'm able to monitor through AI, which is learning what normal looks like, it's always watching for normal. And when it sees normal, it gives you a nice color chart, a heat map, that makes you feel good, right? When it sees things that are abnormal, it raises the flag, and you see the different colors inside of that heat map, and those colors indicate that something is off.
Now, in fact, it might not lead to a failure, but something's different, and we need to understand what that difference is. We don't always get that in predictive data, because we put a sensor here, we put a sensor over there, we take pictures every now and then, maybe we take some vibration tests, and it's a little bit of luck, right, that we're going to hit just the right time. So I'm a big advocate for getting that predictive data that we've got, plus that operational data that's monitoring always, and letting the AI decide if we're starting to move into something that looks different. I like to use the word abnormal: not necessarily unusual, but sometimes it's unusual.
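[Editor's note: for readers who want a concrete feel for "learning what normal looks like," here is a minimal sketch that flags samples drifting away from a learned baseline using a rolling z-score. Production systems, Falkonry's included, use far richer multivariate models; the window size, threshold, and synthetic data here are hypothetical.]

```python
# Minimal sketch of anomaly detection against a learned "normal" baseline.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
signal = pd.Series(np.concatenate([
    rng.normal(70.0, 0.5, 500),   # normal operation
    rng.normal(74.0, 0.5, 50),    # abnormal drift begins
]))

# Learn the recent baseline from a trailing window (shifted so each
# sample is judged only against data that came before it)
window = 100
baseline_mean = signal.rolling(window).mean().shift(1)
baseline_std = signal.rolling(window).std().shift(1)
z_score = (signal - baseline_mean) / baseline_std

# Anything beyond 4 sigma from the learned baseline gets flagged
anomalies = z_score.abs() > 4.0
print(f"{anomalies.sum()} anomalous samples flagged")
```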
PS: Where would you put this ability to parse that much data on the maturity curve for the technology? I know ChatGPT brought AI into the popular consciousness. Are we looking at technologies here that have been able to do this for the past, say, 18 months, or for the past three to four years? Or are we looking at some innovations on the monitoring and anomaly detection side?
KC: Innovations. Anomaly detection has been around for a long time, and so have pattern recognition and building models and things of that sort. But anomaly detection has had some innovations that have allowed it to really move quickly. Most of that has more to do with building the right user interfaces, the right reporting mechanisms, and the right notification mechanisms to really understand what's important.
Now, one of the things that's really creative that's coming is taking anomaly detections and being able to think about them through the idea of a criticality assessment and an FMECA. Most of us on the reliability side of the business understand that terminology; it's kind of the core of what we do inside of a facility that deploys RCM and TPM. It's very hands-on; in fact, it's even got some gut-feeling kind of data inside of it. But what we're seeing with anomaly detection is that we can make an association between what we identify inside of our FMECA and the criticality of a particular asset, all the way down to the sub-components; the signals coming in are actually extensions of the FMECA.
We can clearly identify the signals that are associated back to a particular failure mode. It starts to bring anomaly detection to life. It's not only coming back and telling you, “I'm beginning to fail in this particular area of the asset,” it's also going to tell you what the failure could be. Maybe there are three signals that are showing a yellow, basically, and those three signals together usually mean something, and we can label that. That's what I really like about the technology that's coming along: it's starting to look and sound like the reliability language we're used to speaking.
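[Editor's note: a minimal sketch of the signal-to-failure-mode association Kevin describes. Each FMECA entry lists the signals it tends to disturb, and a detection is labeled by the best-matching modes. The failure modes, signal names, and criticality numbers below are hypothetical, purely for illustration.]

```python
# Minimal sketch: tying anomaly detections back to FMECA failure modes.
# Each mode carries a hypothetical signal signature and criticality (1-10).
FMECA = {
    "bearing_wear":      {"signals": {"vibration", "bearing_temp"}, "criticality": 9},
    "coupling_misalign": {"signals": {"vibration", "motor_current"}, "criticality": 6},
    "lubrication_loss":  {"signals": {"bearing_temp", "oil_pressure"}, "criticality": 8},
}

def label_anomaly(anomalous_signals: set[str]) -> list[tuple[str, int]]:
    """Rank failure modes by how many of their signature signals fired,
    with criticality as the tie-breaker."""
    matches = []
    for mode, entry in FMECA.items():
        overlap = len(entry["signals"] & anomalous_signals)
        if overlap:
            matches.append((mode, entry["criticality"], overlap))
    matches.sort(key=lambda m: (m[2], m[1]), reverse=True)
    return [(mode, crit) for mode, crit, _ in matches]

# Three signals "showing yellow" at once, as in Kevin's example
print(label_anomaly({"vibration", "bearing_temp", "motor_current"}))
```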
PS: That leads to my next question: when it comes to the plants that you've worked with to deploy these solutions, I'm always curious about which roles are in the best position to drive these projects. I've heard, for example, that maintenance is tied up doing very task-oriented work, and IT will do the work if the project is approved by a champion. So that leaves the Director of Operations, the Director of Reliability, or the Reliability Engineer to drive these things, even if those roles don't have prior experience with AI. Is that your experience too, that it falls on operations and reliability to educate themselves and push this forward?
KC: Yeah, absolutely, it does. It's the argument I'm making right now with a lot of organizations, and inside of my own organization, as we watch the culture begin to shift towards a data-driven culture. But if you go into most organizations, there's not a role that has it in their job description to take two or three hours a day and go analyze anomalies. Nobody has that in their job description.
Whenever we bring AI into a plant there's a lot of excitement around it; they want to be able to see all that data. But once that data is there and ready for them to analyze, ready for them to maybe act on, most organizations aren't necessarily prepared for that. And it's a challenge. It's interesting watching organizations get prepared for it, but it is a challenge every single time. It's that transformation of moving from, “okay, I've got some data, and I generally make some decisions off this data, or my CMMS data, or data out of my ERP.” But when AI comes in and says, “this is live data, this is what's happening right now on your asset, it's beginning to fail, or it's showing some abnormal signs,” most organizations aren't prepared to act, and we're seeing it over and over. The cool thing is you're also watching organizations make the shift, painfully sometimes, but they're making the shift, because the data is so interesting and so telling about how they're moving towards an optimal run, or maybe not so optimal.
PS: I find your comment about people being unprepared for that moment really fascinating. Is this a case where the processes themselves might not be structured to respond to this kind of data? Is it more due to a reactive culture being turned into a more proactive culture? A little bit of a mix?
KC: It's a mix, yeah, there's no doubt it's a mix. You can go into pretty mature facilities, and still get a very reactive response to AI telling them that their process is going south. That's a pretty telling moment, even when you walk into that mature facility, that (1) it's either telling them something that they probably already knew, but they could never prove it, never confirm it; or (2) it's telling them something completely new, and they have no idea why that's happening.
It's a vast range of responses from people. Because the one thing I would want if AI was coming into my plan, I'd want it to confirm all the opinions that I have, which is generally a lot of opinions. And I think that's what most maintenance techs think when they bring AI in: is this going to just prove that I was right? Well, often it does, but there's also many times that it proves them wrong. And that's an interesting challenge in and of itself, is when our assumptions are proven wrong, because the data doesn't back it up. And then you get the opportunity to go dig in and figure out “okay, I was half right, but this extra piece of information made it a really interesting analysis.”
PS: That can be a tough moment for anyone. I mean, I would hate it if I looked at Google Analytics one day and it told me I was only half right about the kind of content that we're developing. But then you've got to get over your ego and be nimble enough to shift over and act on that data.
KC: The other thing, too, is that AI doesn't know truth. I think that's a point that everybody needs to understand: AI only knows what you feed it. AI only understands what normally happens. So it's looking for the things that are different, but it can't tell you whether the different thing is truth or not. It just can't do that. It can tell you that something is actually happening, but often it can't tell you if it's true. Does that make sense?
PS: It does. I was going to ask a final question about a customer case study that you can think of. Let me preface that by saying years ago we did a case study article with Falkonry and an ore refinery in Wyoming, where they were having trouble finding out why certain parts of the crushing process were shutting down. It turned out that there was a moment in the crushing and sifting process where it wasn't sifting the particles out finely enough, and some of Falkonry's algorithms successfully overlaid time series data on top of the operational data to figure out what that problem was. It saved them a lot of downtime. I don't think it was (conveyor) belt tightening, but it was making sure that the ore was moving through the sifting process efficiently enough. What are one or two examples that you've run into where AI has either solved a problem like that, or pointed out something that a plant hadn't seen and they had a sort of “aha” course-correction moment?
KC: I'll give you a couple of them, and these are ones I like to refer to because, you know, I'm a reliability engineer at heart, but I'm also a manufacturing guy from many moons ago. One of the things I love is not just the fact that we can identify when an asset has something going on that we can't explain; it might lead to a failure and it might not, it might lead to a delay, it might lead to some other things, and one of those other things is quality.
We've seen it over and over, and it's difficult to capture; you have to have the right data, that's a key point here. If you're monitoring the process and you have X amount of data, but you really needed X and Y data to do a full monitoring of it, the X data will give you enough, but maybe the X and Y would give you all of it, to really be able to make some judgment calls based on what the AI is seeing, what it's learning, what it's identifying as an anomaly.
Quality is one of those things where, if you have the right data, you can not only identify whether the process is running optimally or not, but based on what it's learned and what it's seen, at the end of it, it can tell you whether it's a good product or not. I think that's been the most telling for some of our clients, that it was kind of an unexpected gain: to not only understand whether my asset is going to fail or not, but also to understand if my product is good or not, as an added benefit.
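[Editor's note: a minimal sketch of the "added benefit" Kevin describes: once enough runs have been observed, the same process signals can be used to predict whether the product at the end of a run is good. The features, thresholds, and synthetic data below are hypothetical illustrations, not Falkonry's model.]

```python
# Minimal sketch: predicting product quality from per-run process features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_runs = 400

# Hypothetical per-run features: mean furnace temp and peak vibration
temp = rng.normal(850.0, 15.0, n_runs)
vib = rng.normal(4.0, 1.0, n_runs)
X = np.column_stack([temp, vib])

# Synthetic ground truth: runs that ran hot and rough produce bad product
good = ((temp < 870.0) & (vib < 5.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, good, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```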
PS: Interesting, were those cases in pharma specifically or food, or sort of across the board?
KC: This was the metals industry. So, I mean, once the AI learns the process, and again this is dependent upon whether you've got enough data coming in, but once it learns the process and it identifies what a good run for the product is, it can also help identify whether it was a good product at the other end. I think that's been one of the most interesting things for me: to see not only performance, but quality.
PS: That's fascinating. This reminds me of a case study that was given for a mine out west, where they had put vibration sensors on their fleet of Caterpillar ore haulers in order to identify how well the machines were performing. Turns out that once they had tightened down the machines and got them to perform optimally, the sensors were picking up not flaws in the machines, but flaws in the road leading out of the mine. And the bigger savings was the secondary benefit of filling the potholes in the road that the sensors were picking up. As you just said, you want to find the fault in the asset, but there's also the surprise benefit: improved quality, or better batch control. And in this case, the mine operator said that they had more savings from increasing throughput than they did from reducing the capital expense of buying new ore haulers.
KC: Yeah, and the other side of that is, if you are able to input the types of materials, the batches of materials that are going through, your AI will be that much smarter, because it'll be able to go back and associate a good run with the actual material numbers themselves. So you think of recalls and other things, especially in the life sciences industry, like orthopedics or bio-meds of some type: to be able to say that that run went through, and I can now identify all those components. We could do it the hard way before, with the data that was there, but it was just super hard, and there's no learning there; you'd have to just go do it as a query. But the (AI) learning would begin to tell you what materials perform better than other materials, and which materials are causing more process problems. That association and that learning that the AI is doing doesn't go away; it just gets better and better.
That's the thing about anomaly detection: the longer it runs, the more it learns from the data that's flowing through, and the number of anomalies tends to get smaller, because now the anomalies are really the problem causers, the things that you need to pay attention to. When they first turn it on, you can see hundreds, possibly thousands of anomalies a day until you get it tamed down and it's learned, it understands, it's got feedback. Once that happens, then when you see anomalies, you'd better be paying attention, because they're real.
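[Editor's note: a minimal sketch of the batch-to-outcome association Kevin mentions above: group historical runs by material batch and compare anomaly rates and product outcomes, so problem materials surface from the data rather than from one-off manual queries. The batch IDs and numbers are hypothetical.]

```python
# Minimal sketch: surfacing problem material batches from run history.
import pandas as pd

runs = pd.DataFrame({
    "material_batch": ["A17", "A17", "B02", "B02", "B02", "C44"],
    "anomaly_count":  [1, 0, 7, 9, 6, 2],
    "good_product":   [True, True, False, False, True, True],
})

# Per-batch anomaly rate and fraction of good product
by_batch = runs.groupby("material_batch").agg(
    mean_anomalies=("anomaly_count", "mean"),
    pct_good=("good_product", "mean"),
)
print(by_batch.sort_values("mean_anomalies", ascending=False))
```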
PS: And then as you said, be open to the fact that you're only half right, and then take action to fix the other half.
KC: Right, and that's a hard thing for all of us, especially those of us who have been around for 30-plus years; you know, we think we've been there and done that, and we have a pretty good grip on it. But AI will teach you some things. The thing you need to really understand about AI, though, is that you need to teach AI, because AI can be very biased; if your data is bad, your AI is probably going to be bad.
PS: As you said, AI does not know truth.
KC: It does not know truth, right; it only knows what you teach it. Especially generative AI, that's the way it works: whatever you teach it, that's what it knows. That is its truth. Whether it's true or not is irrelevant. That's its truth, because that's what you taught it.
PS: Well, I think when we post this podcast, Kevin, I'm going to find a picture from the “We Are the World” sessions from the 1980s, where Quincy Jones had a big sign saying “leave your ego at the door.”
KC: Yeah! And really, that's what you need to do if you want to build an AI system that works and works well. You need to be able to teach it things that are relevant and things that are helpful.