Mike Reed is the Manager of AVEVA’s AI Center of Excellence. He has over 25 years of experience in the engineering and operations of electrical power plants, both in the U.S. and globally, and is experienced in mechanical engineering design, construction, startup, and operations. He earned a B.S. in Mechanical Engineering from the U.S. Military Academy at West Point and an M.S. in Systems Management from the University of Southern California, and is a U.S. Army veteran. Plant Services Editor-in-Chief Thomas Wilk spoke with Mike about how AI and machine learning are going to transform asset performance management.
PS: For those of our audience who haven't met you, could you tell us about yourself and the things you're working on now?
MR: Sure. I'm a degreed engineer by training, a mechanical engineer. I've been running the AVEVA AI Center of Excellence since its inception, including its earlier incarnation as AVEVA's Monitoring and Diagnostics Services Center.
My background comes from the real-world side of things. I'm an engineer. I've had experience in power plants in design, construction, startup, and operations, and I've held positions of operator, ops manager, and maintenance manager. Part of our goal is to translate what comes out of the software for the end user, and to help that end user interpret what they're seeing and get the use out of it that they desire. And we're very much involved in that. My team is made up of other engineers with similar backgrounds, all focused on that goal in asset performance management and in other areas of our AI and software.
PS: That will resonate with our audience, because a lot of them probably have a similar story: they've moved between operations and maintenance, moved into management, and seen that whole side of the plant work. Given that our topic today is how AI and machine learning are going to transform asset performance management, it's kind of a big topic. In your opinion, what are most plants doing right these days when it comes to asset management, even before factoring in AI and machine learning?
MR: Over the past couple of decades, most plants have fully adopted the concept of taking all the big data they're gathering and putting it into some sort of historian, OSIsoft PI for example. That allows them to collect all this real-world data about their equipment and start gathering insights using straight historian analytical tools. That part has really been accepted as canon within the industry.
Now over the last decade or so, you've also added on top of this machine-learning-based predictive analytics solutions and also condition-based management solutions. Condition-based monitoring looks for certain thresholds in operating conditions to trigger actions: “If I'm above a certain pressure, flag this, send something out, and we'll have to take a look at that.”
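To make that condition-based idea concrete, here is a minimal sketch in Python. The tag name, pressure limit, and notify() hook are hypothetical placeholders, not taken from any specific AVEVA product:

```python
# Minimal sketch of a condition-based check: compare a live reading
# against a fixed threshold and raise a flag when it's exceeded.
# The tag name, limit, and notify() hook are hypothetical placeholders.

DISCHARGE_PRESSURE_LIMIT_PSI = 150.0  # hypothetical threshold

def notify(tag: str, message: str) -> None:
    # In practice this might create a work notification in the CMMS;
    # here it just prints.
    print(f"[ALERT] {tag}: {message}")

def check_condition(tag: str, reading_psi: float) -> None:
    """Flag the asset for review when pressure exceeds the set threshold."""
    if reading_psi > DISCHARGE_PRESSURE_LIMIT_PSI:
        notify(tag, f"pressure {reading_psi:.1f} psi exceeds the "
                    f"{DISCHARGE_PRESSURE_LIMIT_PSI:.1f} psi limit")

check_condition("PUMP-101.DischargePressure", 162.4)
```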
Now predictive analytics takes it one step further. It looks at previous operations and, through machine learning, learns what defines good behavior in order to identify where the plant should be across multiple operating zones, then compares real-world conditions against that in a dynamic evaluation. That can give the end user a real-world indication that we may need to intervene. Plant operations have a really good handle on the base types of maintenance. You know, your standard reactive maintenance: something fails, we fix it, right? Everybody at least has that part.
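By way of contrast with the fixed-threshold check above, here is a toy sketch of the predictive-analytics idea Mike describes: learn expected behavior from known-good history, then flag live readings that drift from it. The data, the simple linear model, and the tolerance are all illustrative assumptions:

```python
# Toy sketch of predictive analytics: learn expected behavior from
# known-good history, then flag live readings that drift from it.
# Data, model form, and tolerance are illustrative assumptions only.
import numpy as np

# Hypothetical known-good history: (load %, bearing temp degC)
load_hist = np.array([40, 55, 60, 75, 80, 90], dtype=float)
temp_hist = np.array([58, 63, 65, 70, 72, 76], dtype=float)

# Fit a simple linear "expected temperature vs. load" baseline (least squares).
slope, intercept = np.polyfit(load_hist, temp_hist, deg=1)

def expected_temp(load_pct: float) -> float:
    return slope * load_pct + intercept

def check_deviation(load_pct: float, temp_now: float, tol_degc: float = 3.0) -> None:
    """Compare the live reading against the learned baseline."""
    residual = temp_now - expected_temp(load_pct)
    if residual > tol_degc:
        print(f"Early warning: {residual:.1f} degC above expected at {load_pct:.0f}% load")

check_deviation(70.0, 74.5)  # hotter than the baseline predicts -> flagged
```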
But then going into preventive maintenance, you take those checks and services that have been recommended by the OEM or by good practice for regular checkups on the equipment, with condition-based monitoring on top of that, looking for those thresholds I mentioned. Once all of those are internalized, that next level is predictive maintenance: looking at conditions in their incipient stages to understand, “If I can take action when I'm made aware of this, can I avoid a larger failure or an unplanned outage? And can I be a little bit more efficient in that?”
So that's where we see most of the power market in the United States, for example, and also the oil and gas market. They've adopted historians, they've adopted condition-based management programs in their computerized maintenance management systems, and they've adopted some form of predictive analytics. So now you're looking at layering on that next generation: how can we turbocharge predictive analytics to gain further insights, leveraging all that knowledge that's out there and putting it in the hands of the end user who's going to be making those decisions, which could be risk-based? How's that?
PS: That's great, and I like how you mentioned some of the verticals that are adopting these kinds of technologies right now. For my next question: we've got a lot of folks listening who look over the fence at what the Joneses are doing, to see what the next plant over is doing with this kind of technology. What percentage of plants out there right now would you say are really engaged with this? And does that mean they're doing projects, or does it mean they're looking closely at it? What's your sense?
MR: My feeling there is that all your larger utilities have some form of this program in place right now, and, to a varying degree, your independent power producers are also adopting it, whether they've been spun off from a large producer or they pick it up when they get acquired by an asset management firm that wants to get a good feel for what it owns. It's a combination of self-performing with the tools that are out there and using software-as-a-service or monitoring-as-a-service to augment the software. We've seen it all around; if I threw a number out there, 70% or so of the market is probably actively using this in some form or another, and the other 30% is down that path, at least to the historian and probably going a little bit further.
PS: It's interesting that I see more and more presentations at events such as the SMRP Annual Convention where people are presenting case studies on what they're doing with machine learning. Of course, at PI World, the two times I went to San Francisco for that conference, there were a lot more case studies of people using these techniques and technologies. So, the products you're talking about, they're beyond the pilot stage, right? The people are...they're achieving results, they know what the ROI is for their effort?
MR: Pretty much. EPRI had a paper a while back that explained that if you could identify issues, and then do a risk analysis on those issues happening versus what the actual cost would be if each one of those scenarios was taken to the end, you could develop an ROI. For example, say I have a bearing vibration that's running high. If I go in and take a look at it, I can compare the cost of scheduling a maintenance outage and the repair against what it would cost if it develops into a greater problem and becomes an actual catastrophic failure: those repair costs, your lost-opportunity costs, and so on. That's all common knowledge typically within the industry. So you can show the value of finding those issues in their early stages as opposed to waiting for them to find you.
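To make that risk math concrete, here is a back-of-the-envelope sketch along the lines of the framing Mike describes. Every figure below is invented for illustration:

```python
# Back-of-the-envelope ROI sketch: early intervention vs. run-to-failure.
# All dollar figures and the failure probability are invented.
planned_repair_cost = 25_000          # scheduled bearing replacement
catastrophic_repair_cost = 400_000    # rotor damage after bearing failure
lost_opportunity_cost = 150_000       # unplanned outage, lost generation
p_failure_if_ignored = 0.30           # estimated chance it runs to failure

expected_cost_if_ignored = p_failure_if_ignored * (
    catastrophic_repair_cost + lost_opportunity_cost)
expected_value_of_early_fix = expected_cost_if_ignored - planned_repair_cost
print(f"Expected value of acting early: ${expected_value_of_early_fix:,.0f}")
# -> Expected value of acting early: $140,000
```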
PS: My understanding too is that AI and machine learning currently are more strategic technologies, in the sense that they aren't deployed on every single asset. They're deployed more on assets that are considered critical, or at least on an asset-specific basis. Is that your understanding too? Should I update my thinking on that?
MR: Well, surprisingly, it does cover all the assets, but you do see them target those big returns on investment. So, your big prime movers: your gas turbines, your steam turbines, your big pumps, your big heat exchangers, the condenser. Those kinds of things are easy to show that value for, and they're also ones that have a lot of historical data behind them, so we know those modes of failure, and we know the effects of not taking those actions ahead of time.
So, if you're looking for that quick return on investment, yes, those are the biggest places where you're going to be able to focus it. But now we're starting to branch out a little bit further than that, and we're getting to that next layer of insight and knowledge by taking the learnings on top of what we've been seeing with straight predictive analytics and starting to work into prognostics. It's not only important to know that something's going to fail, but to know when something is likely to fail. Now I want to know, "All right, what's happening to me? What are the likely scenarios, and how long do I really have? What's my event horizon to make a decision before it's made for me?"
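One simple way to picture that event horizon is to extrapolate a degrading signal forward to the point where it crosses an action limit. A toy sketch, with invented numbers and a deliberately naive linear trend:

```python
# Toy prognostics sketch: extrapolate a degradation trend to estimate how
# long remains before an action limit is crossed. Numbers are invented,
# and a real prognostic model would be far more sophisticated than a line.
import numpy as np

days = np.array([0, 7, 14, 21, 28], dtype=float)
vibration_mm_s = np.array([2.1, 2.4, 2.8, 3.1, 3.5], dtype=float)
ACTION_LIMIT = 4.5  # hypothetical mm/s action limit

rate, _ = np.polyfit(days, vibration_mm_s, deg=1)  # mm/s per day
days_to_limit = (ACTION_LIMIT - vibration_mm_s[-1]) / rate
print(f"Roughly {days_to_limit:.0f} days before the action limit is crossed")
# -> Roughly 20 days before the action limit is crossed
```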
And so, as you continue to move further down that path, that's where you're going to start leveraging more and more of AI. In its simplest case, machine learning is a form of AI, but now we want to start adding on other factors outside the realm of exactly what we're looking at, getting other insights outside that normal scope of vision of the end user, and that will allow us to provide that much more insight to that person. Not only to the person who might be experienced in how to make the decisions; you're also capturing knowledge from other experiences to provide to that person.
And so remaining useful life is one big thing, with prognostics saying, "All right, if we see this kind of scenario happening, is it a bearing failure? Is it an alignment issue? Is it something else? What kind of tasks should I take for the next level of the investigation?" Those kinds of things can be turbocharged by AI to help bring that to the fore, right in front of the end user.
PS: If I can ask a follow-on question about users looking at their assets with these technologies: is it useful for users to have a library of other assets to compare their asset's performance against, or are these technologies useful just for focusing on a single asset, using the data from the historian, like you said, to go back in time and zero in on that one asset? Or are these two things that might work together?
MR: They work together. They do work together. Obviously, the most germane to your equipment is going to be how your equipment's running. You know, if you buy a Ford Mustang off the lot, they all have certain specs and lines. But yours runs differently from the next model in line that somebody else bought; once you drive them off the lot, they have different behaviors and different histories. So yes, I should expect that they all have certain commonalities. That data is more what you're talking about with the libraries: understanding that we have certain modes of failure and operating conditions around that model, and then specifically augmenting it with your own specific equipment. It's the same concept.
If I install a gas turbine at sea level and I have one that's operating up in Denver, they're going to have some different behavioral capabilities and characteristics. Same thing if I'm operating things in parallel or operating things by themselves. The more knowledge the models have around it, the better they can help with the insights.
Now, what AI can help you do is organize that vast amount of knowledge around something, and also see some of those factors that interact in ways you may or may not realize they do, right?
PS: I'm imagining a comparison of a truck being driven off a lot in California versus a truck being driven off a lot in Buffalo, and the ability to compare those data sets immediately just to evaluate performance over time. You know one's going to be challenged more by the elements than the other. It's just a matter of figuring out what's applicable to where your asset is sitting.
MR: Right, and also, many times you say, "Well, look. This guy here, he's running a little bit hotter than the other one. But it always runs hotter." Right? Deciding whether to work on a piece of equipment based on fixed metrics is one thing. But if I can actually index the metrics to that one that's always running hotter, if I index them to its normal conditions, then when it's running hotter than that, I know I've actually got an issue.
That's where indexing the historical data to how that piece of equipment has behaved helps eliminate some of that, I don't want to call it flippancy, but that's typically what happens: we as humans can discount something just because it's always done that or it's always behaved this way. What we want to do is present this to you in context, so that you're making informed decisions. It doesn't assume that the end user doesn't know anything about the equipment; what it's meant to do is augment, organize, and present it back to them in a way that they can make use of. If we think about it, the single biggest problem, if we would call it that, of having so much data is having so much data. Right? I don't know where to start or what to do with it.
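A minimal sketch of that "index it to its own normal" idea: score each reading against the machine's own history rather than a fleet-wide fixed limit. All numbers are invented for illustration:

```python
# Sketch of indexing a reading to a machine's own history: a unit that
# "always runs hot" is only flagged when it exceeds ITS normal, not a
# fleet-wide fixed limit. All numbers are invented.
import statistics

def indexed_score(history: list[float], reading: float) -> float:
    """How many standard deviations the reading sits above this unit's norm."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return (reading - mu) / sigma

hot_runner_history = [92.0, 93.5, 91.8, 92.7, 93.1]  # always runs ~93 degC
reading = 93.4

# Against a fixed fleet limit of, say, 90 degC this unit is "always alarming";
# indexed to its own history it sits about one sigma above its norm, which is
# unremarkable rather than actionable.
print(f"Indexed score: {indexed_score(hot_runner_history, reading):+.1f} sigma")
```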
PS: Yeah.
MR: I go back 20, 30 years to when I was an operator. A little after midnight, the little dot-matrix printer would spit out a ton of readings and things like that, and you'd go and tear it off and slip it down and put it in the book. You never really did much with it, right? Well, if we don't do something with this data, we're generating a bunch of data for nothing. So the computers enable us to at least do something with the data and then present it in a form that somebody can actually use, so it means something to them, right?
PS: Well, let me ask you a final question which follows up on your point about a lot of data here. There are listeners who are pretty well versed with the kinds of different condition monitoring technologies commonly in use – vibration, infrared thermography, ultrasound. In your experience, are any of those technologies a good complement for AI and machine learning in the sense that they might generate a lot of data, or is it that applications for machine learning and AI are limited only by the technician's imagination really?
MR: Some of that is how much is available to us that we haven't even tapped yet. Straight numerical data that we're bringing in through vibration sensors, or through the sensors that are out there on the machines, is well understood. But as you mentioned, thermography: we can take a look at thermography in multiple ways. We can convert those wavelengths of the thermography into numbers and then crunch them and come up with insights, or we can look at them in a straight visual way, looking at the different color deviations and looking for patterns of that deviation on a visual screen. That's more of the imagination side, but it's also imagination that has been tested and is being perfected.
We can also take a look at different tools in our belt that can work together through synergy. Predictive analytics, predictive reliability-centered numbers, is one thing we're looking at for pieces of equipment. But we also have the concept of performance analytics around that: how well is something performing? Can we merge these two? Can we get a synergy by taking the predictive analytic and the performance analytic and feeding one into the other, or having them work together and provide a combined insight back out? That is where AI is now pushing even further. Not only are we getting the raw insights from the raw data itself, we're now having those KPIs feed into a deeper understanding and a higher dynamic range for the models we can build in order to get that feedback.
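As a rough illustration of that fusion, here is a hypothetical sketch that blends a predictive anomaly score with a performance shortfall into a single priority. The weights, scales, and formula are invented; the point is the combination, not the specific math:

```python
# Sketch of combining two analytic streams into one insight: a predictive
# anomaly score and a performance (efficiency) KPI. Weights, scales, and
# numbers are invented; the fusion is the idea being illustrated.
def combined_priority(anomaly_score: float, efficiency_shortfall_pct: float,
                      w_anomaly: float = 0.6, w_perf: float = 0.4) -> float:
    """Blend reliability and performance signals into one 0-1 priority."""
    # Normalize each input to a rough 0-1 range before weighting.
    a = min(anomaly_score / 5.0, 1.0)              # sigma-style anomaly score
    p = min(efficiency_shortfall_pct / 10.0, 1.0)  # % below expected performance
    return w_anomaly * a + w_perf * p

# A unit drifting mechanically AND underperforming rises to the top of the list.
print(f"Priority: {combined_priority(anomaly_score=3.2, efficiency_shortfall_pct=4.0):.2f}")
# -> Priority: 0.54
```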
And then we connect it with the historian visualization that may be looking at that, and feed it back through. We've talked about PI, for example; you have a PI Vision screen. We can take predictive analytics outputs and performance analytics outputs and feed them back in, along with the analytics directly from the historian, and put them on one particular HMI-type screen to make them actually accessible for somebody to see.
That's one of the things: with all this data, all these insights, in the end we need to consider how we're going to use them in the field. How can I make sure to provide actionable insights and good-value data in a way that does not overload somebody who is already probably overloaded with their day-to-day tasks? We want to bring something that's pertinent to them, that they can see: "Hey, this was under my radar. This was outside of my screen, my vision of what I was looking at right now."
You know, you can only look at so much as one human being. I always like to use the analogy of the "Lord of the Rings", you know, the guy's looking for the ring everywhere and two little guys sneak up and chuck it in the volcano behind him. Well, if he could've actually seen them, he could've squashed that threat right away. Same concept with issues that are going to try to break your plant. If you can see it, you can squish it.