Podcast: Best practices for implementing AI agents in manufacturing workflows
Key takeaways
- Clean, accessible, and accurate data is critical for effective AI-driven asset management.
- AI agents can automate multistep tasks like work order creation, boosting efficiency.
- Tailored AI tools must respect industry-specific privacy and compliance standards.
- Adoption of AI across sectors is accelerating, revealing untapped optimization potential.
In this episode of Great Question: A Manufacturing Podcast, Thomas Wilk, chief editor of Plant Services, is joined by Christine Nishimoto, director of asset management software at IBM, for an insightful discussion on how AI agents are reshaping data-driven asset management. Together, they explore the evolving role of artificial intelligence in improving productivity, sustainability, and safety across manufacturing sectors. From tackling long-standing data challenges to envisioning multi-agent systems that can automate complex workflows, the conversation highlights the transformative potential of AI tools in industrial environments. Christine also emphasizes the importance of transparency, data integrity, and regulatory compliance as organizations adopt these technologies.
Below is an edited excerpt from the podcast:
PS: You’ve got sweeping responsibilities at IBM, especially with Maximo, and I’m so glad you're here. We could talk for ten podcast episodes, I’m sure, but today we’re going to focus specifically on data-driven asset management—especially AI.
So let me ask you the first question: when it comes to data-driven approaches, what do you consider best practice for the kind of people we’re speaking to today—the plant managers, operators, especially those in the reliability function?
CN: You know, I would say it all comes down to data. As you just mentioned, data is a huge, huge topic—and a huge challenge. As we start to talk more about AI in this conversation, you’ll see that data is at the core of everything that’s happening today.
But it is a huge challenge. I think in the world of manufacturing—and just in general—we’re collecting more and more data and information, whether it's around assets, vibrations, defects, temperatures, or the work people are doing with work orders. There's so much information out there. Some of it is structured, some of it is unstructured—like notes or other formats that are difficult to reach.
I kind of see it as three challenges, and they all start with the letter A. First is accessibility—sometimes it’s just not easy to get to the data. Second is actionable—you might have all this data, but what do you do with it? What does it mean for you? How do you make it useful enough to provide value? And third is accuracy—we see a lot of issues there, whether it's someone entering incorrect data because they misunderstood something, or just not bothering to enter it at all. We hear about this all the time when talking with customers—you have these work orders where someone just selects a generic dropdown, like "N/A," and that's just not helpful when you're trying to find patterns or trends with an asset, because you can’t see that information when somebody didn't put the data in there.
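To make that accuracy problem concrete, here is a minimal sketch in Python of a data-quality check that flags work orders whose failure codes are generic or missing before they are used for trend analysis. The field names, placeholder values, and data shape are hypothetical, not tied to Maximo or any particular CMMS.

```python
# Minimal sketch: flag work orders whose failure data is too generic to trend on.
# Field names and placeholder values are hypothetical, not tied to any specific CMMS.

GENERIC_CODES = {"N/A", "NA", "OTHER", "UNKNOWN", ""}

def flag_low_quality_work_orders(work_orders):
    """Return work orders that can't support failure-pattern analysis."""
    flagged = []
    for wo in work_orders:
        code = (wo.get("failure_code") or "").strip().upper()
        if code in GENERIC_CODES:
            flagged.append({"work_order": wo.get("id"),
                            "asset": wo.get("asset_id"),
                            "reason": "generic or missing failure code"})
    return flagged

# Example usage with made-up records
work_orders = [
    {"id": "WO-1001", "asset_id": "PUMP-07", "failure_code": "BEARING-WEAR"},
    {"id": "WO-1002", "asset_id": "PUMP-07", "failure_code": "N/A"},
    {"id": "WO-1003", "asset_id": "FAN-12", "failure_code": None},
]
print(flag_low_quality_work_orders(work_orders))
```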
So, yes, there are a lot of challenges with data as a whole. Having the right product—obviously, the right software product—having the right process, the right tools absolutely helps. Having the right people in place to make sure it’s really happening helps.
We believe that you get a lot of value as well from tracking from beginning to end. So within my world, we talk a lot about assets and asset management, but we’ve really transitioned the conversation to focus on the entire lifecycle of the asset. From the moment you’re planning what you need and the financials around that, to receiving the assets, tracking the assets, seeing what’s happening around them, the work around them, everything all the way through to disposal—each one of those steps gives you additional insight.
We’ve started to transform our conversations to be more holistic, as opposed to just managing assets. And we feel that leads to a good foundation for AI, because AI is going to be heavily dependent on having good data to work with.
PS: There are certain sectors, like power generation, that require five-nines reliability. I’m curious—from your perspective, how are you seeing this approach to asset management permeate other market sectors, like pharma, food, or utilities? Are they all taking cues from power gen, or is it moving through sectors at different rates?
CN: I feel like the impact of AI is hitting all industries. The challenges with data—and the regulations around data—are hitting everywhere. We’re all realizing there’s so much untapped value. There are opportunities for optimization, for efficiencies, for safety. And there’s a lot of opportunity for things like optimizing power consumption. There’s so much untapped opportunity that we feel it's transforming every aspect of life that we’re looking at, every kind of business.
A couple of examples of that would be power generation and healthcare, where you look at facility management. There are a lot of questions around: can we do maintenance better? Can we do compliance better? Are there ways to optimize energy usage? How do we look at new ways to improve occupancy and repairs?
It’s all rooted in data, in looking at patterns, at making insights actionable. A lot of that is going on regardless of what industry you’re looking at, and there’s so much opportunity based on what’s happening from a technical standpoint—with AI helping as a tool to get there.
PS: So on this podcast, we’re here to talk about one specific flavor of AI for the next couple of questions: AI agents. Before we jump into how they’re applied, can you explain what AI agents are exactly? What flavor of AI are they, and how are they moving into manufacturing?
CN: Yeah. If we go back to traditional AI, what it does is look at the world as a whole; it leverages basic queries—basic questions in natural language—and interprets them for people. In manufacturing, it could be having the ability to use computer vision to identify defects and flag problems. AI is built for things like that—it can look at the world around you and say, hey, there’s this thing here that may be an anomaly, or, hey, there’s this thing here that’s a little bit different from what we expect.
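As a toy illustration of that "flag what looks different from what we expect" idea, here is a small sketch that scores readings against their recent history with a z-score threshold. The window size and threshold are arbitrary assumptions for illustration, not recommended values.

```python
# Minimal sketch: flag readings that deviate strongly from recent history.
# Window size and threshold are illustrative assumptions, not tuned values.
from statistics import mean, stdev

def find_anomalies(readings, window=20, threshold=3.0):
    """Return (index, value) pairs whose z-score vs. the trailing window exceeds the threshold."""
    anomalies = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append((i, readings[i]))
    return anomalies

# Example: a made-up vibration trace with one spike
trace = [0.42 + 0.01 * (i % 5) for i in range(60)]
trace[45] = 1.8
print(find_anomalies(trace))
```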
The next generation that we hear quite a bit about is generative AI. That’s about creating new artifacts from what’s out there today. It leverages algorithms and creates new content based on patterns from existing data. When you think of things like ChatGPT, it’s doing that same thing: it’s looking at lots of text, it’s looking at patterns of “when people say this kind of thing, these kinds of responses are what’s expected.” It’s building off of recognizing those patterns and figuring out what the next pattern-step is.
AI agents then pile on top of that: OK, you have all this stuff that's generating new content, you have the ability to query what's out there, but everything is very forward-looking, right? It's one kind of interaction where you ask it certain things and it just kind of gives you a response. There's no ability to go back and actually revise, improve, or add on to it. There's not really a whole lot of opportunity for that. With AI agents, what you're doing is actually stacking on top, revising, and building.
An example would be if you're creating a marketing plan or an essay. If you were to use GenAI today and you say, “I want to write an essay about my job,” it spits out some content for you. But if you wanted it to be a lot more accurate and a lot more specific, you'd want it to ask, “Well, what kind of job do you have? What are some statistics or interesting things about your job that would enhance the content being created?” And then you have a version 2 and a version 3. That's where something like an AI agent is helpful, because what it does is look at what needs to happen. It gives you multiple opportunities to add on to the output, iterate on it, and create something new and better from it.
If we tie that into what we do today in manufacturing, you can think of agents as: “Hey, I am doing a certain kind of work,” and the agent goes, “I recognize what you're trying to do. You need to request a work order.” And it can go out and request the work order, create the work order for you. Maybe it can look at the type of work that you're doing and say, “Hey, I recognize that we have other work orders similar to yours. Let me make some recommendations on what you need to populate.” Or, “I see what you're doing, and it looks like it could be tied to this error code or failure code.” It becomes a multistep process that provides additional value and additional recommendations that you wouldn't have been able to get from a simple AI query.
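A hypothetical sketch of that multistep flow: one function recommends likely failure codes from similar historical work orders, and another drafts a work order from the result for a person to review. The data shapes and the similarity rule (same asset type plus keyword overlap) are illustrative assumptions, not Maximo APIs.

```python
# Hypothetical sketch of an agent-style multistep flow:
# recommend failure codes from similar past work orders, then draft a new work order.
# Data shapes and the similarity rule are illustrative; this is not a Maximo API.
from collections import Counter

def recommend_failure_codes(description, asset_type, history, top_n=3):
    """Rank failure codes seen on similar historical work orders."""
    words = set(description.lower().split())
    codes = Counter()
    for wo in history:
        if wo["asset_type"] == asset_type and words & set(wo["description"].lower().split()):
            codes[wo["failure_code"]] += 1
    return [code for code, _ in codes.most_common(top_n)]

def draft_work_order(description, asset_id, asset_type, history):
    """Assemble a draft work order with recommended failure codes for review."""
    return {
        "asset_id": asset_id,
        "description": description,
        "suggested_failure_codes": recommend_failure_codes(description, asset_type, history),
        "status": "DRAFT",  # a person still reviews and approves
    }

# Example usage with made-up history
history = [
    {"asset_type": "pump", "description": "high vibration on bearing", "failure_code": "BEARING-WEAR"},
    {"asset_type": "pump", "description": "seal leak near coupling", "failure_code": "SEAL-FAILURE"},
]
print(draft_work_order("unusual vibration and noise", "PUMP-07", "pump", history))
```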
PS: Interesting. It sounds like part of what you're saying, too, is that it's helping extract certain aspects of tribal knowledge that might otherwise stay trapped out there with frontliners, as the AI agent seeks to improve the quality of the work plan or the work order by asking the right questions.
CN: Exactly. I'm going to go with the art of the possible for a moment, but eventually, what's really fun about the agents is that they're not bound to one application or one solution. They're really operating within the model of a business case or use case. When you think about it from the standpoint that you have knowledge of the inputs, knowledge of what the agents have access to (this is from a more back-end perspective), and knowledge of the type of output you want, you can have multiple agents that support that.
An example would be, let's say that you run a renewable energy plant and you're dealing with solar panels. And let's say that in your system or solution you're able to capture data and patterns around power generation. So you know what typically happens during the daytime, you know what typically happens at night, you know what happens during certain times of the year. And there are impacts depending on weather and other things. Now all of a sudden an AI system, an agent, is monitoring and it notices weirdness in the pattern of what's expected for power generation. What it does is say, “I see something that's not right. I know what the pattern should look like, and it's not quite there.” And then it calls another agent and says, “Hey, you need to check out the solar farms in this location in California, something's going on there.”
So then that agent calls a drone, and the drone is sent out, does a scan of the location, and brings back information. That calls another agent that says, “I'm now going to do an assessment on the video that was captured. I see that there is an issue with these panels, they have dust on them,” or whatever it is that’s impacting their ability to generate as much power as expected because a dust storm went through. We're also able to look at weather information. Then it calls another agent and says, “Create a work order for me. Find an available technician. Here's where the problem is.” That person is going to be available within the time period that we need, and then that person receives a text or a phone call and is told, “Go out there and fix the problem.”
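To make the hand-offs in that scenario concrete, here is a minimal, hypothetical sketch of the orchestration: each "agent" is just a function, and a coordinator chains them only when the monitoring step detects a deviation. The thresholds, agent names, and stubbed drone, assessment, and dispatch steps are illustrative assumptions, not a description of IBM's implementation.

```python
# Hypothetical sketch of chaining agents: monitor -> inspect -> assess -> dispatch.
# All thresholds, names, and stubbed steps are illustrative assumptions.

def monitor_agent(expected_kw, actual_kw, tolerance=0.15):
    """Flag a site whose output deviates from the expected pattern."""
    deviation = abs(expected_kw - actual_kw) / expected_kw
    return {"anomaly": deviation > tolerance, "deviation": round(deviation, 2)}

def inspection_agent(site):
    """Stub for dispatching a drone scan; returns captured findings."""
    return {"site": site, "finding": "dust accumulation on panels"}

def assessment_agent(scan):
    """Stub for interpreting the scan and weather context into a work request."""
    return {"site": scan["site"], "problem": scan["finding"], "priority": "medium"}

def dispatch_agent(request):
    """Stub for creating a work order and notifying an available technician."""
    return {"work_order": "WO-2041", "assigned_to": "on-call technician", **request}

def coordinator(site, expected_kw, actual_kw):
    """Chain the agents only when the monitor sees an anomaly."""
    status = monitor_agent(expected_kw, actual_kw)
    if not status["anomaly"]:
        return {"site": site, "action": "none", **status}
    return dispatch_agent(assessment_agent(inspection_agent(site)))

# Example: expected output vs. a noticeably lower actual reading
print(coordinator("CA-SOLAR-03", expected_kw=950, actual_kw=610))
```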
You don’t need a person doing all those individual steps. You have these mechanisms in the background, these agents in the background figuring all that out for you, doing all those little steps... That’s the power and the potential of the agents.
- Christine Nishimoto, IBM
All that's happening on the back end. You don't need a person doing all those individual steps. You have these mechanisms in the background, these agents in the background, figuring all that out for you, doing all those little steps. Really what's happening is: an issue happened, there's a pattern anomaly, someone gets a work order, and they go fix it, right? But in the background, all that stuff is happening. That's the power and the potential of the agents.
Are we there yet? No. Do we see the potential of that happening? Absolutely. Do we see the steps moving into place to make that happen? Absolutely. That kind of idea that I described is why I think everybody is so excited about agents, why it's become this big buzzword all over the place. It’s that you see the art of the possible, but it almost feels attainable and there's so much added value to that.
PS: That's fascinating. I appreciate you walking us through the chain of agents, where we're not talking about one or two. We're talking about a suite of them being developed, each with a specific function, potentially even for that specific facility. That was another one of my questions: it feels like we're on the cusp of a situation like we were with web development 20 years ago, where there's going to be a huge market for folks who want to develop these AI agents for each company. My hunch, Christine, and let me know how close I am on this in your opinion, is that given things like HIPAA laws for medical care, and given things like data privacy concerns for manufacturing, these won't be one-size-fits-all solutions. These will be tailored to the business, because you have to protect privacy, because you have to protect business information. Is that correct?
CN: I think there are multiple parts to this. First off, I don't think it's quite the same as the web development movement. I think, and I implied this a little bit earlier as well, that you really have to have clarity. There are lots of agent builders out there today, right? And they're saying it's natural language, it's going to be easy to use, and absolutely, they're making it super easy, and then there are the more complex versions of it as you go along.
But even if we start with the very basic version of it, you still have to know what's available to you to use with the agent, and you still have to understand where you want it to get to at the end. In my example of the solar panels, I know at the end that what I need, if there's an issue, is someone to go fix it. And then I still have to have an understanding of what things are important to make that happen. There isn't an AI system that's going to say, “I'm going to tell you how to do that.” You still have to know some of what you want to work with and what you're expecting at the end.
And so I think that while the tools are accessible to everybody, because we're really pushing to have natural language available, the effectiveness and impactfulness of it aren't there unless you really, truly understand where you're trying to get to, and you also recognize the limitations and recognize what's possible. So first I would say: it is accessible to everybody. Is it going to be really usable by everybody? Initially, probably not, because it's going to take a little bit of that understanding.
The second part I would say, you mentioned HIPAA, you mentioned privacy – there are a lot of standards that come into place. At IBM we're super, super, super careful about that, right? I mean, we talk a lot about, your data is your data, we don't take your data, and we work with you to figure out how to use AI in your environment. We're very, very careful about that.
We don't use public information. Some of the other consumer-facing AI technologies use public information; they learn from it and then share it. We don't do that, because we work with a lot of customers that we have relationships with, whose information can't be out there. We're super, super careful, so there are a lot of standards and steps that we take internally within IBM. But I think a lot of companies that deal with sensitive information have to do that as well.
Is it evolving? Absolutely. Are there going to be more regulations and things around that? Probably. Think about what happened with PII and GDPR and everything else, right? I think there's going to be an evolving viewpoint on how this is going to take place, and that's going to create all kinds of regulations we'll have to follow. Transparency, I think, is really important, so whatever company or technology you're working with, you have to know: what does that look like? How was the AI technology developed? What's happening with the data itself?
There are a lot of things that can influence the output of AI, and a lot of it comes down to the training, so if that is not transparent, then it's hard to fully depend on the output, because do you really trust it? We talk quite a bit at IBM about how we're not just a black box. We want to work with you; we want to make sure that you can see how we got to that endpoint if you need to see it, because we want to make sure there's trust there and reliability there. And we obviously want to continue to improve, so having that interactivity with our customers, and that transparency, helps.
Also as different companies look at solutions that offer AI, I would say that you want to look at other things like, is there a choice? Can you turn it on or off? Does it automatically default to on, and what does that mean? If you're getting a recommendation from an AI system, do you know that it's an AI system that's giving you the recommendation?
So I think even at that layer, it's important to understand that level of transparency, to understand how much control you have over its influence on the way you're dealing with systems, and its compliance with your company as a whole. Each company, I'm sure, has rules on what you do with data and what happens to it, so having that knowledge as well is super important. There are a lot of questions that come up with AI and these systems, so there are, I think, some checks and balances that we all have to do as we move forward with this technology.
PS: It's interesting, you cover so much that has happened, especially in the past 10 years. I joined Plant Services 10 years ago, five years pre-COVID, and one of the bigger changes I've seen in those 10 years is that manufacturing plants are a lot more willing to share their data outside the plant walls with third-party consultants or with OEMs in order to improve the reliability of the assets themselves. Ironically enough, the thing you're talking about, that shift in attitude toward data sharing, has been in the air for a little while, and it's only going to enable AI going forward.
CN: Absolutely, it's one of the things that we've been looking at as well. We talk about reliability strategies as a whole, right? We did an acquisition fairly recently where we got hold of a library of failure modes around certain kinds of assets, because we wanted to build more robustness into our solution around that and really enable the ability to create FMEA reports more rapidly. So we thought we’d underpin that by having this library in place. We have an AI system that learns from it and actually helps us make it more robust.
And then the thinking was that we could create a builder that helps with FMEA reports, and it brings down the creation of those reports from weeks and months to just days. It leverages AI and some of the things we have in our system to be able to do that. Some of that is, in the sense of what you said, sharing information, or in our case acquiring information, and building upon it. I think in some areas we've gotten less sensitive about sharing because we realize it's better for the whole. And then in other areas, I think we've gotten more careful about sharing. So it just depends on what area of data we're talking about. If it's not so sensitive, or we don't view it as proprietary or a competitive advantage, I think we've gotten more open to, hey, let's bring it together, because we can all be better by having that information.
PS: Christine, you also mentioned the ability to potentially turn these AI agents on and off. You remind me of something I saw about a month ago: an initial pilot being run by the University of Tennessee in conjunction with Oak Ridge National Lab. They're creating a couple of AI assistants to help collect data from field technicians, and one of the features is that the assistant can appear on everyone's phone or tablet, but the workers have the ability to turn it on and off. If they're confident in the information they're providing, they can simply say, no, I don't want to work with the AI today. Even that measure of control, I have to believe, is going to facilitate wider adoption, just the ability to say no once in a while.
CN: I think so, because AI doesn't work unless there's trust. If it's doing work for you in the background or foreground, and you don't trust what it's doing or it's just not adding value, then what's the point? I think part of it sometimes is that you want something to be available, and you want a person to feel like they have the option to use it, but give them the chance as well to say, “I'm going to try it and see if it is helpful for me.” I think just the option of saying, I can turn it on or off, makes the pressure to try it out lower.
We were really thoughtful about how to use AI, at least for us with Maximo. We had a lot of conversations about: are chatbots helpful? Are assistants helpful? Are they more of a distraction when you're trying to do your work? What we realized is that there is a role for both. In some cases, we wanted to enhance a workflow. For example, you're someone who's populating information in a work order, and you're not quite sure what the failure code is for something. We thought, well, at that moment we can make a recommendation embedded in that work, right? So you're not popped out into a chatbot; you're actually still in there. It gives you a window and says, “Here are some options that maybe you want to think about.” It's part of the flow, it's not pulling you out of it. Hopefully it's enhancing the experience; hopefully it's making it easier.
Another thing we realized is that there sometimes is a place for chatting. Maybe it's because you're trying to ask a question, or you're not sure where to find the answer, or it takes a lot of windows to get to the place the data lives. Having a chat in place that says, can you show me all the work orders that have this issue going on, associated with certain assets? It can go in and find that information for you, but maybe it displays it in a way that you're more comfortable with, so it's not in some chat format, right? It opens up into a grid, and within the grid you can actually work with it, sort it, interact with it, and do whatever you need within that grid itself.
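A minimal sketch of that "the chat finds it, the grid shows it" idea: a natural-language question is reduced to a structured filter, and the results come back as rows for a sortable grid rather than chat text. The keyword-matching "parser" and field names are deliberate simplifications for illustration; a real system would use an LLM or NLU step for that translation.

```python
# Minimal sketch: turn a chat question into a structured filter, return rows for a grid.
# The keyword "parser" and field names are deliberate simplifications for illustration.

WORK_ORDERS = [
    {"id": "WO-3001", "asset": "FAN-12", "issue": "overheating", "status": "OPEN"},
    {"id": "WO-3002", "asset": "PUMP-07", "issue": "overheating", "status": "CLOSED"},
    {"id": "WO-3003", "asset": "FAN-12", "issue": "vibration", "status": "OPEN"},
]

def query_to_filter(question):
    """Very naive mapping from a question to filter terms (a real system would use an LLM/NLU step)."""
    q = question.lower()
    return {
        "issue": "overheating" if "overheating" in q else None,
        "asset": "FAN-12" if "fan" in q else None,
    }

def run_query(question):
    """Apply the filter and return rows suitable for a sortable grid, not a chat blob."""
    f = query_to_filter(question)
    return [wo for wo in WORK_ORDERS
            if all(v is None or wo[k] == v for k, v in f.items())]

# Example usage
for row in run_query("Show me all the work orders with overheating issues on the fans"):
    print(row)
```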
So there's a place where you're doing a little bit of both. It's embedded in the application, but there might be some chat interface as well; it just depends on what's happening and what can enhance the value of the experience. But in every case, we're really always thinking about: what is the user’s experience? Where is the value here? Is it distracting or adding? More and more as we move forward, there's a lot of thinking around that. Are there things that could be happening that enhance the experience and don't detract from it? I think that as we move forward, we're going to see better usage of AI with that experience.
PS: Let me springboard off your points about a better experience with AI and lead into our last question of the podcast, which is more of a general question about reliability best practices. Every now and then a tool comes along in manufacturing operations that helps change the nature of the work. Do you see AI facilitating better across-the-board maintenance and reliability best practices, just by the fact that someday it will be an ever-present assistant? Or do you see this as something that will be like other tools, where if you use it correctly you’ll get gains, and if you don't use it well your plant might lag? I'm curious to know your thoughts on the potential to pull through other best practices along with AI.
CN: I think it's going to get better. I think the shift we're going to see is that you're going to be able to enable productivity for more of the new generation of workforce coming in. You're going to see increased productivity because you're going to have systems in place that have learned from experience, that have learned from the libraries, and that can help get people productive faster without 20 years of experience. I think that's the improvement we're going to see in productivity, efficiency, and better reliability engineers.
That being said, I think there are incremental steps, and I know we hear all the time about the gains that you get with AI and predictive. Maybe I'm a little bit more modest about those gains. I think we're going to get to a place where we're looking at condition-based before predictive, initially. Meaning that right now we're gathering data and we're seeing patterns, but there's still scheduled maintenance, there are still scheduled things going on regularly: every month, every year, or every week, you're checking this, you're doing that, right? And it's not super efficient or effective because it's not always needed.
So where we're seeing things going and some of the things we're putting into our solution as well is looking at condition-based. Based on all the stuff going on, based on the weather, based on the environment of a facility, based on usage, should we change the way that we are maintaining our assets? And, is there a way to address it with what we have today? I believe the answer is yes. I think we're getting close to figuring out ways to address that.
So to me, the next phase that's going to be super transformative is condition-based. From there it gets reliable enough that we go, “When we see this happening, we think there’s going to be an issue, and we're monitoring it.” Then I think you get to predictive, because now all of a sudden you're like, “I've done this so many times; we know that when these conditions hit, this asset has an issue in this environment and in this facility. Now we can start to be predictive.” I think it's still a phased approach. I don't think we're quite that predictive yet, but I think condition-based is going to come first, and then we’re going to get there.
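As a sketch of that shift from calendar-based to condition-based triggers, here is a hypothetical rule that calls for maintenance only when monitored conditions cross thresholds, keeping a time-based interval only as a fallback. The specific thresholds, field names, and fallback interval are illustrative assumptions, not recommended values.

```python
# Hypothetical sketch: trigger maintenance on condition thresholds,
# with a calendar interval only as a fallback. Thresholds are illustrative, not recommendations.

def needs_maintenance(asset, max_days_between=180):
    """Return (bool, reason) based on monitored conditions, else the calendar fallback."""
    if asset["vibration_mm_s"] > 7.1:          # illustrative alarm level
        return True, "vibration above alarm threshold"
    if asset["bearing_temp_c"] > 85:           # illustrative temperature limit
        return True, "bearing temperature above limit"
    if asset["days_since_last_pm"] > max_days_between:
        return True, "calendar fallback reached"
    return False, "conditions within limits"

# Example usage with made-up readings
asset = {"id": "FAN-12", "vibration_mm_s": 8.3, "bearing_temp_c": 64, "days_since_last_pm": 40}
print(needs_maintenance(asset))
```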
About the Podcast
Great Question: A Manufacturing Podcast offers news and information for the people who make, store and move things and those who manage and maintain the facilities where that work gets done. Manufacturers from chemical producers to automakers to machine shops can listen for critical insights into the technologies, economic conditions and best practices that can influence how to best run facilities to reach operational excellence.
Listen to another episode and subscribe on your favorite podcast app
About the Author

Thomas Wilk
editor in chief
Thomas Wilk joined Plant Services as editor in chief in 2014. Previously, Wilk was content strategist / mobile media manager at Panduit. Prior to Panduit, Tom was lead editor for Battelle Memorial Institute's Environmental Restoration team, and taught business and technical writing at Ohio State University for eight years. Tom holds a BA from the University of Illinois and an MA from Ohio State University.