Podcast: Best practices for implementing AI agents in manufacturing workflows
Key takeaways
- Clean, accessible, and accurate data is critical for effective AI-driven asset management.
- AI agents can automate multistep tasks like work order creation, boosting efficiency.
- Tailored AI tools must respect industry-specific privacy and compliance standards.
- Adoption of AI across sectors is accelerating, revealing untapped optimization potential.
In this episode of Great Question: A Manufacturing Podcast, Thomas Wilk, chief editor of Plant Services, is joined by Christine Nishimoto, director of asset management software at IBM, for an insightful discussion on how AI agents are reshaping data-driven asset management. Together, they explore the evolving role of artificial intelligence in improving productivity, sustainability, and safety across manufacturing sectors. From tackling long-standing data challenges to envisioning multi-agent systems that can automate complex workflows, the conversation highlights the transformative potential of AI tools in industrial environments. Christine also emphasizes the importance of transparency, data integrity, and regulatory compliance as organizations adopt these technologies.
Below is an edited excerpt from the podcast:
PS: You’ve got sweeping responsibilities at IBM, especially with Maximo, and I’m so glad you're here. We could talk for ten podcast episodes, I’m sure, but today we’re going to focus specifically on data-driven asset management—especially AI.
So let me ask you the first question: when it comes to data-driven approaches, what do you consider best practice for the kind of people we’re speaking to today—the plant managers, operators, especially those in the reliability function?
CN: You know, I would say it all comes down to data. As you just mentioned, data is a huge, huge topic—and a huge challenge. As we start to talk more about AI in this conversation, you’ll see that data is at the core of everything that’s happening today.
But it is a huge challenge. I think in the world of manufacturing—and just in general—we’re collecting more and more data and information, whether it's around assets, vibrations, defects, temperatures, or the work people are doing with work orders. There's so much information out there. Some of it is structured, some of it is unstructured—like notes or other formats that are difficult to reach.
I kind of see it as three challenges, and they all start with the letter A. First is accessibility—sometimes it’s just not easy to get to the data. Second is actionable—you might have all this data, but what do you do with it? What does it mean for you? How does it provide value? And third is accuracy—we see a lot of issues there, whether it’s someone entering incorrect data because they misunderstood something, or just not bothering to enter it at all. We hear about this all the time when talking with customers: you have work orders where someone just selects a generic dropdown option, like "N/A," and that’s not very helpful when you’re trying to find patterns or trends with an asset, because you can’t see the information that somebody never put in.
So, yes, there are a lot of challenges with data as a whole. Having the right product—obviously, the right software product—the right process, and the right tools absolutely helps. Having the right people in place to make sure it’s really happening helps, too.
We also believe you get a lot of value from tracking from beginning to end. Within my world, we talk a lot about assets and asset management, but we’ve really transitioned the conversation to focus on the entire lifecycle of the asset. From the moment you’re planning what you need and the financials around that, to receiving the assets, tracking them, seeing what’s happening around them and the work on them, all the way through to disposal—each one of those steps gives you additional insight.
We’ve started to transform our conversations to be more holistic, as opposed to just managing assets. And we feel that leads to a good foundation for AI, because AI is going to be heavily dependent on having good data to work with.
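As an editorial aside, here is a minimal sketch of the kind of work-order quality check the "accuracy" challenge above points to: flagging entries that lean on a generic dropdown value or an empty description. All field names, codes, and thresholds are hypothetical illustrations, not features of Maximo or any other product.

```python
# Hypothetical sketch: flag work orders too generic to trend on, per the
# "accuracy" challenge described above. Field names are illustrative only.
from dataclasses import dataclass

GENERIC_CODES = {"N/A", "OTHER", "UNKNOWN", ""}

@dataclass
class WorkOrder:
    asset_id: str
    failure_code: str
    description: str

def flag_low_quality(orders: list[WorkOrder]) -> list[WorkOrder]:
    """Return work orders whose failure code or description is too generic to analyze."""
    flagged = []
    for wo in orders:
        code = wo.failure_code.strip().upper()
        if code in GENERIC_CODES or len(wo.description.strip()) < 10:
            flagged.append(wo)
    return flagged

if __name__ == "__main__":
    sample = [
        WorkOrder("PUMP-101", "N/A", "fixed it"),
        WorkOrder("PUMP-102", "BEARING-WEAR",
                  "Replaced drive-end bearing after high vibration reading"),
    ]
    for wo in flag_low_quality(sample):
        print(f"Needs review: {wo.asset_id} (code={wo.failure_code!r})")
```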
PS: There are certain sectors, like power generation, that require five-nines reliability. I’m curious—from your perspective, how are you seeing this approach to asset management permeate other market sectors, like pharma, food, or utilities? Are they all taking cues from power gen, or is it moving through sectors at different rates?
CN: I feel like the impact of AI is hitting all industries. The challenges with data—and the regulations around data—are hitting everywhere. We’re all realizing there’s so much untapped value. There are opportunities for optimization, for efficiencies, for safety. And there’s a lot of opportunity for things like optimizing power consumption. There’s so much untapped opportunity that we feel it’s transforming every aspect of life, every kind of business.
A couple of examples would be power generation and healthcare, where you look at facility management. There are a lot of questions around: can we do maintenance better? Can we do compliance better? Are there ways to optimize energy usage? How do we find new ways to improve occupancy and repairs?
It’s all rooted in data, in looking at patterns, at making insights actionable. A lot of that is going on regardless of what industry you’re looking at, and there’s so much opportunity based on what’s happening from a technical standpoint—with AI helping as a tool to get there.
PS: So on this podcast, we’re here to talk about one specific flavor of AI for the next couple of questions: AI agents. Before we jump into how they’re applied, can you explain what AI agents are exactly? What flavor of AI is it, and how are they moving into manufacturing?
CN: Yeah. If we go back to traditional AI, what it does is look at the world as a whole and interpret basic queries, basic questions in natural language, for people. In manufacturing, that could be using computer vision to identify defects and flag problems. AI is built for things like that—it can look at the world around you and say, hey, there’s something here that may be an anomaly, or something here that’s a little different from what we expect.
The next generation that we hear quite a bit about is generative AI. That’s about creating new artifacts from what’s out there today. It leverages algorithms to create new content based on patterns in existing data. When you think of something like ChatGPT, it’s doing the same thing: it looks at lots of text, it looks at patterns of “when people say this kind of thing, these kinds of responses are expected.” It builds off of recognizing those patterns and figuring out what the next pattern-step is.
AI agents then pile on top of that. OK, you have all this stuff that’s generating new content, and you have the ability to query what’s out there. But everything is one kind of interaction: you ask it certain things and it just gives you a response. There’s no ability to go back and revise, improve, or add on to it. With AI agents, you can actually stack on top, revise, and build.
An example would be creating a marketing plan or an essay. If you were to use generative AI today and say, “I want to write an essay about my job,” it spits out some content for you. But if you wanted it to be a lot more accurate and specific, you’d want it to ask, “Well, what kind of job do you have? What are some statistics or interesting things about your job that would enhance the content being created?” And you get a version 2 and a version 3. That’s where something like an AI agent is helpful, because it looks at what needs to happen. It gives you multiple opportunities to add to the output, iterate on it, and create something new and better from it.
If we tie that into what we do today in manufacturing, you can think of it as, “Hey, I’m doing a certain kind of work,” and the agent goes, “I recognize what you’re trying to do. You need a work order.” And it can go out and create that work order for you. Maybe it can look at the type of work you’re doing and say, “I recognize we have other work orders similar to yours. Let me make some recommendations on what you need to populate.” Or, “I see what you’re doing, and it looks like it could be tied to this error code or failure code.” It becomes a multistep process that provides additional value and recommendations you wouldn’t get from a simple AI query.
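To make that multistep flow concrete, here is a hedged editorial sketch: an agent drafts a work order and recommends a failure code by comparing the request against similar past work. Everything here (the HISTORY records, suggest_failure_code, create_work_order) is a hypothetical stand-in, not a Maximo or watsonx API.

```python
# Hypothetical sketch of a multistep agent flow: draft a work order, then
# enrich it with a failure code suggested from similar historical orders.
from collections import Counter

HISTORY = [
    {"text": "conveyor belt slipping", "failure_code": "BELT-SLIP"},
    {"text": "conveyor belt squealing and slipping", "failure_code": "BELT-SLIP"},
    {"text": "motor overheating", "failure_code": "MOTOR-OVERTEMP"},
]

def suggest_failure_code(request: str) -> str | None:
    """Naive similarity step: vote among historical orders sharing words with the request."""
    words = set(request.lower().split())
    votes = Counter(
        rec["failure_code"]
        for rec in HISTORY
        if words & set(rec["text"].split())
    )
    return votes.most_common(1)[0][0] if votes else None

def create_work_order(request: str, asset_id: str) -> dict:
    """Step 1: draft the order. Step 2: add a recommended failure code if one is found."""
    draft = {"asset_id": asset_id, "description": request, "status": "DRAFT"}
    code = suggest_failure_code(request)
    if code:
        draft["suggested_failure_code"] = code
    return draft

print(create_work_order("conveyor belt is slipping again", "CONV-07"))
```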
PS: Interesting. It sounds like part of what you’re saying is that it helps extract tribal knowledge that might otherwise stay trapped out there with frontline workers, as the AI agent seeks to improve the quality of the work plan or the work order by asking the right questions.
CN: Exactly. I’m going to go with the art of the possible for a moment, but eventually, what’s really fun about agents is that they’re not bound to one application or one solution. They really operate within the model of a business case or a use case. You have knowledge of the inputs, you have knowledge of what the agents have access to (and this is from a more back-end perspective), and you have knowledge of the type of output you want from them, and you have multiple agents that support that.
An example would be, let’s say you run a renewable energy plant and you’re dealing with solar panels. And let’s say that in your system you’re able to capture data and patterns around power generation. So you know what typically happens during the daytime, what typically happens at night, and what happens during certain times of the year, and there are impacts depending on weather and other things. Now, all of a sudden, an AI agent that’s monitoring notices something unusual in the expected power generation pattern. It says, “I see something that’s not right. I know what the pattern should look like, and it’s not quite there.” And then it calls another agent and says, “Hey, you need to check out the solar farm in this location in California, something’s going on there.”
So that agent calls a drone, and the drone is sent out, scans the location, and brings back information. It calls another agent that says, “I’m now going to do an assessment on the video that was captured. I see there’s an issue with these panels; they have dust on them,” or whatever it is that’s keeping them from generating as much power as expected because a dust storm went through. We’re also able to pull in weather information. Then it calls another agent and says, “Create a work order for me. Find an available technician. Here’s where the problem is.” That person is available within the time period we need, they receive a text or a phone call, and they’re told, “Go out there and fix the problem.”
All of that is happening on the back end. You don’t need a person doing all those individual steps. You have these mechanisms, these agents, in the background figuring all of that out for you, doing all those little steps. What really happens is: an issue occurs, there’s a pattern anomaly, someone gets a work order, and they go fix it. But in the background, all of that work is happening. That’s the power and the potential of agents.
Are we there yet? No. Do we see the potential of that happening? Absolutely. Do we see the steps moving into place to make that happen? Absolutely. The kind of idea I just described is why I think everybody is so excited about agents, and why it’s become such a big buzzword: you see the art of the possible, but it feels almost attainable, and there’s so much added value in it.
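As an editorial illustration of that chain of agents, the sketch below strings together a monitoring agent that flags an anomaly in generation, an inspection agent standing in for the drone scan and video assessment, and a work-order agent that creates the job and assigns a technician. All functions and thresholds are hypothetical; a production system would wire each stub to real monitoring, drone, vision, and asset-management services.

```python
# Hypothetical sketch of the multi-agent chain described above:
# anomaly monitor -> drone inspection/assessment -> work order and dispatch.
from dataclasses import dataclass

@dataclass
class Finding:
    site: str
    issue: str

def monitor_agent(expected_kw: float, actual_kw: float, site: str) -> str | None:
    """Flag the site if generation drops well below the expected pattern."""
    return site if actual_kw < 0.8 * expected_kw else None

def inspection_agent(site: str) -> Finding:
    """Stand-in for dispatching a drone and assessing the captured footage."""
    return Finding(site=site, issue="dust accumulation on panels")

def work_order_agent(finding: Finding) -> dict:
    """Create a work order and notify an available technician."""
    return {
        "site": finding.site,
        "task": f"Clean panels: {finding.issue}",
        "assigned_to": "next available technician",
    }

def run_pipeline(expected_kw: float, actual_kw: float, site: str) -> dict | None:
    flagged = monitor_agent(expected_kw, actual_kw, site)
    if flagged is None:
        return None  # generation matches the expected pattern; no action needed
    return work_order_agent(inspection_agent(flagged))

print(run_pipeline(expected_kw=500.0, actual_kw=320.0, site="CA-SOLAR-12"))
```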
PS: That’s fascinating. I appreciate you walking us through that chain of agents. We’re not talking about one or two; we’re talking about a suite of them being developed, each with a specific function, potentially even for a specific facility. That was another one of my questions: it feels like we’re on the cusp of a situation like web development 20 years ago, where there’s going to be a huge market for people who want to develop these AI agents for each company. My hunch, Christine, and let me know how close I am on this in your opinion, is that given things like HIPAA laws for medical care and data privacy concerns for manufacturing, these won’t be one-size-fits-all solutions. They’ll be tailored to the business, because you have to protect privacy and business information. Is that correct?
CN: I think there are multiple parts to this. First off, I don’t think it’s quite like the web development movement. As I implied a little earlier, you really have to have clarity. There are lots of agent builders out there today, and they say it’s natural language, it’s going to be easy to use, and absolutely, they’re making it super easy, with more complex versions as you go along.
But even with the most basic version, you still have to know what’s available for the agent to use, and you still have to understand where you want it to get to at the end. In my example of the solar panels, I know that at the end, if there’s an issue, what I need is someone to go fix it. And then I still have to understand what’s important to make that happen. There isn’t an AI system that’s going to say, “I’m going to tell you how to do that.” You still have to know some of what you want to work with and what you’re expecting at the end.
And so I think that while the tools are accessible to everybody, because we’re really pushing to have natural language available, the effectiveness and the impact aren’t there unless you truly understand where you’re trying to get to, and recognize the limitations and what’s possible. So yes, it’s accessible to everybody. Is it going to be really usable by everybody? Initially, probably not, because it’s going to take a little bit of that understanding.
The second part I would say is, you mentioned HIPAA, you mentioned privacy – there are a lot of standards that come into play. At IBM we’re super, super careful about that. We talk a lot about how your data is your data; we don’t take your data, and we work with you to figure out how to use AI in your environment. We’re very careful about that.
We don’t use public information. Some of the consumer-facing AI technologies use public information; they learn from it and then share it. We don’t do that, because we work with a lot of customers we have relationships with who can’t have their information out there. We’re super careful, so there are a lot of standards and steps we take internally within IBM. And I think a lot of other companies that deal with sensitive information have to do the same.
Is it evolving? Absolutely. Are there going to be more regulations around it? Probably. Think about what happened with PII and GDPR and everything else. I think there’s going to be an evolving viewpoint on how this takes place, and that’s going to create all kinds of regulations we’ll have to follow. Transparency, I think, is really important: whatever company or technology you’re working with, you have to know what that looks like. How was the AI technology developed? What’s happening with the data itself?
There are a lot of things that can influence the output of AI, and a lot of it comes down to the training. If that isn’t transparent, it’s hard to fully depend on the output, because do you really trust it? We talk quite a bit at IBM about not being just a black box. We want to work with you, and we want to make sure you can see how we got to that endpoint if you need to, because we want trust and reliability there. And we obviously want to continue to improve, so having that interactivity and transparency with our customers helps.
Also, as companies look at solutions that offer AI, I would say you want to look at other things: Is there a choice? Can you turn it on or off? Does it default to on, and what does that mean? If you’re getting a recommendation from an AI system, do you know that it’s an AI system giving you the recommendation?
So even at that layer, it’s important to understand that level of transparency, how much control you have over its influence on the way you’re dealing with systems, and its compliance with your company as a whole. Each company, I’m sure, has rules on what you do with data and what happens to it, so having that knowledge is super important. A lot of questions come up with AI and these systems, so there are checks and balances we all have to do as we move forward with this technology.
PS: It’s interesting, you’ve covered so much that has happened, especially in the past 10 years. I joined Plant Services 10 years ago, five years pre-COVID, and one of the bigger changes I’ve seen in that time is that manufacturing plants are a lot more willing to share their data outside the plant walls with third-party consultants or with OEMs in order to improve the reliability of the assets themselves. That shift in attitude toward data sharing has been in the air for a little while, and it’s only going to enable AI going forward.
CN: Absolutely, it’s one of the things we’ve been looking at as well. We talk about reliability strategies as a whole. We did an acquisition fairly recently where we got hold of a library of failure modes for certain kinds of assets, because we wanted to build more robustness into our solution and enable the ability to create FMEA reports more rapidly. We thought we’d underpin that by having this library in place, and we have an AI system that learns from it and actually helps us make it more robust.