Dennis Scimeca: Hello, my name is Dennis Scimeca, Senior Editor for Technology at Industry Week. And Robert Schoenberger, our editor-in-chief, has allowed me to hijack the podcast again to talk about something I'm interested in. In this case, we're talking about the definition of agentic AI. When AI really broke into the mainstream a couple of years ago, it was generative AI, ChatGPT, and the rest. That's really what everybody was talking about, and those are the sorts of tools that people were beginning to adopt and deploy. Agentic AI, from my perspective, really entered the discourse strongly last year, and it's a different technology. So we have invited three guests today to help us define what agentic AI is and how it's different from the AI we may already have been using. I'd like to open up the floor for introductions. How about we go, Ron, Naga, Sanjay?
Ron Norris: Sure. Hey, I'm Ron Norris. I am the former director of innovation at Georgia Pacific, which is a pulp and paper company based in Atlanta but with operations all around the country. Since I retired from there, I am the CEO of Advanced Innovation Management. And that's pretty much me. Glad to be here. Thanks.
Nagadithya Nookala: Hello, everyone. I'm Nagadithya Nookala. I go by Naga. I work as a product manager at Ford for data analytics and AI related products. So thank you for having me.
Sanjay Ahire: Hi, my name is Sanjay. I'm a doctor and I teach at Henry Ford College. And as well, I am a full-time data scientist at Ford Motor Company. Over 30 years of experience in various fields of automotive, aerospace, FMCG, and so on and so forth. Glad to be here.
DS: Thanks, everybody, for joining us. I super appreciate it. I'm excited about this one. I love talking about AI. So full disclosure, everyone, we had a prelim call the other day and came up with two really good examples, I think, of how to differentiate between an AI agent and agentic AI. So Sanjay, you actually suggested the metaphor of GPS. Could you break that down for us?
SA: So just to lay the groundwork, right? I would say the primary difference between an agent and agentic AI is that an agent is a helper tool. It does one specific thing when you ask, like the GPS I talked about. GPS will tell you the best route, a Google search gives you links, a bot sends an alert when something goes wrong. So it helps, but you still do the deciding and the acting when it comes to an agent.
Versus agentic AI, which is a helper that can also take action and handle multiple steps on its own to reach a goal. For example, a self-driving car doesn't just show directions. It actually drives, slows down, changes lanes, avoids dangers. Or it's like a research assistant that doesn't just give you links: it reads through the material, summarizes it, makes recommendations, drafts your next steps. It does the thinking, the planning, and the doing within the rules you set. So I would say the easiest way to remember it is that an agent gives you help, and agentic AI gets the job done step by step.
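Sanjay's distinction can be sketched in a few lines of code. This is a minimal illustration, not a real system: the function and class names (`agent_route`, `AgenticDriver`) are hypothetical, and the "environment" is faked. The point is structural: the agent answers once and stops, while the agentic system loops over steps, observes, and acts on its own within a preset rule.

```python
def agent_route(origin: str, destination: str) -> list[str]:
    """Agent: answers one question, then stops. The human decides and acts."""
    return [f"Take Main St from {origin}",
            f"Merge onto I-75 toward {destination}"]

class AgenticDriver:
    """Agentic AI (sketch): plans, acts, observes, and adapts toward a goal,
    within rules the human set in advance."""

    def __init__(self, speed_limit: float):
        self.speed_limit = speed_limit  # a guardrail the human configured

    def drive(self, route: list[str]) -> list[str]:
        actions = []
        for step in route:
            actions.append(f"executing: {step}")
            # observe the (faked) environment and adapt without asking the human
            if "I-75" in step:
                actions.append("obstacle detected -> changing lanes")
        return actions

route = agent_route("Downtown", "Airport")          # the agent stops here...
log = AgenticDriver(speed_limit=65.0).drive(route)  # ...the agentic system keeps going
```

The agent's output is just information handed back to the human; the agentic system consumes that same information and carries the task through to completion.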
DS: Naga, you had also suggested the metaphor of a web search. I think we just touched upon that, but was there anything else from your example you wanted to throw out there to help us come up with that basic definition?
NN: Yeah, certainly. So a traditional agent is something that can talk, where agentic AI is something that can act on your behalf. It's more like a personal buddy, an assistant, a personal assistant to you. For instance, if you are looking for something on the web, a traditional intelligent search engine can just give you the options. The agentic AI, however, not only does the analysis and research for you, it can also do the job for you. You just need to tell it the outcome and the job you want done, and it will act on your behalf and do those things for you.
Say you've asked it to schedule a meeting somewhere, in a location where the weather is not good. If I take the help of my personal assistant, which is agentic AI, it could look into the weather and also look into my calendar, and it could come back and suggest, hey, maybe it's not a good idea to meet outside because of the bad weather conditions. It could suggest a better time to meet, act on our behalf, and schedule a date and place where we can have this conversation in a much better way.
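Naga's scheduling example chains several tools toward one stated outcome. The sketch below assumes two hypothetical stand-ins for real weather and calendar APIs (`get_forecast`, `get_free_slots`); what matters is the chaining: check the calendar, check the weather for each open slot, then act by booking the best option.

```python
def get_forecast(location: str, slot: str) -> str:
    """Hypothetical weather API: returns a forecast for a time slot."""
    forecasts = {"10:00": "rain", "14:00": "clear", "16:00": "clear"}
    return forecasts.get(slot, "unknown")

def get_free_slots(calendar: dict[str, bool]) -> list[str]:
    """Hypothetical calendar API: True means the slot is open."""
    return [slot for slot, free in calendar.items() if free]

def schedule_meeting(location: str, calendar: dict[str, bool]) -> str:
    """Chain several steps toward the user's stated outcome, then act."""
    for slot in get_free_slots(calendar):
        if get_forecast(location, slot) == "clear":
            calendar[slot] = False  # act on the user's behalf: book it
            return f"Booked {slot} at {location} (weather looks clear)"
    return "No good slot found; suggest meeting indoors instead"

calendar = {"10:00": True, "14:00": False, "16:00": True}
msg = schedule_meeting("Riverside Park", calendar)
print(msg)
```

Note that the user only specified the outcome ("schedule a meeting at this location"); the cross-referencing of weather against availability, and the booking itself, happen without further instruction.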
DS: Now, Ron, you have a very specific criterion for the difference between an agent and an agentic AI. Want to tell us about that?
RN: Yes. Well, I mean, I think there have recently been a lot of different ways the word “agent” has been framed. I know some people are calling basically chatbots agents, and they're not. But originally, the definition of an agent was that it has the ability to learn, the ability to predict and reason by itself, to decide what to do, and then to explain to us why. That was the big thing about being an agent. I think that's now more the definition of a causal agent; we can get to that in a minute. But I like what Naga and Sanjay said, because if you drill down into what they're talking about, an agent is a unit that can perform a task, and agentic AI is the way that the agents behave. That's maybe what it is in a nutshell. And they can make decisions on their own with limited help. What I mean by limited help is, you mentioned Waze earlier. When we first started using Waze, I didn't trust it, and so I had to learn to trust it. And then I learned that in most instances I would take its recommendations, and sometimes I wouldn't. So when I say limited human help, that's pretty much what it is: we have to give it the authority to act on its own, and it's up to us to validate that. Does that make sense?
DS: It does. Sanjay, you actually had said something about the learning aspect of agentic AI. Do you want to expound upon that? It kind of touches on what Ron was saying.
SA: I just wanted to add to what Ron said. The learning aspect, because learning can sound kind of scary in an operational environment, like manufacturing or some other really intense environment. It doesn't mean that the system is randomly changing itself. Typically in industrial settings, learning should be bounded, measurable, and auditable; somebody can come in and check it. I describe it in, loosely speaking, three safe layers. The first is learning patterns: the system gets better at detecting early signals, like drift, or anomalies, or precursors to defects. So you have some kind of alert, and it could be a sixth sense or something, but as a data scientist, I know when something is not working right.
The second is learning preferences and policies. Teams operate differently: by and large, manufacturing operates differently, quality operates differently. Some want fewer false alarms, some want earlier interventions. Agentic systems can learn what we prefer while still staying inside policy, within the limits. And the third, I would say, is learning outcomes through closed-loop feedback. This is where you tie everything together, and it's a big value: the system takes actions, measures what happened, and improves upon the playbook. Not guessing, but measuring. So the key point is that learning happens within guardrails, with logs, so that somebody can go and check it out, and humans can review what it did and why. To Ron's point, there's a lot of reasoning behind it. Learning doesn't mean uncontrolled autonomy per se; it means closed-loop improvements inside the guardrails, with audit trails.
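Sanjay's bounded, auditable learning can be made concrete with a small sketch. Everything here is hypothetical (the class, the thresholds, the feedback rule); it illustrates the pattern he describes: the system adjusts an alert threshold from closed-loop feedback, but only inside hard limits humans set up front, and every change is logged with a reason so a reviewer can audit what it did and why.

```python
audit_log: list[dict] = []  # every change is recorded for human review

class BoundedDetector:
    """Anomaly-alert sketch whose threshold learns within fixed guardrails."""
    MIN_THRESHOLD = 0.5   # hard limits set by humans in advance;
    MAX_THRESHOLD = 5.0   # learning can never move outside them

    def __init__(self, threshold: float = 2.0):
        self.threshold = threshold

    def alert(self, signal: float) -> bool:
        return signal > self.threshold

    def feedback(self, was_false_alarm: bool) -> None:
        """Closed loop: act, measure the outcome, adjust within limits, log why."""
        old = self.threshold
        # fewer false alarms -> raise threshold; missed defect -> lower it
        step = 0.2 if was_false_alarm else -0.2
        self.threshold = min(self.MAX_THRESHOLD,
                             max(self.MIN_THRESHOLD, self.threshold + step))
        audit_log.append({
            "old": old,
            "new": self.threshold,
            "reason": "false alarm" if was_false_alarm else "missed defect",
        })

detector = BoundedDetector()
detector.feedback(was_false_alarm=True)    # team prefers fewer false alarms
detector.feedback(was_false_alarm=False)   # a defect slipped through
```

The `min`/`max` clamp is the guardrail, the `feedback` call is the closed loop, and `audit_log` is the trail a reviewer would inspect: not uncontrolled autonomy, but measured improvement inside limits.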