Podcast: Navigating manufacturing disruptions with smarter hardware and software solutions
Key takeaways
- Combining smart hardware with AI boosts operational efficiency and reduces downtime on the factory floor.
- IoT sensors improve real-time equipment health monitoring, enabling faster issue resolution and less unplanned downtime.
- Advances in AI chips and computing power enable more complex, faster decision-making to optimize manufacturing processes.
- Adapting to tech requires trust and control; success depends on managing system change and human acceptance together.
In this episode of Great Question: A Manufacturing Podcast, Robert Schoenberger, editor-in-chief of IndustryWeek, and Deborah Golden, chief innovation officer at Deloitte, delve into the rapid technological changes shaping the manufacturing industry. They explore the crucial role of hardware, from AI and IoT sensors to the evolving power of chips like NVIDIA's, and discuss how integrating these innovations can enhance operational efficiency and address challenges such as supply chain visibility and downtime reduction. The conversation also touches on the importance of combining software with hardware to enable smarter manufacturing decisions and unlock future possibilities, despite the friction and resistance to change that often accompany new technologies.
Below is an edited excerpt from the podcast:
IW: I see you're going to be speaking a little later today about hardware and how that's really affecting manufacturing. I hope right now we can just kind of take a step back. We'll get into the hardware in a bit, but let's start more generally.
You know, we're in a tough time right now for manufacturing—so many changes are happening very, very rapidly. When you think of how companies can turn to technology to address some of the concerns out there—like the lack of visibility into the supply chain, or the challenges they’re facing trying to match capacity with demand without knowing what demand even is—
What are some of the levers you think people can pull? What are some of the things they can be doing at this moment?
DG: Well, I think some of them—when you think about, I’ll say, the “no-brainers” today—I mean, it wouldn’t be a conversation if we weren’t talking about AI. Obviously, AI applicability. But hardware is really the necessity.
So how do you apply software to take full advantage of hardware in that capacity?
AI to me is operational efficiency. So, the no-brainer to me is: how do you make things more efficient?
The competitive advantage is actually, how do you change the dynamic of your business? You cannot do that without hardware. Fundamentally, you cannot do that without hardware.
I think we're looking at software automation more than we’re focused on how to utilize hardware brilliance. I keep talking about “hardware brilliance.”
It’s not just about operational efficiency—like how do I find a better widget, how do I find quicker, better, faster. It's: how do I actually make my hardware smarter for me? And how do I actually make it make decisions for me on the fly?
And to build that capacity—for accuracy, for unplanned outages, for beneficial decisions—how can I do it in a way that frees up capacity to create demand in the supply chain in a better, faster, quicker way?
So that when I have an unplanned outage, I'm not just spending minutes, days, weeks, or months down. I'm actually restoring that uptime much faster—not without human interaction, but with a smarter return time.
That, to me, is where I think the advantage is coming. Because it's not about supply and demand anymore. It’s about smarter supply and demand.
And I think that’s really where we need to get a little bit better—not just looking at software and AI, but hardware too. It’s the combination of those. And without hardware, candidly, the software and AI really aren’t going to matter.
IW: A lot of the focus I’ve heard over the past year has been on the software side. Like, get your ERP in place, get your MES. So we can do all these great things—predictive stuff, better data. But that better data has to come from somewhere.
If you're not putting the sensors in your equipment, if you're not taking advantage of advances in chips—I think you mentioned earlier that you're going to be talking about AI chips, NVIDIA chips, things that are driving this huge market surge—what excites you so much about what’s going on in hardware?
DG: Just think—when you're putting more and more of these IoT sensors in devices, you can learn more and more about their health.
Whether it’s the health of a hardware device, the health of a satellite device, the health of something that you typically wouldn’t have even thought about tracking the health of—I love to say that, because we used to just expect downtime to be downtime.
And when you think about downtime, the quicker you can get back up and operational, the more your operation can produce.
So, it’s not just about, “Let’s make AI work faster.” We’re in such a rush to make humans think better, faster, quicker.
The only way we can do that is with compute time and space. And to do that, we have to know the health of the hardware it’s operating on.
I don’t care where that hardware is—on-prem, in the cloud, or literally in outer space. Having a sensor in those environments is going to be critical to understanding that health.
We can use AI to help us understand that health, and putting IoT sensors on those devices is going to be a huge advantage.
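To make that concrete, here is a minimal sketch of the kind of sensor-driven health monitoring Golden describes, assuming a simple stream of vibration readings. The window size, threshold, and sample data are illustrative assumptions, not any particular vendor's API.

```python
# Minimal sketch: flag unusual equipment behavior in an IoT sensor stream.
# The readings and thresholds are illustrative, not from a real device.
from collections import deque
from statistics import mean, stdev

WINDOW = 60          # rolling window of recent readings
Z_THRESHOLD = 3.0    # deviations beyond this many sigmas look "unhealthy"

def monitor(readings):
    """Yield (index, value) for readings that deviate sharply from recent history."""
    history = deque(maxlen=WINDOW)
    for i, value in enumerate(readings):
        if len(history) >= 10:  # build a baseline before judging health
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > Z_THRESHOLD:
                yield i, value  # candidate fault: alert before it becomes downtime
        history.append(value)

# Example: a steady vibration trace that spikes at the end.
trace = [1.0 + 0.01 * (i % 5) for i in range(100)] + [5.0]
for i, v in monitor(trace):
    print(f"reading {i}: {v:.2f} looks anomalous")
```

The point is the shift Golden describes: instead of waiting for downtime to announce itself, the sensor stream surfaces a candidate fault early enough for a person to act.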
IW: So, going back to chips—for a long time it was all about Intel-style chips built around central processing units: serial processing, managing everything centrally. That seems to be shifting in favor of massively parallel processing—like with NVIDIA's GPUs—where you run as many computations in parallel as possible, run multiple simulations, multiple scenarios. How do you see that change in compute power affecting the factory floor, decision-making, and manufacturing in general?
DG: I mean, that could get us into a Black Mirror episode pretty quickly. You could flash forward to—pick a year—2028 if I’m optimistic, 2050 if I’m not.
You start thinking about synthetic DNA processing, neural networks, and going beyond just individual compute to chips processing like brains.
So what does that do to manufacturing? I’m an opportunist and I’m also an optimist. I still believe none of this is going to eliminate the human. You still need the human to help understand how to process. That’s really critical to understand.
So whether you're processing serially, in parallel, or like a human brain—you’re just going to do it more efficiently and with fewer errors. That frees up time and space to solve bigger problems we’re not solving for today.
If you'd asked, back in the day, “How do we make transportation better?”—someone would’ve said, “I want faster horses.” No one would’ve said, “I want cars,” because cars didn’t exist yet.
So if you allow imagination to unfold, we don’t know today what we’ll want to dream about. And to allow people to dream about those things, we have to give them that space.
So yeah, I fundamentally believe we will create something we don’t yet know, and that new industries will emerge that don’t exist today—just like cars didn’t exist once.
Manufacturing may look different, but we’re going to continue to manufacture things.
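As an aside for readers, the serial-versus-parallel distinction in the question above is easy to sketch. The toy simulation below stands in for a real factory model; the scenario count and the `simulate` function are invented for illustration.

```python
# Minimal sketch: evaluate many what-if scenarios in parallel instead of one by one.
# The toy simulate() function is a stand-in for a real factory model.
from concurrent.futures import ProcessPoolExecutor
import random

def simulate(scenario_seed):
    """Pretend factory run: return (seed, throughput) for one what-if scenario."""
    rng = random.Random(scenario_seed)
    throughput = sum(rng.uniform(0.9, 1.1) for _ in range(10_000))
    return scenario_seed, throughput

if __name__ == "__main__":
    scenarios = range(100)
    # Serial compute walks these one at a time; parallel compute fans them out.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate, scenarios))
    best = max(results, key=lambda r: r[1])
    print(f"best scenario: seed {best[0]}, throughput {best[1]:.1f}")
```

GPU-class hardware applies the same fan-out logic at much finer grain, which is why more scenarios per second can translate into faster decisions on the floor.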
IW: Years ago I worked for a software company that did remote scheduling. We used a genetic algorithm to set up daily staffing—but we couldn’t explain why the algorithm chose a particular setup.
It would just run a million scenarios and land on what worked. But clients still wanted explanations. Do you still see that kind of resistance—people wanting to understand what the technology is doing?
DG: I recently wrote a LinkedIn post about this. We’re not just in a race for adoption of technology—we’re struggling with adaptation. Systems will be our biggest challenge. People will be our biggest challenge.
We still want control and to understand what that control looks like. And yeah, we could be in another Black Mirror episode in five seconds—that’s not the intent. But there are real dangers, risks, and challenges we have to plan for.
Take synthetic data. In the wrong hands, without proper controls, it could create very serious problems. But there are also huge upsides.
Could we solve cancer one day through quantum computing breakthroughs? Sure. But that requires hardware—and if we can’t bring down the price of silicon, we won’t have the chips to process that computing.
So, back to your point—yes, people still need to believe in outcomes. They need to trust the systems. And right now, even when we talk about changing basic operational processes, many don’t even want to start pilot programs because they don’t trust the outcomes.
And part of the problem is that the people who created the old systems are often the same people who don’t want to change them.
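For readers who haven't worked with one, the scheduler Schoenberger describes is easy to sketch. Below is a minimal genetic algorithm for daily staffing; the demand numbers and fitness function are invented for illustration. Note that the winning schedule emerges from random mutation and selection with no human-readable rationale attached, which is exactly the explainability gap clients pushed back on.

```python
# Minimal sketch of a genetic algorithm for daily staffing (illustrative only).
# DEMAND is a made-up hourly staffing requirement for an 8-hour day.
import random

DEMAND = [3, 4, 6, 8, 8, 6, 4, 3]   # staff needed per hour (hypothetical)
MAX_STAFF = 10

def fitness(schedule):
    """Penalize under- and over-staffing; closer to zero is better."""
    return -sum(abs(staffed - needed) for staffed, needed in zip(schedule, DEMAND))

def mutate(schedule):
    child = schedule[:]
    child[random.randrange(len(child))] = random.randint(0, MAX_STAFF)
    return child

population = [[random.randint(0, MAX_STAFF) for _ in DEMAND] for _ in range(50)]
for _ in range(200):                                  # evolve for 200 generations
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # keep the fittest schedules
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(population, key=fitness)
print("best schedule:", best)   # it works, but the algorithm offers no "why"
```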
IW: You mentioned synthetic data. I was at the Automate show earlier this month, and there were several companies talking about using synthetic data to train their AI models—because of that “what-if” scenario. We don't have good data on what happens when this machine completely fails, so let's map that out. There seem to be some really interesting ways to use that, as long as it's intentional and thought out and controlled for. I could see the opposite happening, though—what if we run nothing but "what-if" scenarios? Where’s the practical use of some of those things?
DG: It's really dangerous. I mean, there's a lot of bias—both conscious and unconscious bias—that I think could be, and has been, proven to come out in synthetic data.
If you think about the way synthetic data learns, it's a learning model built on information that isn't real. It could start from one one-hundredth of a piece of information that just learns and learns and learns. It could take days, months, or years for you to even see the consequences of that synthetic information.
So, we wouldn't even know the compounding effect of that data. But again, I say that as an optimist, not as a pessimist.
No different than you and I learning—we're at the conference learning right now, right? We went to school, we got an education, we continue to learn. We're in a continuing education mode. We always want to learn. I always want to learn. I'm a dreamer by nature.
AI is a dreamer by nature. Synthetic data is dreaming by nature—it's continually learning. It's just code that wants to learn. That doesn't mean it can go out into the wild and just learn.
So we still have to think about: What are the guidelines and the frameworks by which code now needs to learn? If that's the case, we have to figure out how we build those guidelines and frameworks.
The challenge is, it's not the same way that people go about learning. And what we've learned—at least, what I see—is that we're trying to put traditional frameworks around code and non-traditional ways of learning.
So when you think about it that simplistically—it sounds simple. It's probably not a simple problem to solve. And I don't think we've solved that yet, because we have very traditional laws—legal, regulatory, other things—that we're trying to wrap around these very non-traditional ways of thinking about life.
So, I do think we still have a ways to go in figuring out how we put some control around synthetic data, around code learning—things of that nature that we just really haven't gotten to yet.
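To ground the synthetic-data discussion, here is one small sketch of the intentional, controlled use case from the Automate example: fit simple statistics to healthy sensor readings, then synthesize rare failure traces by drifting those parameters. The linear-drift failure model and the temperature numbers are assumptions for illustration, not validated failure physics.

```python
# Minimal sketch: synthesize rare failure-mode data from healthy sensor readings.
# The linear-drift failure model is an illustrative assumption, not real physics.
import random
from statistics import mean, stdev

healthy = [20.0 + random.gauss(0, 0.5) for _ in range(1_000)]  # e.g., bearing temp, C
mu, sigma = mean(healthy), stdev(healthy)

def synthetic_failure_trace(length=200, drift_per_step=0.05, seed=None):
    """One hypothetical run-to-failure trace: baseline noise plus steady drift."""
    rng = random.Random(seed)
    return [mu + rng.gauss(0, sigma) + drift_per_step * t for t in range(length)]

# Label synthetic traces so downstream training can never confuse them with real data.
dataset = [{"trace": synthetic_failure_trace(seed=s), "synthetic": True}
           for s in range(100)]
print(f"{len(dataset)} synthetic failure traces; final temp of first trace: "
      f"{dataset[0]['trace'][-1]:.1f} C")
```

The explicit `synthetic` label is the cheap version of the controls both speakers call for: it keeps the provenance of the made-up data visible however far downstream it travels.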
About the Podcast
Great Question: A Manufacturing Podcast offers news and information for the people who make, store and move things and those who manage and maintain the facilities where that work gets done. Manufacturers from chemical producers to automakers to machine shops can listen for critical insights into the technologies, economic conditions and best practices that can influence how to best run facilities to reach operational excellence.
Listen to another episode and subscribe on your favorite podcast app.
About the Author
Robert Schoenberger
Robert Schoenberger has been writing about manufacturing technology in one form or another since the late 1990s. He began his career in newspapers in South Texas and has worked for The Clarion-Ledger in Jackson, Mississippi; The Courier-Journal in Louisville, Kentucky; and The Plain Dealer in Cleveland, where he spent more than six years as the automotive reporter. In 2013, he launched Today's Motor Vehicles, a magazine focusing on design and manufacturing topics within the automotive and commercial truck worlds. He joined IndustryWeek in late 2021.