The road to prescriptive maintenance

How industry leaders in energy, oil and gas, and more are mapping out a smarter future for maintenance.

By Sheila Kennedy, CMRP, contributing editor


Who would have imagined how dramatically the industrial internet of things (IIoT) would elevate reliability and maintenance practices? Today, we have sophisticated sensors monitoring multiple variables, closing information gaps, eliminating data silos, and populating Big Data repositories in the cloud, where artificial intelligence (AI), advanced pattern recognition (APR), machine learning (ML), and advanced analytics work their magic on common industrial challenges.

Predictive maintenance (PdM) gave us our first taste of the power of monitoring individual machine conditions. With prescriptive maintenance (RxM), data is assimilated from diverse process and performance variables and woven into actionable recommendations (or “prescriptions”) on what to do, when to do it, and how.

The benefits are readily evident – better-quality data, earlier problem detection, more timely and accurate response, and, perhaps most important, less reliance on manual knowledge capture. Following are some companies that are on the cusp of this new level of maintenance maturity called RxM.

Network preparation at Penn State


Maintenance strategies such as PdM and RxM are possible only in connected environments. Tempered Networks recently helped Penn State’s Office of Physical Plant (OPP) instantaneously connect, segment, secure, and manage all of its network devices cohesively despite unique building and campus challenges. As a result, OPP is now making real-time control adjustments based on conditions, entering the predictive stage of maintenance and preparing for a future in which recommendations will be prescribed.

Previously, each building was a separate entity. A lot of the systems in use were standalone, and there was a server for every application. “It causes headaches for maintenance when buildings are disjointed like that,” says Tom Walker, environmental systems design specialist at Penn State.

Now, about 300–350 buildings are connected at University Park, with all or most servers housed at the data center. Everything is on a virtualized server; hardware is shared among multiple systems; and authorized personnel have instant access to the systems. “This increased our resiliency, reliability, and overall uptime,” Walker says. “It also gave us the path to start sharing data with other systems and stakeholders.”

For instance, OPP is now working to enable fault detection and diagnostics within the building automation systems, which is expected to help reduce energy use and maintain optimum facility operation. OPP’s new energy dashboard visualizes when an energy problem emerges in a building so the issue can be addressed proactively. In the future, OPP would like it to prescribe what to do based on ML and data analytics from the connected systems.
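As an illustration of the kind of rule such a dashboard could build on, the minimal sketch below flags an energy reading that drifts well outside a building’s recent baseline. The window size, sigma threshold, and hourly-kWh data shape are assumptions for illustration, not OPP’s actual implementation.

```python
# Minimal sketch of baseline-deviation fault detection on building energy data.
# The 24-reading window, 3-sigma threshold, and hourly-kWh feed are illustrative
# assumptions, not OPP's dashboard implementation.
from statistics import mean, stdev

def detect_energy_fault(history: list[float], latest: float, sigma: float = 3.0) -> bool:
    """Flag a meter reading that deviates more than `sigma` standard
    deviations from the building's recent baseline."""
    if len(history) < 24:                        # wait for at least a day of hourly data
        return False
    baseline, spread = mean(history), stdev(history)
    return spread > 0 and abs(latest - baseline) > sigma * spread

# Example: an hourly kWh feed for one building
history = [42.0, 41.5, 43.1, 42.6] * 6           # stand-in for 24 hourly readings
print(detect_energy_fault(history, latest=61.7)) # True -> surface on the dashboard
```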

Efforts are also underway to automate work orders in IBM’s Maximo based on certain fault conditions and eventually prescribe corrective actions. “Right now the work orders are only telling that there’s an issue that needs to be investigated,” Walker explains. “We’re working with our Maximo group on being able to feed more data on the assets.”
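A hedged sketch of how such a fault-to-work-order handoff might look is shown below. The host name, object structure, field values, and API-key header are assumptions modeled loosely on Maximo’s OSLC REST interface; an actual integration would follow the site’s own Maximo configuration and security model.

```python
# Hedged sketch: turning a detected fault into a Maximo work order over REST.
# The URL, object structure (mxwo), field names, work type code, and apikey
# header are assumptions, not Penn State's integration.
import requests

MAXIMO_URL = "https://maximo.example.edu/maximo/oslc/os/mxwo"   # hypothetical host
API_KEY = "replace-with-api-key"

def create_fault_work_order(asset_num: str, site_id: str, summary: str) -> str:
    payload = {
        "description": summary[:100],   # short description of the detected fault
        "assetnum": asset_num,          # asset flagged by fault detection
        "siteid": site_id,
        "worktype": "CM",               # corrective maintenance (assumed code)
    }
    resp = requests.post(
        MAXIMO_URL,
        json=payload,
        headers={"apikey": API_KEY, "Properties": "wonum"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["wonum"]         # work order number assigned by Maximo

# Example: a building-automation fault condition triggers a work order
# wonum = create_fault_work_order("AHU-07", "UP", "Supply air temp deviation in Building 12")
```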

Walker’s biggest lesson learned so far is that analytics packages should read directly from the server rather than pull data from individual controllers, an approach that does not scale. There are also issues with legacy control systems. “With Tempered Networks, we’re putting a shell around all of our legacy systems by locking them out and using microsegmentation to say only this device can talk to this server,” says Walker. “It’s really solved a lot of problems.”

Segmentation and isolation have become a best practice, but they are fragile using traditional technologies. “You can set it up once, but as time goes on, it becomes impossible to maintain, so it’s important to keep it simple,” observes Erik Giesa, vice president of products at Tempered Networks. Instead of using a traditional enterprise IT solution to force-fit connections, Tempered Networks technology was born in an ICS and OT data environment and bridges legacy systems in a simplified manner, Giesa says.

Prescriptive services for Refining NZ


Industry has come to expect maintenance service providers to employ state-of-the-art technologies and practices. Companies such as Refining NZ, New Zealand’s only oil refinery, rely on the outcome-based maintenance service for industrial control systems from Honeywell Process Solutions.

Peter Smit, head of process control at Refining NZ, says: “The Honeywell Assurance 360 program we have in place provides us with the confidence that we have our Honeywell distributed control systems and Honeywell Advanced Solution applications at an agreed level of availability. We are very clear what outcomes we expect, and this allows Honeywell to leverage their knowledge and resources to meet the agreed outcomes in a structured and planned way.”

Steve Linton, director of programs and contracts at Honeywell Process Solutions, explains the underlying goal. “We are trying to facilitate achievement of our customers’ business drivers and provide the outcomes they expect,” he says, “whether it’s control system performance, control system availability, or reduced incidences on the control system.”

Planned, preventive, predictive, prognostic, and prescriptive maintenance tools and analytics all help drive toward those outcomes. Prescriptive approaches are being beta-tested at some customer sites.

With RxM, Honeywell’s goal is to amalgamate data across multiple control systems to provide insights that say, “There is X probability in X time frame that X is going to happen, so go look at these things to prevent an undesirable outcome.” To do this, information from multiple customer systems is put into a data lake in the Honeywell Sentience IoT platform, which is appropriately controlled, cordoned off, and anonymized. Self-learning algorithms use and analyze the data and provide information that the customer can use to better maintain its control systems.
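The sketch below illustrates that pattern in miniature: a model trained on historical controller data estimates fault probability over a horizon and attaches a recommended action when risk is high. The features, toy training data, and recommendation text are assumptions for illustration, not the Sentience platform’s algorithms.

```python
# Minimal sketch of a prescriptive output: estimate fault probability within a
# time window and map high-risk predictions to a recommended check. Features,
# training data, and the prescription text are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical feature vectors per controller: [CPU load %, error count, temp C]
X = np.array([[35, 0, 40], [40, 1, 42], [85, 7, 61], [90, 9, 65],
              [38, 0, 41], [88, 8, 63], [45, 2, 44], [92, 11, 66]])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])   # 1 = fault occurred within 30 days

model = LogisticRegression(max_iter=1000).fit(X, y)

def prescribe(features: list[float], horizon_days: int = 30) -> str:
    """Return a plain-language prescription for one controller."""
    p_fault = model.predict_proba([features])[0][1]
    if p_fault < 0.5:
        return f"{p_fault:.0%} fault risk in {horizon_days} days: no action required."
    return (f"{p_fault:.0%} fault risk in {horizon_days} days: "
            "inspect controller cooling and review recent error logs.")

print(prescribe([87, 6, 60]))   # high-risk controller -> prescription
```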

Prescriptive reliability analytics for MOL

Corrosion, fouling, opportunity crudes, and the resulting process fluctuations are the most common operational challenges faced daily at MOL, an integrated oil, gas, and petrochemicals company based in Hungary. It is a member of MOL Group, one of the largest companies in Central and Eastern Europe.

MOL Group’s 2030–Enter Tomorrow program and recent strategic initiatives require a dynamic enterprise-operations-focused data and information infrastructure to improve productivity and increase process safety performance, says Gábor Bereznai, maintenance engineering manager at MOL. “Crude analysis, process simulations, continuous data monitoring, and early failure detection are the only possible answers to keeping our processes safe and under control,” Bereznai says.

MOL began its journey to refinery maintenance excellence with reliability-centered maintenance (RCM) almost two decades ago. At that time, a race to acquire software led to implementation islands and a lack of deliberate business process re-engineering.

In the next era, the focus was on software integration and connecting the systems with the corporate SAP ERP solution. MOL’s daily operations have come to rely on the company’s successful integration of asset management software, including Emerson AMS with SAP EAM and OSIsoft’s PI System with SAP PM.
