
Taking off the PdM training wheels

In this installment of What Works, an Arizona utility finds success shifting predictive analytics control back in-house.

By Christine LaFave Grace, managing editor

It’s maybe not the most conventional trajectory for asset performance management: Salt River Project (SRP), a Tempe, AZ-based public power utility (also the oldest federal reclamation project in the United States), for years relied on an outside partner to handle the heavy lifting when it came to its predictive data analytics. And then, in 2012, with the support of said outside partner, it brought that technically demanding work back in-house.

The back story: SRP began working with GE Digital’s Managed Services team (formerly SmartSignal) way back in 2005 after recognizing that with better use of asset performance data, it could begin to shift out of reactive maintenance mode.

“We had a lot of data coming in from our coal and gas plants, and we were doing mostly kind of post-mortem (analysis),” says Andy Johnson, engineering supervisor for power generation services at SRP. “After something occurred with a specific piece of equipment, we would go back into that data and try to identify what were the causes of those issues.” In learning more about the emerging field of predictive analytics, Johnson says, SRP saw “that having this data was a very valuable resource, but...we weren’t doing enough with it.”

SRP worked with GE to deploy a predictive analytics software program, GE’s SmartSignal, at a single pilot site. The utility already had several years’ worth of asset data from the site; this data was built into predictive analytics models that SRP was eager to use as “an early warning system of potential issues” with the equipment it was using, Johnson says. “As a side benefit,” he adds, “we were also able to begin moving from kind of calendar-based maintenance to more condition-based maintenance activity.”
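In broad strokes, early-warning models of this kind learn what "normal" looks like from historical operating data, then flag readings that drift away from the prediction before a fixed alarm setpoint would ever trip. Here is a minimal sketch of that idea in Python; it is not SmartSignal's actual algorithm, and the sensors, coefficients, and three-sigma threshold are illustrative assumptions:

```python
# Minimal residual-based early warning: model normal behavior from history,
# then flag readings that deviate from the prediction. Illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Synthetic "historical" data: bearing temperature driven by load and ambient.
n = 2000
load = rng.uniform(0.5, 1.0, n)           # unit load fraction
ambient = rng.uniform(10, 40, n)          # ambient temperature, deg C
bearing = 45 + 30 * load + 0.5 * ambient + rng.normal(0, 1.0, n)

# Fit a model of *normal* behavior on the historical record.
X = np.column_stack([load, ambient])
model = LinearRegression().fit(X, bearing)
sigma = np.std(bearing - model.predict(X))  # typical residual spread

def early_warning(load_now, ambient_now, bearing_now, k=3.0):
    """Flag a reading whose residual exceeds k standard deviations."""
    expected = model.predict([[load_now, ambient_now]])[0]
    residual = bearing_now - expected
    return residual > k * sigma, residual

# A reading that is normal for the conditions...
print(early_warning(0.9, 25, 45 + 30 * 0.9 + 0.5 * 25))  # (False, ~0)
# ...and one running hot for the same conditions: the warning fires well
# below any fixed high-temperature alarm setpoint.
print(early_warning(0.9, 25, 92.0))                      # (True, ~7.5)
```

The key point for condition-based maintenance is that the alarm is relative to expected behavior under current conditions, not an absolute limit.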

GE Digital itself was building out and fine-tuning the software as early adopters such as SRP were using it – the relationship was collaborative from the beginning, say Johnson and GE Digital’s Chad Stoecker, who was involved with the implementation. “We got this very early preview that gave us an early view of what we could do and how we could take it fleetwide,” Johnson says. Adds Stoecker: “With all of our customers, we’re always trading ideas back and forth…we’re all trying to go to the same thing, which is to create a safer work environment, a more environmentally efficient work environment, a more profitable industrial work environment.”

In 2012, SRP was ready to expand use of the predictive analytics models across its sites. But before making that move, SRP made a big decision: It decided to pull management of the models in-house, recruiting and training a team of its own performance analysts and engineers to oversee the asset performance management tools and make specific maintenance recommendations to different SRP facilities.

Why? “We thought, you know, by having our own staff looking at these models, maintaining these models...it gives you the opportunity to have that built-in trust factor,” Johnson says. And as any manager charged with overseeing deployment of new technology knows, earning the confidence of workers who will interact with the new technology – and with technical support teams – is no small task.

“Sometimes you worry, are the plants going to trust you? Are they going to see you as Big Brother looking over their shoulder, or are they going to see you as your co-worker, your friend watching your back for you?” Johnson says. “One thing we’ve been very conscious about is building that trust, and by having our own people internally do the monitoring, modeling, and maintenance of the models, we’re able to build that trust and have that built into our center.”

It was a strategic and carefully planned move, and one made easier by the technical and logistical support that GE Digital provided both before and after the responsibility shift, Johnson comments. “They didn’t just cut us loose when we began monitoring in-house; they’ve always been a partner to us and provided their expertise when we needed it,” he says.

Stoecker details the transition process: “We did some combined cycles and we monitored them for a while; we basically did the predictive maintenance functions for them so they could experience the benefits ... then, over time, they kept building up (with) more and more assets to the point where they built out their fleet, and at that point we transitioned services over to them completely.”

Building the right internal team to manage the maintenance models and be champions of this predictive maintenance approach was crucial. SRP’s maintenance modeling team consists of three performance monitoring analysts and two performance engineers. Each of the monitoring analysts has 15 to 30 years of plant-level experience, Johnson notes. “They know the equipment; they know the people; and they’re also pretty technologically savvy,” he says. “They were identified as the people who were already working with the data.”

He adds: “Sometimes at sites you have people who want to steer as far away from using software as they can, and some people embrace it and they really want to use it. So we identified those people who were really interested and had that drive to learn more and really dive deep into software.” All team members have their own sites to monitor and all have their own roles; engineers focus more on thermal performance as well as server and network management, for example, says Johnson.

Getting the SRP team to the point of maintaining and sustaining a PdM modeling program on its own was about more than developing technical proficiency, Johnson and Stoecker agree. “It is really about the digital transformation journey,” Stoecker says. “It’s about a culture change that every company has to go through, to shift from reactive to proactive.”

And the SRP team learned quickly how much of a time commitment managing the maintenance models is. “The thing that surprised us I think is the amount of work that’s associated with maintaining those models and what a drain on resources that can be if not managed properly,” Johnson says. For the models to be as accurate as possible, they need to draw from the most up-to-date information possible, he notes – and that requires some periodic “retraining” by the experts.

“When temperatures change, you have to retrain your models to reflect current conditions,” he says, rather than performance over the entire past year. “Or when you have an outage, you have to retrain your models to have the most recent equipment operating conditions.”
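In code terms, that retraining amounts to refitting the normal-behavior model on a recent operating window rather than on the whole historical record. The sketch below assumes hourly data and a 30-day window; both are illustrative choices, not SRP's actual settings:

```python
# A hedged sketch of the "retraining" Johnson describes: refit the
# normal-behavior model on a recent operating window (say, the weeks since
# a seasonal shift or an outage) instead of the entire past year.
# The hourly cadence and 30-day window are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

def retrain(history_X, history_y, window=30 * 24):
    """Refit on roughly the last 30 days of hourly data; return the
    refreshed model plus its residual spread for setting alarm thresholds."""
    X, y = history_X[-window:], history_y[-window:]
    model = LinearRegression().fit(X, y)
    sigma = float(np.std(y - model.predict(X)))
    return model, sigma

# Demo on synthetic data: the refit sees only the most recent rows.
rng = np.random.default_rng(0)
X_hist = rng.uniform(0.5, 1.0, (8760, 1))      # a year of hourly load data
y_hist = 50 + 30 * X_hist[:, 0] + rng.normal(0, 1, 8760)
model, sigma = retrain(X_hist, y_hist)
print(round(sigma, 2))                         # residual spread of refit model
```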

Beyond the time demands of keeping the models as accurate as possible, the biggest challenge, Johnson says, has been prioritizing the maintenance issues identified. “We can’t just throw everything out at the plant and let them deal with it,” he says. “We do a lot of investigation of the issues that are identified before we send anything on to the plant. And if we do send it on to the plant, we track it. We monitor any work orders associated with that issue.” That issue-tracking database contains information such as the work site, the unit, the specific equipment identified and serviced, and communication between the plant and the maintenance modeling team, along with screenshots from the applicable maintenance model and the operating-condition trends that were identified. This not only leaves a digital trail of recommendations made and actions taken but also aids the maintenance team’s efforts to quantify the value of its work.
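A record in such a tracking database might look something like the following sketch; the field names are assumptions inferred from the items Johnson lists, not SRP's actual schema:

```python
# A plausible shape for one record in an issue-tracking database like the
# one described above. Illustrative only; not SRP's actual data model.
from dataclasses import dataclass, field

@dataclass
class IssueRecord:
    site: str                      # work site
    unit: str                      # generating unit
    equipment: str                 # specific equipment identified and serviced
    summary: str                   # what the model flagged
    work_orders: list[str] = field(default_factory=list)        # linked work orders
    communications: list[str] = field(default_factory=list)     # plant <-> modeling team notes
    model_screenshots: list[str] = field(default_factory=list)  # image file paths
    trend_snapshots: list[str] = field(default_factory=list)    # operating-condition trends
    is_save: bool = False          # plant had no prior knowledge and took corrective action
    estimated_value_usd: float | None = None  # estimated monitoring value, if a save

record = IssueRecord(site="Plant A", unit="Unit 2",
                     equipment="boiler feed pump 2B",
                     summary="rising vibration vs. model prediction")
```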

“We quantify it as a save if the plant had no prior knowledge of an issue and it was able to take some type of corrective action based on the information that we provided,” Johnson says. “And anything that’s a save, we do an estimated monitoring value on. What was the value of that information that we provided; what did we help prevent?”

To answer those questions, the maintenance modeling team uses an application it developed that attaches a probability to best-case, moderate, and worst-case scenarios had the issue not been identified and addressed. If the problem hadn’t been tackled, would downtime likely have occurred? How much? How might unit efficiency have been affected?
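That calculation is, in effect, a probability-weighted expected value across the scenarios. A worked example, with purely illustrative probabilities and avoided-cost figures rather than SRP's numbers:

```python
# Probability-weighted "estimated monitoring value" for one save.
# Scenario labels, probabilities, and dollar figures are illustrative.
scenarios = [
    # (label, probability, avoided cost in USD if the issue had gone unaddressed)
    ("best case: caught at next scheduled outage", 0.5, 20_000),
    ("moderate: forced derate for several days",   0.3, 150_000),
    ("worst case: unplanned unit trip and repair", 0.2, 900_000),
]

assert abs(sum(p for _, p, _ in scenarios) - 1.0) < 1e-9  # probabilities sum to 1

estimated_value = sum(p * cost for _, p, cost in scenarios)
print(f"Estimated monitoring value: ${estimated_value:,.0f}")  # $235,000
```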

“That’s how we determined our overall estimated monitoring value for a specific save,” Johnson says. That kind of calculation isn’t required by SRP’s executive management team, he notes, but such proof of value is important nonetheless – “if not for our current operations, for our future operations,” he says.

Johnson and Stoecker emphasize that the work of shifting to a more-proactive maintenance approach is a process, and it remains for SRP a work in progress, more than 10 years after the utility first began exploring the potential of predictive analytics. “It’s not a one-week kind of thing where you just turn on the software and your whole world changes,” Stoecker says. “How are you going to transform your people, help train them and (adjust) processes to really take advantage of the predictive analytics?”

For SRP, the success of this effort to shift the maintenance culture has hinged on developing an internal team of experts and interested advocates for the technology. “This is such a quickly advancing field and such an interesting field to be in at the moment,” Johnson says.