You’ve heard the catchphrases. You’ve read the success stories. You’ve seen the ROI calculations and the positive trends that reliability engineers deliver. You’ve even lobbied upper management. Your argument has been so convincing that you’ve secured approval in this year’s budget to hire a reliability engineer. The hiring process has begun and you can’t wait for the results: more reliable equipment, fewer failures, increased availability and production.
If only it were that easy. Building a reliability program led by competent professionals takes work. It requires setting expectations properly. Most of all, it requires time. How do you accelerate the process to ensure the maximum benefit in the shortest time? Avoiding these top five time wasters could help.
Lack of vision
No, we aren’t talking about whether your reliability engineer could have been a fighter pilot or whether he needs bifocals, although not being able to read the computer screen while reviewing an asset hierarchy certainly will slow one down. A lack of an overall vision for your facility’s reliability program is the No. 1 cause of inefficiency and wasted effort, not just in a reliability program, but in any initiative.
Lack of education
Before I became a reliability engineer, I worked as a project engineer improving equipment performance through sound engineering principles. I often was involved in maintenance engineering activities, troubleshooting equipment failures as they occurred in the hopes of getting the plant back up and running as soon as possible. My boss had read an article about the benefits of having a staff reliability engineer, and literally overnight the reliability engineer position was created, and I was asked to take the job.
What changed? The title on my business card and the signature line on my e-mails. I continued to perform the same maintenance engineering activities without any real strategic direction, for we had no vision of what we wanted our reliability program to look like.
I worked within my means throughout the next year to learn more about what true reliability engineering was. I learned the difference between a run-to-failure maintenance strategy, which we had, by neglect, been practicing at our facility for years, a time-based maintenance strategy and a condition-based maintenance strategy. More importantly, I learned that a smart combination of the three strategies is ideal, depending on the criticality of the system in question. I educated my boss. He educated his boss. We learned that reliability best practices are well-established; there’s no need to reinvent the wheel.
We also learned that there are no shortcuts. For example, if you want accurate failure data, you need a computerized maintenance management system (CMMS) in place. It needs an accurate hierarchy that captures the parent/child relationships in your plant or facility. The system must be populated, all the time, by operations and maintenance with work requests, failure codes and material/labor dollars so that accurate analysis can be performed. Building an accurate hierarchy takes time and money, and there are companies that have done it before and can help make the process less painful. It’s the foundation of any reliability program and, like the foundation of a house, if built poorly it eventually will bring the whole structure down.
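To make the parent/child idea concrete, here is a minimal sketch of such a hierarchy with maintenance dollars rolled up the tree for analysis. The asset names, failure codes and dollar figures are hypothetical, and a real CMMS carries far more detail; this only illustrates why the hierarchy matters for accurate cost roll-ups.

```python
# Minimal sketch of a CMMS-style asset hierarchy (illustrative only).
class Asset:
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        self.work_orders = []  # (failure_code, cost) entries logged against this asset
        if parent:
            parent.children.append(self)

    def total_cost(self):
        """Roll maintenance dollars up through the parent/child tree."""
        return (sum(cost for _, cost in self.work_orders)
                + sum(child.total_cost() for child in self.children))

# Hypothetical plant structure
plant = Asset("Plant")
line1 = Asset("Packaging Line 1", parent=plant)
pump = Asset("Transfer Pump P-101", parent=line1)

pump.work_orders.append(("BRG-FAIL", 4200.0))  # bearing failure
line1.work_orders.append(("CONV-JAM", 850.0))  # conveyor jam

print(plant.total_cost())  # 5050.0 — costs roll up to the plant level
```

Without an accurate hierarchy, those work orders land on the wrong nodes (or none at all), and any analysis built on top of them inherits the error.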
No master plan
Once there’s an understanding of reliability best practices, you need a plan that’s in line with the reliability program’s overall vision. Develop a clear methodology that can be understood at all levels of your organization. It should start with an assessment of current-state reliability practices — where are you and your facility now in your reliability journey? — along with well-defined tasks that can bring your program in line with your vision. As with any good project plan, these tasks should have well-defined dates and persons responsible for completion. Resources such as people, money and time should be defined, and proper approval from site leadership must be obtained.
Communication is vital to ensure proper expectations are set and maintained. If site leadership doesn’t understand, for example, that before maintenance plans can be developed and optimized, site hierarchy needs to be established and a criticality assessment performed, then they’ll wonder what you’re working on in all of those cross-functional development meetings, all the while not seeing the desired results.
Lack of prioritization
Prioritization of effort is a challenge we all face, regardless of industry or occupation. In this economic environment we’re constantly asked to do more with less, and there are only so many hours in the day. Where and how we, as reliability engineers, spend our time is of utmost importance. The reliability engineer’s responsibilities are vast and overwhelming: developing cost-effective maintenance strategies for critical equipment; conducting root cause analysis investigations to eliminate or mitigate repetitive failures; implementing effective corrective actions; developing and ensuring proper use of facility management of change (MOC) policies and procedures; identifying limiting factors that lead to high energy, utility, maintenance and supply chain costs — the list goes on and on, but the question remains the same: where do you start?
You and your site leadership can determine where the priorities are and address the most critical items first if you have a well-defined vision and master plan and the proper education.
No measuring stick
How do you know how well you’re performing? Are you and the reliability program getting the results you desire? Are they in line with the overall vision set forth in your reliability program? How can you tell?
Waiting for the year-end numbers is like looking at the final score of a football game to see how your team is doing: you might get the information you want, but far too late to make the necessary adjustments. A robust reliability program has a strong mix of both leading indicators, which tell you how you are doing day in and day out, and lagging indicators, which tell you how you did.
If I wanted to evaluate the success of a facility’s PdM program, I’d need to understand the number of critical assets under surveillance, whether inspections are performed on time, the percentage of identified issues that get entered into the CMMS (leading indicators), as well as the overall program costs, dollar value of saves and annualized cost per unit asset (lagging indicators). Once understood, such metrics foster informed decisions.
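The PdM evaluation described above can be sketched as a handful of simple ratios. All of the figures below are hypothetical, assumed only for illustration; the point is how leading indicators (schedule compliance, CMMS capture rate) and lagging indicators (program ROI, cost per asset) fall out of data the CMMS should already hold.

```python
# Hypothetical PdM program data (illustrative figures only)
inspections_due = 120
inspections_done_on_time = 108
issues_found = 25
issues_entered_in_cmms = 22

program_cost = 95_000.0       # annual PdM program cost, dollars
value_of_saves = 240_000.0    # estimated avoided-failure dollars
assets_surveyed = 150         # critical assets under surveillance

# Leading indicators: how the program is running day to day
schedule_compliance = inspections_done_on_time / inspections_due  # 0.90
cmms_capture_rate = issues_entered_in_cmms / issues_found         # 0.88

# Lagging indicators: how the program did
program_roi = value_of_saves / program_cost          # ~2.5x return
cost_per_asset = program_cost / assets_surveyed      # ~$633 per asset per year
```

A weak leading number (say, capture rate slipping below 0.80) flags a problem months before it shows up in the lagging dollars.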
There’s no sense in reinventing the wheel. Such indicators should be clear, be concise and make sense for where you and your facility are on your reliability journey.
For reference, SMRP compiled its 2010 Best Practice Metrics, available for a fee at promocorpstore.com/smrp/compendium.html. For international practitioners out there, SMRP also teamed with the European Federation of National Maintenance Societies (EFNMS) to compare and document standard indicators for maintenance and reliability performance, available at promocorpstore.com/smrp/harmonized-indicators.html.
Whether or not you know it, all of us are on a reliability journey. Just as there’s no job that goes unplanned — even an unplanned job gets planned haphazardly during the execution process — we and the organizations we support make decisions every day that affect how smoothly the trip goes. Proper alignment and planning help avoid the first three time wasters. Proper execution circumvents the fourth time waster. And proper control mitigates the final time waster. Comparison to industry-accepted standards shows us how far we’ve come and how much further there is to go.
Josh Rothenberg is reliability subject matter expert at Life Cycle Engineering in Charleston, South Carolina. Contact him at email@example.com and (800) 556-9589.