The good and bad of OEE

March 9, 2009
David Berger, P.Eng., contributing editor, says overall equipment effectiveness is a powerful, meaningful metric if you're aware of what it excludes.

For the past 10 years, I’ve witnessed a steady rise in the interest senior management displays regarding asset management. This trend coincides with the increase in asset cost and complexity, especially plant equipment, making it even more important to manage assets well. Not surprisingly, in response to these trends, top management has been demanding greater visibility into asset health, better control of costs, and improved asset effectiveness. Therein lies the reason behind the steady popularity gain of measuring overall equipment effectiveness (OEE).

Definition

OEE is an asset’s actual output divided by the theoretical maximum output, expressed as a percentage. The metric is commonly quantified as the product of availability times performance times quality.

This formula is based on the premise that there are three main impediments to achieving the theoretical maximum. First of all, if the asset experiences downtime and is unavailable to produce, then it can’t achieve maximum output. Secondly, output is lower if the asset’s performance is suboptimal. For example, it might be restricted to a lower speed than the theoretical maximum. Finally, even if the asset is always available and operates at full speed, a percentage of its output might be of unacceptable quality, thereby rendering it impossible to achieve the theoretical maximum output.

Because OEE is the product of three decimal fractions, it’s highly sensitive to change. For example, suppose your average uptime for some machine is only 93%, and the average speed at which it runs is about 90% of the designed maximum. The quality assurance department estimates that the average acceptable quality is about 95%, based on the percentage of scrap, waste, rejects and other losses. OEE in this example is then calculated as follows:

OEE = 0.93 x 0.90 x 0.95 = 0.795, or just under 80%

Suppose you implement an exhaustive improvement program and experience a rise in availability, performance and quality to 97% each. Even with such a dramatic increase in the three separate measures, the OEE is now at a mere 91.3%. There’s clearly lots of room for further improvement. An outstanding OEE for any piece of production equipment is about 93%. Companies find it progressively difficult to squeeze out greater improvement when OEE climbs above the 90% mark.
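The arithmetic above can be sketched as a small helper (a minimal sketch; the function name and the rounding in the comments are illustrative, not part of any standard):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall equipment effectiveness: the product of the three factors."""
    return availability * performance * quality

# The worked example: 93% uptime, 90% of design speed, 95% acceptable quality.
baseline = oee(0.93, 0.90, 0.95)   # ~0.795, just under 80%

# After an improvement program lifts all three factors to 97%:
improved = oee(0.97, 0.97, 0.97)   # ~0.913, still only 91.3%
```

Because the three fractions multiply, every component must be high for OEE to be high, which is exactly why the metric is so sensitive to change.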

The CMMS can be helpful in tracking OEE. Certainly any ERP package with a fully integrated CMMS/EAM module can track availability, performance and quality. Even if OEE isn’t an out-of-the-box, standard report, it’s an easy report to generate if the system is capturing data accurately. A few best-of-breed CMMS packages track OEE as well, with the assumption that some of the data either will be keyed manually into the CMMS or captured electronically through an interface with your ERP system.

The good

The emergence of OEE has been a powerful influence in many asset-intensive industries. It brings a sharp focus on major gaps in the efficiency and effectiveness of both maintenance and operations. Its key benefits are many.

At least you’re measuring: There are many companies today that run from one fire to another, struggling to meet production commitments. OEE has allowed companies in this situation to start measuring the extent of the problem, and track progress in changing the fire-fighting mentality.

It’s an important measure: If you have to choose a measure to focus on, OEE isn’t a bad option. It’s composed of three high-impact factors that, if optimized, can drive considerable cost savings and increased revenue.

Operations, maintenance and engineering must work together: Maximizing OEE requires cooperation between departments that can help determine the root causes of downtime, low cycle times and quality problems.

The bad

Although OEE is a good measure, it’s not a panacea. Be aware of the pitfalls if you put all of your eggs in the OEE basket.

You miss key measures that trade-off: OEE is a combination of three measures. However, there are an additional three metrics that must be considered: utilization, reliability and total cost of ownership. Be careful that you don’t jack up the OEE at the expense of one or more of these other measures.

For example, suppose machine #1 has an OEE of 90% and an identical machine #2 has an OEE of 75%. Now, suppose the mean time between failures (MTBF) for machine #1 is twice that of machine #2. Which is the better-performing machine? What if, for machine #1, the total cost of ownership needed to achieve an OEE of 90% is twice that of machine #2? Now which machine is preferred?

What if OEE for machine #2 is lower because it’s not running 24/7 like machine #1? Maybe its utilization is lower, even though its mechanical downtime is identical to that of machine #1. Which machine is your preference in this scenario? Thus, it’s important to track the complete set of six asset-related measures that trade off against each other, rather than focusing only on the three buried within OEE.

You are missing major cost drivers: By focusing on OEE but ignoring the key components of total cost of ownership, you might be glossing over critical cost drivers such as energy consumption, carbon emissions and even the spare parts required to maintain the asset.

OEE needlessly obscures the root cause: In expressing OEE as the product of three variables, you produce a superfluous, two-step process for determining what is behind the number. In my view, there’s greater value and simplicity in tracking the three individual measures, along with the other key measures mentioned above.

For example, suppose OEE = 75% for a given asset. This number only provides a clue that there might be a problem with availability, performance or the quality of output. Perhaps availability = 99%, performance = 99%, and quality of output = 76%. Alternatively, maybe availability = 76% and quality of output is 99%, or perhaps all three measures are roughly equal at about 91% each. You can’t take decisive action until you drill down on OEE to determine the root cause. So, why not just report the measures separately?
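The ambiguity can be demonstrated directly: very different component profiles collapse to roughly the same OEE, yet each calls for a completely different corrective action (a sketch using the article's example figures; the scenario labels are illustrative):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """OEE as the product of availability, performance and quality."""
    return availability * performance * quality

# Three distinct root causes, all reporting roughly the same headline number.
scenarios = {
    "quality problem":      (0.99, 0.99, 0.76),
    "availability problem": (0.76, 0.99, 0.99),
    "evenly spread":        (0.91, 0.91, 0.91),
}
for name, (a, p, q) in scenarios.items():
    print(f"{name}: OEE = {oee(a, p, q):.0%}")
```

Every scenario lands near 75%, so the headline OEE alone cannot tell you whether to call maintenance, speed up the line, or fix the quality problem.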

OEE encourages cheating: By focusing on a single measure to evaluate the performance of an asset, plant or person, you increase the pressure on people to manipulate the results. One favorite way to increase OEE is to base one or more ratios on plan versus theoretical maximum.

For example, suppose you run an asset for only one eight-hour shift, five days per week. The maximum possible availability would then need to be calculated as (8x5)/(7x24) = 24%, not 100%. This ignores the measure called utilization, a problem discussed above. If the machines are idle for one hour each day during lunches and breaks, then planned availability drops to 21%. Similarly, if the normal speed of the machine is 10 units/hour, but the theoretical maximum is 12, then the planned performance = 10/12 = 83%, not 100%. Finally, if the expected and historical average scrap rate is 5%, then the maximum you could ever achieve for quality of output would be 95%, not 100%. This translates into a planned OEE of about 17%, not the 100% many managers try to claim.
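The plan-based ceiling described above works out as follows (a sketch using the article's figures; the variable names are illustrative):

```python
HOURS_PER_WEEK = 7 * 24          # 168 calendar hours in a week

# One eight-hour shift, five days a week, minus a one-hour break each day.
scheduled_hours = (8 - 1) * 5    # 35 productive hours per week
planned_availability = scheduled_hours / HOURS_PER_WEEK   # ~0.21

planned_performance = 10 / 12    # normal 10 units/h vs. theoretical max of 12
planned_quality = 1 - 0.05       # historical 5% scrap rate

planned_oee = planned_availability * planned_performance * planned_quality
# ~0.165, i.e. about 17% against the theoretical maximum, not 100%
```

Reporting against this plan rather than the theoretical maximum is precisely what makes the headline number easy to inflate.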

In theory, OEE can never be greater than 100%. However, if you define OEE based on plan rather than theoretical maximum, then you can easily manipulate the numbers to work in your favor, such as by running the asset for an additional shift, cranking the speed temporarily to 11 units/hour, or reducing the scrap rate by holding off on unnecessary setups and changeovers.

E-mail Contributing Editor David Berger, P.Eng., partner, Western Management Consultants, at [email protected].
