Keith Mobley said it succinctly in his November 2002 column: "Effective preventive maintenance requires reliable, accurate instrumentation and gauges." An element of preventive maintenance is regularly testing and calibrating process instrumentation.
Test and calibration devices generally fall into three grades: industrial, instrument and laboratory (Table 1). Laboratory-grade equipment is used under controlled, stable environmental conditions. Industrial-grade devices are applied broadly for testing or monitoring, usually with whole-number resolution only. Instrument-grade equipment is used for tolerance validation, testing and calibration. This grade has at least four times the accuracy of the instrument to be calibrated or tested, typically with full-scale temperature compensation.
Accuracy varies

The accuracy specification refers to the degree of uncertainty associated with a measurement. Each manufacturer expresses accuracy in a format favorable to its own equipment, which makes comparison a challenge.
Accuracy specifications are stated as full-scale, range, percent of scale, or percent of reading or indicated value, and may include temperature compensation. For digital instrumentation, counts or digits are often part of the specification. To interpret these varied descriptions of accuracy, calculate the magnitude of the measurement uncertainty.
Consider a pressure transmitter with a range of 0 to 30 psi and an overall accuracy specification of 1% of full scale. The measurement uncertainty is 0.3 psi (1% of 30 psi) throughout the entire range. At a reading of 15 psi, for example, the true value can lie anywhere between 14.7 psi and 15.3 psi.
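This arithmetic is simple enough to sketch in a few lines of Python (the helper function name is illustrative, not from any standard library):

```python
def uncertainty_band(reading, full_scale, accuracy_pct_fs):
    """Bounds on a reading under a percent-of-full-scale accuracy spec.

    The uncertainty is fixed across the whole range: full scale
    multiplied by the accuracy percentage."""
    u = full_scale * accuracy_pct_fs / 100.0
    return reading - u, reading + u

# 0 to 30 psi transmitter, 1% of full scale: +/-0.3 psi at every reading
low, high = uncertainty_band(15.0, full_scale=30.0, accuracy_pct_fs=1.0)
print(round(low, 2), round(high, 2))  # 14.7 15.3
```

Note that the band is the same width at 3 psi as at 30 psi, which is why a percent-of-full-scale spec is least forgiving at the bottom of the range.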
Table 1. Test and calibration equipment can be graded according to its accuracy and use.
Determine requirements

In practice, many companies have adopted a minimum 4:1 accuracy ratio: the measurement standard must be at least four times as accurate as the unit under test over the given range.
The measurement standard range must be as close as possible to the unit under test. Compare the accuracy at points throughout the measurement range.
For example, compare the transmitter above (0 to 30 psi, 1% full-scale accuracy, 4:1 required accuracy ratio) to an industrial-grade test gauge with a 0 to 100 psi range and 0.25% full-scale accuracy. (Comparing only full-scale accuracy, one might guess the test gauge itself can be used as a calibration standard.) Just for fun, let's also compare a dedicated, instrument-grade hand-held pressure calibrator with the same 0 to 100 psi range but 0.025% full-scale accuracy.
Remember that the measurement uncertainty applies across the full scale (0 to 100 psi) in both cases. Table 2 illustrates the results. The test gauge, with a 1.2:1 accuracy ratio, is inappropriate as a calibration standard, though it remains suitable as a test or indicator gauge. The dedicated hand-held pressure calibrator, with an uncertainty of 0.025 psi and a 12:1 accuracy ratio, is an appropriate calibration standard.
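Because every quantity involved is a percent-of-full-scale uncertainty, the comparison reduces to a single division. A minimal sketch of the Table 2 arithmetic (the function name is an illustration, not an industry API):

```python
def accuracy_ratio(uut_fs, uut_acc_pct, std_fs, std_acc_pct):
    """Ratio of unit-under-test uncertainty to standard uncertainty.

    Both accuracy specs are percent of full scale, and each uncertainty
    applies across that device's entire range."""
    uut_uncertainty = uut_fs * uut_acc_pct / 100.0  # e.g. 30 psi * 1% = 0.3 psi
    std_uncertainty = std_fs * std_acc_pct / 100.0
    return uut_uncertainty / std_uncertainty

# 0-30 psi, 1% FS transmitter vs. 0-100 psi, 0.25% FS test gauge
print(round(accuracy_ratio(30, 1.0, 100, 0.25), 2))   # 1.2 -- fails the 4:1 rule
# ...vs. 0-100 psi, 0.025% FS hand-held calibrator
print(round(accuracy_ratio(30, 1.0, 100, 0.025), 2))  # 12.0 -- passes
```

A ratio of 4.0 or greater satisfies the 4:1 rule; anything below it disqualifies the candidate as a calibration standard.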
Now select a more accurate transmitter with, say, 0.1% full-scale accuracy over the same range. Is that hand-held pressure calibrator still appropriate? Table 3 shows that the dedicated hand-held calibrator is not an appropriate calibration standard if a 4:1 accuracy ratio is required. In fact, even if the calibrator's accuracy were 0.01%, it would still provide only a 3:1 accuracy ratio. A hand-held calibrator with the same scale as the unit under test, however, is an appropriate standard.
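The same one-line check reproduces the Table 3 outcome and shows why matching the standard's range to the unit under test matters (again a sketch; the helper is hypothetical):

```python
def accuracy_ratio(uut_fs, uut_acc_pct, std_fs, std_acc_pct):
    # uncertainty = full scale x accuracy percentage; the 1/100 factors cancel
    return (uut_fs * uut_acc_pct) / (std_fs * std_acc_pct)

# 0-30 psi, 0.1% FS transmitter against three candidate standards:
print(round(accuracy_ratio(30, 0.1, 100, 0.025), 1))  # 1.2 -- 0-100 psi calibrator fails
print(round(accuracy_ratio(30, 0.1, 100, 0.010), 1))  # 3.0 -- even 0.01% FS still fails
print(round(accuracy_ratio(30, 0.1, 30, 0.025), 1))   # 4.0 -- same-range calibrator passes
```

The last line is the key point: shrinking the standard's full scale from 100 psi to 30 psi shrinks its uncertainty proportionally, which is what restores the 4:1 ratio.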
The calibration procedure should clearly specify the appropriate standard so these calculations need to be performed only once.