Keysight Technologies
Understanding Measurement Risk




White Paper

Michael Dobbert, Keysight Technologies




1 Abstract
One key reason for performing a calibration is to assess a device as either in- or out-of-
tolerance. Common calibration test scenarios compare a device parameter against that of
a measurement standard by way of a measurement process. If the difference between the
device parameter and the measurement standard is greater than the specified tolerance, the
device is deemed out-of-tolerance. However, errors in the measurement process bring about
the possibility of an incorrect assessment. An incorrect assessment may result in devices
incorrectly declared as in-tolerance (false-accept) or incorrectly declared as out-of-tolerance
(false-reject).

The risk of making an incorrect in- or out-of-tolerance assessment can be determined by
evaluating probability density functions that incorporate a device's parameter population and
the measurement error. This paper provides an intuitive explanation of these probability density
functions, drawing on Monte Carlo simulation to demonstrate the relationship between a
device's true value and the corresponding measured value.
2 Introduction

In manufacturing facilities throughout the world, test engineers design measurement
procedures for manufacturing purposes. It is common for test engineers to rely
on the specifications of measuring equipment to assess the accuracy of the
measurement procedures. This creates a dependency between the measuring
equipment specifications and the quality of the manufacturing process. To maintain
manufacturing process quality, the measuring equipment requires periodic calibration.

For the above scenario, one of the primary purposes of calibration is to verify that
the measuring equipment performs at a level consistent with the equipment's
specifications. In other words, is the measuring equipment in- or out-of-tolerance?

Frequently, calibration involves comparing a device parameter (that is, a parameter
of the measuring equipment) against that of a measurement standard. For example,
assume we wish to calibrate an RF power source with a power meter. The purpose
of the calibration is to assess the RF power source error (the difference between the
indicated power and the true power supplied by the source) and determine if it is
less than a specified tolerance. If it were possible to use a perfect power meter and
a perfect measurement procedure, determining the RF power source's error would simply be
a matter of noting the difference between the power meter's reading and the indicated
value of the RF power source. However, since a real-world power meter is not perfect,
knowing the exact RF power source error is not possible. Our lack of knowledge
about the exact error is what gives rise to the possibility of declaring a device as
in-tolerance when it is actually out-of-tolerance (false-accept) or declaring a device
as out-of-tolerance when it is actually in-tolerance (false-reject).
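To make these two failure modes concrete, the short sketch below runs the kind of Monte Carlo simulation this paper draws on: true device errors are drawn from an assumed normal population, a normally distributed measurement error is added to produce the measured value, and each device is accepted or rejected by comparing the measured value against the tolerance limit. The tolerance limit and both standard deviations are illustrative assumptions, not values from this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 1_000_000        # number of simulated calibrations
tol = 1.0            # assumed tolerance limit (+/-), arbitrary units
sigma_dut = 0.5      # assumed std. dev. of the true device-error population
sigma_meas = 0.25    # assumed std. dev. of the measurement error

true_error = rng.normal(0.0, sigma_dut, N)              # true device errors
measured = true_error + rng.normal(0.0, sigma_meas, N)  # observed (measured) errors

in_tol = np.abs(true_error) <= tol    # device actually in-tolerance
accepted = np.abs(measured) <= tol    # device declared in-tolerance

false_accept = np.mean(accepted & ~in_tol)   # declared in, actually out
false_reject = np.mean(~accepted & in_tol)   # declared out, actually in

print(f"false-accept probability: {false_accept:.4%}")
print(f"false-reject probability: {false_reject:.4%}")
```

With these assumed numbers both probabilities are small but nonzero, and increasing the measurement error (a larger sigma_meas) raises both risks; this is exactly the relationship between a device's true value and its measured value that the probability density functions described in the abstract capture.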

3 Device Error
From our example, the RF power source error is the value that we wish to
compare against the tolerance limit. The RF power source displays the power
level it purports to output. The RF power source error is the difference between
the purported output power level and the actual true value [7] of the power.
Expressed mathematically,

e_dut = n_dut − x_true

where n_dut is the power level indicated (purported) by the RF power source and x_true
is the true value of the output power.
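As a brief numeric illustration (the power levels here are hypothetical, chosen only to
show how the definition is applied): suppose the RF power source indicates
n_dut = 10.00 dBm while its true output power is x_true = 10.15 dBm. Then

e_dut = 10.00 dBm − 10.15 dBm = −0.15 dB

and the source is in-tolerance only if 0.15 dB does not exceed the specified tolerance.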