



Keysight Technologies
Language of Specifications



White Paper



What Do Specifications Mean?
Hitting the Mark
Specifications describe a product's capability, but some basic terms are often misunderstood. Has this dart been thrown accurately, or precisely? Is there a
difference? This article explains some of the arcane language used to describe a product's characteristics.

Thumb through any instrument specification and you are presented with a whole host of technical terms describing the product's capability. Some basic
ones are often misunderstood, though -- accuracy, precision, resolution and sensitivity spring to mind.


Basic Terminology
Experience has shown that some basic metrological terms are often confused. What is the difference between accurate and precise,
resolution and sensitivity, instability and noise? We'll use some graphics to illustrate. First, here are some archery or shooting
targets. Four marksmen were aiming for the center "bull's-eye". This is analogous to making a perfect measurement, with the "bull"
being the conventional "true value". So, take aim and fire five rounds...




Looking at the first target (above left), the shots are widely distributed and mostly
off-target -- this guy's obviously a beginner, both inaccurate and unrepeatable.
However, is the second marksman (above right) much better? These shots are closely grouped but they've all missed the target
completely! He's precise but inaccurate. On to the third (below left) and our man has reliably hit the target but the shots are dispersed
-- so we have accuracy (two in the "bull") but imprecision. Of course, the final target shows the way it should be done -- an Olympic
champion's performance perhaps -- little deviation from "true" every time, showing both accuracy and precision.




As far as calibration is concerned, the attribute accurate often also implies precise, but it's worth remembering that this may not be the case.
Conversely, a supplier who claims his product is precise may not be making any claim at all for its correctness (its relationship to
national standards) -- be warned!

The degree of accuracy and precision results from the combined effects of the measuring equipment, the technique, the environmental conditions
and the characteristics of the item being tested. If a series of repeated measurements were made and the data plotted as a histogram
(bar graph), the shape described by the bar heights represents the distribution.
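
If you'd like to see this for yourself, here is a minimal Python sketch (our own illustration, not from the paper; the 10 V source, offset and spread are all invented) that simulates repeated measurements and prints a crude text histogram:

    # Illustrative sketch: simulate repeated readings of a hypothetical
    # 10.000 V source. OFFSET stands in for systematic error (accuracy);
    # SPREAD stands in for random scatter (precision). Figures are invented.
    import random
    from collections import Counter

    TRUE_VALUE = 10.000   # conventional "true value", volts
    OFFSET = 0.010        # hypothetical systematic error
    SPREAD = 0.005        # hypothetical random scatter (standard deviation)

    readings = [random.gauss(TRUE_VALUE + OFFSET, SPREAD) for _ in range(1000)]

    # Bin the readings to 5 mV and print one row of '#' per bin.
    bins = Counter(round(r / 0.005) * 0.005 for r in readings)
    for value in sorted(bins):
        print(f"{value:7.3f} V | {'#' * (bins[value] // 10)}")

The peak of the resulting bell shape sits about 10 mV above the "true value" (the average error), and its width reflects the scatter.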




[Figure: four Gaussian curves, numbered 1-4, plotted against "Increasing error" on either side of the "TRUE VALUE" at the center]



The plots show the performance of our marksmen when given machine guns (lots of
data), where their aiming point (the bull's-eye) is the "true value". The distance of each
peak from "true" is that marksman's average error, and the width of the curve shows his dispersion.
Whose performance is represented by each plot, do you think?

1. The "beginner" is purple (inaccurate/imprecise)
2. The second marksman is green (repeatable but poor accuracy)
3. Red is the intermediate marksman (accuracy but not good precision)
4. The "expert" is blue (accurate and precise)

Since they all had the same number of shots, the areas under the curves must be equal
(the total length of the histogram bars is the same), and so the plots have different
"amplitudes". The curve shape depends upon each individual's performance and the
amount of data analyzed, but we've assumed a normal or Gaussian distribution. In
calibration, of course, we don't know the "true value", and an uncertainty is effectively a
figure of merit for the reported measurement -- the limit of potential inaccuracy, which
should encompass the measured value's deviation from true.
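
To put numbers on the picture, here is a minimal Python sketch (our illustration; the offsets and spreads are invented) that models each marksman's shots as draws from a Gaussian and reports his average error and dispersion:

    # Illustrative sketch: the mean offset of each Gaussian models average
    # error (accuracy); the standard deviation models dispersion (precision).
    import random
    import statistics

    marksmen = {
        "1. beginner (purple)":  (3.0, 2.0),   # inaccurate, imprecise
        "2. second man (green)": (3.0, 0.3),   # precise but inaccurate
        "3. intermediate (red)": (0.0, 2.0),   # accurate but imprecise
        "4. expert (blue)":      (0.0, 0.3),   # accurate and precise
    }

    for name, (offset, spread) in marksmen.items():
        shots = [random.gauss(offset, spread) for _ in range(10_000)]
        print(f"{name:24} average error = {statistics.mean(shots):+5.2f}  "
              f"dispersion = {statistics.stdev(shots):.2f}")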

Sometimes resolution is mistaken for accuracy. This misconception often arises with
instruments that have digital read-outs, where a typical assumption is that,
for example, a frequency counter with 11 digits must be a hundred times more accurate
than one with 9-digit resolution. Resolution is just the finest discrimination the instrument
can display.
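
A quick Python sketch makes the point (the frequencies and the 0.2 ppm error below are invented for illustration):

    # Illustrative sketch: a longer display does not make a reading more
    # accurate. Both hypothetical counters see the same biased signal; the
    # extra digits merely resolve the wrong value more finely.
    true_freq = 10_000_000.0            # Hz, the conventional "true value"
    measured = true_freq * (1 + 2e-7)   # invented 0.2 ppm systematic error

    print(f" 9-digit counter: {measured:.1f} Hz")   # 10000002.0 Hz
    print(f"11-digit counter: {measured:.3f} Hz")   # 10000002.000 Hz
    # Both readings are 2 Hz high; the second is not 100 times more accurate.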




[Figure: metric ruler graduated from 0 to 3 centimeters, measuring a red line]


Look at this metric ruler; its resolution is 2 millimeters (one fifth of a centimeter), even
though it can readily be used to measure the length of the red line with better estimated
resolution (certainly to 1 mm, and possibly to 0.1 mm with magnification). However, our
ability to visually subdivide between the marked graduations contributes to the
uncertainty of the measurement. From inspection, the evidence is that the line is
between 2.6 and 2.8 cm and, considering only the resolution, it would be reported as
2.6 cm.
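
In code, quantizing a reading to an instrument's resolution looks something like this (a minimal sketch; the 2.67 cm "true length" is invented):

    # Illustrative sketch: report a reading rounded to the nearest
    # graduation. 0.2 cm matches the ruler's 2 mm resolution above.
    RESOLUTION_CM = 0.2

    def quantize(reading_cm, resolution=RESOLUTION_CM):
        """Round a reading to the nearest graduation mark."""
        return round(reading_cm / resolution) * resolution

    print(f"{quantize(2.67):.1f} cm")   # -> 2.6 cm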