When we measure a variable, we assume that a true value exists for it. However, we must recognize that we will never know that true value exactly.
So, when we use a monitoring and measuring resource to determine the value, we obtain a measured value, v1.
Can we trust v1?
To work around the problem of not knowing the true value, instead of giving up, we adopt an engineering approach: we find a measurement standard traceable to international or national measurement standards, something that serves as a reference and can be used as an approximation of the true value. For example, if we are working with a scale that gives results to the second decimal place, and we use a measurement standard weight known to five decimal places, we can treat that measurement standard as the true value for our practical purposes.
So, when we perform a measurement we have:
How do you read this in a calibration report?
First, let’s find the deviation. The calibration laboratory performs a set of measurements with the monitoring and measuring resource within the measuring range. Something like:
We calculate the deviation, or systematic error, as the absolute value of the difference between the reference (true) value and the measured value.
What is the worst case within the measuring range? Find the highest of the calculated deviations, for example d5. Then, for any measurement made within the measuring range, there is an associated maximum error, max-error, equal to |d5 + uncertainty|.
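The deviation and max-error arithmetic above can be sketched in a few lines of Python. The reference values, readings, and uncertainty below are made-up illustration numbers, not data from any real calibration certificate:

```python
# Illustrative calibration points: reference (standard) values vs. readings
reference = [10.00000, 20.00000, 30.00000, 40.00000, 50.00000]
measured  = [10.02,    19.97,    30.01,    40.04,    49.95]
uncertainty = 0.03  # assumed expanded uncertainty reported by the lab

# Deviation (systematic error) at each point: |reference - measured|
deviations = [round(abs(r - m), 5) for r, m in zip(reference, measured)]

# Worst case within the measuring range: the largest deviation (here, d5)
d_max = max(deviations)

# Maximum error associated with any measurement in the range
max_error = round(d_max + uncertainty, 5)

print(deviations)  # [0.02, 0.03, 0.01, 0.04, 0.05]
print(max_error)   # 0.08
```

Any reading taken with this instrument, anywhere in its range, then carries that 0.08 as its worst-case error.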
Now consider an example: we have a product that we put on the market. This product has a characteristic X (its mass, for example) that we promise customers will fall within a specification range.
"Buy our product, we guarantee that it has a mass of 20g with a tolerance of plus or minus 2g"
We will create a grid to assess the effect of the size of the measurement error on our assessment of product quality, in terms of compliance with the specification. Something like:
As we approach the limits of the specification, there is an increased risk of making errors of judgment, the so-called alpha and beta errors: rejecting a good product as bad (alpha) or accepting a bad product as good (beta).
If the measurement error (max-error) increases in size, the likelihood of making these alpha and beta errors increases, as shown in the following figure:
The greater the measurement error, the greater the risk of making an alpha error (rejecting a good product as bad) or a beta error (accepting a bad product as good).
The bigger the share of the tolerance interval "eaten" by the measurement error (max-error), the higher the probability of committing an alpha or beta error, that is, the higher the risk of making a wrong decision.
Calling the tolerance range "2 × T" (because the specification is ± T) and the measurement error (max-error) ME, we can calculate the following ratio: R = (2 × T) / ME.
If R = 1, then 2 × T = ME, and the degree of risk in decision making, following the measurement, is 100%.
If R = 2, then 2 × T = 2 × ME, and the degree of risk is 50%.
If R = 10, then 2 × T = 10 × ME, and the degree of risk is 10%.
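For the 20 g ± 2 g promise above, the ratio works out like this. The max-error figure is an assumed illustration value, not a real calibration result:

```python
T = 2.0   # specification tolerance: ± 2 g, so the tolerance range is 2 × T = 4 g
ME = 0.5  # assumed max-error of our scale, taken from its calibration report

R = (2 * T) / ME   # ratio of tolerance range to measurement error
risk = 100 / R     # degree of risk in decision making, per the rule above

print(R)     # 8.0
print(risk)  # 12.5 (percent)
```

With this assumed scale, the measurement error eats one eighth of the tolerance range, so a decision based on a single reading carries roughly a 12.5% risk of a wrong call near the limits.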
In other words: only when the measured value falls within the blue areas of the figure below is there a risk of making the alpha or beta error of judgment; in that figure, a 25% risk.
So, we can say:
The decision criterion for establishing the maximum error (ME) for accepting a measuring instrument after calibration is not metrological; it is a management criterion (we are not talking about legal metrology here). What risk do we accept in our measurement assessment?
The risk will always exist, always! We have to assess its size, and decide at what size it becomes too uncomfortable.
Returning to the example above: is our scale weighing the mass of an active pharmaceutical ingredient for a formulation, or the amount of flour to put in a pastry cake? What is the risk associated with each situation?
ISO 10012-1, in its Application Guide, advised (I say "advised" because I do not have the latest version at hand) that the R-value should be as high as possible, within a range from a minimum of 3 to a maximum of 10 (more than 10 means having a measuring device that is too good, and perhaps too expensive).
Look at your monitoring and measuring resource, check the tightest tolerance it has to verify in a measurement, and then determine your R.
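As a hedged sketch of that check: given the tightest tolerance an instrument must verify and the max-error from its last calibration, compute R and compare it with a minimum acceptable value. The function names, tolerances, and max-error figure below are illustrative, not taken from the standard:

```python
def r_ratio(tolerance_T: float, max_error: float) -> float:
    """R = (2 × T) / ME: tolerance range over measurement error."""
    return (2 * tolerance_T) / max_error

def acceptable(tolerance_T: float, max_error: float, r_min: float = 3.0) -> bool:
    """Accept the instrument for this measurement if R meets the minimum."""
    return r_ratio(tolerance_T, max_error) >= r_min

# Same scale (assumed max-error 0.08 g), two very different uses:

# Active pharmaceutical ingredient: tight tolerance of ± 0.05 g
print(acceptable(tolerance_T=0.05, max_error=0.08))  # False (R = 1.25)

# Flour for a pastry cake: loose tolerance of ± 2 g
print(acceptable(tolerance_T=2.0, max_error=0.08))   # True (R = 50.0)
```

The same instrument can be perfectly adequate for one measurement and unacceptable for another; the verdict depends on the tolerance being verified, not on the instrument alone.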