Expert Advice Community


Defining calibration criteria

Guest user Created:   Mar 02, 2021 Last commented:   Mar 02, 2021


I would like to ask how we should define or identify acceptance criteria for calibration. Could you please advise us?

Expert
Carlos Pereira da Cruz Mar 02, 2021

https://www.screencast.com/users/ccruz5284/folders/Default/media/68a03e21-a2dc-471b-acdb-a53c843899ba

When we measure a variable, we assume that there is a true value for the result. However, we recognize that we humans will never know what this true value is.

So, when we use a monitoring and measuring resource to determine the value, we get a measured value, v1.

Can we trust v1?

To get around the problem of not knowing the true value, instead of crossing our arms, we adopt an engineering approach: we find a measurement standard traceable to international or national measurement standards, something that serves as a reference and can be used as an approximation of the true value. For example, if we are working with a scale that gives results to the second decimal place, and we use a measurement standard weight known to five decimal places, we can treat that measurement standard as the true value for our practical situation.

So, when we perform a measurement we have:

https://www.screencast.com/users/ccruz5284/folders/Default/media/71fd6451-3061-4105-8dc5-25c459cbc771

How do you read this in a calibration report?

First, let’s find the deviation. The calibration laboratory performs a set of measurements with the monitoring and measuring resource within the measuring range. Something like:

https://www.screencast.com/users/ccruz5284/folders/Default/media/e92646f6-1567-4ea4-9df3-64e0655bf4d2

We calculate the deviation, or systematic error, as the absolute value of the difference between the true value and the measured value.

What is the worst case within the measuring range? Find the highest of the calculated deviations, for example d5. Then, for any measurement made within the measuring range, there is an associated maximum error, max-error, equal to |d5 + uncertainty|.
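The steps above can be sketched in a few lines of Python. The calibration points and the uncertainty value below are illustrative assumptions, not taken from any real calibration report:

```python
# Sketch of processing a calibration report to find max-error.
# All numbers are illustrative assumptions.

calibration_points = [
    # (reference value from the measurement standard, value read on the instrument)
    (10.00, 10.02),
    (20.00, 19.97),
    (30.00, 30.05),
    (40.00, 39.96),
    (50.00, 50.06),
]
uncertainty = 0.03  # expanded uncertainty stated by the laboratory

# Deviation (systematic error) at each point: |true value - measured value|
deviations = [abs(ref - measured) for ref, measured in calibration_points]

# Worst case within the measuring range, plus the uncertainty
max_error = max(deviations) + uncertainty
print(f"max deviation = {max(deviations):.2f}, max-error = {max_error:.2f}")
```

Here the worst deviation is at the last calibration point, so max-error is that deviation plus the laboratory's stated uncertainty.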

Now consider an example: We have a product that we put on the market. This product has a characteristic X (the mass, for example) that is promised to customers to be within the range of a specification.

They claim:

"Buy our product, we guarantee that it has a mass of 20g with a tolerance of plus or minus 2g"

Something like:

https://www.screencast.com/users/ccruz5284/folders/Default/media/e997ec75-c301-4219-9989-dba022e817f2

We will create a grid to assess the effect of the dimension of the measurement error on our assessment of product quality in terms of compliance with the specification. Something like:

https://www.screencast.com/users/ccruz5284/folders/Default/media/1c011c96-b951-4d67-9180-cc8d1242528b

As we approach the limits of the specification, there is an increased risk of making errors of appreciation, the so-called alpha and beta errors: accepting a bad product as good, or rejecting a good product as bad.

If the measurement error (max-error) increases in size, the likelihood of making these alpha and beta errors increases, as shown in the following figure:

https://www.screencast.com/users/ccruz5284/folders/Default/media/c2cfdff0-8b08-4c81-8453-73e20bb10280

The greater the measurement error, the greater the risk of making an alpha or beta error: rejecting a good product as bad, or accepting a bad product as good.

The bigger the percentage of the tolerance interval “eaten” by the measurement error (max-error), the higher the probability of committing an alpha error or a beta error, that is, the risk of making a wrong decision.

By calling the tolerance range "2 x T" (because the specification is ± T) and the measurement error (max-error) ME, we can calculate the following ratio:

https://www.screencast.com/users/ccruz5284/folders/Default/media/d6f12537-62d7-47cd-a144-8605ae606fab

If R = 1, then 2 x T = ME, and the degree of risk in decision making following the measurement is 100%.

If R = 2, then 2 x T = 2 x ME, and the degree of risk is 50%.

If R = 10, then 2 x T = 10 x ME, and the degree of risk is 10%.
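The ratio and the 100%/R reading of the risk can be sketched as follows; the tolerance and max-error values are illustrative assumptions:

```python
# Sketch of the ratio R = (2 * T) / ME and the associated decision
# risk, read as 100 % / R. All numbers are illustrative.

def risk_percent(tolerance: float, max_error: float) -> float:
    """Decision risk (in %) for a specification of +/- tolerance."""
    r = (2 * tolerance) / max_error
    return 100.0 / r

# Example specification: 20 g +/- 2 g, so T = 2 g
print(risk_percent(tolerance=2.0, max_error=4.0))  # R = 1, risk 100 %
print(risk_percent(tolerance=2.0, max_error=2.0))  # R = 2, risk 50 %
print(risk_percent(tolerance=2.0, max_error=0.4))  # R = 10, risk 10 %
```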

In other words: only when the measured value falls within the blue areas of the figure below is there a risk of making the alpha or beta error of appreciation, that is, a 25% risk.

So, we can say:

https://www.screencast.com/users/ccruz5284/folders/Default/media/47d6c16d-8470-40f0-88fd-66c493fc5a6a

The decision criterion for establishing the maximum error (ME) acceptable for a measuring instrument, following a calibration, is not metrological; it is a management criterion (we are not talking about legal metrology). What risk do we accept in our measurement assessment?

The risk will always exist, always! We have to assess its size, and decide at what size it becomes too uncomfortable.

From the above example: does our scale measure the mass of a pharmaceutical active ingredient for a recipe, or the amount of flour to put in a pastry cake? What is the risk associated with each situation?

ISO 10012-1, in its Application Guide, advised (I say advised because I do not have the latest version at hand) that the R value should be as high as possible, and that it should fall between a minimum of 3 and a maximum of 10 (more than 10 means having a measuring device that is too good, and perhaps too expensive).

Consider your monitoring and measurement resource, check the lowest tolerance allowed in a measurement made with it, and then determine your R.
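Putting the pieces together, an acceptance decision after calibration could be sketched like this, using the R range of 3 to 10 quoted above from ISO 10012-1; the tolerance and max-error figures are illustrative assumptions:

```python
# Sketch of an acceptance decision for a measuring instrument after
# calibration, based on the ratio R = (2 * T) / ME and a minimum R
# chosen by management (ISO 10012-1 suggested 3 to 10).
# All numbers are illustrative.

def acceptance_ratio(tolerance: float, max_error: float) -> float:
    """R for the tightest tolerance measured with this instrument."""
    return (2 * tolerance) / max_error

def accept_instrument(tolerance: float, max_error: float,
                      r_min: float = 3.0) -> bool:
    """Accept the instrument if R meets the minimum chosen by management."""
    return acceptance_ratio(tolerance, max_error) >= r_min

# Tightest production tolerance: +/- 2 g; max-error from the report: 0.5 g
print(acceptance_ratio(2.0, 0.5))   # R = 8.0, within the 3-to-10 range
print(accept_instrument(2.0, 0.5))  # accepted
print(accept_instrument(2.0, 1.5))  # R below 3: rejected
```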

