QuikSigma: Gauge Repeatability and Reproducibility, Part 1

Today, we go over how to evaluate a measurement system using the Gauge Repeatability and Reproducibility tool in QuikSigma.

Transcript:

Early in the project, early in the Measure phase, you should always verify that your measurement systems are working. You would probably be amazed if you knew how many projects we have seen where there was only the illusion of a problem, caused by a faulty measurement system. Possibly the most widely used of the measurement systems analysis tools is the Gauge Repeatability and Reproducibility study, which is right here.

Before we dive into the software, let's look at a couple of basic concepts. Let's say that we have some sort of laboratory standard artifact: a gauge block, a standard voltage cell, a standard kilogram, something like that. Let's say that we then take many measurements of that object. We're going to get a distribution of measurements, assuming that we've got some kind of random error in our measurement system, which we almost always do. That distribution of measurements is going to have a mean, and the difference between the mean of the measurements and the true value is called the bias. If there is no bias, or so little bias that it doesn't matter, then the measurement system is calibrated. Calibration and bias are not addressed by Gauge Repeatability and Reproducibility; those are fairly easy problems to correct. Where the action is is in the standard deviation of the distribution of measurements. So what I'm looking for is sigma E, the standard deviation of the random error in the measurement system. I would like that to be small compared with the total observed variation of my process. Or, if I'm doing a P/T ratio, I want it to be small compared with my tolerance. So Gauge Repeatability and Reproducibility is not going to help you with bias or accuracy. It's going to tell you a lot about that random measurement error, and it turns out that this is really where the troublesome problems are and where you can get some advantage in improving your system.
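To put those ideas in symbols, here is a rough sketch using the conventions most Gauge R&R references follow (the sigma_total, USL, and LSL symbols here are generic textbook notation, not labels taken from QuikSigma's report):

```latex
% Bias: gap between the mean of repeated measurements and the true value
\text{bias} = \bar{x} - x_{\text{true}}

% Measurement error compared with total observed process variation
\%\mathrm{GRR} = 100 \times \frac{\sigma_E}{\sigma_{\text{total}}}

% Precision-to-tolerance (P/T) ratio: measurement error compared with the spec window
P/T = \frac{6\,\sigma_E}{\mathrm{USL} - \mathrm{LSL}}
```

Some references use 5.15 sigma E rather than 6 sigma E in the P/T ratio; either way, smaller is better, because it means the measurement noise uses up less of your tolerance.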

Setup is very easy. You'll have a collection of parts drawn at random out of your process. You don't want to make these all out of one batch, because that will probably make your measurement system look worse than it should. Just select a set of parts, and if you're going to have more, which you very likely will, you add them down here like this. These parts will be measured by some group of operators; you may have three of them or you may have more than that, and you may want to put names in here. Probably not, but if you do, you can. Each of these operators is going to see each part more than once, and we'll take care of that under the replicates. Now, to decide how many replicates we're going to do, we put in here the smallest effect that we think would be of any interest. I'm going to put a 1. And what's the natural standard deviation of my process? I think it's about 0.8, so I'll put that in.
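As a back-of-the-envelope illustration of how those two inputs drive the replicate decision discussed next, here is a minimal sketch using a plain two-sided normal-approximation power formula for comparing two operators. This is an assumption made for illustration only, not QuikSigma's internal calculation, and the five-part count is hypothetical:

```python
# Rough illustration of how power to detect the smallest effect of interest
# grows as replicates are added. Assumes two operators compared with a
# two-sided z-test at alpha = 0.05, each measuring the same (hypothetical)
# five parts. The 1.0 and 0.8 are the inputs entered in the video.
from math import sqrt
from scipy.stats import norm

delta = 1.0    # smallest effect of any practical interest
sigma = 0.8    # natural standard deviation of the process
parts = 5      # hypothetical number of parts in the study
alpha = 0.05

z_crit = norm.ppf(1 - alpha / 2)
for reps in range(1, 7):
    n = parts * reps                  # measurements per operator
    se = sigma * sqrt(2.0 / n)        # std. error of the difference in operator means
    power = norm.cdf(delta / se - z_crit)
    print(f"replicates = {reps}: approximate power = {power:.2f}")
```

The exact numbers will not match the software, but the pattern is the point: each added replicate buys more power, with diminishing returns.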

Now, the question is how many times will these operators make their measurements? What I do is just keep increasing the number of replicates until I get a power that's satisfactory. Let me go up to 4 replicates; the power comes out around 0.88, and that's probably pretty good. Now I can click Generate Design, and here's my design layout. What I'll do then is take part 1, have Alice measure it, and write the answer here. Then I'll take part 1 and have Bob measure it. If I want to be very careful, I can randomize this simply by sorting on run order, and that will give me a fully randomized design, which is quite desirable. So that's how you set it up. Pretty simple.
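If it helps to see that layout outside the software, here is a minimal sketch of the same idea: every operator measures every part the chosen number of times, and each row gets a randomized run order. The operator names and counts are illustrative only (Carol is invented), not taken from the video, and this is not QuikSigma's generator.

```python
# Sketch of a Gauge R&R design layout: full crossing of parts, operators, and
# replicates, with a shuffled run order for a fully randomized study.
import random
from itertools import product

parts = [1, 2, 3, 4, 5]                  # parts drawn at random from the process
operators = ["Alice", "Bob", "Carol"]    # the people doing the measuring
replicates = 4                           # chosen so the power is satisfactory

rows = list(product(parts, operators, range(1, replicates + 1)))

run_order = list(range(1, len(rows) + 1))
random.shuffle(run_order)                # randomize the order the measurements are taken

print("run  part  operator   replicate")
for run, (part, op, rep) in sorted(zip(run_order, rows)):
    print(f"{run:3d}  {part:4d}  {op:<9s}  {rep:9d}")
```

Printing the table sorted on the run column is the same move as sorting on run order in the software: you work straight down the list, and the randomization is already built in.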
