Why is 1.33 the desired Cpk or Ppk value? Essentially, it goes back to a gut feeling from years ago, when it was "decided" that a process with a Ppk of 1.0 was capable. Someone then asked: "What about process shifts over time?" "Well," the expert replied, "I guess we can add one standard deviation for that, and then the requirement is 1.33 instead."
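For readers who want the arithmetic behind that exchange: Cpk measures the distance from the process mean to the nearest specification limit in units of three standard deviations, so bolting one extra sigma of margin onto a Cpk of 1.0 gives

$$C_{pk} = \frac{\min(USL - \mu,\; \mu - LSL)}{3\sigma}, \qquad \frac{3\sigma + 1\sigma}{3\sigma} = \frac{4}{3} \approx 1.33$$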
But then I must ask: when we run OQ at worst-case settings, we are already simulating the expected process shifts, so the Cpk requirement for those runs should be 1.0 and not 1.33, correct?
And even that would only be to match general expectations, because no law of physics makes 1.0 or 1.33 the truth. Both are nice round numbers backed by statisticians' gut feelings. Not that they are wrong; there is plenty of common sense in them. But why do we not question them from time to time?
To question the “science” of capability analysis, we must of course understand it first.
Any Cpk (or Ppk) value translates into an expected number of parts outside specification. That number can be very low, say 1 PPM or 30 PPM, but it is still an accepted number of “defects”, since out-of-spec is by definition a defect.
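As a minimal sketch of that mapping, assuming a centered normal process and counting only the tail beyond the nearest specification limit (the helper name `ppm_out_of_spec` is mine, not a standard function):

```python
import math

def ppm_out_of_spec(cpk: float) -> float:
    """One-sided out-of-spec rate implied by a Cpk value, assuming a
    normal process with the nearest spec limit 3*Cpk sigma away."""
    z = 3.0 * cpk                             # distance to the limit in sigmas
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z) for a standard normal
    return tail * 1e6                         # probability -> parts per million

for cpk in (0.8, 1.0, 1.33, 1.67):
    print(f"Cpk {cpk:4}: ~{ppm_out_of_spec(cpk):10.1f} PPM")
```

Running this, Cpk 1.33 lands near the 30 PPM mentioned above, Cpk 1.0 near 1,350 PPM, and Cpk 0.8 near the 8,000 PPM used in the next example.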
So here is my first question: are you sure you cannot risk a few more defects?
Imagine the Cpk is only 0.8, which most quality professionals would consider a disaster. It is equivalent to roughly 8,000 defects per million (almost 1%). But what if that applies to just one of 8 cavities in the mould, while the other seven run clean? Then the customer observes a blended defect rate of about 1:1,000, which may be acceptable (or maybe not). Where is your risk analysis when it comes to setting Cpk requirements?
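A quick back-of-the-envelope check of that dilution, under the hypothetical assumption that one cavity runs at Cpk 0.8 while the other seven contribute essentially no defects and the output is evenly mixed:

```python
import math

def ppm_out_of_spec(cpk: float) -> float:
    z = 3.0 * cpk
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1e6

cavities = 8
ppm_bad = ppm_out_of_spec(0.8)     # the one weak cavity: ~8,200 PPM
ppm_blended = ppm_bad / cavities   # even mixing across all eight cavities

print(f"Weak cavity alone: ~{ppm_bad:,.0f} PPM")
print(f"Customer-observed: ~{ppm_blended:,.0f} PPM (about 1:{1e6 / ppm_blended:,.0f})")
```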
Next question: since capability analysis rests on the assumption that the normal distribution applies, those 1,000 “defects” will sit only slightly outside the specification, not far off. So what is the real risk? Among those 1,000 “defects”, against a specification of 50 ± 0.05 mm, we should expect the typical deviation beyond the limit to be around 8 microns, roughly the diameter of a red blood cell.
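To put a number on “slightly outside”, here is a small simulation under assumed, illustrative conditions: a process centered at 50 mm running at Cpk 0.8 against the ± 0.05 mm specification, counting only the upper tail to match the one-sided convention above:

```python
import numpy as np

rng = np.random.default_rng(42)

target, tol, cpk = 50.0, 0.05, 0.8   # mm; illustrative values from the text
sigma = tol / (3 * cpk)              # Cpk = tol / (3*sigma) for a centered process

parts = rng.normal(target, sigma, size=5_000_000)
defects = parts > target + tol                    # upper-tail defects only
excess_um = (parts[defects] - (target + tol)) * 1000  # microns beyond the limit

print(f"Defect rate:   {defects.mean() * 1e6:,.0f} PPM")
print(f"Median excess: {np.median(excess_um):.1f} um beyond the spec limit")
print(f"Mean excess:   {excess_um.mean():.1f} um")
```

Under these assumptions, the typical overshoot comes out well under 10 microns.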
Would two plastic parts assemble or not if a red blood cell got stuck in between them?