Do you dare to say “I’m not certain”? And how much proof do you need to be certain?
In statistical analysis, certainty is expressed as confidence: either a confidence level (a percentage) or a confidence interval (a range of likely values).
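To make the interval idea concrete, here is a minimal sketch of a 95% confidence interval for a mean. The helper name `confidence_interval_95` and the sample data are made up for illustration, and the sketch assumes roughly normal data with the usual 1.96-standard-error rule (a real analysis of a small sample would use a t-distribution instead).

```python
import statistics

def confidence_interval_95(samples):
    """Return (low, high): the range likely to contain the true mean,
    using the normal approximation (assumption for this sketch)."""
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / len(samples) ** 0.5  # standard error of the mean
    margin = 1.96 * sem  # 1.96 standard errors covers ~95% of a normal distribution
    return mean - margin, mean + margin

low, high = confidence_interval_95([9.8, 10.1, 10.0, 9.9, 10.2])
print(f"95% CI: [{low:.2f}, {high:.2f}]")  # prints 95% CI: [9.86, 10.14]
```

In words: we are not certain the true mean is exactly 10.0, but we are 95% confident it lies somewhere between about 9.86 and 10.14.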
You will often hear someone claim that 95% confidence is good – and so it may be.
BUT it depends on the situation and the criticality. Remember that all quality management, and every decision you take, is and should be based on risk assessment.
Being only 95% confident that an airplane will not crash is not very good for the passengers.
If the railways are 80% confident that 90% of the trains will arrive on time – who cares if it is really 9% or 12% that are late? The passengers late for work will complain anyway but no one gets killed.
It is also important to remember that a lower confidence does not mean the initial conclusion is far off.
It just means there is a chance the result does not exactly match “the real world” – and the truth is more likely to be close to the first estimate than far away from it.
Finally there is one more thing to remember:
If a scientist publishes the result of an experiment with a certain confidence level, and that experiment is then repeated by several other scientists around the world who reach the same conclusion, then the confidence of course increases.
Let’s say the first result had a confidence of 80%; that can (roughly) be translated to a 20% risk that the conclusion was wrong. But if two more experiments come out with the same result, then all three experiments would have to be wrong for the conclusion to be wrong.
That means a 20% of 20% of 20% risk, which is just 0.8% – the equivalent of 99.2% confidence.
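The multiplication above can be sketched in a few lines of Python. The helper name `combined_confidence` is made up for illustration, and the calculation rests on the same rough assumption as the text: the experiments are independent, so the risks simply multiply (this is a back-of-the-envelope sketch, not a formal meta-analysis).

```python
def combined_confidence(confidence: float, repetitions: int) -> float:
    """Confidence that the conclusion holds after several independent
    experiments all agree, each with the same individual confidence."""
    risk = 1.0 - confidence              # chance one experiment is wrong
    combined_risk = risk ** repetitions  # chance ALL of them are wrong
    return 1.0 - combined_risk

# Three independent experiments at 80% confidence each:
print(round(combined_confidence(0.80, 3), 3))  # prints 0.992
```

Two repetitions at 80% already push the combined confidence past the often-quoted 95% mark (1 − 0.2 × 0.2 = 96%), which is exactly why repeating an experiment is such a cheap way to buy certainty.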
So remember to apply a risk-based approach and, if possible, repeat your tests to minimize sampling error and reach the level of confidence you need before deciding.
Speaking of sample sizes… There is more to come, so stay tuned.