Today I was reflecting on a topic that can come up in any traditional project involving a test or DOE that uses a p-value criterion; in fact, it came up for me a few times in the past.
Hypothetically, say there is a process with very low inherent variation (the process standard deviation, s, is very low) and a very high Cp or Pp (depending on how you define s, but say 2.0 or greater), yet a very low Cpk or Ppk (or Z, for that matter). In other words, the process is hugging, or sitting outside, either the upper or lower customer acceptance limit, but it is very stable at that level. The situation is a classical optimization problem: the spread is fine, but the centering is not.
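To make that scenario concrete, here is a minimal sketch using the textbook definitions Cp = (USL − LSL) / 6s and Cpk = min(USL − mu, mu − LSL) / 3s. The spec limits and process numbers below are hypothetical, chosen only to reproduce the high-Cp, low-Cpk situation described above.

```python
# Hypothetical numbers illustrating a high-Cp, low-Cpk process.
USL, LSL = 10.0, 4.0   # assumed customer acceptance limits
mu, s = 9.5, 0.5       # mean hugging the upper limit; very low sigma

cp = (USL - LSL) / (6 * s)               # spread vs. spec width only
cpk = min(USL - mu, mu - LSL) / (3 * s)  # also accounts for centering
z = min(USL - mu, mu - LSL) / s          # short-term Z, same idea

print(f"Cp  = {cp:.2f}")   # 2.00 -- the spread alone looks great
print(f"Cpk = {cpk:.2f}")  # 0.33 -- the centering is the problem
print(f"Z   = {z:.2f}")    # 1.00
```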
A black belt performs a DOE and finds a factor with a p-value of 0.05 or less on the output. Using this information, the new factor setting is applied to the process, and the new Z value barely improves. The black belt is frustrated, because the factor was ’supposed’ to be significant. What gives?
In this case, the effect of the factor was much smaller than the shift needed to make a practical difference, but it was still large enough to register as statistically significant, because a p-value measures an effect against the inherent noise of the process, and here that noise was very low. A quick check of the main-effects plot (and the coefficients in the analysis, for that matter) against the shift required to achieve the needed optimization would have confirmed the situation.
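A quick simulation shows how this plays out. This is a hypothetical sketch, not the black belt's actual analysis: the effect size, sigma, run counts, and required shift are all made up, and a plain two-sample t-test stands in for the DOE effect test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical process: very low inherent sigma, mean hugging the USL.
sigma = 0.5
effect = 0.2          # true shift from the factor: real, but small
needed_shift = 1.5    # shift required to recenter the process (assumed)

# Two-level factor with n replicated runs per setting (illustrative).
n = 200
low = rng.normal(9.5, sigma, n)
high = rng.normal(9.5 - effect, sigma, n)

t, p = stats.ttest_ind(low, high)
observed_shift = low.mean() - high.mean()

print(f"p-value        = {p:.4f}")               # typically well below 0.05
print(f"observed shift = {observed_shift:.2f}")  # ~0.2 units: 'significant'
print(f"needed shift   = {needed_shift:.2f}")    # ~7x larger than the effect
```

Because sigma is tiny, even a 0.2-unit effect stands out clearly from the noise and earns a small p-value, yet it delivers only a fraction of the shift the process actually needs.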
So why does this happen? I’ve seen some black belts “go by the numbers” (specifically, p-values) without looking at the graphics, and without looking at the real picture of what’s needed on the output side of the project.
A great teacher once told me to look at the graphs first. He was right.