Over time, there has been a great deal of debate (both positive and negative) surrounding the 1.5 sigma shift. As such, the ongoing discussion well serves the need to “keep the idea alive,” so to speak. To this end, I have recently completed a small book on the subject, soon to be available on iSixSigma. The title of the book is Resolving the Mysteries of Six Sigma: Statistical Constructs and Engineering Rationale. Of interest, it sets forth the theoretical constructs and statistical equations that undergird and validate the so-called “shift factor” commonly referenced in the quality literature.
As mathematically demonstrated in the book, the “1.5 sigma shift” can be attributed solely to the influence of random sampling error. In this context, the 1.5 sigma shift is a statistically based correction that compensates, or otherwise adjusts, the postulated model of instantaneous reproducibility for the inevitable consequences of dynamic long-term random sampling error. Naturally, such an adjustment (1.5 sigma shift) is only considered and instituted at the opportunity level of a product configuration. Thus, the model performance distribution of a given critical-to-quality characteristic (CTQ) can be effectively attenuated for many of the operational uncertainties encountered when planning and executing a design-process qualification (DPQ).
Based on this understanding, it should be fairly evident that the 1.5 sigma shift factor can be treated as a “statistical constant,” but only under certain engineering conditions. By all means, the shift factor (1.5 sigma) does not constitute a “literal” shift in the mean of a performance distribution – as many quality practitioners and process engineers falsely believe or postulate through uninformed speculation and conjecture. However, its judicious application during the course of engineering a system, product, service, event, or activity can greatly facilitate the analysis and optimization of “configuration repeatability.”
By consistently applying the 1.5 sigma shift factor (during the course of product configuration), an engineer can meaningfully “design in” the statistical confidence necessary to ensure or otherwise assure that related performance safety margins are not violated by unknown (but anticipated) process variations. Also of interest, its existence and conscientious application have many pragmatic implications (and benefits) for reliability engineering. Furthermore, it can be used to “normalize” certain types and forms of benchmarking data in the interests of assuring a “level playing field” when comparing heterogeneous products, services, and processes.
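As an arithmetic illustration of this safety-margin budgeting (my own sketch using the figures commonly cited in the quality literature, not an excerpt from the book): if a CTQ is designed to a 6 sigma short-term capability and the 1.5 sigma shift is budgeted against it, the long-term expectation becomes 6.0 − 1.5 = 4.5 sigma, which corresponds to the familiar 3.4 defects per million opportunities.

```python
from statistics import NormalDist

# Illustrative arithmetic only: budget the 1.5 sigma shift against a
# 6 sigma short-term design margin, then convert the remaining margin
# to a one-sided normal tail probability, expressed as defects per
# million opportunities (DPMO).
short_term_z = 6.0
shift = 1.5
long_term_z = short_term_z - shift          # 4.5 sigma remains long term

dpmo = (1.0 - NormalDist().cdf(long_term_z)) * 1e6
print(f"long-term Z = {long_term_z}, DPMO = {dpmo:.1f}")
# prints "long-term Z = 4.5, DPMO = 3.4"
```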
In summary, the 1.5 sigma shift factor should only be viewed as a mathematical construct of a theoretical nature. When treated as a “statistical constant,” its origin can be mathematically derived as an equivalent statistical quantity representing the “worst-case error” inherent to an estimate of short-term process capability. Hence, the shift factor is merely an “algebraic byproduct” of the chi-square distribution. Its general application is fully constrained to engineering analyses – especially those that are dependent upon process capability data. Perhaps even more importantly, it is employed to establish a “criterion” short-term standard deviation – prior to executing a six sigma design-process qualification (DPQ).
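To make the chi-square connection concrete, here is a minimal Monte Carlo sketch (my own illustration, assuming n = 30 samples and 99.5% confidence as working values, not the book's exact derivation). Because (n − 1)s²/σ² follows a chi-square law, a short-term standard deviation estimate can, in the worst case at these settings, understate the true sigma by a multiplicative factor of roughly 1.5:

```python
import random
import statistics

# Monte Carlo sketch: how badly can a short-term sigma estimate from
# n = 30 samples understate the true sigma at 99.5% confidence?
# Draw many samples from a standard normal (true sigma = 1), estimate
# s each time, and take the 99.5th percentile of the ratio sigma / s.
random.seed(42)
n, trials = 30, 100_000
ratios = []
for _ in range(trials):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    s = statistics.stdev(sample)   # short-term estimate of sigma = 1
    ratios.append(1.0 / s)         # factor by which s understates sigma

ratios.sort()
worst_case = ratios[int(0.995 * trials)]   # 99.5th percentile
print(f"worst-case inflation factor: {worst_case:.2f}")
```

The printed factor lands near 1.5, matching the closed-form chi-square result sqrt((n − 1) / chi-square quantile) for these settings, which is the sense in which the shift factor can be read as an “algebraic byproduct” of the chi-square distribution.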