Years ago (almost ten now!) when I was going through Black Belt training, I remember seeing the famous slide describing what a three-sigma world would look like: three-sigma aircraft landing performance would mean two long or short landings per day, and 20,000 articles of mail would be lost every day. After the presentation, an astute participant in the class asked why 3.4 DPMO was described as six-sigma performance… to him, it seemed like a high level of defects for a true six-sigma process.
Every statistical purist knows he was right: a truly centered six-sigma process would produce only about two defects per billion opportunities, not 3.4 per million. The instructor responded by describing how a 1.5-sigma shift-and-drift effect degrades performance over the long term, and explained that the factor was chosen based on historical performance, etc….
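If you want to check the participant's intuition, the arithmetic is easy to reproduce. Here is a quick sketch in Python (assuming scipy is available; the helper name `dpmo` is just mine):

```python
from scipy.stats import norm

def dpmo(z):
    """Defects per million opportunities for a one-sided spec limit at Z sigmas."""
    return norm.sf(z) * 1e6  # sf() is the upper tail of the standard normal

print(f"Unshifted 6-sigma:       {dpmo(6.0):.4f} DPMO")        # ~0.001, i.e. ~1 per billion
print(f"6-sigma with 1.5 shift:  {dpmo(6.0 - 1.5):.1f} DPMO")  # ~3.4
```

The 3.4 DPMO figure only appears once you subtract the assumed 1.5 shift and evaluate the tail at 4.5 sigma.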
Since I’ve been coaching individuals in Six Sigma concepts for some time now, I have to admit that explaining the 1.5 shift gets more frustrating every time. Here’s why:
- The 1.5 shift doesn’t hold over time in all cases, so it can be a poor approximation. I’ve seen long-term performance degraded by much less than a 1.5-sigma shift-and-drift factor (sometimes closer to 0.5), and likewise I’ve seen much worse (2 or even 2.5).
- In my experience, I can’t honestly say that the shift factor is normally distributed, so I can’t predict it ;).
Based on this, here’s my proposal:
- Get rid of the 1.5 shift factor in training, and explain only what long-term shifts in a process do to its performance.
- As a second step, explain how to calculate a short-term and a long-term sigma value from a process (a sketch of one way to do this follows this list).
- Lastly, make a “continuous improvement” metric out of closing the gap between the long-term and short-term sigma values (minimizing process shift).
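To make the last two steps concrete, here is a minimal sketch of the calculation, assuming numpy, rational subgroups of equal size, and a one-sided upper spec limit. The function name, the spec limit, and the simulated data are all hypothetical:

```python
import numpy as np

def sigma_levels(subgroups, usl):
    """Short- and long-term sigma (Z) levels for a one-sided upper spec limit.

    subgroups : 2-D array, one row per rational subgroup
    usl       : upper specification limit
    """
    data = np.asarray(subgroups, dtype=float)
    mean = data.mean()

    # Short-term: pooled within-subgroup standard deviation
    sigma_st = np.sqrt(data.var(axis=1, ddof=1).mean())

    # Long-term: overall standard deviation of all observations
    sigma_lt = data.std(ddof=1)

    z_st = (usl - mean) / sigma_st
    z_lt = (usl - mean) / sigma_lt
    return z_st, z_lt, z_st - z_lt  # last value is the observed shift

# Hypothetical example: 20 subgroups of 5 measurements, USL = 10,
# with the subgroup means drifting upward over time
rng = np.random.default_rng(1)
drifting_means = np.linspace(5.0, 6.0, 20)
samples = rng.normal(drifting_means[:, None], 0.5, size=(20, 5))

z_st, z_lt, shift = sigma_levels(samples, usl=10.0)
print(f"Short-term Z: {z_st:.2f}  Long-term Z: {z_lt:.2f}  Shift: {shift:.2f}")
```

The gap `z_st - z_lt` is exactly the quantity the last bullet proposes driving toward zero: a shift measured from your own data rather than assumed to be 1.5.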
At the very least, the above methodology should make the concept of “long-term shift” more understandable to new practitioners, and make it a practical tool as well.