The purpose of a training program is to impart knowledge of a system or process to someone new to that process. Individuals may have worked on something similar previously, and so will have an idea – or at least a starting point – on which to build. Most training programs, however, must start with the basics and assume a zero-knowledge baseline.
By using quality measures, managers can determine how well people perform in their role – but these measures don’t necessarily demonstrate how successful their training was. A common perception is that quality improves with experience. So, just how far should training go before people are let loose on a process and allowed to gain the experience that will improve their quality?
Training requires a clear agenda of the material to be covered, as well as achievement targets. In one group I once worked with, an editorial content training program was followed by a test that served as the gate between being a trainee and being allowed to work on the live system. The pass mark for the test was 90 percent. When, some time later, I applied Six Sigma tools and statistical analysis to this process, the managers were concerned that the overall quality score was around 90 percent. I, however, was not surprised – the results perfectly demonstrated that people are only as good as their training. If management expected a 95 percent score, then they needed to train people to be 95 percent good at the job. At this point, the company recognized the potential benefits of continuous improvement through Six Sigma.
I believe that a greater focus on trainers and their training programs yields considerable efficiency gains later on. This article presents two examples that demonstrate this approach using Six Sigma principles and statistical process control (SPC) measures.
Finding a Better Way
In 2009, my company established a new group of analysts at one of our offices in India to carry out a series of editorial tasks. A three-week editorial content value-add training program was provided by two UK trainers. The program was originally designed to train 15 analysts in three distinct content areas, with the analysts then rotating through each content area. It quickly became clear, however, that this plan was not going to work.
The amount of knowledge to be covered in three weeks was too much. The two trainers themselves were not familiar with all of the content sets; within two weeks, one content set was returned to full production in the UK. The rotation also prevented the analysts from quickly gaining the experience needed to be effective on any one task. The result was to split the group into two, with each group focusing on a single content set. The SPC chart below shows the progress of the first group of nine analysts from week 2 of 2009, when they were assigned full time to their single task, to the end of that year.
The first nine weeks of quality results gave an average pass rate of 82.2 percent; weeks 10 to 30 showed an improvement to 87.6 percent. Another increase was seen in weeks 31 to 33 (a 90.6 percent average pass rate), and from week 34 onward the score averaged 95.1 percent – directly comparable to the scores reported for an experienced UK analyst working on the same task. In other words, it took 34 weeks of live work (plus three weeks of training, as well as the five weeks lost to rotation-based work across multiple tasks) to build a group of analysts who could perform to our expectations. Assuming that any future training would focus on the correct activity from the start (which in itself would save five weeks), what could be done to shorten the training timeframe? Remember that at 34 weeks only the expectation of acceptable quality had been met; several more weeks were needed to confirm consistency of quality at that level.
The chart shows three step-ups in quality, suggesting that the supplementary training activities delivered at those points had the desired effect. To reduce the training timeline, we simply incorporated those activities into our future training so that analysts would be fully trained to meet our expectations from the outset.
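To illustrate the kind of analysis behind these charts, the sketch below plots a weekly pass rate with per-segment averages and 3-sigma control limits, recalculating the limits after each step change. The weekly figures are simulated for illustration; only the segment averages reported above are taken from our data.

```python
# A minimal sketch of the SPC-style view described above. The weekly pass
# rates are simulated for illustration; only the segment averages (82.2%,
# 87.6%, 90.6% and 95.1%) come from the article.

import numpy as np
import matplotlib.pyplot as plt

# Segments of live-work weeks and their reported average pass rates
segments = [(1, 9, 0.822), (10, 30, 0.876), (31, 33, 0.906), (34, 48, 0.951)]

rng = np.random.default_rng(0)
weeks = np.arange(1, 49)
pass_rate = np.concatenate([
    np.clip(rng.normal(mean, 0.02, stop - start + 1), 0, 1)
    for start, stop, mean in segments
])

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(weeks, pass_rate * 100, marker="o", linewidth=1, label="Weekly pass rate")

# Recalculate the centre line and 3-sigma limits within each segment,
# i.e. after every step change produced by supplementary training
for start, stop, _ in segments:
    seg = pass_rate[(weeks >= start) & (weeks <= stop)]
    mean, sigma = seg.mean(), seg.std(ddof=1)
    ax.hlines([100 * mean, 100 * (mean + 3 * sigma), 100 * (mean - 3 * sigma)],
              start, stop,
              colors=["green", "red", "red"],
              linestyles=["solid", "dashed", "dashed"])

ax.axhline(95, color="grey", linestyle=":", label="Expected quality (95%)")
ax.set_xlabel("Week of live work")
ax.set_ylabel("Quality pass rate (%)")
ax.set_title("Quality pass rate with per-segment control limits")
ax.legend()
plt.tight_layout()
plt.show()
```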
In September 2010, following significant attrition from the original team of analysts, a new intake of eight analysts joined the group. On this occasion, all of the training was provided locally in India by one of the original analysts from the previous year, with the support of colleagues in India and, to a much lesser extent, in the UK. This group of analysts began live work after two weeks of training. The SPC chart below shows an initial average quality pass rate of 92.3 percent. After just four weeks of support and mentoring, the average for the entire group of new editors increased to 96.6 percent – 1.5 percentage points above the level that took more than eight months to achieve with the original group of trainees.
The quality scores speak for themselves, but the cost savings from the second round of training also deserve consideration. We originally sent two trainers to India for three weeks, and we estimate that five UK analysts then spent approximately 75 percent of their time over the following nine months supporting the new Indian analysts (100 percent quality checking, data analysis and supplemental training sessions). We estimate the cost of the original training to the UK team at approximately $140,000. For the group who started in September 2010 and were trained and mentored locally in India, the figure against the UK budget was essentially $0.
Receiving Confirmation
In the training carried out in 2009, the second group of analysts was trained on a different task, whose SPC chart is shown below. The chart follows the team's quality progress after their initial training.
Here, too, the chart shows three major step-ups in quality. Although 100 percent accuracy was achieved in weeks 11, 17 and 18, consistency at the expected standard was not reached until week 21. As with the first team, a group of UK analysts provided constant support and training over this period to reach the target quality scores.
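One way to make "consistency at the expected standard" concrete is to require the target score for a run of consecutive weeks before declaring training complete. The sketch below shows one such rule; the 95 percent target and four-week window are illustrative assumptions, not our actual criteria.

```python
# A minimal sketch of one possible consistency rule: training is considered
# complete once the target pass rate has been met for a given number of
# consecutive weeks. The target and window here are illustrative assumptions.

def training_complete(weekly_scores, target=0.95, consecutive_weeks=4):
    """Return the first week of a run of `consecutive_weeks` scores that all
    meet or exceed `target`, or None if no such run occurs."""
    run = 0
    for week, score in enumerate(weekly_scores, start=1):
        run = run + 1 if score >= target else 0
        if run == consecutive_weeks:
            return week - consecutive_weeks + 1
    return None

# A single 100% week (week 3) does not count as consistency;
# the qualifying run starts at week 5.
scores = [0.90, 0.93, 1.00, 0.92, 0.96, 0.97, 0.95, 0.98]
print(training_complete(scores))  # -> 5
```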
In November 2009, one UK trainer traveled to India to train an additional group of eight analysts in this same task at a new location. Following the lessons learned from the previous training efforts, an improved program yielded the following results.
From the first week of live work, the results met our expectations. By tracking performance and using Six Sigma analysis tools, we achieved significant improvements in our training programs. These improvements lowered training costs through a right-first-time approach and delivered production benefits by bringing well-trained analysts online far more quickly at subsequent recruitment rounds.
Lessons Learned
Our training experiences yielded several key lessons that demonstrate how the same task can be taught more efficiently. The most important factors we discovered were:
- Plan a training program properly
- Allow sufficient time to hire the right people
- Ensure the training materials cover everything that needs to be taught and do not include anything that does not
- Define what acceptable quality is and how it will be measured
- Show that the goals are achievable, and define the consistency that must be demonstrated before training is considered complete and successful
- Set clear targets
- Use the best person to deliver the training – their knowledge and enthusiasm will be the example that the trainees follow
- Support and mentoring are an essential aspect of training and need to be included in the training plan
- Finally, implement what you learn from previous experiences