To facilitate the development of its employees and better respond to the changing business environment, one department of a large financial-services company decided to revamp its existing performance management system through a Six Sigma project. A pre-project analysis revealed that a complete redesign of the system was required. As incremental improvement in the existing system was not possible, the project team followed the Design for Six Sigma (DFSS) DMADV (Define, Measure, Analyze, Design, Verify) roadmap, incorporating best practices from Six Sigma, project management and information technology (IT) service management. This case study covers a few major aspects of the project, which could readily be applied in similar situations across various industries and business environments.
Define Phase
In the Define phase, the team created a goal statement: To implement a comprehensive, well-aligned and consistent performance management system for Department A.
The team looked at the existing performance management system. It had the following prominent attributes:
- The system supported all product lines of Department A, covering more than 200 employees.
- The implementation and usage of the system were limited to individual departments.
- The company had functional silos, and employee goals were determined within the department.
- Unit managers (there were multiple units within each line of business) were responsible for setting performance targets for their units.
- Individual performance was compared against the set target.
Measure Phase
As part of the Measure phase of the project, the team analyzed the existing performance management system. Then, by interviewing key stakeholders, team members identified what the company wanted from such a system.
In particular, the improvement team focused on the identification of:
- Most useful features of the current system
- Not-so-useful features
- Missing features (i.e., needed improvements)
When interviewing managers, team members asked the following questions:
- How can this system help you in coaching, performance appraisal and decision making?
- What information do you want to receive in order to develop and maintain employee and service performance?
The team further had to consider these factors for the performance management system:
- Frequency: How often should metrics be updated?
- Availability: When should the performance management system be available?
- Security: Who should be able to see what information?
- Continuity: Is it a business-critical application? What needs to happen during/after a disaster?
- Capacity: How many users need to be supported? How much data needs to be stored?
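In practice, the answers to these questions can be recorded as a simple, reviewable configuration. The following is a minimal sketch in Python; every field name and value is an illustrative assumption rather than the team's actual specification.

```python
# Illustrative non-functional requirements for the performance management
# system, captured as a plain data structure so they can be reviewed and
# version-controlled. All names and values are assumptions for illustration.
PMS_REQUIREMENTS = {
    "frequency": {"metric_refresh": "daily"},           # how often metrics update
    "availability": {"window": "06:00-22:00 local"},    # when the system must be up
    "security": {                                       # who sees what
        "employee": ["own_metrics", "peer_group_summary"],
        "unit_manager": ["unit_metrics"],
        "department_head": ["all_metrics", "weights"],
    },
    "continuity": {"business_critical": False,          # disaster-recovery stance
                   "recovery_time_hours": 48},
    "capacity": {"users": 200, "retention_months": 24}, # sizing assumptions
}
```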
Analyze Phase
Based on the information gathered during the Measure phase, the team identified the vital problems of the existing system and derived key requirements for the new system as an output of the Analyze phase (table below).
Deriving the Key Requirements of a New Performance Management System
| Problem with Existing System | Key Requirement for New System |
| --- | --- |
| Managers were able to influence the targets heavily and were setting lenient targets. Thus, many employees were rated high performers while the business was not benefiting equally. | • Head of department to set product-line targets. These targets should be used to compare roll-up level metric values to determine each unit’s performance. • Currently achieved performance levels at the unit and department level should be used to derive future performance targets. |
| Performance metrics and ratings across product lines were not standardized, making it very difficult to translate and roll up metrics from the individual employee to the department level. | Standardize the performance management process, metrics and ratings to enable quick understanding of the process, roll-up of metrics and comparison of employees across product lines. |
| Performance metrics were available only at the end of the month, making it difficult for managers and individual associates to take corrective action proactively. | The system should be refreshed daily to provide up-to-date information. |
| Productivity was weighted much more heavily than quality, and the focus on productivity came at the expense of quality. | Give equal importance to productivity and quality. The department head should also have a mechanism to change the relative importance according to business need (see the sketch following this table). |
| Individuals within each product line were compared against each other, even if performing different tasks. This comparison was not standardized, making the entire system biased toward some types of tasks. | Set up peer groups to enable fair comparison. Together with standardized performance metrics and ratings, this should enable comparison across the board. |
| Individual employees did not have access to their own performance metrics, hindering self-directed performance improvements. | Each employee should have access to their own performance metrics and a way to compare them against the baseline and against the peer group. |
| Month-over-month metric trends were not available, and creating such a trend report required a lot of manual effort. | Month-over-month metric values and trends should be generated automatically, with a facility to select the reporting period. |
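Several of these requirements lend themselves to a simple calculation: equal but adjustable weights for productivity and quality, a dynamic baseline drawn from peer-group averages, and metrics each employee can compare against that baseline. The Python sketch below shows one way such a composite rating could be computed; the function name, default weights and sample data are illustrative assumptions, not the department’s actual formulas.

```python
from statistics import mean

def composite_score(productivity, quality,
                    peer_productivity, peer_quality,
                    w_productivity=0.5, w_quality=0.5):
    """Rate an employee against dynamic peer-group baselines.

    Each metric is expressed as the ratio of the employee's value to the
    peer-group average, then combined with weights the department head
    can adjust (equal by default). Illustrative sketch only.
    """
    prod_index = productivity / mean(peer_productivity)
    qual_index = quality / mean(peer_quality)
    return w_productivity * prod_index + w_quality * qual_index

# Hypothetical peer group performing the same type of task.
peer_prod = [42, 38, 45, 40]          # e.g., transactions processed per day
peer_qual = [0.97, 0.95, 0.99, 0.96]  # e.g., accuracy rate

score = composite_score(44, 0.98, peer_prod, peer_qual)
print(f"Composite score vs. peer baseline: {score:.2f}")  # > 1.0 means above average
```

Because the baseline is the peer-group average rather than a fixed number, the target moves with actual business conditions, which is exactly the dynamic-target behavior explored in the Design phase below.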
Design Phase
The business environment, with its functional silos, was not conducive to a solution that incorporated a balanced scorecard approach: the silos made it difficult to cascade organizational-level targets to departmental-level targets, and further to individual-level targets. For these reasons, the balanced scorecard approach was eliminated from the scope of the project.
Because a key requirement of the new performance management system was to move from a fixed-target system to a dynamic-target system, two alternatives for measuring baseline performance were thoroughly explored: 1) the best-known performer, also referred to as the k-performer, and 2) the average performer. Ultimately, the department decided to use average performance as the baseline.
Before deciding to move from a fixed-target system to one based on a dynamic target, the team deliberated at length on the question. Some of the prominent points that emerged from those discussions would be of interest to all practitioners:
- A fixed-target system provides visible targets to employees. Typically, it does not require complex calculations and makes it easy for individual employees to determine their own rating based solely on their own performance. On the other hand, a dynamic-target determination system (whether the best performer or the average performer serves as the baseline) makes the target harder to determine and leaves employees with some guesswork until the final performance targets are derived and announced.
- A fixed-target system is susceptible to violations: targets can be set either too strict or too lenient. A dynamic-target determination system closes this gap. It is also self-corrective in nature, adapting to business and employee performance. For example, in a fixed-target system, if there is not enough work available to achieve productivity targets, employees would not have enough opportunity to meet them and would be rated “below target.” A dynamic-target determination system accommodates such fluctuations and is thus more robust.
- Setting the k-performer as the baseline would make the entire population’s performance ranking highly vulnerable to the performance of one individual, much as a single outlier can skew a set of data. Setting the average performance as the baseline reduces this vulnerability, making the system more robust (see the sketch following this list).
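A small numerical example makes the outlier argument concrete. The Python sketch below uses hypothetical productivity figures for one peer group and compares ratings computed against the best performer (k-performer) with ratings computed against the average performer:

```python
from statistics import mean

scores = [40, 42, 38, 41, 39, 75]  # hypothetical figures; 75 is an outlier

# Baseline 1: best-known performer (k-performer).
k_baseline = max(scores)
# Baseline 2: average performer.
avg_baseline = mean(scores)

for s in scores:
    print(f"score={s:>3}  vs k-performer: {s / k_baseline:.2f}"
          f"  vs average: {s / avg_baseline:.2f}")

# Against the k-performer baseline (75), everyone else lands near 0.5,
# so a single outlier drags the whole group's ratings down. Against the
# average (~45.8), most ratings stay close to their true relative standing.
```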
Equipped with input from stakeholders and the derived requirements for a new performance management system, the team moved forward with a solution following these five steps:
- Identification of alternatives
- Comparison of alternatives
- Selection of the most feasible alternative
- Creation of an implementation plan
- Implementation of the solution
Those steps were considered from both a functional and an IT perspective.
Verify Phase
As part of process control, a technique for automated data validation and verification was employed. It flags any out-of-order data points for line managers and the department director. These out-of-order data points, along with any other significant events, are recorded in an event log. An incident log was also established to capture the various incidents that take place with respect to the performance management system. Together, the event log and the incident log play a pivotal role in identifying improvement opportunities.
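The case study does not describe the validation technique in detail; one common approach is a control-chart style check that flags points falling outside statistical limits and writes them to an event log. The Python sketch below assumes a simple three-sigma rule and an illustrative log format, so the function name, limits and sample data are assumptions rather than the team’s actual implementation.

```python
import logging
from statistics import mean, stdev

logging.basicConfig(filename="pms_event.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def validate_daily_metric(history, new_value, metric_name, sigma_limit=3.0):
    """Flag a new data point that falls outside control limits.

    `history` is the recent series for one metric; a point more than
    `sigma_limit` standard deviations from the historical mean is treated
    as out-of-order and recorded in the event log for review by line
    managers and the department director. Illustrative sketch only.
    """
    mu, sd = mean(history), stdev(history)
    lower, upper = mu - sigma_limit * sd, mu + sigma_limit * sd
    if not lower <= new_value <= upper:
        logging.warning("Out-of-order point: %s=%.2f outside [%.2f, %.2f]",
                        metric_name, new_value, lower, upper)
        return False
    logging.info("Validated: %s=%.2f", metric_name, new_value)
    return True

# Hypothetical daily productivity series with one suspect new point.
validate_daily_metric([40, 42, 38, 41, 39, 40, 43], 75, "unit_productivity")
```

Routing flagged points into the same event log that captures other significant events keeps a single audit trail to mine for the improvement opportunities mentioned above.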
About 12 months after implementation, the system has been performing to expectations. An evaluation of whether to maintain the status quo or to improve the system further will be made as part of the strategic planning session for the following year.