A complete value stream map (VSM) is like a flowchart on steroids. There are the usual action boxes with arrows showing the flow of work, but also a lot of other information – material and information flow, operating parameters, process and lead times, inventory, a timeline depicting value-added time relative to overall lead time, and so on.
A value stream map can look daunting, especially for anyone who has not worked much with flowcharts and does not have all the needed data. But as one major national bank discovered, there is still a lot to be gained from starting a value stream map, even if there is not enough data to do the perfect map the first time through.
The Challenge: Slash Cycle Time on Mailings
The bank in question mails thousands of credit card offers each month. The process was on a 65-day cycle, meaning if the mailing was in early February, the process would have to be started in December. Bank management wanted to cut that time in half, and go even shorter if possible. Here is why:
- The bank’s credit card offers had a 1- to 2-percent acceptance rate.
- The average balance transferred to the cards (in the targeted demographic segment) was about $1,000.
- The lowest interest rate the bank charged was 12 percent (and some cards went as high as 21 percent).
Given these factors, for every 100,000 offers mailed, the bank could expect to gain at minimum $1,000,000 in new credit card loans (1,000 acceptances multiplied by $1,000 transfers). At the 12 percent minimum rate, that equals about $10,000 in interest per month. This nationwide bank often mailed out many hundreds of thousands of offers, and halving the cycle time would let it roughly double the number of mailings in a year – which could well mean a million dollars or more in additional interest charges.
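The arithmetic is straightforward enough to sketch out. In the example below, the acceptance rate, average transfer and minimum interest rate are the figures quoted above; the 500,000-offer mailing volume is an illustrative assumption, not the bank’s actual number.

```python
# Rough sketch of the interest arithmetic described above.
# The 500,000-offer volume is a hypothetical assumption; the acceptance rate,
# average transfer and interest rate are the figures quoted in the text.

offers_mailed = 500_000        # hypothetical monthly mailing volume
acceptance_rate = 0.01         # low end of the 1- to 2-percent range
avg_balance_transfer = 1_000   # dollars per accepted offer
annual_rate = 0.12             # lowest rate the bank charged

new_loans = offers_mailed * acceptance_rate * avg_balance_transfer
monthly_interest = new_loans * annual_rate / 12

print(f"New credit card loans: ${new_loans:,.0f}")         # $5,000,000
print(f"Interest per month:    ${monthly_interest:,.0f}")  # $50,000
```

Every extra mailing cycle squeezed into the year adds another block of loans earning interest at this rate, which is how a faster cycle translates into the million-dollar figure above.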
Another, more minor factor was that addresses tend to go out of date quickly. So the faster the bank could get offers out to the lists it purchased, the fewer bad addresses it would have to deal with.
The Procedure: Determine Value and Time
The bank had never studied, mapped or measured this process before, so doing a complete value stream map was out of the question. But just doing some preliminary analysis was well worth the effort.
One of the most important elements of a value stream map is quantifying times. A fully developed VSM has time data for each step – how long it takes a work item to make it through one step, how long it waits in queue before moving to the next step, etc. Initially, the bank had no data at all on actual time needed to complete various steps in the process, and it determined that gathering detailed data would be impossible since there were no mechanisms for tracking time per step.
But with a minimal effort, a consultant working on the project with them was able to divide the process into four phases and get reliable estimates for the length of those phases. The “prototype VSM” is shown in Figure 1.
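Figure 1 is not reproduced here, but the structure of such a prototype VSM can be illustrated with a small sketch. The phase names and durations below are hypothetical placeholders, not the bank’s actual estimates; the point is that even rough phase-level numbers allow a first comparison of value-added (hands-on) time against total lead time.

```python
# Illustrative prototype VSM: four phases with rough elapsed-time estimates
# and the portion of each phase spent on hands-on work. All names and numbers
# are hypothetical placeholders, not the bank's real figures.

phases = [
    # (phase name,              elapsed days, hands-on work days)
    ("Acquire mailing lists",   15,  2),
    ("Merge and scrub data",    20,  3),
    ("Generate offer letters",  15,  2),
    ("Print and mail",          15,  4),
]

total_lead_time = sum(elapsed for _, elapsed, _ in phases)
total_touch_time = sum(work for _, _, work in phases)

print(f"Total lead time:   {total_lead_time} days")                    # 65 days
print(f"Hands-on time:     {total_touch_time} days")                   # 11 days
print(f"Value-added ratio: {total_touch_time / total_lead_time:.0%}")  # 17%
```

Even at this coarse level, a gap that large between hands-on time and elapsed time signals that most of the lead time is waiting – which is exactly where the bank went looking next.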
With this basic information in place, the bank could then examine the work that happens in each step and ask three key questions:
- Does this work add value?
- What determines how long this step takes?
- Is there any way to do it faster? (For example, can some work currently done manually be automated, or can some work in this step be combined with work from another step?)
What Was Discovered: Big Delays, Lots of Waste
The bank’s analysis of the VSM showed that the pace of the process was determined in large part by an outdated mailing schedule that spelled out exactly what would be happening on each day of a two-month cycle. Unfortunately, that schedule was built around artificial timelines, not actual process capability. So, for example, if a mailing list was ready to be merged on Day 20 but the schedule said that merging was to occur on Day 25, the list would sit in the database for five days.
Cumulatively, delays caused by this artificially determined mailing schedule accounted for almost half of the two-month processing time. Simply moving to a system in which triggers alerted one step that work from a previous step was ready to go saved almost 30 days in the cycle time.
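A minimal sketch shows how those calendar-driven handoffs accumulate idle time. Apart from the Day 20/Day 25 example cited above, the day numbers below are hypothetical placeholders.

```python
# Minimal sketch of how calendar-driven handoffs accumulate idle time.
# Each tuple is (day the work was actually ready, day the schedule allowed
# the next step to start). Apart from the (20, 25) example from the text,
# all numbers are hypothetical placeholders.

handoffs = [
    (5, 12),    # lists received vs. scheduled start of formatting
    (20, 25),   # the example above: list ready Day 20, merge scheduled Day 25
    (30, 40),   # merge complete vs. scheduled letter generation
    (48, 55),   # letters ready vs. scheduled mail drop
]

schedule_driven_wait = sum(max(0, scheduled - ready) for ready, scheduled in handoffs)
trigger_driven_wait = 0  # each step starts as soon as the prior step signals "ready"

print(f"Idle days under the fixed schedule: {schedule_driven_wait}")  # 29
print(f"Idle days under pull triggers:      {trigger_driven_wait}")   # 0
```

Replacing the calendar with simple "work is ready" triggers eliminates that idle time without changing how the work itself is done.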
The analysis also exposed a lot of work that was not value-added from the customer’s perspective. For example, the bank obtained mailing lists from a number of sources, and those lists arrived in a variety of electronic formats. Each list had to be converted into a common software format for processing. However, before moving a list into the common format, the data processors would first convert it into whatever format they personally found easiest to manipulate, and only then convert it again into the common format – an extra conversion that added time without adding value.
A third main contributor to delays was the “large-batch” mentality built into the original system. The people who generated the offer letters through a mail merge process liked to work in monthly batches as large as 1.5 million letters. The system was capable of handling about 300,000 records per day. So if a particular list had only 50,000 or 100,000 names, they would wait until more lists came in and do the entire month at one time. Running 1.5 million records in one batch caused numerous delays downstream.
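The penalty follows directly from the figures quoted above: at roughly 300,000 records per day, the full 1.5-million-record batch ties up the merge step for about five days, while a 50,000-name list could have been run in a fraction of a day had it not waited for the monthly batch to be assembled. A short sketch of that arithmetic:

```python
# Batch-size arithmetic using the figures quoted in the text:
# ~300,000 records/day of mail-merge capacity, monthly batches up to 1.5 million.

DAILY_CAPACITY = 300_000

def processing_days(records: int) -> float:
    """Days the mail-merge step is tied up running a batch of this size."""
    return records / DAILY_CAPACITY

print(f"{processing_days(1_500_000):.1f}")  # 5.0 days for the full monthly batch
print(f"{processing_days(50_000):.2f}")     # 0.17 days for a small list run on its own

# On top of the five processing days, a small list that arrives early in the
# month can sit for weeks simply waiting for the rest of the batch to be
# assembled - waiting that disappears if lists are run as they arrive.
```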
The Result: Cycle Time Down by 60 Percent
Four months after the project began, lead time for this mass mailing process was down from 65 days to 27 days. A year later, it reached 12 days.
The lesson is that an organization should not let the complexity of some value stream maps prevent it from taking the first steps in doing a value and time analysis. As the bank learned, there was so much waste in its original process that broad measures of time and a little probing into what was truly value added allowed it to quickly cut lead time by nearly 60 percent. If a company has never looked at how time is spent in a particular process or done a value-added analysis, the odds are very good that the same kind of benefits are out there waiting to be reaped.