Anyone who has read posts in a quality improvement group or blog has seen discussions touting one methodology over another. Often, considerable effort is spent tearing down a rival methodology. For example:
- “My list of letters is better than your list of letters.”
- “To meet your business needs you must use my list and only my list of steps and the associated language.”
- “If you don’t, you cannot and will never be successful.”
Malarkey!
What really is the difference between PDCA, PDSA, DMAIC, DCOV, IDOV, IDDOV, DIDOVM, DMADOV, DMADV, DMEDI, 8D, A3, Red X, Theory of Constraints, BPD, Agile and the scientific method? I propose that the real difference is in focus and scope rather than method. The different acronyms then lead to a push for different tools, yet all these groupings of letters and numbers follow the same thought process. Each of these acronyms has its “must use” tools and its prescribed tool usage by phase. Why not do something novel and use the tool that is best for the job at hand, independent of any acronym? For example, you can use a 10 mm wrench on a bicycle, a race car and a supercomputer. You can use that same wrench for teardown or assembly, new build and repair. You might even be able to drive a nail with it, but a hammer would be much more efficient and will demonstrate better capability.
Each of these methods begins by answering the question: What decisions must be made? They then build the plan for making those decisions, or more specifically, evidence-based decisions. In this context, evidence means relevant, reliable data measuring the correct parameter(s), linked to the needs of the customers. We should define the scope and objectives for the project or activity; some are more complex than others. We must define what success looks like, and we usually ought to have a clear business benefit in mind. Sometimes there is a technical objective, such as creating a new super widget, but there should be a business need to do so. How much priority should be given to an activity that has no clear business benefit? If we assume that everyone is busy, there is always something else to work on that produces more value for the business and customers. This step should deliver a plan of the activities that will provide the deliverables on time and on budget, with reliability, using relevant data (evidence). The plan should address: Who? What? Where? When? And how?
The next step is to choose what to measure. How will we separate “winners” from “losers,” that is, good solutions from not-so-good solutions? This sounds simple and easy, but most of the time it isn’t. What is important to the customers, businesses and stakeholders? How are these needs and desires measured reliably, effectively and efficiently? Under what operating conditions must the subject of the project function? How are the results scored? There are tools to help, drawn from the various methodologies according to focus and scope. Be careful of falling into the trap of merely meeting a specification; we should verify that the specification meets the customer’s expectations. Customer empathy is often lacking and seldom built into projects. How can value be provided to the customer if we don’t know what the customer wants? Independent of where you live on the system’s “V” (requirements flow down, capability flows up), your requirements must be traceable to customer expectations for ultimate success. A minimal sketch of turning these questions into scorable criteria follows below.
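As one illustration, and not a prescription from any of these methodologies, here is a minimal Python sketch of turning customer needs into weighted, measurable criteria. The criterion names, weights, targets and units are hypothetical placeholders, not real requirements.

```python
# A minimal sketch, assuming hypothetical customer needs. The names, weights,
# targets and units below are illustrative placeholders, not real requirements.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str      # what matters to the customer, in the customer's terms
    weight: float  # relative importance; all weights should sum to 1.0
    target: float  # the value that fully meets the customer's expectation
    unit: str      # the unit of the reliable measurement chosen for it

criteria = [
    Criterion("time to complete task", weight=0.40, target=30.0, unit="seconds"),
    Criterion("unit cost",             weight=0.35, target=12.5, unit="USD"),
    Criterion("defects per thousand",  weight=0.25, target=1.0,  unit="ppk"),
]

# Sanity check: the weights should cover the whole picture of customer value.
assert abs(sum(c.weight for c in criteria) - 1.0) < 1e-9
```

The point of writing the criteria down this explicitly is that each one forces the questions above: Is it measurable? Is the measurement reliable? Does it trace back to a real customer expectation?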
Next, we must identify alternative concepts that could provide potential solutions. What are the many ways of meeting the customer and business requirements? This is where creativity techniques should be used, from brainstorming to TRIZ to axiomatic design. Be wary when the only choices are the current method and one new method, and warier still when a new design has only a single alternative. When there is only one choice, what is the likelihood of picking the best concept, or even a good concept? With alternative concepts, the more the better.
Pick the winning alternative, and optimize it when appropriate. To pick the winner, we apply the selection rules we identified earlier. Many times, the chosen concept needs to be optimized, in other words, fine-tuned. This is where DoE, modeling and all the fun geeky tools typically fit. (Geek is a term of endearment in my world.) A sketch of the selection step appears below.
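Continuing the earlier sketch, here is one hypothetical way the selection might look in code: a simple weighted-score comparison, not the method of any particular acronym. The alternatives and their 0-to-1 scores are invented for illustration; in practice they would come from measured data, not guesses.

```python
# A minimal, self-contained sketch of a weighted-score selection.
# Criteria weights and all scores below are hypothetical examples.
weights = {"time": 0.40, "cost": 0.35, "defects": 0.25}

# Per-criterion scores normalized to 0-1, where 1 fully meets the expectation.
alternatives = {
    "current design": {"time": 0.6, "cost": 0.7, "defects": 0.5},
    "concept A":      {"time": 0.9, "cost": 0.5, "defects": 0.8},
    "concept B":      {"time": 0.7, "cost": 0.9, "defects": 0.6},
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores using the customer-derived weights."""
    return sum(weights[name] * value for name, value in scores.items())

# Rank the alternatives; the evidence, not the acronym, picks the winner.
for name in sorted(alternatives, key=lambda a: weighted_score(alternatives[a]), reverse=True):
    print(f"{name}: {weighted_score(alternatives[name]):.2f}")
```

The same structure works whether the scores come from a quick Pugh-style comparison or from a full DoE; only the quality of the evidence changes.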
Next, we must confirm the choice: based on analysis, does the chosen alternative really perform as expected? Does it actually meet the requirements?
Finally, does the performance of our chosen and optimized solution meet our customers’ expectations and provide high value (performance, cost, time and risk)? Does the solution meet business objectives over time in the real operating environment? We must then document our results for the business.
Rather than spending energy arguing about acronyms, let’s focus on making sound, evidence-based decisions that deliver value to our customers and our businesses. We all need to pull together and focus not on “doing” (insert favorite acronym here) but on executing evidence-based decision-making: developing meaningful criteria, generating many alternatives and using evidence to find the solution that delivers excellent value for our business and our customers, independent of the favored acronym. All of these methods are effective for the scope and focus intended.
“A rose by any other name would smell as sweet,” wrote William Shakespeare in Romeo and Juliet. In the play, Juliet argues that it should not matter that Romeo comes from her family’s rival house of Montague: by any other name Romeo would be just as handsome, and Juliet would still be in love. Indeed, if he did not carry the Montague name, there would be no obstacle to their marriage. Names do not change what things really are. Let’s go solve problems using evidence to make well-reasoned decisions, independent of the solution’s name.
The figure below compares the various problem-solving methodologies; the colors highlight similar tools and approaches used across the methods.