My recent work projects have had me thinking a lot about variation. Our clients are interested in variation because they know that an important element of operational excellence is identifying and understanding the sources of variation in a product, process, or service. Not all variation is the same, however: we can generally classify observations as either common cause or special cause. Common cause variation can be attributed to the system itself, while special cause variation can be attributed to the individual functioning within that system.
Many factors of a given process, such as environment, equipment, materials, methods, and people, act at random and independently of each other. If only common or expected causes of variation are present, the output of the process forms a distribution that is stable over time. This type of distribution lends itself very well to analysis and prediction models. Special cause variation, by contrast, arises from non-random events, so the process output is not stable over time and is not predictable. Such a process must be brought under statistical control by detecting and removing the special cause variation.
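To make the distinction concrete, the classic tool for detecting special cause variation is a control chart. The sketch below is a minimal individuals (I) chart in Python: it estimates 3-sigma control limits from the average moving range and flags any point outside them as potential special cause variation, while points within the limits are treated as common cause. The `measurements` data are made up for illustration; real applications would use the pre-packaged tools mentioned later.

```python
def control_limits(values):
    """Estimate 3-sigma limits from the average moving range.

    Uses the standard d2 = 1.128 constant for subgroups of size 2.
    """
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma = mr_bar / 1.128
    return mean - 3 * sigma, mean + 3 * sigma

def special_cause_points(values):
    """Return (index, value) pairs falling outside the control limits."""
    lcl, ucl = control_limits(values)
    return [(i, v) for i, v in enumerate(values) if v < lcl or v > ucl]

# Hypothetical process output: mostly stable, with one unusual spike.
measurements = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 14.5, 10.1, 9.7, 10.0]
print(special_cause_points(measurements))  # the 14.5 spike is flagged
```

Once a flagged point's cause is identified and removed, the limits are recomputed from the remaining data, and the process can be monitored for stability going forward.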
Assessing variation in a highly variable process involving human error is unlike manufacturing, where we can easily hold certain variables constant. Most examples of variability assessment are simple and come from manufacturing, where the attributes of a large batch of a single item are observed and recorded. These examples take place in a relatively controlled environment and employ basic statistical methods that come pre-packaged in GUI-based applications like Minitab and Stata.
What if the process you are trying to evaluate is multi-faceted, highly subjective, and allows many opportunities for operator error?
…Continued in Part 2