Practical Significance versus Statistical Significance

Anytime we draw conclusions from statistical inference, other process evidence must support the conclusion. Statistical evidence is only half of the voice of the process. The big picture includes a thorough look at the practical significance of the statistical result.

One area that gives many process improvement teams difficulty is the selection of an acceptance level that is consistent with the reality surrounding the process. There are no hard and fast rules for selecting the best acceptance criteria. Selection requires observation of the process, an evaluation of the business’ objectives, an understanding of the business’ economic realities, and, most importantly, an appreciation of the CTQs of the business’ customer base. For example, the acceptance criteria for the safety of an airplane might be set at 0.2 instead of 0.5.

Another problem area is the interpretation of the statistical result. The data create a picture of the process’ behavior; is this picture consistent with reality? Some important questions to answer are:

Does the statistical result make sense within the process’ current reality?
Does the statistical result point the way to defect reduction?
Does the statistical result point toward a reduced COPQ?
Are there any negative impacts associated with accepting the statistical result?
Does the customer care?

A good data detective will always question statistical conclusions. Performing reality checks throughout the statistical analysis process will help to prevent costly mistakes, improve buy-in, and help to sell the recommendations made by the team.


When hypothesis-testing tools are used, we are working with statistical significance. Statistical significance is based upon the quality and quantity of the data. Practical significance involves whether the observed statistical difference is meaningful to the process.

This can work two ways. First, a statistically significant difference can indicate that a problem exists while, at the same time, the actual measured difference has little or no practical significance. For example, when comparing two methods of completing a task, a statistically significant difference is found in the time required to complete the task. From a practical standpoint, though, the cycle time difference has no impact on the customer. Either the team measured something unimportant to the customer, or a larger difference is needed to affect the customer.

The opposite is also true. The team can find that the observed time difference from above is not statistically significant, but that there is a practical difference in customer or financial impact. The team may need to adjust the acceptance criteria, collect more data (i.e., increase the sample size), or move forward with process changes.

When statistical and practical significance do not agree, it indicates that an analysis problem exists. This may involve sample size, voice of the customer, measurement system problems, or other factors.
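The first case above (statistically significant but practically trivial) can be sketched in Python. The example below is purely illustrative: it uses simulated cycle times and a hand-rolled Welch t-statistic, not any real team's data, to show how a large sample can make a half-second difference look statistically significant even though no customer would notice it.

```python
import math
import random
import statistics

def welch_t(a, b):
    """Welch's t-statistic for the difference of two sample means."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (ma - mb) / se

random.seed(1)
# Two task methods whose true cycle times differ by only half a second
method_a = [random.gauss(300.0, 5.0) for _ in range(5000)]
method_b = [random.gauss(300.5, 5.0) for _ in range(5000)]

t = welch_t(method_b, method_a)
diff = statistics.mean(method_b) - statistics.mean(method_a)
print(f"t = {t:.1f}")                 # large |t|: statistically significant
print(f"difference = {diff:.2f} s")   # ~0.5 s: far too small to matter to a customer
```

With 5,000 observations per method, the standard error shrinks enough that even this tiny difference produces a large t-statistic. Whether 0.5 seconds matters is a practical question the statistic cannot answer.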

Standardizing Processes

When working to improve a process, it is not enough to implement a solution and stop. Without a plan to maintain the gains, at the first sign of trouble, systems will revert to what has been comfortable in the past. That usually means a return to some past operating procedure. To prevent this, there must be a linkage of the improvement to the management system. This involves monitoring important metrics, documenting methods and procedures, and providing a strategy for dealing with problems in the future.

This is the purpose of the Control Phase of a Six Sigma project. It involves a plan to maintain the gains from the new process and building that plan into the management system, which provides for ongoing accountability. Because process improvement projects typically cross functional boundaries, the various process owners, and what they are accountable for, will need to be specified and included in the plan in order to ensure long-term success.

The result is consistent customer satisfaction, a linkage between quality initiatives and strategic objectives, direction for future improvement activities, and a reliance on data by the process owners. These are the ingredients of successful improvement projects.

 Discipline (Standardization)

 Discipline, in this case, applies to the adherence to standardization. Just as a disciplined athlete adheres to a standard practice routine to reach the highest level of their performance, a business must have the discipline to adhere to proven methods of doing work. This is standardization.

 Standardization is about making sure that important elements of a process are performed the same way every time, as prescribed by the standardized process. A lack of consistency will cause the process to generate defects and compromise safety. Standardization also provides predictability, which allows the process owners to prevent problems before they affect the customer.

In a process improvement project, the improvement team can use the PDCA (Plan, Do, Check, Act) cycle to find the best way to do the work. The data collected in the PDCA cycle becomes the basis for changing a process, or for leaving it as is. Eventually, when the data show that no further improvement is warranted, a standard work practice is developed.

 When a process or practice is standardized, changes are made only when data shows a need to change. This prevents individuals from doing the work the way that seems best to them, thus compromising quality and negatively affecting the customer. The objective is to maintain consistent quality over time in spite of environmental changes.

 Documentation is an underlying principle in standardization. Making sure documentation is up to date and utilized encourages the ongoing use of standardized methods. In addition, documentation provides the information necessary to anticipate problems and to see where potential improvements can be made.

 If managed properly, standardized work establishes a relationship between people and their work processes. This relationship can enhance ownership and pride in the quality of work performed. From the customer’s perspective, standardized work keeps processes in control so that the highest quality products and services are provided. From the service or product provider’s perspective, standardized work improves safety, improves employee morale, controls production costs, and provides business longevity by returning satisfied customers.

Standardization has three components: elimination of waste, workplace simplification (the 5-S philosophy), and work process analysis. All three components are necessary. If work is standardized without waste elimination, waste production becomes standardized. If work is standardized without workplace simplification, complexity becomes standardized. And if work is standardized without process analysis, the process is not being measured; a process that is not being measured is not being managed.

The place to do all of this documentation is in a work process analysis document. This tool documents how work is done. It is convenient to think of work process analysis as a detailed, annotated version of the process map. It can also document workflow through a physical space (e.g., a factory floor). The constraints of this article prevent me from including a Work Process Analysis template. You can get one, though, by visiting my website at leanmeanprocessimprovement.com.

 The work process analysis tool also makes an excellent training tool. The process steps are detailed and the expected cycle time is given. This becomes a target for the process operators. The process diagram can also be the floor layout of the workflow. The exact content is dependent upon the needs of the process owners.

5S and the Engineering of Waste Reduction

5-S

The 5-S philosophy is associated with lean thinking. The objective of lean thinking is to provide a business with long-term profitability by developing a more effective workplace, which is accomplished by eliminating waste in the work environment. The result is a safer workplace, improved product quality, and lower costs for both the business and its customers.

 Lean thinking may result in a reduction in work force, but that is not its purpose. In fact, the application of lean thinking for the purpose of reducing the work force is not lean thinking at all. Since some companies have done this, lean thinking has been given a bad reputation and has made waste reduction efforts more difficult.

 The 5-S approach involves five activities in the workplace: scrapping, sorting, scrubbing, standardizing, and sustaining. Depending upon which book you read, there may be different names for each S, but the intent is the same.

 Scrapping means to throw away unneeded material. A trashy work environment, in addition to being unsafe, tends to create a casual attitude toward quality. There should be a strategy for knowing what to keep and what to throw away. Take junk mail for example. It should only be handled once. Look at it, decide to use it or throw it away, and then take the appropriate action. When junk mail is handled more than once, it piles up on your desk making normal productive work more difficult. The same thing happens in a shop with trash and old parts, and in a store with boxes and packing material.

 Sorting is the process of placing everything where it belongs. Imagine a toolbox where the drill bits are scattered throughout. If a bit is needed, it will take some time to find the bit. This adds time and cost to work. Now imagine a toolbox with the drill bits organized in a labeled drawer and separated logically by size. The time necessary to find the needed bit and get the job done is shortened, and the cost of the work is reduced.

 Scrubbing the work environment involves cleaning the work area. A clean work area is safer than a dirty one and is conducive to higher quality work. It is related to discarding scrap but goes further by including the cleaning up of what is left. Consider a machine shop where cutting oil is left on the floor. This becomes a slipping hazard and indicates sloppiness. If you were inspecting machine shops to see which one to hire, what would you think about the shop with an oil mess on the floor?

 Another example of the importance of scrubbing is preventative maintenance. In a manufacturing facility, for example, the machining equipment can be painted white and wiped down each shift with white cloths. It becomes easy to see any unusual oil leaks or dirt. This allows the factory workers to diagnose machine problems before breakdowns occur. The result is reduced cost.

 Standardization is about making sure that important elements of a process are performed consistently and in the safest and best possible way. Lack of consistency will cause a process to generate defects and compromise safety. The standardization of work practices increases predictability. Predictability, in turn, allows the process owners and operators to prevent problems before they affect the customer.

Sustain means to maintain the gains. The 5-S philosophy will only work if it is practiced consistently over time.

Mistake Proofing


Mistake proofing is an effort to stop defects at the source. The prime objective is to prevent defects from occurring in the first place, but if they do occur, to stop their progression through the process. By stopping a defect at its source, its cost impact is minimized.

 The further the defect progresses through a process, the more waste occurs. The more waste that occurs, the higher the cost impact. As a result, the best place to stop a defect is in the design of the process, product, or service. Once the process is in place, waste starts to be generated as a process output along with the product or service.

 The first step in mistake proofing is to determine the kind of error, or errors, that caused the defect. In a Six Sigma project, this is what the Define, Measure, and Analyze phases have been isolating on a project level. As part of the Improve Phase, the problem process will be re-engineered. Part of this re-engineering will be mistake proofing the process steps.

 There are general classifications of errors that lead to defects. Different organizations may have somewhat different categories.

 Concentration: Lack of concentration, breaks in concentration, interruptions

Knowledge:  Lack of training or experience

Judgment:  Prejudice, expectation

Mistakes:  Forgetting, accidents

Speed:  Working too fast, working too slow

Standards: The absence of standardized work, absence of performance standards

Independence:  Deciding to ignore rules or standards, freelancing

Intentional: Deliberate mistakes, sabotage

Incidental: Equipment failures, environment, surprises

Unknown: These will usually find their way into one of the above categories after analysis.

 There are several approaches to mistake proofing. Each approach addresses at least one of the above error categories. The following are some of the more common strategies.

In manufacturing, one of the most common approaches is the use of fail-safe devices. These devices prevent the operator or machine from creating a defect. An example would be the use of a slipping-type torque wrench to prevent over-tightening.

The magnification of the senses is another mistake proofing method. Examples include optical magnification to improve vision and closed-circuit video to see where it is not otherwise possible to see (distance, safety, etc.). Also used are pictures instead of numbers (LED bar charts instead of a numerical display on a meter) and multiple signals (audible and visual alarms used together).

 The elimination of error-prone steps in a process is another method of mistake proofing. This may require designing a new process or the use of automation. An example of this is the use of ambient-light sensors to turn outside lighting on or off.

 Facilitation of the work process will also aid in mistake proofing. This is changing the process steps so that they are easier to do, or easier to do right. An example would be to color code parts that are similar in shape. This would make it easier to identify the correct part for assembly.

 Devices that detect an incorrect action or part can be used to mistake proof a process. Examples would include a weld counter to ensure the correct number of welds or a software modification that will not allow incorrect entries.
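As a sketch of the software-modification idea, the hypothetical order-entry validator below refuses incorrect entries outright rather than letting a defect pass downstream. The part catalog and quantity limits are invented for illustration.

```python
# Assumed catalog and limits, for illustration only
VALID_PART_CODES = {"A100", "A200", "B150"}
MIN_QTY, MAX_QTY = 1, 500

def enter_order(part_code: str, quantity: int) -> dict:
    """Accept an order only if every field passes its mistake-proofing check."""
    if part_code not in VALID_PART_CODES:
        raise ValueError(f"Unknown part code: {part_code!r}")
    if not MIN_QTY <= quantity <= MAX_QTY:
        raise ValueError(f"Quantity must be between {MIN_QTY} and {MAX_QTY}")
    return {"part": part_code, "qty": quantity}

print(enter_order("A100", 10))    # a valid entry is accepted
try:
    enter_order("A100", 0)        # the defect is stopped at the source
except ValueError as e:
    print("rejected:", e)
```

The design choice is the same as the weld counter's: the check happens at the moment of action, so a bad entry can never propagate into later process steps.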

There are as many mistake-proofing strategies as there are mistakes. Successful execution requires communication and cooperation between the operators, the process owners, and the engineering staff. In many businesses these functions are siloed and do not work together well. This is why progressive companies are putting together production teams for both products and services. These teams are made up of dedicated operators, engineers, and managers all working in the same process. They all have ownership of the process, and as a result, communication and cooperation are easier to maintain.

Central tendency: Mean, Median, Mode

Before discussing measures of central tendency, a word of caution is necessary. Customers do not feel averages. They feel their specific experience. As a result, while central tendency is an important descriptive statistic, it is often misused. For example, a customer is told that the average delivery time is noon, but his actual delivery time turns out to be 3:00 PM. The customer, in this case, does not experience the average and may feel that he has been lied to.

The central tendency of a dataset is a measure of the predictable center of a distribution of data. Stated another way, it is the location of the bulk of the observations in a dataset. Knowing the central tendency of a process’ outputs, in combination with its standard deviation, will allow the prediction of the process’ future performance. The common measures of central tendency are the mean, the median, and the mode.

Mean, Median, Mode

The mean (also called the average) of a dataset is one of the most used and abused statistical tools for determining central tendency. It is the most used because it is the easiest to apply. It is the most abused because of a lack of understanding of its limitations.

In a normally distributed dataset, the average is the statistical tool of choice for determining central tendency. We use averages every day to make comparisons of all kinds such as batting averages, gas mileage, and school grades.

One weakness of the mean is that it tells nothing about segmentation in the data. Consider the batting average of a professional baseball player. It might be said that he bats .300 (meaning a 30 percent success rate), but this does not mean that on a given night he will bat .300. In fact, this rarely happens. A closer evaluation reveals that he bats .200 against left-handed pitchers and .350 against right-handed pitchers. He also bats close to .400 at home and .250 on the road. What results is a family of distributions.

As can be seen, the overall batting average of this baseball player does not do a good job of predicting the actual ability of this athlete on a given night. Instead, coaches use specific averages for specific situations. That way they can predict who will best support the team’s offense, given a specific pitcher and game location. This is a common situation with datasets. Many processes produce data that represent families of distributions like these. Knowledge of these data characteristics can tell a lot about how a process behaves.
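The way the overall average masks the splits is simple arithmetic: it is just the at-bat-weighted mean of the segment averages. The at-bat counts below are invented for illustration.

```python
# Hypothetical platoon splits for the batter described above
segments = {
    "vs_left":  {"avg": 0.200, "at_bats": 150},
    "vs_right": {"avg": 0.350, "at_bats": 350},
}

hits = sum(s["avg"] * s["at_bats"] for s in segments.values())
at_bats = sum(s["at_bats"] for s in segments.values())
overall = hits / at_bats

print(f"overall = {overall:.3f}")  # a single number that hides the .200 / .350 split
```

The single overall figure is mathematically correct yet predicts neither situation well, which is exactly the segmentation problem described above.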

Another weakness of the mean is that it does not give the true central tendency of skewed distributions. An example would be a call center’s cycle time for handling calls.
If you were to diagram call center cycle time data, you would see how the mean is shifted to the right due to the skewness of the distribution. This happens because we calculate the mean from the magnitudes of the individual observations. The data points to the right have a higher magnitude and bias the calculation, even though they have lower frequencies. What we need in this case is a method that establishes central tendency without “magnitude bias”. There are two ways of doing this: the median and the mode.
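A quick simulation shows the mean being pulled to the right of the median. The handle times below are modeled, for illustration only, as exponentially distributed: mostly short calls with a long right tail.

```python
import random
import statistics

random.seed(7)
# Simulated call-center handle times in minutes (right-skewed by construction)
times = [random.expovariate(1 / 4.0) for _ in range(10_000)]

mean_t = statistics.mean(times)
median_t = statistics.median(times)
print(f"mean   = {mean_t:.2f} min")
print(f"median = {median_t:.2f} min")  # median < mean: the few long calls drag the mean right
```

The typical call is closer to the median than to the mean, which is why the median is the less biased summary for skewed cycle-time data.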

The median is the middle of the dataset, when arranged in order of smallest to largest. If there are nine data points, as in the number set below, then five is the median of the set. If another three is added to the number set, the median would be 4.5 (the mid-point of the dataset residing between 4 and 5).

1 2 3 4 5 6 7 8 9

1 2 3 3 4 5 6 7 8 9

The mode, on the other hand, is a measure of central tendency that represents the most frequently observed value or range of values. In the dataset below, the central tendency, as described by the mode, is three. Note that the median is 4.5 and the mean is 4.8, indicating that the distribution is skewed to the right.

1 2 3 3 4 5 6 7 8 9
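Python's statistics module confirms all three values for the dataset above:

```python
import statistics

data = [1, 2, 3, 3, 4, 5, 6, 7, 8, 9]

print(statistics.mean(data))    # 4.8
print(statistics.median(data))  # 4.5 (midpoint between the 5th and 6th values)
print(statistics.mode(data))    # 3  (the only repeated value)
```

Mean > median > mode is the classic signature of a right-skewed distribution.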

The mode is most useful when the dataset has more than one segment, is badly skewed, or when it is necessary to eliminate the effect of extreme values. An example of a segmented dataset would be the observed heights of all thirty-year-old people in a town. This dataset would have two peaks because it is made up of two segments. The male and female data points would form two separate distributions, and as a result, the combined distribution would have two modes.

In this dataset, the mean would be 5.5 and the median would be of similar magnitude. Using the mean or median to predict the next person’s height would not be of value. Instead, knowing the gender of the next person would allow the use of the appropriate mode. This would result in a better predictor of the next person’s height.
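Using invented heights (in feet) for the two segments described above, the combined mean lands near 5.5 and predicts no individual well, while the segment means remain useful predictors.

```python
import statistics

# Hypothetical heights in feet for the two segments, for illustration only
female = [4.9, 5.0, 5.0, 5.1, 5.2]
male   = [5.8, 5.9, 6.0, 6.0, 6.1]
combined = female + male

print(statistics.mean(combined))  # near 5.5: between the two groups, close to neither
print(statistics.mean(female))    # the segment means are the useful predictors
print(statistics.mean(male))
```

Knowing which segment the next observation comes from, and using that segment's center, is what makes the prediction useful.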

In other words, the appropriate method of calculating central tendency is dependent upon the nature of the data. In a non-skewed distribution of data, the mean, median, and mode are equally suited to define central tendency. They are, in fact, right on top of each other.

In a skewed distribution, like that of the call center mentioned earlier, the mean, median, and mode are all different. For prediction purposes, with a skewed distribution, the mean is of little value. The median and the mode would be better predictors, but each tells a different story. Which is best depends upon why the data is skewed and how the result will be used. In a skewed dataset, the median may be the best indication of central tendency for hypothesis testing (see “Non-Parametric Tests”). The mode may be a better predictor of the next observation.

A shift in the process’ output can make a dataset seem skewed. In that case, the recent data is evidence of special cause variation. It means that the dataset is on the way to becoming bimodal, not skewed. For example, consider measuring the heights of all thirty-year-old people in a town, as above. If females are measured first, there will be a normally distributed dataset centered around 5 feet. As the men begin to be measured, the dataset will begin to take on a skewed look. Eventually, the dataset will become bimodal. This phenomenon can make statistical decision making difficult. The key is to understand the reason for the dataset’s skewness.

The lesson to be learned here is that things are not always what they seem to be. You have to know what is happening behind the numbers to make the correct decision about how to calculate central tendency.

Understanding the nature of the data is also critical to making good choices about which statistical tools to use. Many poor conclusions find their origin in a lack of data intelligence.

In summary, as a rule, the mean is most useful when the dataset is not skewed or multi-modal. Either the median or mode is useful when the dataset is skewed, depending upon why it is skewed. The mode is most useful when the dataset is multi-modal. Under all circumstances, the nature of the data will dictate which measure of central tendency will be best.

Cost of Poor Quality

The cost of poor quality (COPQ) is the total cost impact of defects produced by the process. There have been many discussions, some heated, about what categories of costs should be considered in this important process metric. Many organizations make the mistake of only counting the COPQ that they can see. The problem is that this is only the tip of the iceberg. One way to see these costs is to look at which expense types in the process’ operating budget would decrease if the process operated defect free. From this point of view, the cost of poor quality becomes the difference between the as-is cost of producing a product or service and the cost of production with no defects.

 The COPQ of a process appears in three categories: prevention costs, appraisal costs, and failure costs. Failure costs can be broken down further into internal failure costs and external failure costs.

 Prevention costs are associated with any activity designed to prevent defects. This includes quality improvement efforts, re-engineering, and new process design. These activities are non-value added in the eyes of the customer.

 Appraisal costs are associated with inspection activities. These activities are designed to prevent existing defects from getting to the customer. Referring back to the Define Phase, remember that any activity associated with finding defects after they occur is non-value-added. Even though the customer may be glad that the defects were caught before the delivery of the product or service, they do not want to pay for the cost of removing bad outputs.

Failure costs are associated with the mitigation or correction of defects. All internal and external failure activities are non-value added. Internal failure costs are incurred before the defective product or service is delivered to the customer. Examples are scrap and rework. External failure costs are incurred after the defective product or service is delivered to the customer. Examples are warranty costs, customer returns, customer complaints, and lawsuits.
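Totaling the four categories described above is straightforward; the dollar figures below are purely illustrative, not from any real operating budget.

```python
# COPQ categories as described above, with invented annual figures
copq = {
    "prevention":       12_000,  # training, re-engineering, new process design
    "appraisal":        18_000,  # inspection and audit activities
    "internal_failure": 35_000,  # scrap and rework caught before delivery
    "external_failure": 60_000,  # warranty, returns, complaints, lawsuits
}

total = sum(copq.values())
print(f"total COPQ = ${total:,}")  # $125,000
```

Notice that in this sketch the failure categories dwarf prevention spending, a pattern that often motivates shifting budget from appraisal and failure toward prevention.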

 An improvement team needs to investigate all potential cost categories in order to capture all of the costs of poor quality. The advantage for the improvement team is more than just a set of data. An understanding of the cost structure surrounding a process will prove extremely valuable when analyzing the process’ performance and when trying to determine which process problem areas to focus on.

Six Sigma Performance Teams

If you were selecting a basketball team, what criteria would you use for the selection of players? If an academic team were being selected, what would your criteria be? If the selection of a Six Sigma process improvement team were the objective, what criteria would be used?

It seems so simple, but we can sometimes make things complicated. Too much attention to politics and control will spoil the team’s chemistry. There are some simple rules, though, that can make the process less painful and more successful.

First, the make-up of a Lean Six Sigma team will need to change a little as it moves through the process improvement phases. There will be a core group of members all the way through, but there will be a shifting of other members depending upon what the team needs at the time.

There are three main types of team members: regular, ad hoc, and resource members. The regular team members attend all meetings, unless advised otherwise, and participate in all team activities. Ad hoc team members participate only when the team requires their expertise. Lastly, there are resource team members. Their meeting attendance is at the discretion of the team leader. These team members are sources of information, resources (time, money, etc.), or coaching.

The second rule has to do with the specific talents of the team members. A Lean Six Sigma process improvement team should include a process owner, a process expert, a budget and accounting member, someone from engineering (if applicable), and maybe even a stakeholder (customer). It can also be helpful to place persons on the team who might work against you if left out. Being a team member will give them buy-in.

Third, a team should have a common purpose. This common purpose comes from building a common identity. The team should know what the business expects from them, as well as the known roadblocks and limitations. This third criterion can become sticky. What should a team leader do if he has a team member who is trying to sabotage the team’s work? This is not an uncommon situation.

Team Dynamics

Positive team member behavior involves respect. This respect is built upon a willingness to show consideration and appreciation for others on the team. In fact, showing respect for others is a cornerstone of a stable society. With it, there is progress and synergy (alignment). Without it, there is stagnation and disintegration. The team environment is a micro-society. A team that is respectful of others, and the team as a whole, will have the best chance of success.

The appreciation of diversity of opinion is the starting point of positive team dynamics. The point of putting a team together is to have a diversity of opinion. All opinions and ideas have value and contribute to developing a best solution or result. To be successful, the team members should recognize and celebrate diversity of opinion. This means looking for the useful and positive in everyone’s comments and questions.

Agreeing to disagree is the adult method of dealing with conflict. This is how a team gains consensus. It is also a means to allow diversity of opinion to exist and drive the team forward. By agreeing to disagree, team members do not have to let go of their opinions to move forward.

Another important aspect of respect within a team is attendance. Team members have to be present in both body and mind in order to contribute toward the team’s success. If a team member is absent, that person does not contribute and slows the team’s progress. An unengaged team member presents a similar problem.

Attendance ties in with completion of action items. Since teams use tasks and timelines to move a project forward, action items become the vehicle for team progress. The team assigns action items to a responsible person and a due date is set. This makes the team’s progress predictable and the distribution of resources easier to control. When team members do not take action items seriously, the team cannot function.

When a team is functioning correctly, everyone is contributing. Contributing means participation, voicing your opinion, and adding your brainpower to the team’s efforts. One moment you are giving information, the next you are listening, and the next you are negotiating. The resulting high energy level speeds the team’s progress. It is also more fun.

Team Leadership

The team leader plays an important role in making sure that all team members contribute. This may mean asking someone’s opinion, or slowing down a team member who is too dominating. In either case, every team member’s dignity is important.

The responsible person on a team is called a leader for a reason: the leader is expected to lead and manage, not supervise, the team effort. This implies that exceptional leadership skills are necessary for those responsible for managing an improvement team. A leader is more effective than a supervisor in this case because a leader gets power from people being willing to follow (a team environment), while a supervisor gets power from higher levels of management (a command-and-control environment). In fact, a supervisory approach to team management will prevent success. Process improvement is a “What do you think?” activity, not a “Do as you are told” activity.

Team Member Behavior

Some types of team member behavior will hinder a team’s progress. An example might be the team member who is there because it is part of their performance evaluation. This person is not there for the team; they are there to serve their own needs. This team member will usually find pleasure in hindering team progress with arguments that have no substance or by not helping with action items.

The “card player” is another example of an ineffective team member. This person is quietly paying attention to the ebb and flow of power. They align themselves with the winning side on issues and rarely express their real opinions. It is all about keeping themselves on the correct side (the prevailing point of view) of issues. This kind of participation is more political than constructive. As a result, their contribution can bias the results of team activities such as scoring matrices, brainstorming, and multi-voting.

An especially dangerous attitude is associated with the team member who is there to represent their boss. This person is following orders. They will express the views of their boss rather than their own opinion. When this is a means of controlling the team, there will be problems. This boss’ opinion can be valuable as long as it is not subversive. The danger for the team is when the truth is suppressed or conclusions are biased because someone in a position of power is protecting the status quo or attacking something that the team is working on.

Conflict

Dealing with conflict is an important team function. Not only is conflict unavoidable in a team environment, it is desirable. The team should cherish conflict that results from diversity of opinion. The team leader will need to intervene when the conflict becomes personal or destructive. The bulk of the responsibility for this lies with the team leader, but some responsibility lies with the other team members as well. A set of ground rules will help the team prevent conflict from dragging it away from its mission.

The first rule is that conflict will be taken off line when it becomes a problem. During a meeting, this may mean taking a break, changing the subject, or both. This allows the conflict to be isolated from the rest of the team. Before reconvening, the warring parties must agree to disagree or to develop a plan to address their differences at another time.

Another rule is to obtain an agreement from all team members to take a team perspective during conflict situations. The point is that the team’s focus is not on individuals. At an adult level of understanding, it should be clear that, in most cases, what is good for the team is also good for the individual. It is not, “What’s in this for me?” Instead, it should be, “What’s in this for us?”

A team also needs ground rules for conflict outside of meetings. Conflict outside of meetings can derail the efforts of the team as effectively as conflict during meetings. Undermining teammates, or the team’s work, with persons who are not team members will suboptimize the team’s efforts in favor of individual goals. Team issues are the team’s business, unless the team leader feels that the situation is becoming unmanageable. Then the team leader can go outside of the team for help.

Dealing with conflict is all about respect. A team whose members do not show mutual respect to their teammates will not be effective. There may come a time when the team leader has to remove a member from the team. This is a severe action and a last resort: making an enemy of a former team member will create internal and external repercussions for the team.

The point of all of the above is that the team must keep its eye on the ball. The team that keeps its focus has the best chance of success. Keeping focus means resisting the pressure to spend time on individual or political concerns.

Six Sigma and Process Analysis

There are different ways to see a process: as we think it is, as we think it should be, and as it really is. When we see a process as we think it is, we are disconnected from reality. Viewed this way, we cannot see the sources of the process’s defects and waste. This is the most common way that people see processes.

When we view a process as we think it should be, we are still not seeing the process for what it really is. As above, we cannot see the sources of the process’s defects and waste. Viewing a process from the perspective of how we think it operates, or how we think it should operate, means literally working in the world of what we “think” instead of what really is.

To make any progress in improving a process, we must work with reality. When we view a process as it really is, we can see its defects and waste. Bottlenecks and hidden factories become visible, and critical-to-quality issues can be linked to particular steps in the process. In the Define phase of an improvement project, knowledge of the “As Is” process is the foundation of all that follows.

The “As Is” perspective is a perspective of truth. We can make progress toward positive change when we are honest about how our processes really perform. Without the truth, not only will we not improve, it is likely that things will actually get worse.

Six Sigma Success and Honesty

Not all business problems lend themselves to the Six Sigma process improvement methodologies, especially those with short timelines. There are many problems that business leadership understands and should just fix. A Six Sigma improvement project typically requires one to six months for a team to complete, depending upon the complexity and scope of the problem. This is longer than acceptable for some problems. In addition, many of the tools used in Six Sigma do not apply well to problems that are not process based, such as emergencies and relationship issues. Process improvement tools apply better to up-front planning for these situations than to the situations themselves.

Two other important considerations are the impact of variation and the truth. Not all variation is bad; without variation, there would be no improvement. Six Sigma projects use variation to find both problems and solutions, because the awareness of a better way to do something manifests itself as variation. Consider, for example, two processes producing an identical output. The operator of one process makes a change, introducing variation between the two processes. The changed process produces fewer defects than the original. Thus, by introducing variation, the operator discovers a better way to produce the output. Conversely, by eliminating all variation, we eliminate all experimentation and, as a result, all process improvement. The key is to plan and control variation: by planning and experimenting, a process owner can discover new and better ways to produce the product or service.
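The operator's discovery above can be checked statistically rather than by eye. The sketch below, with hypothetical defect counts and sample sizes, uses a two-proportion z-test to ask whether the changed process really has a lower defect rate, or whether the difference could be chance:

```python
import math

def two_proportion_z(defects_a, n_a, defects_b, n_b):
    """Two-proportion z-test: is process B's defect rate lower than A's?

    Returns the z statistic and a one-sided p-value.
    """
    p_a = defects_a / n_a
    p_b = defects_b / n_b
    # Pooled proportion under the null hypothesis (no real difference)
    p_pool = (defects_a + defects_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # One-sided p-value from the standard normal CDF, via math.erf
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Hypothetical counts: original process (A) vs. the operator's change (B)
z, p = two_proportion_z(defects_a=40, n_a=1000, defects_b=22, n_b=1000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

With these illustrative numbers the p-value falls below a typical 0.05 acceptance level, so the improvement is statistically significant. Whether it is practically significant, as the earlier discussion stresses, still depends on the customer and the cost of the change.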

The truth is the basis of any effort to improve processes and eliminate defects. Sacred cows, sub-optimization, and parochialism are enemies of the truth and place limits upon how much improvement is achievable. To optimize improvement, we must embrace the truth, even if it hurts. The truth will literally set us free.

Statistics and Lean Six Sigma Process Improvement

Process improvement strategies use two applications of statistics: descriptive and inferential. Descriptive statistics describe the basic characteristics of a data set, using the mean, median, mode, and standard deviation to create a picture of the data’s behavior.

Inferential statistics use descriptive statistics to infer qualities of a population based on a sample from that population. This involves making predictions. Examples include voter exit polling, sporting odds, and predicting customer behavior.
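The two applications can be shown side by side. In this minimal sketch, the cycle-time sample and the 95% t-multiplier are illustrative assumptions: the descriptive statistics summarize the sample itself, while the confidence interval is an inference about the larger population the sample came from.

```python
import statistics

# Hypothetical cycle-time sample (minutes) drawn from a larger population
sample = [12.1, 11.8, 12.4, 12.1, 13.0, 11.6, 12.2, 12.1, 12.7, 11.9]

# Descriptive statistics: a picture of the sample's behavior
mean = statistics.mean(sample)
median = statistics.median(sample)
mode = statistics.mode(sample)
stdev = statistics.stdev(sample)  # sample standard deviation

# Inferential statistics: estimate the population mean from the sample.
# The multiplier 2.26 is the two-sided 95% t value for n = 10 (9 degrees
# of freedom), assumed here for illustration.
margin = 2.26 * stdev / len(sample) ** 0.5
interval = (mean - margin, mean + margin)

print(f"mean={mean:.2f} median={median} mode={mode} stdev={stdev:.3f}")
print(f"95% CI for population mean: ({interval[0]:.2f}, {interval[1]:.2f})")
```

The descriptive numbers say nothing beyond the ten observations; the interval is the inferential step, a hedged statement about all cycle times the process produces.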

Statistics are an important part of process improvement. Even so, statistical calculations do not solve problems by themselves. Business acumen and non-statistical tools are partners with statistical calculations in establishing root causes and developing solutions. As much as some sources emphasize statistical tools, improvement projects rarely fail because of math problems. Instead, they fail due to a lack of honesty, a lack of management support, or a lack of business acumen. The best screwdriver in the world will still make a poor pry bar.