Lean and Mean Process Improvement CD and Audio Book

Lean and Mean Process Improvement is now available on CD as a PDF along with an assortment of Six Sigma Tools. Email me at walt.m@att.net for details on how to purchase this CD.

Work started last week on converting Lean and Mean Process Improvement to an audio book. This work is in progress. I will post notification about availability as soon as it is ready. If you email me at walt.m@att.net, I will notify you when it is ready for distribution.

Calculating Process Yield

by Walter McIntyre

I recently visited several contract manufacturers (CMs) to discuss a project I am working on. The purpose of the visits was to evaluate their ability to produce an electronic device we are developing for the automotive industry. One of the production control metrics I asked of each project manager was an estimate of the typical rolled throughput yield on their production lines.

Only one of the project managers knew the rolled throughput yield (RTY) on their lines. All the others gave me a first time yield (FTY) instead. When I pressed each of them about how they manage quality on their production lines, they gave me their version of how failed units are repaired or disposed of before shipping so that our customers are protected. This approach makes the yield look better than it really is and increases the CM's cost of production. Make no mistake, increased cost for the CM means increased cost to you, the customer.

The one CM who knew his production line's rolled throughput yield also gave me dollar amounts of value lost through wasted components and rework. This CM also addresses the yield issues at each step in the production process with improvement teams.

A significant difference in the quotes received from the CMs we visited was their circuit board testing schedule. Rather than test every circuit board in the production stream, as the first time yield CMs did, the CM using rolled throughput yield was able to reduce testing to 10 percent of every production run. This is a direct result of having good control of the production process. The result was that the rolled throughput yield CM gave us the lowest quoted cost of production.

This experience led me to write this piece on the various ways to calculate the yield from a process. If you are a CM, I encourage you to use rolled throughput yield and make yourself a hero of cost reduction in your business. If you are evaluating CMs for a project, make sure you look hard at the way they calculate yield on their production lines and how they use the results.

First Time Yield (FTY):  The probability of a defect-free output from a process is called the First Time Yield. This metric considers only the criteria at the end of the process. First Time Yield is unit sensitive and is calculated by dividing the outputs of a process by its inputs.

The First Time Yield will not detect the effect of hidden factories.  Consequently, it will typically indicate that a process is performing better than it really is.  Even so, this is the most common way to calculate process yield in business today.  This is due, in part, to the way businesses report their performance to financial analysts. It is useful to the business in this way, but First Time Yield will not help the business find and correct problems in their processes.

Rolled Throughput Yield (RTY):  Rolled Throughput Yield is the probability of passing all "in-process" criteria for each step in a process, as well as all end-of-process criteria.  Rolled Throughput Yield is defect sensitive.  Mathematically, Rolled Throughput Yield is the result of multiplying the First Time Yields of each process step together.

When a process step produces defects, the yield for that step will be less than 100%.  Even if the defective outputs are corrected (a separate process step), the yield for this step is unchanged.  The drawing below shows the relationship between First Time Yield and Rolled Throughput Yield.

[Diagram: First Time Yield vs. Rolled Throughput Yield for a two-step process with rework]

In the example above, the First Time Yield indicates a good process with no defects getting to the customers.  There are 100 inputs and 100 outputs. The First Time Yield does not capture the effect of the 5% defect rate at each of the process steps.  Ten percent of the outputs are being reworked to keep customers from receiving defects.  The process has to do enough work to make 110 outputs to produce the resulting 100 defect-free outputs. The two hidden factories exist because of defect generation and the process owner's desire for the customer to receive defect-free outputs.  The rework (repair or replacement of the 10 defective outputs) will show up as a component of the process's Cost of Poor Quality.

The rolled throughput yield in the diagram indicates a marginal process because it captures the work done by the two hidden factories.  Instead of a process in 100% compliance, as described by the first time yield, rolled throughput yield describes a process that wastes 10% of its resources.

These calculations demonstrate the difference between an “As we think it is” process and an “As is” process.  As a result, they point the way to where improvement efforts are needed.
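
The two calculations can be sketched in a few lines of Python, using the figures from the example above (two process steps, each running at a 95% yield):

```python
def first_time_yield(outputs, inputs):
    """FTY: defect-free outputs divided by inputs (end-of-process view only)."""
    return outputs / inputs

def rolled_throughput_yield(step_yields):
    """RTY: the product of the first time yields of each process step."""
    rty = 1.0
    for y in step_yields:
        rty *= y
    return rty

# Two-step process from the diagram: each step has a 5% defect rate,
# but rework hides those defects from the end-of-process count.
fty = first_time_yield(100, 100)             # 100 inputs, 100 shipped outputs
rty = rolled_throughput_yield([0.95, 0.95])  # captures both hidden factories
print(f"FTY = {fty:.2%}, RTY = {rty:.2%}")   # FTY = 100.00%, RTY = 90.25%
```

The FTY of 100% is the "as we think it is" number; the RTY of roughly 90% is the "as is" number that exposes the hidden factories.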

Reactive and Proactive Data

Collecting data, voice of the customer or otherwise, requires a sample collection plan. It is important to know what you want to know, how to get the information, where to get the information, who to get the information from, and other details. You begin this process by knowing what you are trying to learn from the data.

Reactive Data

Businesses receive reactive data after the customer has experienced the product or service. Many times businesses get reactive data whether they want it or not, through complaints, returns, and credits. This data is normally easy to obtain and can help to define what the defects are and how frequently they are occurring.

Sources of reactive data are customer complaints, technical support calls, product returns, repair service hits, customer service calls, sales figures, warranty claims, web site hits, surveys, and the like. Most businesses make it a point to track this information and make it available to process improvement teams.

Reactive data can be used to find out what aspects of the product or service the customers are having issues with, what needs are not being met, and what the customers may be expecting from the business in the future (new services, products, and features). The danger with reactive data is that some customers will tell the business about the defect by not buying from that business again. This insidious problem can sneak up on an unsuspecting organization. A business should never assume that they have all pertinent reactive data.

Proactive Data

Data that is collected before the customer experiences their first, or next, encounter with the business’ product or service is proactive data. An example of this type of data would be the information collected in a market research effort regarding potential new products or services.

Sources of proactive data are interviews with potential customers, focus groups, surveys, market research, and benchmarking. This type of data can be difficult to obtain. Customer surveys and focus groups can miss customer segments or ask the wrong questions. Market research may be expensive, hard to obtain, or be unreliable for the business’ customer base. Proactive data collection requires careful planning.

A business can use reactive data to point the way to where proactive data collection will do the most good. This helps to focus data collection activities on important customer issues. Without this focus, the business will be shooting in the dark. Consider, for example, asking customers what color of widget they prefer when the sharpness of the widget is their real concern. Not only will dull widgets turn away customers (regardless of color), asking the wrong questions will indicate that the business is out of touch with its customers. The customer may feel that a business is not focusing on their needs (and they would be right in this case) and buy from a competitor instead.

Proactive data helps focus the business on the important issues of the future. The future could be anything from the next customer visit, to consideration of where the business is investing their research and development dollars. Where reactive data helps a business to define defects in the customer’s language, proactive data helps to prevent defects before they affect the customer. Both data types are important and depend on each other for the synergy to improve the customer’s satisfaction level.

Practical Application of Hypothesis Testing

By following a consistent format, the Six Sigma team and its customers can better understand and explain hypothesis test results and conclusions. Reviewers know exactly where to look for information, which increases their confidence in the results. The following is an example format.

Practical Problem

This is a statement that describes the practical question to be answered by the test. It is written in process owner or customer language and states what is being asked.  It is phrased as a question.

Statistical Problem

This is a statement that describes the specific hypothesis test that will be used along with a definition of the “null” and “alternate” hypotheses for the test. The statement is written in the specific statistical terms required by the hypothesis test being used.

Statistical Solution

This is a statement that describes the solution to the statistical problem. It too is written in the specific statistical terms required by the hypothesis test used.

Practical Definition of the Statistical Solution

This is a statement that describes the statistical solution in practical terms.  It is written as a statement and answers the practical problem question in step one. Process owner or customer language is used. No elaboration is allowed.  Just the specific answer to the specific question posed in step one.

Example:

Practical Problem:

The vendor promised service in an average of 5 minutes. Is this a true statement?

Statistical Problem:

Single-population t-test with H0: μ = 5 (the service time averages 5 minutes).

Ha: μ ≠ 5 (the service time does not average 5 minutes). Confidence level: 95%.

Statistical Solution:

P = 0.0000, H0 is rejected because P < 0.05.

Practical Definition of Statistical Solution:

The service time does not average five minutes.

Hypothesis testing does not establish the why or how. Other process knowledge will help answer these questions. Note that the way the test is set up, it does not indicate whether the actual average service time is greater than or less than 5 minutes.  The test can be restructured to look at one side of the data’s distribution, or other process information can be used to determine the direction from 5 minutes the distribution’s actual mean really is.
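
The test above can be sketched with only the Python standard library. The service-time data here is hypothetical, invented for illustration; the critical value 2.365 is the two-tailed t value for 7 degrees of freedom at a 95% confidence level (a statistics package would compute the exact p-value instead):

```python
import math
import statistics

# Hypothetical sample of observed service times, in minutes.
times = [6.2, 5.8, 6.5, 6.1, 5.9, 6.3, 6.0, 6.4]
mu0 = 5.0  # the vendor's claimed average

n = len(times)
xbar = statistics.mean(times)
s = statistics.stdev(times)  # sample standard deviation
t_stat = (xbar - mu0) / (s / math.sqrt(n))

# Two-tailed critical value for alpha = 0.05 with n - 1 = 7
# degrees of freedom, taken from a t table.
t_crit = 2.365
reject_h0 = abs(t_stat) > t_crit
print(f"t = {t_stat:.2f}, reject H0: {reject_h0}")
```

With this sample the t statistic is far beyond the critical value, so H0 is rejected: the practical answer is that the service time does not average five minutes. As the text notes, the two-tailed test by itself does not say in which direction the mean differs.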

Intuition and Data Analysis

The analysis of data is now, and always has been, problematic. We are not machines. Our thinking is affected by intuition and experience, which are not empirical in nature. In business, Six Sigma or not, the ability to see information both from an empirical perspective and from the (non-empirical) perspective of human stakeholders is critical to quality decision making.

Let me give you an example of a non-empirical, intuitive/experiential perspective. If you have ever returned to a playground you knew when you were young, you may have remembered the sliding board being really high, yet now it seems small.  It did not shrink; your perspective changed.  This is a fundamental rule of human thought. We learn through experiment (experience). These learnings change the way we view the world. In other words, the context of our information changes.

This is both good and bad.  We learn not to put our hand on a hot stove, to look both ways before crossing the street and to not insert keys into electrical wall sockets, because of severe negative consequences. This is the result of the power of observation.

This works well on simple systems where results are not ambiguous, and are easy to understand and predict. On systems where there is complexity and results are not easy to predict, you must “peel the onion” with your observations. When evaluating a complex system, intuition must be used very carefully.

Dr. Daryl Bem does a magic trick with his students on their last day of class with him.  The magic trick is really a lesson. He attempts to demonstrate his ability to read a student's mind by giving information about their personal history that he could not otherwise know. He is always successful, and his students grapple with what to think about this intriguing skill. Their intuition, based on trust in their professor, compels them to want to believe. Once he has given them enough time, he tells them how he accomplished the feat. Basically, he knew ahead of time who he was going to "read" and collaborated with the student's family, without the student knowing. It is a trick, not magic. He then delivers the punch line: never substitute your intuition for real data.

Another story.  Several years ago an employee told me that he had started a rumor about the possibility of a major management shakeup. Two weeks later he came to my office, excited, saying that he had it on good authority that there was going to be a major management shakeup. He even had details (facts?), which his intuition bought into. I had to remind him that he had, in fact, started that rumor two weeks earlier and had fallen victim to it. Intuition, in the absence of fact, will nearly always lead to incorrect conclusions.

This is not to say that intuition is not important. Intuition is a critical evaluation tool, and just like any other tool, must be used properly. Intuition can indicate that either your perspective or the data is skewed in some way.  Maybe both are skewed.  Intuition will point to what needs a reality check or more information.

This is just another case where balance and perspective play important roles in our lives. In reality, what I am talking about is finding the “why” behind a set of data or “facts”. Successful Six Sigma Projects and quality business decisions depend on it.

Two Dimensional Thinking

A two-dimensional thinker sees the world as a polarized place. Who you are and what you believe becomes categorical: it is either one way or the other. These individuals can see facts, but truth eludes them because the facts are generally considered without context.

The problem with two-dimensional thinkers is that they skew, or misinterpret, facts in order to force them into a two-dimensional framework. As a result, they frequently have "the facts" but do not know, or are misrepresenting, the truth. This is how marketers sell their ideas, products, or services. They build context around a set of facts so that the listener's interpretation is guided to the desired conclusion. As you watch and listen to the world around you, see if you can catch this taking place.  How much of what you hear is fact and how much is context? Does the context pass the reality test?

Context defines truth by giving facts relevance. Conflict between people is generally the result of two-dimensional thinking. This is demonstrated by the win/lose attitude of the conflicting parties. Both sides bend contextual information to fit their argument. Resolution can usually be gained by getting to a win/win attitude, which is based upon the understanding that there is an alternate solution to the conflict that the win/lose mentality cannot see. The alternate solution is typically based upon a more honest contextual framework.

All of this makes two-dimensional thinkers less effective in problem resolution, listening, and leadership. These areas of human thought require the ability to see things from differing perspectives. The "why" of a situation is just as important as the "what," and the "why" is generally contextual in nature, not categorical.

Moving beyond two-dimensional thinking involves accepting that most words and events in our lives have meanings that are subject to interpretation. We call this perspective.  You have heard the saying, "One man's trash is another man's treasure." The world looks different from different perspectives.

Seeing the world from different perspectives involves tying facts to context that may be separate from your own reality.  One of the best ways to accomplish this is by listening. Stephen Covey stated it nicely: we must "seek first to understand, then to be understood." Understanding is a continuous process, not a categorical one. Try, sometime, to truly listen to someone. Your ears, eyes, and mind are open, but your mouth is shut. Allow yourself to evaluate alternate perspectives for the purpose of understanding. This is not about losing your own perspective or replacing it, although that may happen.  It is simply a matter of seeking to see a situation through someone else's eyes.

Six Sigma, based solely upon statistics, is two-dimensional in nature. It tells us the "what" but not the "why." When contextual information is paired with statistical results, the "why" becomes part of the dialog. By understanding contextual information, we are able to tie causality to defects and improve processes. In this way the human element is part of the picture.

Designing of Products and Services

Last week I posted a piece on using a form, fit and function analysis in reverse engineering. This type of analysis can also be used in product or service design. The starting point is different, but the analysis works the same way. In reverse engineering, the form, fit and function analysis starts with a product or service and works backward to determine how something works. In the design of products and services the process starts with a customer need and works toward a solution.

The questions that need to be answered in design work are similar to the reverse engineering questions. The need to repeat the steps of the analysis is also similar.  The main difference is that in reverse engineering, the product or service is the focus, but in design, the customer is the focus.

There are areas of overlap in a form, fit and function analysis. This is the natural result of moving through the form, fit and function steps in the analysis process. Additionally, the steps are cyclic in that the analysis is repeated with increasing levels of detail. This “drilling down” to more granular knowledge of how something works, or should work, allows for a more robust design of a new, or refined, product or service.

As in the previous post, the questions in each category are framed around the interrogative "what." To repeat the analysis cycle and gain better detail, the "why" must also be discovered.  A mind map is a useful tool for documenting progress.

Form:

  • What customer need is the product or service addressing?
  • What does a solution look like to the customer?
  • What is the assumed skill level of the user of the product or service?
  • What tools and knowledge are typically, easily, at hand for the customer to use with the product or service?
  • What is the history of the customer need?
  • What other solutions are already available to meet the customer’s need?

Fit:

  • In what specific situation(s) is the product or service intended to be used?
  • What are the specific features of the product or service that the customer will consider critical to quality?
  • Who will use this product or service? (Who is the customer?)

Function:

  • Looking at the product or service’s internal processes, what will it do?
  • Looking at the product or service's internal processes, how does it do it?

The above questions are a starting point and will get more specific as more knowledge is gained. It is simply a matter of repeating the analysis cycle until it makes sense to move forward on a prescribed course of action.

There is a lot more detail to the form, fit and function method of designing products and services than this post can cover. To learn more, check out my Lean Six Sigma book titled, “Lean and Mean Process Improvement”.

Reverse Engineering

This post deals specifically with the form, fit and function method of reverse engineering. This is a general methodology and a good starting point. A more specific methodology may be needed for specific types of projects. Reverse engineering is an important process in Lean Six Sigma. We may not call it reverse engineering, but that is what it is. Please bear in mind that this post is a general, not a detailed, description of this methodology.

There are areas of overlap in a form, fit and function analysis. This is the natural result of moving through the form, fit and function steps in the analysis process. Additionally, the steps are cyclic in that the analysis is repeated with increasing levels of detail. This “drilling down” to more granular knowledge of how something works, or should work, allows for a more robust design of a new, or refined, product or service.

At the core of this analysis process is the strategy of documenting what you know separately from what you assume. The purpose of the next cycle of analysis is to move assumptions from the assumed category to the fact category (or eliminate them). At the end of each cycle, there will be an increase in what is known and a new set of assumptions for the next cycle. Assumptions stay assumptions until they are resolved to fact or eliminated.

The form, fit and function analysis is similar to a forensic analysis of failures. The steps may have different names, but the drilling down process is the same. The key is to avoid errant leaps of logic that lead to incorrect conclusions. The analysis is repeated at increasing levels of detail, as the detail is discovered. The analysis moves us from assumption to fact.

You will notice that the questions in each category below are framed around the interrogative, “What”. To repeat the analysis cycle to gain better detail, the “why” must also be discovered.  A “mind map” is a good tool to use in documenting the progress made in the various analysis cycles.

Form:

  • What is the purpose of the product or service?
  • What assumptions are built into the design of the product or service?
  • What is the assumed skill level of the user of the product or service?
  • What other tools or knowledge are needed to use the product or service?
  • What is the development history of the product or service? (What product or service does it replace and why?)

Fit:

  • In what specific situation(s) is the product or service intended to be used?
  • What are the specific capabilities of the product or service?
  • What are the specific capabilities lacking in the product or service?

Function:

  • Looking at the product or service’s internal processes, what does it do?
  • Looking at the product or service's internal processes, how does it do it?

The above questions are a starting point and will get more specific as more knowledge is gained. It is simply a matter of repeating the analysis cycle until it makes sense to move forward on a prescribed course of action.

There is a lot more detail to the form, fit and function method of reverse engineering than this post can cover. To learn more, check out my Lean Six Sigma book titled, “Lean and Mean Process Improvement”.

Lean Six Sigma and Chaos

One of the fundamental flaws with process improvement programs is the assumption that all aspects of a business environment are determinate and predictable to a high degree of precision. Certainly some business systems and functions fall into this highly predictable category and fit well into the various quality programs we have seen.

What happens, though, when you try to apply Six Sigma tools to a process or function that is indeterminate? The answer is that incorrect conclusions can be drawn. To be clear, predictions with higher precision than the evaluated process or function is capable of need to be viewed with suspicion. Examples of indeterminate systems are the weather and the search engine impressions that a keyword receives over a given period.

The internet, like the weather, is an indeterminate system. With indeterminate systems, macro (low precision) predictions can be made reliably (hot in summer, cold in winter) because at the macro level indeterminate systems demonstrate repeatable cyclic behavior. At the micro level, though, this repeatable cyclic behavior becomes less consistent and less reliable. For more on this, read the work of Edward Lorenz regarding chaos and weather prediction.

Getting back to the internet: economic systems are indeterminate. This does not mean that Six Sigma tools cannot be applied to indeterminate systems like search engine keyword impressions. It is instead a matter of using the right tool for the job. In indeterminate systems, since you cannot control or adequately predict all of the variables in the system being worked on, a Six Sigma project team will focus on less precise (macro) factors. This means statistical inferences with much larger standard deviations, and some behavior may defy statistical evaluation altogether.

With indeterminate systems, the Six Sigma team will be trying to reduce the uncertainties surrounding the system and determine the boundaries associated with those uncertainties. We have to realize that we cannot increase the precision of an indeterminate system beyond the system's natural state. We can, though, control the precision of how we react to the system's behavior.

With internet impressions, you may not be able to predict search engine behavior very far into the future, but you can calibrate how you will act to take advantage of what you see. For example, you can build a website that is robust enough to deal with the uncertainty of web searches. You can also take more frequent measurements of keyword impressions and use pay-per-click tools to react to the impression "terrain."

Basically, what I am saying is that with determinate systems, Six Sigma teams can work directly on the process to reduce variation and improve performance. With indeterminate systems, the team must work with the uncertainty that exists outside the process to improve performance.
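
The macro-versus-micro point can be sketched with made-up daily keyword-impression counts (not real data): averaging over a longer window damps the day-to-day noise, which is why macro predictions are more reliable than micro ones.

```python
import statistics

# Hypothetical daily keyword impressions: a steady level near 1000
# with large day-to-day (micro) swings.
daily = [1040, 870, 1130, 960, 1210, 820, 1080, 950, 1190, 880,
         1060, 940, 1150, 860, 1100, 970, 1020, 900, 1170, 910]

# Macro view: non-overlapping 5-day averages.
weekly = [statistics.mean(daily[i:i + 5]) for i in range(0, len(daily), 5)]

# The micro series is far more variable than the macro series.
print(f"daily stdev: {statistics.stdev(daily):.1f}")
print(f"5-day stdev: {statistics.stdev(weekly):.1f}")
```

The underlying level never changes here, yet the daily numbers swing widely; reacting to the smoothed macro signal rather than each daily reading is one way to control the precision of your response to an indeterminate system.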

If You Aren’t Measuring It, You Aren’t Managing It

A favorite axiom in management is, “If you aren’t measuring it, you aren’t managing it”. Just as driving a car with your eyes closed will result in disaster, running a business without some sort of performance feedback will result in business disaster.

The collection and use of data is important because things are rarely what they seem to be. Data helps us separate what is really happening from what we think is happening (or what we want to be happening). When we make decisions based on how things feel or how they have always been, we are operating in the “as we think it is” world. This is a prescription for disaster. The successful business operates in the real world. We call this the “as-is” world.

The measure phase of a Six Sigma process improvement project focuses on characterizing the current performance of a business process, which is the current reality. In this phase, the Six Sigma project team is trying to accomplish two things. The first is to establish an "as-is" performance measurement for the process. The second is to use the data to begin looking for potential causes of defects.

Some of the important activities of the Measure phase are:

  • Developing a data collection plan and following it
  • Performing a measurement system analysis
  • Calculating performance indicators for the process from the data collected
  • Control charting
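
One of the listed activities, calculating performance indicators from the collected data, can be sketched as follows. The defect and opportunity figures here are invented for illustration; DPMO (defects per million opportunities) is a common Six Sigma indicator:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def first_time_yield(defective_units, units):
    """Fraction of units that pass without rework."""
    return (units - defective_units) / units

# Hypothetical measurement-phase data: 500 units inspected, 4 CTQ
# opportunities per unit, 12 defects found on 10 defective units.
print(f"DPMO = {dpmo(12, 500, 4):.0f}")           # 6000
print(f"FTY  = {first_time_yield(10, 500):.1%}")  # 98.0%
```

Indicators like these, tracked on a control chart, become the baseline against which the improvement phases of the project are judged.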

The objective is to measure the process’ impact on the customer’s CTQ (Critical to Quality) issues. The result is the characterization of the process’ performance from the customer’s perspective. This becomes the process’ story in the “as-is” world.