Lean Six Sigma

We don’t have control, we have choices. The best we can do is improve our method of making choices and hope for good results.

This is the rub. We generally set ourselves up for disappointment and failure because we make emotional choices and don’t get the results we want. We assume that our environment is predictable and that the universe behaves according to a fixed process that coincides nicely with our expectations. The plain truth is that you don’t know what you don’t know. And sometimes, you are not even aware that you don’t know.
This may seem like a bunch of idle chatter, but the concept impacts businesses every day. This is why Six Sigma and Lean are so important in our global economy. Reducing variability and making processes more predictable improves the quality of our decision making. Less emotion and more critical analysis.
I did an experiment while participating in a March Madness Basketball Pool this past year. I entered three different brackets. One bracket was based upon selecting my favorite teams, one was based upon my “gut feelings” about who would win, and the last I did using statistical data from experts in the college basketball world.
These are the typical decision making strategies seen every day in business: the emotional decision, the gut feeling, and the critically analyzed decision. In the case of my brackets, the one based on my emotions (favorite teams) fared the worst. The bracket based upon my gut feelings did marginally better. The bracket based upon statistical research did really well.
This is the point behind Six Sigma and Lean: moving toward data-based decisions. It doesn’t mean that using gut feelings is always bad; there is a time and place for everything. When properly implemented, Six Sigma and Lean will reduce variation in your processes and make them more predictable. This, in turn, increases the quality of our choices.

Lean Readiness Assessment

One of the problems with Lean applications, Six Sigma, Kaizen, 5-S, etc., is that they get applied without an adequate understanding of the target business. The result is a failure of the tool to “take”, and any improvements gained are short-lived. Within a few days, things start sliding back to what “normal” used to be.

The missing step is a readiness assessment. A thorough understanding of the business and its culture must be coupled with a thorough understanding of the Lean tool being used in order to provide the best chance of success. This readiness assessment takes time to develop and requires good listening skills and business acumen.

I worked with a business recently that wanted me to lead a Kaizen event at one of their facilities. As part of the agreement, I asked for time to do a readiness assessment before plans were finalized for the event. What I found were two fatal problem areas. First, the management culture was top down, command and control. The employees felt very little empowerment and the senior management team agreed with that assessment. Second, the employees at the target facility did not know what Kaizen was and were only vaguely aware that “some kind of event” was going to take place.

I advised that the senior leadership involved at the targeted facility be trained in shifting from a supervisory management approach to a leadership model of management. I also advised that all employees at the facility be trained in what Kaizen is and what it would mean to them.

These adjustments took three months to complete, at which time I did another readiness assessment. This time, all that needed to change was the Kaizen strategy: it needed to be tuned to the targeted business and culture. So we were ready to go, right?

Right! But with one significant development. Of the 10 items that senior management had on their list of needed improvements, only 2 remained. The other 8 corrected themselves naturally through cultural change and the training that had taken place. Little did this business know that they had just gone through a three-month Kaizen event that changed both processes and culture.

Here is the clincher. This whole three-month process only required about 8 hours of my time as a consultant. The business itself did the heavy lifting. It doesn’t always work out this way, but given the opportunity, most businesses will at least try to make necessary adjustments. If they don’t, nothing you can do as a consultant will make a difference anyway.

Waste Reduction and 5-S

Waste can take many forms. There is waste of time, material, human resources, etc., all of which result in a waste of money for the business and its customers. Time and material are easy to understand, even if not always easy to see. The waste of human resources is more insidious.

Everything is interconnected, and waste is usually found to be both the result of other waste and the cause of other waste. The ability to see both the big picture and the little picture at the same time is important. Fixing waste in one area in a way that creates waste somewhere else is called sub-optimization and is counterproductive. Solid leadership and a shared vision will save the day in any waste reduction initiative.

There is a relationship between the eight wastes we have all heard about and the 5-S tool we have also heard about. In this brief post, I will try to explain how 5-S can address all eight wastes. Let’s start with a description of the eight wastes.

Eight Wastes:

  1. Transport: Unnecessary movement of material for production.
  2. Inventory: Raw material, work in progress, and finished product sitting idle.
  3. Motion: Unnecessary motion of people or equipment.
  4. Waiting: Raw material, work in progress, or finished product waiting for the next process step.
  5. Over Production: Production ahead of demand.
  6. Over Processing: Poor process, tool, or product design that creates activity that is not productive.
  7. Defects: Inspecting for or correcting defects anywhere in the process.
  8. Under Utilization of Human Resources: Under-trained or under-utilized employees.

5-S is not just a tool to make things look better. This tool will also make things work better and produce less waste. Like all tools, it must be calibrated to the situation. If you understand the wastes being produced in your processes or business, 5-S can be made to target these wastes and wipe them out. So what are the 5-S’s?

5-S

  1. Sort: Separate necessary from unnecessary material, data, equipment, etc., and remove what is not needed. This prevents the clutter and mess that breed problems. Addresses wastes 2, 5, 7.
  2. Set In Order: A place for everything and everything in its place. Addresses wastes 1, 3, 4.
  3. Shine: A clean work space. Addresses wastes 2, 4, 6.
  4. Standardize: Rules that standardize the sort, set in order, and shine efforts across the work space. Housekeeping, inspections, and workplace arrangement are shared and used across the whole workplace. Addresses wastes 1 through 8.
  5. Sustain: Make the standards part of the culture so that the root causes of problems in the other four S’s are eliminated. Addresses wastes 1 through 8.
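
For readers who like to see the mapping laid out explicitly, here is a minimal sketch in Python (my own illustration, using nothing beyond the waste numbers listed above) that records which wastes each 5-S step addresses and lets you ask the question in reverse:

    # Map each 5-S step to the numbered wastes it addresses (from the lists above).
    FIVE_S = {
        "Sort":         {2, 5, 7},
        "Set In Order": {1, 3, 4},
        "Shine":        {2, 4, 6},
        "Standardize":  {1, 2, 3, 4, 5, 6, 7, 8},
        "Sustain":      {1, 2, 3, 4, 5, 6, 7, 8},
    }

    # Which 5-S steps attack waste #4 (Waiting)?
    print([step for step, wastes in FIVE_S.items() if 4 in wastes])
    # -> ['Set In Order', 'Shine', 'Standardize', 'Sustain']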

In order to use the 5-S tool correctly, the improvement team will calibrate it to the processes and areas where it is being applied. Applying 5-S to an office setting will have a completely different look and feel than applying the tool to a manufacturing floor. As the team looks for waste they also adjust the 5-S tool to directly address specific aspects of the work space.

As waste is reduced and the work space becomes more standardized, hidden waste-producing activities become more visible. This is why the 5-S tool is considered cyclic: the new wastes uncovered by the initial 5-S iteration are addressed in the next pass. At the same time, the team documents larger issues that will require a more focused Six Sigma team effort later.

The result is reduced cycle time, reduced inventory dollars, increased productivity, and better utilization of resources. The business will see increased profit flow directly to the bottom line as a result of satisfied customers. This is because the customer’s perceived value of the product or service increases while the inherent value borne by the producer decreases. When you plug these value changes into the profit formula below, good things happen.

Profit = Perceived value (customer) – Inherent value (cost to deliver).
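
To make the formula concrete with purely hypothetical numbers: if customers perceive $100 of value in a product that costs $70 to deliver, the profit potential is $30 per unit. Trim enough waste that the same product costs $60 to deliver, and the potential rises to $40, a one-third improvement without touching price or demand.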

See my book, “Lean and Mean Process Improvement”, for more information.

Value Stream Analysis Case Study

Value stream analysis is an examination of the sequence of activities required to design, produce, and deliver a product or service. It involves an analysis of the way the pieces of the value stream interact with each other. Some of these pieces are:

– The people who perform the tasks and their knowledge and skills

– Tools and technology used to perform and support the value stream tasks

– Physical facilities and environment in which the value stream resides

– Organization and culture of the enterprise that owns the value stream

– Values and beliefs that dictate the corporate culture and behaviors of the owners of the value stream

– Communications channels and the way in which information is disseminated

A current project that I have been working on involves the development of firmware for a communication interface device. We were moving too slowly, and costs were piling up due to delays. Since there were three companies involved in the project (owner, firmware development, server/deployment development), the assumption was that finding improvements would be difficult. That is where I came in.

There are three ways for the businesses involved to shorten the value stream and get to market quicker.

  • Hire or move additional programmers into the development team.
  • Change the management of the development stream.
  • Improve the processes in the value stream.

In this case, adding resources is the easiest to do, the most costly, and the least effective means of getting the project done faster. Changing the management structure of the project will only work if the new management changes the development process. The last option, process improvement, is the least costly, maybe the most difficult to implement, but also the most effective solution to the time line and cost problems.

As it turned out, two of the possible changes were made. There was a change in management of the project and the new leadership set out to improve the development process. The chart below shows two value streams.

The top map describes how the development project worked at the time of the changes.  The bottom map describes the new process.

In the former process, there were two review steps to quality check the work done by Tech 1. The wait time between steps was not much of a problem, but the second review step created a huge hidden factory. There were 60 iterations of this process flow required to complete the 60 different communications functions. Each of the iterations could take as much as 215 hours to complete. That worked out to 537 work days needed to get to completion. If additional resources were brought in, the time line would shorten, but the overall cost of the project would not change.

The new management team elected to make the following changes:

  • Reassign some of the tasks from the firmware developer to the device owner’s engineering team, specifically the analysis steps. This is really nothing more than researching each of the functions ahead of time instead of just prior to the development activities. It also lowered the cost of the project to the device owner through the use of their inside engineers.
  • Combine Tech 1’s coding with the review step involving Tech 3. This moved Tech 3 from a part time asset to a full time asset on the project and had them working simultaneously instead of sequentially.
  • Test the new code on a local server instead of the production server. This eliminated Tech 4 from the process until the developed code was finished and tested locally.

The result was the empowerment of Techs 1 and 3 to write and review code without the involvement of the deployment team. Coding errors and other unforeseen issues were caught earlier. This made these problems quicker, easier and cheaper to fix.

Another result was the device owner getting out in front of the development with research on each of the 60 functions weeks before the project reached those milestones. This effectively moved 8 hours of cost and time out of each of the 60 iterations.

Overall, the new process uses 105 hours for each of the 60 functions. The new time line became 262 work days. Between the use of internal engineers and the reduction in work hours, the cost savings came to $800,442.
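
The arithmetic behind those figures can be reconstructed with the short sketch below. One caution: the 24 crew-hours per work day is my own assumption, not something stated in the post; it is simply consistent with the totals above (for example, three people working 8-hour days).

    # Rough reconstruction of the schedule and savings math in this case study.
    # ASSUMPTION (mine, not from the post): about 24 crew-hours of effort are
    # burned per project work day, e.g., three people at 8 hours each.
    FUNCTIONS = 60            # communications functions to implement
    HOURS_OLD = 215           # hours per function, old process
    HOURS_NEW = 105           # hours per function, new process
    CREW_HOURS_PER_DAY = 24   # assumed effort per work day

    def work_days(hours_per_function: int) -> float:
        return FUNCTIONS * hours_per_function / CREW_HOURS_PER_DAY

    print(work_days(HOURS_OLD))   # ~537 work days, old process
    print(work_days(HOURS_NEW))   # ~262 work days, new process

    # Cost savings as reported above:
    print(1_565_892 - 765_450)    # 800442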

                   Old            New
Work Days          537            262
Cost               $1,565,892     $765,450
I will not say that everyone is happy. Delays and finger pointing have damaged the relationships between the owners and the developers. This will not be changed easily. On the other hand, getting the device into a commercially viable state will make everyone happy. The sooner the better in this case.

One major takeaway from this exercise is that even a simple application of value stream mapping can make a difference. Never assume that there is no opportunity for improvement.

Value Stream Analysis

I have been working on a value stream analysis case study. I believe that you will find it useful and thought provoking. We used the VSA to define weaknesses in a process’s work flow, made changes, and documented the improvement that resulted. Cycle time was the metric we were pursuing.

Unfortunately, due to a death in the family, I will not get it finished this week. I apologize, but family does come first. We’ll talk more next week.

Have an Opinion?

Everyone has something to say.  Diversity of opinion and perspective is a good thing. In the age of the internet, there is no excuse for not having a venue to express yourself. Leanmeanprocessimprovement.com is a place to post your ideas and perspectives on Six Sigma, Lean, Business Management, and Personal Development.

If you want to contribute comments to existing posts on this website, please register as a user.

If you want to author posts on this website, register as a user and contact me at walt.m@att.net, to get access to author status. You will get full credit for your posts, networking opportunities, and my appreciation.

Lean and Mean Process Improvement CD and Audio Book

Lean and Mean Process Improvement is now available on CD as a PDF along with an assortment of Six Sigma Tools. Email me at walt.m@att.net for details on how to purchase this CD.

Work started last week on converting Lean and Mean Process Improvement to an audio book. This work is in progress. I will post notification about availability as soon as it is ready. If you email me at walt.m@att.net, I will notify you when it is ready for distribution.

Reactive and Proactive Data

Collecting data, voice of the customer or otherwise, requires a sample collection plan. It is important to know what you want to know, how to get the information, where to get the information, who to get the information from, and other details. You begin this process by knowing what you are trying to learn from the data.

Reactive Data

Businesses receive reactive data after the customer has experienced the product or service. Many times, businesses get reactive data whether they want it or not through complaints, returns, and credits. This data is normally easy to obtain and can help to define what the defects are and how frequently they are occurring.

Sources of reactive data are customer complaints, technical support calls, product returns, repair service hits, customer service calls, sales figures, warranty claims, web site hits, surveys, and the like. Most businesses make it a point to track this information and make it available to process improvement teams.

Reactive data can be used to find out what aspects of the product or service the customers are having issues with, what needs are not being met, and what the customers may be expecting from the business in the future (new services, products, and features). The danger with reactive data is that some customers will tell the business about the defect by not buying from that business again. This insidious problem can sneak up on an unsuspecting organization. A business should never assume that they have all pertinent reactive data.

Proactive Data

Data that is collected before the customer experiences their first, or next, encounter with the business’ product or service is proactive data. An example of this type of data would be the information collected in a market research effort regarding potential new products or services.

Sources of proactive data are interviews with potential customers, focus groups, surveys, market research, and benchmarking. This type of data can be difficult to obtain. Customer surveys and focus groups can miss customer segments or ask the wrong questions. Market research may be expensive, hard to obtain, or be unreliable for the business’ customer base. Proactive data collection requires careful planning.

A business can use reactive data to point the way to where proactive data collection will do the most good. This helps to focus data collection activities on important customer issues. Without this focus, the business will be shooting in the dark. Consider, for example, asking customers what color of widget they prefer when the sharpness of the widget is their real concern. Not only will dull widgets turn away customers (regardless of color), but asking the wrong questions will also signal that the business is out of touch with its customers. The customer may feel that the business is not focusing on their needs (and they would be right in this case) and buy from a competitor instead.

Proactive data helps focus the business on the important issues of the future. The future could be anything from the next customer visit to consideration of where the business is investing its research and development dollars. Where reactive data helps a business define defects in the customer’s language, proactive data helps to prevent defects before they affect the customer. Both data types are important, and they depend on each other; together they improve the customer’s satisfaction level.

Reverse Engineering

This post deals specifically with the form, fit and function method of reverse engineering. This is a general methodology and a good starting point. A more specific methodology may be needed for specific types of projects. Reverse engineering is an important process in Lean Six Sigma. We may not call it reverse engineering, but that is what it is. Please bear in mind that this post is a general, not a detailed, description of this methodology.

There are areas of overlap in a form, fit and function analysis. This is the natural result of moving through the form, fit and function steps in the analysis process. Additionally, the steps are cyclic in that the analysis is repeated with increasing levels of detail. This “drilling down” to more granular knowledge of how something works, or should work, allows for a more robust design of a new, or refined, product or service.

At the core of this analysis process is the strategy of documenting what you know separately from what you assume. The purpose of the next cycle of analysis is to move assumptions from the assumed category to the fact category (or eliminate them). At the end of each cycle, there will be an increase in what is known and a new set of assumptions for the next cycle. Assumptions stay assumptions until they are resolved to fact or eliminated.
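
As a minimal sketch of that bookkeeping, written purely as my own illustration (the entries below are hypothetical, not taken from any real project), the idea can be captured in a few lines of Python:

    # Keep facts and assumptions in separate buckets. After each analysis cycle,
    # promote confirmed assumptions to facts and drop the ones that were ruled out.
    facts = {"Replaces the model X controller"}                      # hypothetical
    assumptions = {"Users have basic CLI experience",
                   "Firmware updates happen over the serial port"}   # hypothetical

    def run_cycle(promoted: set, eliminated: set) -> None:
        """Apply the findings of one form, fit and function cycle."""
        facts.update(assumptions & promoted)             # confirmed -> fact
        assumptions.difference_update(promoted | eliminated)

    run_cycle(promoted={"Users have basic CLI experience"},
              eliminated={"Firmware updates happen over the serial port"})
    print("Known:", facts)
    print("Still assumed:", assumptions)

Anything still sitting in the assumptions bucket at the end of a cycle becomes part of the work plan for the next one.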

The form, fit and function analysis is similar to a forensic analysis of failures. The steps may have different names, but the drilling down process is the same. The key is to avoid errant leaps of logic that lead to incorrect conclusions. The analysis is repeated at increasing levels of detail, as the detail is discovered. The analysis moves us from assumption to fact.

You will notice that the questions in each category below are framed around the interrogative “What”. To repeat the analysis cycle and gain better detail, the “why” must also be discovered. A “mind map” is a good tool for documenting the progress made in the various analysis cycles.

Form:

  • What is the purpose of the product or service?
  • What assumptions are built into the design of the product or service?
  • What is the assumed skill level of the user of the product or service?
  • What other tools or knowledge are needed to use the product or service?
  • What is the development history of the product or service? (What product or service does it replace and why?)

Fit:

  • In what specific situation(s) is the product or service intended to be used?
  • What are the specific capabilities of the product or service?
  • What are the specific capabilities lacking in the product or service?

Function:

  • Looking at the product or service’s internal processes, what does it do?
  • Looking at the product or service’s internal processes, how does it do it?

The above questions are a starting point and will get more specific as more knowledge is gained. It is simply a matter of repeating the analysis cycle until it makes sense to move forward on a prescribed course of action.

There is a lot more detail to the form, fit and function method of reverse engineering than this post can cover. To learn more, check out my Lean Six Sigma book titled, “Lean and Mean Process Improvement”.

Lean Six Sigma and Chaos

One of the fundamental flaws with process improvement programs is the assumption that all aspects of a business environment are determinant and predictable to a high degree of precision. Certainly some business systems and functions fall into this highly predictable category and fit well into the various quality programs we have seen.
What happens, though, when you try to apply Six Sigma tools to a process or function that is indeterminate? The answer is that incorrect conclusions can be drawn. To be clear, predictions with higher precision than the evaluated process or function is capable of delivering need to be viewed with suspicion. Examples of indeterminate systems are the weather and the search engine impressions that a keyword receives on a periodic basis.
The internet, like the weather, is an indeterminate system. With indeterminate systems, macro (low precision) predictions can be made reliably (hot in summer, cold in winter) because at the macro level indeterminate systems demonstrate repeatable cyclic behavior. At the micro level, though, this repeatable cyclic behavior becomes less consistent and less reliable. For more on this, read the work of Edward Lorenz regarding chaos and weather prediction.
Getting back to the internet, economic systems are also indeterminate. This does not mean that Six Sigma tools cannot be applied to indeterminate systems like internet search engine keyword impressions. It is instead a matter of using the right tool for the job. In indeterminate systems, since you cannot control or adequately predict all of the variables in the system being worked on, a Six Sigma project team will focus on less precise (macro) factors. This means statistical inferences that have much higher standard deviation parameters and may even defy statistical evaluation altogether.
With indeterminate systems, the Six Sigma team will be trying to reduce uncertainties surrounding the system and determine the boundaries associated with these uncertainties. We have to realize that we cannot increase the precision of an indeterminate system beyond the system’s natural state. We can, though, control the precision of how we react to the system’s behavior.
With internet impressions, you may not be able to predict search engine behavior very far into the future, but you can calibrate how you will act to take advantage of what you see. For example, you can build a website that is robust enough to deal with the uncertainty of web searches. You can also take more frequent measurements of keyword impressions and use pay-per-click tools to react to the impression “terrain”.
Basically, what I am saying is that with determinate systems, Six Sigma teams can work directly on the process to reduce variation and improve performance. With indeterminate systems, the team must work with the uncertainty that exists outside the process to improve performance.
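
As a minimal sketch of that idea, and strictly my own illustration with made-up numbers, the snippet below puts wide boundaries around a baseline of daily keyword impression counts and then decides whether a new day’s count calls for a reaction. It does not try to predict the next count; it only calibrates the response:

    from statistics import mean, stdev

    # Hypothetical baseline of daily impression counts for one keyword.
    baseline = [482, 510, 467, 530, 495, 520, 455, 505, 490, 515]

    center = mean(baseline)
    spread = stdev(baseline)
    upper, lower = center + 3 * spread, center - 3 * spread   # wide "macro" boundaries

    def react_or_not(todays_count: int) -> str:
        """Decide how to react to today's count instead of trying to predict it."""
        if lower <= todays_count <= upper:
            return f"{todays_count} impressions: normal variation, leave it alone"
        return f"{todays_count} impressions: outside the boundaries, investigate and adjust"

    print(react_or_not(498))   # inside the boundaries
    print(react_or_not(150))   # far outside; time to react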