Your E-Learning development program may well be in trouble, and applying the Six Sigma methodology might be the solution. In this real-world, play-by-play Six Sigma case study of E-Learning development, you will learn from the project's Black Belt which tools were used throughout the define, measure, analyze, improve and control (DMAIC) phases of a project that reduced development time by 50% and saved almost $300,000 per year.
- What is the true cost of your E-Learning development?
- If a vendor develops your E-Learning, what additional costs are you incurring as a result of time spent reviewing and approving the products that the vendor develops?
- What are the critical requirements of your E-Learning customers?
- Are you meeting those requirements?
- What are the critical business imperatives that need to be addressed with the E-Learning that you currently deliver?
- Are you meeting those imperatives?
- In articulating your E-Learning vision, are you speaking a language that any business manager can understand?
If the answer to any of these questions is either “no” or “I don’t know,” your E-Learning program may well be in trouble, and applying Six Sigma methodologies might be the solution to many of the problems that you are facing. Below is a case study detailing how the Customer Training Department of The Depository Trust & Clearing Corporation (DTCC) applied Six Sigma methodologies to its E-Learning development and, as a result, validated and exceeded critical customer and business requirements for E-Learning:
- Reduced E-Learning development rework by 81%
- Reduced the annual cost of developing E-Learning by 30%, and
- Saved the company $282,000.
Case Study Background — The Business Case
The Depository Trust & Clearing Corporation is the largest financial services post-trade infrastructure in the world, with operating facilities in multiple locations in the U.S. and overseas. With the company's growth, it was becoming increasingly difficult for the Customer Training Department to satisfy the rising volume of customer training requests. A growing product line made it hard to find instructors who were certified on each of the products, and the need to rapidly deploy updated information was becoming ever more pressing. The leadership of the Customer Training Department was convinced that E-Learning would allow the company to satisfy its customer training requests, reduce the cost of training, and address the issue of trainer certification. Senior management was sold on the idea, an E-Learning strategy was developed, and DTCC became one of the financial services industry's early adopters of E-Learning.
Like many other training organizations, the Customer Training Department spent significant money and resources researching and purchasing a learning management system (LMS), procuring a variety of authoring software packages, and training personnel to use the new software. Monies were allocated to teach training professionals how to develop and deliver E-Learning. A benchmarking study was undertaken to ensure that the E-Learning was "done right," a goal was set to convert all of the core instructor-led courses into self-paced electronic courses, a budget was established, a vendor was commissioned to assist with the project, and the E-Learning journey began.
Two years into the program, concerns arose. Only two of the six core instructor-led courses had been converted for self-paced delivery. Two more courses, still uncompleted, had been in development for close to a year. The consulting budget that had been set aside to convert the core courses was completely spent. The staff of the Customer Training Department openly expressed frustration about 1) the amount of time and rework associated with developing E-Learning programs, 2) the cost of E-Learning development, and 3) the quality of the E-Learning that was being developed. To address these issues, additional funds were invested to implement project management methodologies, certify staff as project management professionals, and improve the skills of the department's instructional designers. After some initial optimism with these measures, the frustration returned. The leadership of Customer Training, still committed to fixing the E-Learning "problems," volunteered to participate in the company's second round of Six Sigma projects. An initial project charter was developed, a cross-functional team was assembled, and the Six Sigma DMAIC project began.
The Organizational Structure
To fully appreciate the challenges that the Customer Training Six Sigma team faced, it is important to understand the structure of the organization as it pertained to E-Learning. The Customer Training Department is responsible for delivering training services to participants, or customers, of DTCC. These customers are financial industry firms; the end users of the training are the employees of those firms. Training requests came to Customer Training in one of three ways:
- The member firm or participant would contact its relationship manager at DTCC with a training request. The relationship manager would in turn contact the Customer Training Department.
- Product Management would identify a training need and then request that the Customer Training Department develop a program.
- The Customer Training Department would examine trend analysis information from service desk calls and develop training based on that data.
In all three cases, however, the development costs of the E-Learning were charged to the budget of the appropriate product manager, who ultimately had the authority to make decisions on content, look and feel, and instructional strategies.
The Customer Training Department itself comprised four groups: an Information Product Group responsible for all customer documentation, a Training Group responsible for delivering all instructor-led programs, a Project Management Group responsible for delivering all Customer Training projects on time and within budget, and an Instructional Technologies Group responsible for E-Learning development, instructional design, and Learning Management System administration. A typical E-Learning deliverable would require review, approval and sign-off by each of these groups, as well as approval by Product Management.
What’s Important — The Define Phase
The Six Sigma team started the project with some advantages. The Customer Training Department's previous experience with project management provided some initial data with which to start; many Six Sigma projects start with no analytical data whatsoever. At the project kickoff the draft charter was distributed to team members, and work began on refining it and filling in the missing pieces. The initial data seemed to indicate that there was tremendous opportunity to reduce rework, development time, and development cost. Rework accounted for up to 60% of total development time. Development time ran as much as 75% above the ASTD benchmark of about 200 hours of development time per hour of E-Learning, and development costs were driven up by both the rework and the development time.
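As a quick sanity check, those charter figures can be worked through directly. The following is a minimal sketch in Python; the 200-hour benchmark and the 75% and 60% figures are the ones quoted above, and the results are worst-case values, not averages.

```python
# Worked example of the charter figures quoted above (worst case, not average).
ASTD_BENCHMARK_HRS = 200  # hours of development per hour of E-Learning

# Development time ran "as much as 75% above" the benchmark.
worst_case_dev_hrs = ASTD_BENCHMARK_HRS * 1.75

# Rework accounted for "up to 60%" of total development time.
worst_case_rework_hrs = worst_case_dev_hrs * 0.60

print(f"Worst-case development time: {worst_case_dev_hrs:.0f} hours")    # 350 hours
print(f"Worst-case rework share:     {worst_case_rework_hrs:.0f} hours")  # 210 hours
```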
As the team worked to perfect the charter, discussions arose around what the industry standard for E-Learning actually consisted of, and which benchmark to use. It was finally agreed that the ASTD figure would be the benchmark. The next point of discussion centered on the project's scope. After much debate the team finally agreed that the scope of the project would run from the time a training need was identified until the E-Learning deliverable was approved by Product Management. The team then agreed on a preliminary project plan with an activity schedule and some initial milestones.
Process Mapping
With the charter approved, the team focused its work on documenting and analyzing the E-Learning development process as it currently existed. This was accomplished by developing a variety of maps and charts to give a graphical representation of what was really taking place in the E-Learning development process. The first map that the team developed was a suppliers, inputs, process, outputs, customers (SIPOC) diagram. This diagram, with its few details, allowed the team to get a high-level picture of the major development steps. As the team went through the exercise of developing this map, it became clear that the Customer Training Department was developing E-Learning without any direct input from the end user of the product.
The next graphical depiction that the team developed was a top-down chart. This map created a simple picture of the E-Learning development process at two levels of detail: the first level showed the major steps in the process, and the second showed the sub-processes under each step. This chart gave a little more detail about the existing process, even though it did not show delays, decision points or feedback loops.
The most difficult and time-consuming graphical depiction of the E-Learning process was the functional deployment process map. The team began developing this map expecting it to be fairly easy, since the team had access to project plans, historical data, and subject matter experts to document what was believed to be occurring in the E-Learning development process. As the work to develop the map began, it became quite obvious that what was written on paper and in project plans was not what was actually taking place. This exercise made it clear not only that there was tremendous variation in the E-Learning development process, but that the people involved in E-Learning development were not quite sure what tasks should be completed, who was responsible for completing them, and when they should take place.
The functional deployment map took almost four meetings (eight hours) to complete. It required nineteen two-and-one-half-by-two-foot easel pads and covered all the walls of a medium-sized conference room. This detailed map displayed all of the steps in the E-Learning development process in sequential order. It illustrated where each step was performed and who was involved. The map clearly showed that the process contained a series of checks and rechecks that virtually guaranteed rework.
Qualitative Analysis
With the cross-functional process map complete, the team then performed a qualitative analysis of the process. The members looked at every step in the E-Learning development process and classified each task as customer value added, operational value added, or non-value added. (A customer value added activity is one that the customer recognizes as valuable, that changes the product toward something the customer expects, or that is done right the first time. Operational value added activities are those required by contract, law or regulation, done right the first time, or required to sustain the workplace's ability to perform customer value added activities.)
The team found that there were a significant number of non-value added activities (NVA) in the E-Learning development process. Many of these NVAs were the reviews and approvals required by the various groups within the Customer Training Department and Product Management. Some of these reviews and approvals were required regardless of whether the reviewer or approving authority was a stakeholder in the deliverable. Removing these activities could potentially reduce the time and cost associated with E-Learning development. Many of the steps that are currently accepted as “best practices” in the training industry were also identified as non-value added activities when Six Sigma methodology was applied.
Quick Wins
The non-value added activities were then all categorized based on whether they were easy to implement, fast to implement, cheap to implement, within the team’s control, and easily reversible. The NVAs that met all of these conditions were identified as quick wins.
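A minimal sketch of that screen, with hypothetical NVA steps and ratings, shows its all-or-nothing nature: a step qualifies as a quick win only if it meets every condition.

```python
# Quick-win screen: an NVA step qualifies only if ALL five conditions hold.
# The steps and their ratings below are hypothetical.
CRITERIA = ("easy", "fast", "cheap", "in_team_control", "reversible")

nva_steps = [
    {"step": "Redundant status meeting", "easy": True, "fast": True,
     "cheap": True, "in_team_control": True, "reversible": True},
    {"step": "Second-level storyboard sign-off", "easy": True, "fast": True,
     "cheap": True, "in_team_control": False, "reversible": True},
]

quick_wins = [s["step"] for s in nva_steps if all(s[c] for c in CRITERIA)]
print(quick_wins)  # -> ['Redundant status meeting']
```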
The team then performed a failure mode and effects analysis (FMEA) on each of the activities. An FMEA is a process used to determine what failures are occurring, and what their impact and frequency are. The FMEA validated that removing a step would ultimately do the process more good than harm, and also ensured that the proper controls were in place so that removing the step would not cause the process to fail. With the FMEA complete, the team agreed to implement ten quick wins, which essentially removed redundant meetings and multiple levels of reviews and approvals from the process.
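The team's actual FMEA worksheet is not reproduced in the case study. The sketch below shows the standard FMEA scoring scheme, in which severity, occurrence and detection are each rated from 1 to 10 and multiplied into a risk priority number (RPN); the failure modes and ratings are hypothetical.

```python
# Standard FMEA scoring: RPN = severity x occurrence x detection (each 1-10).
# Failure modes and ratings below are hypothetical.
failure_modes = [
    # (failure mode,                                sev, occ, det)
    ("Content error reaches the customer",            8,   3,   4),
    ("Removed review misses a compliance issue",      9,   2,   5),
    ("Storyboard rework after development starts",    6,   7,   3),
]

# Rank by RPN so the riskiest failure modes surface first.
for mode, sev, occ, det in sorted(failure_modes,
                                  key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"RPN {sev * occ * det:3d}  {mode}")
```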
Voice of the Customer
With the quick wins identified, the team began work on defining customer requirements. Six Sigma accomplishes this first by identifying the voice of the customer (VOC). The VOC is then converted into key customer issues, which are in turn converted into critical customer requirements (CCRs), or specific measurable targets. The Customer Training Six Sigma team captured the VOC by reviewing feedback from E-Learning course evaluations and sending out surveys that simply asked E-Learning customers what was important to them. The team then took the customer statements and identified the underlying issues behind them. Those issues were then translated into critical customer requirements. The table below shows some of the critical customer requirements that were uncovered.
Using this approach to gather customer requirements was key to the success of the project. The disciplined process prevented individual prejudices from skewing the data. The Customer Training team found that many of the factors that team members thought were important to end users were not major contributors to customer satisfaction or dissatisfaction. This finding was important, since the process map had uncovered that much of the E-Learning development time was being spent on issues that the team now knew were not important to the end user. (Critical customer requirements are overall requirements for the E-Learning program, not a specific needs analysis or task analysis to identify what the content of a specific E-Learning course should be.)
Critical To The Process Issues
The team now knew what customers wanted. The next imperative was to clearly identify business requirements, or critical to process issues. The broad, cross-functional make-up of the Six Sigma team allowed it to quickly identify the Customer Training E-Learning business requirements: 1) that E-Learning development time be within 10% of the industry standard, 2) that rework be less than 10% of overall E-Learning development time, and 3) that every E-Learning deliverable be completed at or below its budgeted cost.
Critical To Quality
The critical customer requirements and the critical to process requirements were then consolidated into a single list identifying all of the measurable criteria that had to be met for an E-Learning deliverable to be considered of Six Sigma quality. Six Sigma calls this consolidated list the critical to quality (CTQ) requirements.
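As an illustration, a consolidated CTQ list lends itself to a simple pass/fail check. The following is a minimal sketch using the three business thresholds quoted above; the deliverable's figures are hypothetical.

```python
# Pass/fail check against the three critical to process thresholds quoted above.
ASTD_BENCHMARK_HRS = 200  # industry-standard hours per hour of E-Learning

def meets_ctqs(dev_hours, rework_hours, actual_cost, budget):
    return (dev_hours <= ASTD_BENCHMARK_HRS * 1.10   # within 10% of the standard
            and rework_hours / dev_hours < 0.10      # rework under 10% of dev time
            and actual_cost <= budget)               # at or below budgeted cost

# Hypothetical deliverable: 215 dev hours, 18 rework hours, $33,000 vs. a $35,000 budget.
print(meets_ctqs(dev_hours=215, rework_hours=18, actual_cost=33_000, budget=35_000))
```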
With the CTQs identified, the next step for the team was to move into the next phase of the DMAIC model and measure how the Customer Training Department was doing against those requirements.
How Are We Doing? — The Measure Phase
To ensure the credibility of the data that would be collected, the team now needed to develop a data collection plan. This plan would dictate what data would be collected, where it would be collected from, how it would be collected, who would collect it, when it would be collected, and, most importantly, the operational definitions: clear, understandable descriptions of what was to be observed and measured. Once the plan was complete, the measurement began. Below is a sample data collection plan.
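As a minimal sketch of what one entry in such a plan might capture, here is an illustrative row with the fields named above; the measure and its operational definition are hypothetical, not the team's actual plan.

```python
# One illustrative row of a data collection plan (fields per the text above).
data_collection_plan = [
    {
        "measure": "Rework hours per deliverable",
        "operational_definition": ("Any hour spent changing a deliverable "
                                   "after it was submitted for review"),
        "source": "Project time-tracking reports",        # where it is collected from
        "method": "Extract hours coded to rework tasks",  # how it is collected
        "collector": "Project Management Group",          # who collects it
        "frequency": "At the close of each project",      # when it is collected
    },
]

for row in data_collection_plan:
    for field, value in row.items():
        print(f"{field:>22}: {value}")
```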
Once the data collection was complete, the results were put into Pareto charts, run charts, and histograms, which gave the team a visual representation of the state of E-Learning. The picture held both good and bad news. The good news was how end users felt about the E-Learning deliverables: the data showed that customers were fairly pleased with what they were receiving. This was quite different from what many team members had expected, having initially felt that there were quality issues with the E-Learning deliverables.
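For readers who want to reproduce this kind of chart, here is a minimal Pareto-chart sketch in Python with matplotlib: categories sorted by frequency, with a cumulative-percentage line. The rework categories and counts are hypothetical, not the team's data.

```python
# Minimal Pareto chart: sorted bars plus a cumulative-percentage line.
# The categories and counts below are hypothetical.
import matplotlib.pyplot as plt

categories = {"Content changes": 42, "Look-and-feel changes": 23,
              "Navigation changes": 11, "Audio re-records": 7, "Other": 5}

labels, counts = zip(*sorted(categories.items(), key=lambda kv: kv[1], reverse=True))
total = sum(counts)
cumulative = [100 * sum(counts[:i + 1]) / total for i in range(len(counts))]

fig, ax = plt.subplots()
ax.bar(labels, counts)
ax.set_ylabel("Rework occurrences")
ax.tick_params(axis="x", rotation=30)

ax2 = ax.twinx()                 # second y-axis for the cumulative line
ax2.plot(labels, cumulative, marker="o", color="tab:red")
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 110)

fig.tight_layout()
plt.show()
```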
The data also showed, however, that there was tremendous opportunity for improvement around the business requirements, or critical to process issues. The build time of half of the E-Learning developed was more than 10% above the industry standard.
In seventy-four percent of the E-Learning developed, rework represented more than 25% of the total development time.
The team now had a baseline measure of how the process was performing against both customer and business requirements, and it had identified improvement opportunities. With this information validated, the team updated the project goals to: "reduce the annual costs of rework in developing E-Learning by 75% of the opportunity, and reduce the remaining development time (above the industry standard) by 50% of the opportunity." The team then moved into the analyze phase, which would allow it to pinpoint and verify exactly what was wrong with the process.
What’s Wrong? — The Analyze Phase
One of the first tools used during the analyze phase was the process capability analysis. The team wanted to see whether the current process was even capable of meeting the critical to quality (CTQ) requirements. A process capability analysis was performed to measure the ability of the process to meet both the total development time requirement and the percent-rework requirement.
The process capability analysis for total development time showed that the current process did not have the ability to meet the critical requirement of being within 10% of the industry standard, or about 220 hours of development time per hour of E-Learning. Even increasing the upper specification limit to 250 hours would leave the process failing to meet the requirement 56% of the time. The mean development time was 272 hours, but the standard deviation was 136 hours, which confirmed the amount of variability in the process.
The process capability analysis for rework showed similar results. The current development process did not have the ability to meet the critical requirement of limiting rework to 10% of overall development time. Even increasing the upper specification limit to 25% would leave the process failing to meet the requirement 74% of the time. The mean amount of rework was 37%, but the standard deviation was 18%, which again confirmed the amount of variability in the process.
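Both quoted failure rates follow from the stated means and standard deviations under a normality assumption. The sketch below reproduces them with SciPy; it is an approximation of the team's analysis, not its actual output.

```python
# Share of work beyond the upper spec limit, assuming X ~ Normal(mean, sd).
from scipy.stats import norm

def pct_beyond_usl(mean, sd, usl):
    return 100 * norm.sf(usl, loc=mean, scale=sd)  # survival function: P(X > usl)

# Development time: mean 272 h, sd 136 h, relaxed USL of 250 h -> ~56%
print(f"Dev time beyond 250 h: {pct_beyond_usl(272, 136, 250):.0f}%")

# Rework: mean 37%, sd 18%, relaxed USL of 25% -> ~75% (the text quotes 74%)
print(f"Rework beyond 25%:     {pct_beyond_usl(37, 18, 25):.0f}%")
```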
A more concerning discovery during the analyze phase was that while the Customer Training Department was spending upwards of 360 hours to design and develop E-Learning programs in-house, it was spending in the area of 400 hours reviewing and approving E-Learning programs that were being developed by vendors. Translated into dollars, a one-hour E-Learning course developed in-house was costing the product manager $14,000 more than if it had been developed at the 200-hour ASTD standard (about $31,000). Paying a vendor to develop a one-hour E-Learning course was costing product management $71,000: the cost of the vendor plus the cost of the 400 hours spent by customer training personnel to review and approve the material.
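The internal hourly rate behind these figures is not stated in the case study, but it is implied by the $31,000 cost of a 200-hour build. A hedged reconstruction of the arithmetic:

```python
# Reconstruction of the cost arithmetic; the hourly rate is derived, not stated.
BENCHMARK_HRS = 200
BENCHMARK_COST = 31_000                        # cost at the ASTD standard

implied_rate = BENCHMARK_COST / BENCHMARK_HRS  # ~$155/hour (assumption)
in_house_actual = BENCHMARK_COST + 14_000      # $45,000 per hour of E-Learning
vendor_total = 71_000                          # vendor fee + 400 review hours

print(f"Implied internal rate: ${implied_rate:,.0f}/hour")
print(f"In-house actual cost:  ${in_house_actual:,.0f}")
print(f"Vendor premium:        ${vendor_total - in_house_actual:,.0f} per course hour")
```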
With all of this baseline data now available, the team became extremely energized and eager to move into the improve phase and generate solutions for the problems. There was still more analysis to be done in the analyze phase, however.
Root Cause Analysis
Although Six Sigma relies heavily on quantitative data and statistical analysis, it also has tools that take into account the feelings, hunches, and experiences of team members and subject matter experts. The team next embarked on identifying the root causes of the rework and the additional development time. Much of the information about these causes was based on the experiences and recollections of various team members. A series of tools was used to convert this anecdotal data into statistical data, and then to validate it. First, a cause-and-effect (fishbone) diagram was developed. This diagram allowed the team to explore and graphically display all of the possible causes of both rework and excess development time. It also enabled the team to focus on the content of the problem rather than its history or the differing personal interests of team members. In short, it forced the team to focus on causes, not symptoms.
Once all potential root causes of both rework and additional development time were identified, the team then used multi-voting to derive a prioritized list of the root causes and their impact on E-Learning development. The results of this prioritization were displayed in a Pareto chart.
The team then validated its root cause findings by sending surveys to the designers who had worked on the E-Learning programs. The results of the survey verified the root cause analysis. Next, a regression analysis was performed on the available historical data. This analysis showed a correlation between the number of resources involved in the design of E-Learning and the amount of rework: as the number of people doing E-Learning design increased, so did the amount and percentage of rework. This was quite eye-opening, since much of the current thinking about E-Learning development (SAAVY, for example) recommends getting many people involved in the design of E-Learning. The data that the team collected clearly indicated the financial ramifications of that type of model.
The regression analysis also showed that the more resources there were on a project, the more overall hours, the more rework, and the higher the cost. The strongest correlation, however, was between the number of resources in the design phase and the rework occurring during E-Learning development. Putting the data through multi-vari charts and Pareto charts verified the results of the regression analysis.
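The team's data set is not reproduced in the case study; the sketch below shows the shape of such a regression on hypothetical data, using SciPy's linregress.

```python
# Simple linear regression: design-phase resources vs. rework hours.
# The data points below are hypothetical; the case study found a strong
# positive correlation of this shape.
from scipy.stats import linregress

designers    = [2, 3, 3, 4, 5, 6, 7, 8]           # resources in the design phase
rework_hours = [18, 25, 31, 52, 60, 75, 96, 110]  # rework during development

fit = linregress(designers, rework_hours)
print(f"slope = {fit.slope:.1f} rework hours per added designer")
print(f"r     = {fit.rvalue:.2f}")   # correlation strength
print(f"p     = {fit.pvalue:.4f}")   # significance of the fit
```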
At this point in the process many team members were concerned about the amount of analysis that was being performed. The team was confident, however, that as a result of the data any changes that would be made could be validated and justified.
With the root causes identified and verified and the data validated, the team was ready to move into the improve phase where it would generate solutions for the validated causes of rework and additional development time.
What Do We Need To Do? — The Improve Phase
Up until this point, the team had been focused on gaining an ever greater understanding of the deficiencies affecting the current E-Learning development process. That focus gave the team an understanding and validation of the root causes of the deficiencies. The goal of the improve phase is to find and implement solutions that will eliminate those causes. To accomplish this, the team identified, evaluated, selected and developed solutions using a variety of traditional and non-traditional idea-generation tools.
The team generated solutions using a variety of tools, including traditional brainstorming, affinity diagrams, and a tool called random word, which allows teams to approach problems from fresh perspectives rather than patterned ways of thinking. The team also employed Edward de Bono's Six Thinking Hats technique.
Once the ideas were generated, the team evaluated the solutions and selected the ones that would have the greatest impact on the goals of the project. To accomplish this, the team developed a cause and effect matrix. This tool allowed the team to (a weighted-scoring sketch follows the list):
- Remove any solutions that were show stoppers,
- Consider the organizational fit of each solution,
- Narrow the list down,
- Develop a solution selection matrix,
- Weight the evaluation criteria,
- Determine the impact the solution would have on the project’s goals,
- Evaluate the time benefit of the solution, and
- Evaluate the cost impact of the solutions, and finally
- Evaluate any other impacts.
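A minimal sketch of such a weighted selection matrix follows; the criteria, weights, candidate solutions and scores are all hypothetical.

```python
# Weighted solution-selection matrix: score each solution 1-5 per criterion,
# multiply by the criterion weight, and sum. All values are hypothetical.
weights = {"impact_on_rework": 5, "time_benefit": 4, "cost_impact": 3, "org_fit": 2}

solutions = {
    "Single design lead per project":      {"impact_on_rework": 5, "time_benefit": 4,
                                            "cost_impact": 4, "org_fit": 3},
    "Combined review board, one sign-off": {"impact_on_rework": 4, "time_benefit": 5,
                                            "cost_impact": 3, "org_fit": 4},
}

ranked = sorted(solutions.items(),
                key=lambda kv: sum(weights[c] * kv[1][c] for c in weights),
                reverse=True)
for name, scores in ranked:
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{total:3d}  {name}")
```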
With the solutions selected, the team then performed an FMEA on the solutions. The FMEA validated that the controls needed to make the solutions successful were in place. These controls also became indicators that would allow the team to identify problems and make adjustments long before the rework or additional development time occurred.
Updating the Process
The next step for the team was to take the solutions and use them to update the process as it currently existed. The original nineteen-page process map was so inundated with decision points, reviews and approvals that the team agreed to develop a new process map, using the solutions that had been generated, as well as what was now known about the process, as a guide. The new process map was developed in less than two hours and required only six pages. It identified not only each task in the process, but also who was responsible, who was accountable, who needed to be informed, and who had to review or sign off on each step. It also showed the controls that were in place to ensure that tasks were done right the first time.
The analysis of the solutions projected that there would be an 81% decrease in rework, as well as an 81% decrease in the development time that was above the ASTD benchmark. Overall development time was also projected to be reduced by 30%. These improvements would translate into an annual savings of $282,000.
The Results
The team now had statistical projections indicating the benefit that should be realized based on the Six Sigma solutions. The only way to truly verify that the new process would produce the projected results would be to put it into practice. The new process was applied to three E-Learning projects with the following results.
As the table above clearly shows, using the process developed with Six Sigma methodology allowed the Customer Training Department to develop E-Learning programs in significantly less time than the industry average, while meeting all of the CTQs of the business and its customers.
A comparison of critical to process issues before and after the Six Sigma improvements were implemented shows the dramatic improvements even more clearly.
Finally, a process capability analysis on the new process showed that it now had the ability to meet the development time requirements 84% of the time, and that the standard deviation was now 18 hours, as opposed to 145 hours prior to applying Six Sigma methodology.
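The new mean development time is not quoted, but under a normality assumption it is implied by the 84% capability figure and the 18-hour standard deviation, as the sketch below shows.

```python
# Back-calculating the implied mean of the improved process (normality assumed).
from scipy.stats import norm

USL = 220          # 10% above the 200-hour ASTD benchmark
capability = 0.84  # share of projects meeting the development time requirement
sd = 18            # standard deviation of the new process, in hours

implied_mean = USL - norm.ppf(capability) * sd
print(f"Implied mean development time: {implied_mean:.0f} hours")  # ~202 hours
```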
How Do We Guarantee Performance? — The Control Phase
The team had validated the improvements, and documented that they worked. The team also verified that the new development process was stable and capable of meeting the CTQs. The last phase of Six Sigma is the control phase, where the performance of the process is routinely measured to ensure that critical customer requirements continue to be met.
The root cause analysis performed during the project identified which key outputs needed to be measured. The failure mode and effects analysis identified the action to be taken in the event that a measured output fell outside its control limit. A tracking log was set up to measure these outputs, and the results of this log are reported to executive management every six months.
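A minimal sketch of such a tracking check, with hypothetical baseline figures: each new measurement is compared against three-sigma control limits, and an out-of-bounds value triggers the corrective action identified in the FMEA.

```python
# Control-phase check: flag any key output outside the +/- 3-sigma limits.
# The baseline figures and the new measurement below are hypothetical.
from statistics import mean, stdev

baseline_dev_hours = [198, 205, 201, 210, 196, 204]  # post-improvement projects
center = mean(baseline_dev_hours)
sigma = stdev(baseline_dev_hours)
ucl, lcl = center + 3 * sigma, center - 3 * sigma    # upper/lower control limits

def check(value):
    if not lcl <= value <= ucl:
        return "OUT OF CONTROL: take the corrective action from the FMEA"
    return "in control"

print(f"Limits: {lcl:.0f}-{ucl:.0f} hours")
print(f"New project at 232 hours is {check(232)}")
```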
A Final Thought
Developing E-Learning the Six Sigma way has allowed DTCC's Customer Training Department to identify and exceed critical customer and business requirements for E-Learning, reduce overall development time by close to fifty percent, reduce annual development costs by $282,000, and articulate E-Learning in a way that business managers understand. Perhaps just as important as the savings and the identification of customer and business requirements is the ability that the Customer Training Department now has to maintain this high level of performance.