Care Coordination

Care coordination in the primary care practice involves deliberately organizing patient care activities and sharing information among all of the participants concerned with a patient's care to achieve safer and more effective care.

The main goal of care coordination is to meet patients' needs and preferences in the delivery of high-quality, high-value health care. This means that the patient's needs and preferences are known and communicated at the right time to the right people, and that this information is used to guide the delivery of safe, appropriate, and effective care.

There are two ways of achieving coordinated care: using broad approaches that are commonly used to improve health care delivery and using specific care coordination activities.

Examples of broad care coordination approaches include:

  • Teamwork.
  • Care management.
  • Medication management.
  • Health information technology.
  • Patient-centered medical home.

Examples of specific care coordination activities include:

  • Establishing accountability and agreeing on responsibility.
  • Communicating/sharing knowledge.
  • Helping with transitions of care.
  • Assessing patient needs and goals.
  • Creating a proactive care plan.
  • Monitoring and followup, including responding to changes in patients' needs.
  • Supporting patients' self-management goals.
  • Linking to community resources.
  • Working to align resources with patient and population needs.

Why Is Care Coordination Important?

Care coordination is identified by the Institute of Medicine as a key strategy that has the potential to improve the effectiveness, safety, and efficiency of the American health care system. Well-designed, targeted care coordination that is delivered to the right people can improve outcomes for everyone: patients, providers, and payers.

Although the need for care coordination is clear, there are obstacles within the American health care system that must be overcome to provide this type of care. Redesigning a health care system in order to better coordinate patients' care is important for the following reasons:

  • Current health care systems are often disjointed, and processes vary among and between primary care sites and specialty sites.
  • Patients are often unclear about why they are being referred from primary care to a specialist, how to make appointments, and what to do after seeing a specialist.
  • Specialists do not consistently receive clear reasons for the referral or adequate information on tests that have already been done. Primary care physicians do not often receive information about what happened in a referral visit.
  • Referral staff deal with many different processes and lost information, which means that care is less efficient.

How Can Care Coordination Be Put Into Action?

Applying changes in the general approach and everyday routines of a medical practice can be overwhelming, even when it is obvious that the changes will improve patient care and provider efficiency. Fortunately, there are resources available for those who are interested in learning how to take a coordinated care approach to primary care practice.

The Care Coordination Quality Measure for Primary Care (CCQM-PC) builds on previous AHRQ work to develop a conceptual framework for care coordination. The CCQM-PC is intended to fill a gap in the care coordination measurement field by assessing the care coordination experiences of adults in primary care settings. It was developed, cognitively tested, and piloted with patients from a diverse set of 13 primary care practices to comprehensively assess patient perceptions of the quality of their care coordination experiences. The CCQM-PC is designed to be used in primary care research and evaluation, with potential applications to primary care quality improvement. Guidance regarding the fielding of the survey is provided in addition to the full survey, which is in the public domain and may be customized and used without additional permission.

Care Coordination Measures Atlas—June 2014 Update. Since publication of the original Atlas in 2011, many new care coordination measures have been developed. Appendix IVa (PDF, 8.56 MB) in this Update presents many new measures emphasizing primary care. Twenty-six new EHR-based measures are identified that can help professionals meet Medicaid and Medicare EHR Incentive Programs criteria. The measures are mapped to the conceptual framework introduced in the original Atlas and included in the Update. A new section on emerging trends in the field also has been added to the Update.

Care Management: Implications for Medical Practice, Health Policy, and Health Services Research. Care Management Issue Brief. This issue brief highlights key strategies to enhance existing or emerging care management programs and summarizes recommendations for decisionmakers in practice and policy, as well as for future research.

AHRQ has assembled additional resources to help clinicians, clinical teams, and health care administrators measure care coordination and learn more about how to incorporate care coordination into routine primary care practice. Visit the PCMH Resource Center to view the following papers, briefs, and other resources:

  • Care Coordination Accountability Measures for Primary Care Practice.
  • The Roles of Patient-Centered Medical Homes and Accountable Care Organizations in Coordinating Patient Care.
  • Coordinating Care in the Medical Neighborhood: Critical Components and Available Mechanisms.
  • Coordinating Care for Adults With Complex Care Needs in the Patient-Centered Medical Home: Challenges and Solutions.
  • Prospects for Care Coordination Measurement Using Electronic Data Sources.

The following AHRQ Annual Conference presentations on care coordination are also available:

Care Transitions: Navigating the Health Care System—2011

The necessity for quality and safety improvement initiatives permeates health care.1, 2 Quality health care is defined as “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge”3 (p. 1161). According to the Institute of Medicine (IOM) report, To Err Is Human,4 the majority of medical errors result from faulty systems and processes, not individuals. Processes that are inefficient and variable, changing case mix of patients, health insurance, differences in provider education and experience, and numerous other factors contribute to the complexity of health care. With this in mind, the IOM also asserted that today’s health care industry functions at a lower level than it can and should, and it put forth the following six aims of health care: effective, safe, patient-centered, timely, efficient, and equitable.2 The aims of effectiveness and safety are targeted through process-of-care measures, assessing whether providers of health care perform processes that have been demonstrated to achieve the desired aims and avoid those processes that are predisposed toward harm. The goals of measuring health care quality are to determine the effects of health care on desired outcomes and to assess the degree to which health care adheres to processes based on scientific evidence or agreed to by professional consensus and is consistent with patient preferences.

Because errors are caused by system or process failures,5 it is important to adopt various process-improvement techniques to identify inefficiencies, ineffective care, and preventable errors to then influence changes associated with systems. Each of these techniques involves assessing performance and using findings to inform change. This chapter will discuss strategies and tools for quality improvement—including failure modes and effects analysis, Plan-Do-Study-Act, Six Sigma, Lean, and root-cause analysis—that have been used to improve the quality and safety of health care.

Efforts to improve quality need to be measured to demonstrate “whether improvement efforts (1) lead to change in the primary end point in the desired direction, (2) contribute to unintended results in different parts of the system, and (3) require additional efforts to bring a process back into acceptable ranges”6 (p. 735). The rationale for measuring quality improvement is the belief that good performance reflects good-quality practice, and that comparing performance among providers and organizations will encourage better performance. In the past few years, there has been a surge in measuring and reporting the performance of health care systems and processes.1, 7–9 While public reporting of quality performance can be used to identify areas needing improvement and to establish benchmarks at the national, State, or other levels,10, 11 some providers have been sensitive to comparative performance data being published.12 Another audience for public reporting, consumers, has had problems interpreting the data in reports and has consequently not used the reports to the extent hoped to make informed decisions for higher-quality care.13–15

The complexity of health care systems and delivery of services, the unpredictable nature of health care, and the occupational differentiation and interdependence among clinicians and systems16–19 make measuring quality difficult. One of the challenges in using measures in health care is the attribution variability associated with high-level cognitive reasoning, discretionary decisionmaking, problem-solving, and experiential knowledge.20–22 Another measurement challenge is whether a near miss could have resulted in harm or whether an adverse event was a rare aberration or likely to recur.23

The Agency for Healthcare Research and Quality (AHRQ), the National Quality Forum, the Joint Commission, and many other national organizations endorse the use of valid and reliable measures of quality and patient safety to improve health care. Many of these useful measures that can be applied to the different settings of care and care processes can be found at AHRQ’s National Quality Measures Clearinghouse (http://www.qualitymeasures.ahrq.gov) and the National Quality Forum’s Web site (http://www.qualityforum.org). These measures are generally developed through a process including an assessment of the scientific strength of the evidence found in peer-reviewed literature, evaluating the validity and reliability of the measures and sources of data, determining how best to use the measure (e.g., determine if and how risk adjustment is needed), and actually testing the measure.24, 25

Measures of quality and safety can track the progress of quality improvement initiatives using external benchmarks. Benchmarking in health care is defined as the continual and collaborative discipline of measuring and comparing the results of key work processes with those of the best performers26 in evaluating organizational performance. There are two types of benchmarking that can be used to evaluate patient safety and quality performance. Internal benchmarking is used to identify best practices within an organization, to compare best practices within the organization, and to compare current practice over time. The information and data can be plotted on a control chart with statistically derived upper and lower control limits. However, using only internal benchmarking does not necessarily represent the best practices elsewhere. Competitive or external benchmarking involves using comparative data between organizations to judge performance and identify improvements that have proven to be successful in other organizations. Comparative data are available from national organizations, such as AHRQ’s annual National Health Care Quality Report1 and National Healthcare Disparities Report,9 as well as several proprietary benchmarking companies or groups (e.g., the American Nurses Association’s National Database of Nursing Quality Indicators).
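
To make the control-chart idea concrete, a minimal sketch follows (illustrative only: the monthly rates are invented, and the three-sigma limits shown are conventional Shewhart-chart defaults rather than anything prescribed in this chapter; a real chart would also choose the chart type, such as p, u, or XmR, to match the data):

```python
# Minimal Shewhart-style control limits for an internally benchmarked
# monthly rate (e.g., falls per 1,000 patient-days). Points outside the
# statistically derived limits signal special-cause variation.
monthly_rates = [3.1, 2.8, 3.4, 3.0, 2.6, 3.3, 2.9, 3.5, 3.2, 2.7]

n = len(monthly_rates)
mean = sum(monthly_rates) / n
variance = sum((x - mean) ** 2 for x in monthly_rates) / (n - 1)
sigma = variance ** 0.5

ucl = mean + 3 * sigma            # upper control limit
lcl = max(0.0, mean - 3 * sigma)  # lower control limit (a rate cannot be negative)

for month, rate in enumerate(monthly_rates, start=1):
    flag = "special cause" if rate > ucl or rate < lcl else "in control"
    print(f"month {month:2d}: {rate:.1f} ({flag})")
print(f"center line {mean:.2f}, UCL {ucl:.2f}, LCL {lcl:.2f}")
```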

More than 40 years ago, Donabedian27 proposed measuring the quality of health care by observing its structure, processes, and outcomes. Structure measures assess the accessibility, availability, and quality of resources, such as health insurance, bed capacity of a hospital, and number of nurses with advanced training. Process measures assess the delivery of health care services by clinicians and providers, such as using guidelines for care of diabetic patients. Outcome measures indicate the final result of health care and can be influenced by environmental and behavioral factors. Examples include mortality, patient satisfaction, and improved health status.

Twenty years later, health care leaders borrowed techniques from the work of Deming28 in rebuilding the manufacturing businesses of post-World War II Japan. Deming, the father of Total Quality Management (TQM), promoted “constancy of purpose” and systematic analysis and measurement of process steps in relation to capacity or outcomes. The TQM model is an organizational approach involving organizational management, teamwork, defined processes, systems thinking, and change to create an environment for improvement. This approach incorporated the view that the entire organization must be committed to quality and improvement to achieve the best results.29

In health care, continuous quality improvement (CQI) is used interchangeably with TQM. CQI has been used as a means to develop clinical practice30 and is based on the principle that there is an opportunity for improvement in every process and on every occasion.31 Many in-hospital quality assurance (QA) programs generally focus on issues identified by regulatory or accreditation organizations, such as checking documentation, reviewing the work of oversight committees, and studying credentialing processes.32 There are several other strategies that have been proposed for improving clinical practice. For example, Horn and colleagues discussed clinical practice improvement (CPI) as a “multidimensional outcomes methodology that has direct application to the clinical management of individual patients”33 (p. 160). CPI, an approach led by clinicians that attempts a comprehensive understanding of the complexity of health care delivery, uses a team, determines a purpose, collects data, assesses findings, and then translates those findings into practice changes. From these models, management and clinician commitment and involvement have been found to be essential for the successful implementation of change.34–36 From other quality improvement strategies, there has been particular emphasis on the need for management to have faith in the project, communicate the purpose, and empower staff.37

In the past 20 years, quality improvement methods have “generally emphasize[d] the importance of identifying a process with less-than-ideal outcomes, measuring the key performance attributes, using careful analysis to devise a new approach, integrating the redesigned approach with the process, and reassessing performance to determine if the change in process is successful”38 (p. 9). Besides TQM, other quality improvement strategies have come forth, including the International Organization for Standardization ISO 9000, Zero Defects, Six Sigma, Baldrige, and Toyota Production System/Lean Production.6, 39, 40

Quality improvement is defined “as systematic, data-guided activities designed to bring about immediate improvement in health care delivery in particular settings”41 (p. 667). A quality improvement strategy is defined as “any intervention aimed at reducing the quality gap for a group of patients representative of those encountered in routine practice”38 (p. 13). Shojania and colleagues38 developed a taxonomy of quality improvement strategies (see Table 1), which implies that the choice of the quality improvement strategy and methodology is dependent upon the nature of the quality improvement project. Many other strategies and tools for quality improvement can be accessed at AHRQ’s quality tools Web site (www.qualitytools.ahrq.gov) and patient safety Web site (www.patientsafety.gov).

Quality improvement projects and strategies differ from research: while research attempts to assess and address problems that will produce generalizable results, quality improvement projects can include small samples, frequent changes in interventions, and adoption of new strategies that appear to be effective.6 In a review of the literature on the differences between quality improvement and research, Reinhardt and Ray42 proposed four criteria that distinguish the two: (1) quality improvement applies research into practice, while research develops new interventions; (2) risk to participants is not present in quality improvement, while research could pose risk to participants; (3) the primary audience for quality improvement is the organization, and the information from analyses may be applicable only to that organization, while research is intended to be generalizable to all similar organizations; and (4) data from quality improvement are organization-specific, while research data are derived from multiple organizations.

The lack of scientific health services literature has inhibited the acceptance of quality improvement methods in health care,43, 44 but new rigorous studies are emerging. It has been asserted that a quality improvement project can be considered more like research when it involves a change in practice, affects patients and assesses their outcomes, employs randomization or blinding, and exposes patients to additional risks or burdens—all in an effort towards generalizability.45–47 Regardless of whether the project is considered research, human subjects need to be protected by ensuring respect for participants, securing informed consent, and ensuring scientific value.41, 46, 48

Quality improvement projects and studies aimed at making positive changes in health care processes to effect favorable outcomes can use the Plan-Do-Study-Act (PDSA) model. This is a method that has been widely used by the Institute for Healthcare Improvement for rapid cycle improvement.31, 49 One of the unique features of this model is the cyclical nature of impacting and assessing change, most effectively accomplished through small and frequent PDSAs rather than big and slow ones,50 before changes are made systemwide.31, 51

The purpose of PDSA quality improvement efforts is to establish a functional or causal relationship between changes in processes (specifically behaviors and capabilities) and outcomes. Langley and colleagues51 proposed three questions before using the PDSA cycles: (1) What is the goal of the project? (2) How will it be known whether the goal was reached? and (3) What will be done to reach the goal? The PDSA cycle starts with determining the nature and scope of the problem, what changes can and should be made, a plan for a specific change, who should be involved, what should be measured to understand the impact of change, and where the strategy will be targeted. Change is then implemented and data and information are collected. Results from the implementation study are assessed and interpreted by reviewing several key measurements that indicate success or failure. Lastly, action is taken on the results by implementing the change or beginning the process again.51
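
As a schematic only, the sketch below pictures the rapid-cycle logic described above as a short loop: each cycle plans one small change, pilots it, studies a key measure against the prior baseline, and acts by adopting or abandoning the change before the next cycle begins. The function, goal, and stubbed pilot effects are hypothetical, not from Langley and colleagues:

```python
# Schematic of rapid-cycle PDSA: each small cycle tests one change and
# either adopts or abandons it based on a key measurement. The `effects`
# list stands in for the real-world result of each Do phase.

def run_pdsa_cycles(baseline, goal, effects):
    performance = baseline
    for cycle, effect in enumerate(effects, start=1):
        trial = performance + effect      # Do: pilot one small change
        if trial > performance:           # Study: did the key measure improve?
            performance = trial           # Act: adopt the change and build on it
            verdict = "adopted"
        else:
            verdict = "abandoned"         # Act: discard and plan the next change
        print(f"cycle {cycle}: {verdict}, performance {performance:.2f}")
        if performance >= goal:
            print("goal reached; consider spreading the change systemwide")
            break
    return performance

# Plan: reach 85% protocol compliance from a 70% baseline (invented numbers).
run_pdsa_cycles(baseline=0.70, goal=0.85, effects=[0.04, -0.01, 0.06, 0.05, 0.03])
```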

Six Sigma, originally designed as a business strategy, involves improving, designing, and monitoring processes to minimize or eliminate waste while optimizing satisfaction and increasing financial stability.52 The performance of a process—or the process capability—is used to measure improvement by comparing the baseline process capability (before improvement) with the process capability after piloting potential solutions for quality improvement.53 There are two primary methods used with Six Sigma. One method inspects the process outcome and counts the defects, calculates a defect rate per million, and uses a statistical table to convert the defect rate per million to a σ (sigma) metric. This method is applicable to preanalytic and postanalytic processes (a.k.a. pretest and post-test studies). The second method uses estimates of process variation to predict process performance by calculating a σ metric from the defined tolerance limits and the variation observed for the process. This method is suitable for analytic processes in which the precision and accuracy can be determined by experimental procedures.
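
The first of these two methods reduces to a short calculation: count defects, express them per million opportunities (DPMO), and convert to a sigma metric. The sketch below assumes the conventional 1.5-sigma shift built into standard Six Sigma lookup tables; the laboratory numbers are invented for illustration:

```python
# Convert an observed defect count to a sigma metric (first Six Sigma
# method described above). The 1.5-sigma shift is the convention used
# by standard Six Sigma tables (e.g., 3.4 DPMO corresponds to 6 sigma).
from scipy.stats import norm

def sigma_metric(defects, units, opportunities_per_unit, shift=1.5):
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    sigma = norm.ppf(1 - dpmo / 1_000_000) + shift
    return dpmo, sigma

# Illustrative numbers: 27 mislabeled specimens out of 4,000 draws,
# one labeling opportunity per draw.
dpmo, sigma = sigma_metric(defects=27, units=4_000, opportunities_per_unit=1)
print(f"DPMO = {dpmo:.0f}, sigma metric = {sigma:.2f}")
```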

One component of Six Sigma uses a five-phased process that is structured, disciplined, and rigorous, known as the define, measure, analyze, improve, and control (DMAIC) approach.53, 54 To begin, the project is identified, historical data are reviewed, and the scope of expectations is defined. Next, continuous total quality performance standards are selected, performance objectives are defined, and sources of variability are defined. As the new project is implemented, data are collected to assess how well changes improved the process. To support this analysis, validated measures are developed to determine the capability of the new process.

Six Sigma and PDSA are interrelated. The DMAIC methodology builds on Shewhart’s plan, do, check, and act cycle.55 The key elements of Six Sigma map to PDSA as follows: the plan phase of PDSA corresponds to Six Sigma’s definition of core processes, key customers, and customer requirements; the do phase corresponds to measuring performance; the study phase corresponds to analysis; and the act phase corresponds to improving and integrating.56

Application of the Toyota Production System—used in the manufacturing process of Toyota cars57—resulted in what has become known as the Lean Production System or Lean methodology. This methodology overlaps with the Six Sigma methodology, but differs in that Lean is driven by the identification of customer needs and aims to improve processes by removing activities that are non-value-added (a.k.a. waste). Steps in the Lean methodology involve maximizing value-added activities in the best possible sequence to enable continuous operations.58 This methodology depends on root-cause analysis to investigate errors and then to improve quality and prevent similar errors.

Physicians, nurses, technicians, and managers are increasing the effectiveness of patient care and decreasing costs in pathology laboratories, pharmacies,59–61 and blood banks61 by applying the same principles used in the Toyota Production System. Two reviews of projects using Toyota Production System methods reported that health care organizations improved patient safety and the quality of health care by systematically defining the problem; using root-cause analysis; then setting goals, removing ambiguity and workarounds, and clarifying responsibilities. When it came to processes, team members in these projects developed action plans that improved, simplified, and redesigned work processes.59, 60 According to Spear, the Toyota Production System method was used to make the “following crystal clear: which patient gets which procedure (output); who does which aspect of the job (responsibility); exactly which signals are used to indicate that the work should begin (connection); and precisely how each step is carried out”60 (p. 84).

Factors involved in the successful application of the Toyota Production System in health care are eliminating unnecessary daily activities associated with “overcomplicated processes, workarounds, and rework”59 (p. 234), involving front-line staff throughout the process, and rigorously tracking problems as they are experimented with throughout the problem-solving process.

Root cause analysis (RCA), used extensively in engineering62 and similar to critical incident technique,63 is a formalized investigation and problem-solving approach focused on identifying and understanding the underlying causes of an event as well as potential events that were intercepted. The Joint Commission requires RCA to be performed in response to all sentinel events and expects, based on the results of the RCA, the organization to develop and implement an action plan consisting of improvements designed to reduce future risk of events and to monitor the effectiveness of those improvements.64

RCA is a technique used to identify trends and assess risk that can be used whenever human error is suspected,65 with the understanding that system factors, rather than individual factors, are likely the root cause of most problems.2, 4 A similar procedure is the critical incident technique, in which, after an event occurs, information is collected on the causes and actions that led to the event.63

An RCA is a reactive assessment that begins after an event, retrospectively outlining the sequence of events leading to that identified event, charting causal factors, and identifying root causes to completely examine the event.66 Because it is a labor-intensive process, ideally a multidisciplinary team trained in RCA triangulates or corroborates major findings and increases the validity of findings.67 Taken one step further, the notion of aggregate RCA (used by the Veterans Affairs (VA) Health System) is purported to use staff time efficiently and involves several simultaneous RCAs that focus on assessing trends, rather than an in-depth case assessment.68

Using a qualitative process, the aim of RCA is to uncover the underlying cause(s) of an error by looking at enabling factors (e.g., lack of education), including latent conditions (e.g., not checking the patient’s ID band) and situational factors (e.g., two patients in the hospital with the same last name) that contributed to or enabled the adverse event (e.g., an adverse drug event). Those involved in the investigation ask a series of key questions, including what happened, why it happened, what were the most proximate factors causing it to happen, why those factors occurred, and what systems and processes underlie those proximate factors. Answers to these questions help identify ineffective safety barriers and causes of problems so similar problems can be prevented in the future. Often, it is important to also consider events that occurred immediately prior to the event in question because other remote factors may have contributed.68

The final step of a traditional RCA is developing recommendations for system and process improvement(s), based on the findings of the investigation.68 The importance of this step is supported by a review of the literature on root-cause analysis, where the authors conclude that there is little evidence that RCA can improve patient safety by itself.69 A nontraditional strategy, used by the VA, is aggregate RCA processes, where several simultaneous RCAs are used to examine multiple cases in a single review for certain categories of events.68, 70

Due to the breadth of types of adverse events and the large number of root causes of errors, consideration should be given to how to differentiate system from process factors, without focusing on individual blame. The notion has been put forth that it is a truly rare event for errors to be associated with irresponsibility, personal neglect, or intention,71 a notion supported by the IOM.4, 72 Yet efforts to categorize individual errors—such as the Taxonomy of Error Root Cause Analysis of Practice Responsibility (TERCAP), which focuses on “lack of attentiveness, lack of agency/fiduciary concern, inappropriate judgment, lack of intervention on the patient’s behalf, lack of prevention, missed or mistaken MD/healthcare provider’s orders, and documentation error”73 (p. 512)—may distract the team from investigating systems and process factors that can be modified through subsequent interventions. Even the majority of individual factors can be addressed through education, training, and installing forcing functions that make errors difficult to commit.

Errors will inevitably occur, and the times when errors occur cannot be predicted. Failure modes and effects analysis (FMEA) is an evaluation technique used to identify and eliminate known and/or potential failures, problems, and errors from a system, design, process, and/or service before they actually occur.74–76 FMEA was developed for use by the U.S. military and has been used by the National Aeronautics and Space Administration (NASA) to predict and evaluate potential failures and unrecognized hazards (e.g., probabilistic occurrences) and to proactively identify steps in a process that could reduce or eliminate future failures.77 The goal of FMEA is to prevent errors by attempting to identify all the ways a process could fail, estimate the probability and consequences of each failure, and then take action to prevent the potential failures from occurring. In health care, FMEA focuses on the system of care and uses a multidisciplinary team to evaluate a process from a quality improvement perspective.
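
Although this chapter does not spell out a scoring scheme for FMEA, a conventional approach ranks failure modes by a risk priority number (RPN): the product of severity, occurrence, and detection ratings. The sketch below is a minimal illustration of that convention only; the process steps, failure modes, and ratings are invented:

```python
# Conventional FMEA scoring: each failure mode gets severity, occurrence,
# and detection ratings (1-10 scales here; higher detection = harder to
# detect), and RPN = severity x occurrence x detection ranks where to act.
failure_modes = [
    # (process step, failure mode, severity, occurrence, detection)
    ("order entry",    "wrong dose transcribed",   9, 4, 3),
    ("dispensing",     "look-alike drug selected", 8, 3, 5),
    ("administration", "wrong patient",           10, 2, 4),
    ("monitoring",     "missed follow-up level",   6, 5, 6),
]

scored = [(s * o * d, step, mode) for step, mode, s, o, d in failure_modes]
for rpn, step, mode in sorted(scored, reverse=True):
    print(f"RPN {rpn:3d}: {step} - {mode}")
# Highest-RPN modes are addressed first, then rescored after the redesign.
```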

This method can be used to evaluate alternative processes or procedures as well as to monitor change over time. To monitor change over time, well-defined measures are needed that can provide objective information of the effectiveness of a process. In 2001, the Joint Commission mandated that accredited health care providers conduct proactive risk management activities that identify and predict system weaknesses and adopt changes to minimize patient harm on one or two high-priority topics a year.78

Developed by the VA’s National Center for Patient Safety, the health failure modes and effects analysis (HFMEA) tool is used for risk assessment. There are five steps in HFMEA: (1) define the topic; (2) assemble the team; (3) develop a process map for the topic, and consecutively number each step and substep of that process; (4) conduct a hazard analysis (e.g., identify cause of failure modes, score each failure mode using the hazard scoring matrix, and work through the decision tree analysis);79 and (5) develop actions and desired outcomes. In conducting a hazard analysis, it is important to list all possible and potential failure modes for each of the processes, to determine whether the failure modes warrant further action, and to list all causes for each failure mode when the decision is to proceed further. After the hazard analysis, it is important to consider the actions needed to be taken and outcome measures to assess, including describing what will be eliminated or controlled and who will have responsibility for each new action.79
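
As a minimal illustration of the hazard analysis in step 4, the sketch below scores each failure mode as severity times probability, using the 1-to-4 scales and the action threshold of 8 commonly described for the VA NCPS hazard scoring matrix; the numbered process steps, failure modes, and ratings are invented:

```python
# HFMEA hazard analysis (step 4): score each failure mode on the hazard
# scoring matrix as severity (1 = minor ... 4 = catastrophic) times
# probability (1 = remote ... 4 = frequent). In the commonly described
# VA NCPS matrix, scores of 8 or more proceed to the decision tree and
# action planning.
ACTION_THRESHOLD = 8

failure_modes = [
    # (numbered process substep, failure mode, severity, probability)
    ("2a", "patient ID band not scanned",      3, 4),
    ("2b", "allergy history not on the chart", 4, 2),
    ("3a", "pump programmed with wrong rate",  4, 1),
]

for step, mode, severity, probability in failure_modes:
    score = severity * probability
    decision = "proceed to decision tree" if score >= ACTION_THRESHOLD else "stop"
    print(f"step {step}: {mode}: hazard score {score} -> {decision}")
```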

Fifty studies and quality improvement projects were included in this analysis. The findings were categorized by type of quality method employed, including FMEA, RCA, Six Sigma, Lean, and PDSA. Several common themes emerged: (1) what was needed to implement quality improvement strategies, (2) what was learned from evaluating the impact of change interventions, and (3) what is known about using quality improvement tools in health care.

Substantial and strong leadership support,80–83 involvement,81, 84 consistent commitment to continuous quality improvement,85, 86 and visibility,87 both in writing and physically,86 were important in making significant changes. Substantial commitment from hospital boards was also found to be necessary.86, 88 The inevitability of resource demands associated with changing process required senior leadership to (1) ensure adequate financial resources87–89 by identifying sources of funds for training and purchasing and testing innovative technologies90 and equipment;91 (2) facilitate and enable key players to have the needed time to be actively involved in the change processes,85, 88, 89 providing administrative support;90 (3) support a time-consuming project by granting enough time for it to work;86, 92 and (4) emphasize safety as an organizational priority and reinforce expectations, especially when the process was delayed or results were periodically not realized.87 It was also asserted that senior leaders needed to understand the impact of high-level decisions on work processes and staff time,88 especially when efforts were underway to change practice, and that quality improvement needed to be incorporated into systemwide leadership development.88 Leadership was needed to make patient safety a key aspect of all meetings and strategies,85, 86 to create a formal process for identifying annual patient safety goals for the organization, and to hold themselves accountable for patient safety outcomes.85

Even with strong and committed leadership, some people within the organization may be hesitant to participate in quality improvement efforts because previous attempts to create change were hindered by various system factors,93 a lack of organization-wide commitment,94 poor organizational relationships, and ineffective communication.89 However, the impact of these barriers was found to be lessened if the organization embraced the need for change,95 changed the culture to enable change,90 and actively pursued institutionalizing a culture of safety and quality improvement. Yet adopting a nonpunitive culture of change took time,61, 90 even to the extent that the legal department in one hospital was engaged in the process to turn the focus to systems, not individual-specific issues.96 Also, those staff members involved in the process felt more at ease with improving processes, particularly when cost savings were realized and when no-layoff policies were put in place to protect job security even as efficiencies were realized.84

The improvement process needed to engage97 and involve all stakeholders and gain their understanding that the investment of resources in quality improvement could be recouped with efficiency gains and fewer adverse events.86 Stakeholders were used to (1) prioritize which safe practices to target by developing a consensus process among stakeholders86, 98 around issues that were clinically important, i.e., hazards encountered in everyday practice that would make a substantial impact on patient safety; (2) develop solutions to the problems that required addressing fundamental issues of interdisciplinary communication and teamwork, which were recognized as crucial aspects of a culture of safety; and (3) build upon the success of other hospitals.86 In an initiative involving a number of rapid-cycle collaboratives, successful collaboratives were found to have used stakeholders to determine the choice of subject, define objectives, define roles and expectations, motivate teams, and use results from data analyses.86 Additionally, it was important to take into account the different perspectives of stakeholders.97 Because variation in opinion among stakeholders and team members was expected99 and buy-in from all stakeholders could be difficult to achieve, efforts were made to involve stakeholders early in the process, solicit feedback,100 and gain support for critical changes in the process.101

Communication and sharing information with stakeholders and staff were critical for specifying the purpose and strategy of the quality initiative;101 developing open channels of communication across all disciplines and at all levels of leadership/staff, permitting the voicing of concerns and observations throughout the process of creating change;88 ensuring that patients and families were appropriately included in the dialogue; ensuring that everyone involved felt that he or she was an integral part of the health care team and was responsible for patient safety; sharing lessons learned from root-cause analysis; and capturing attention and soliciting buy-in by sharing patient safety stories with staff and celebrating successes, no matter how small.85 Yet in trying to keep everyone informed of the process and the data behind decisions, some staff had difficulty accepting system changes made in response to the data.89

The successful work of these strategies was dependent upon having motivated80 and empowered teams. There were many advantages to basing the work of the quality improvement strategies on the teamwork of multidisciplinary teams that would review data and lead change.91 These teams needed to be composed of the right staff,91, 92 include peers,102 engage all of the right stakeholders (ranging from senior managers to staff), and be supported by senior-level management/leadership.85, 86 Specific stakeholders (e.g., nurses and physicians) had to be involved81 and supported to actually make the change, and to be the champions103 and problem-solvers within departments59 for the interventions to succeed. Because implementing the quality initiatives required substantial changes in the clinician’s daily work,86 consideration of the attitude and willingness of front-line staff for making the specific improvements59, 88, 104 was needed.

Other key factors to improvement success were implementing protocols that could be adapted to the patient’s needs93 and to each unit, based on experience, training, and culture.88 It was also important to define and test different approaches; different approaches can converge and arrive at the same point.81 Mechanisms that facilitated staff buy-in included putting the types and causes of errors at the forefront of providers’ minds, making errors visible,102 involving staff in the process of assessing work and looking for waste,59 providing insight as to whether the improvement project would be feasible and its impact measurable,105 and presenting evidence-based changes.100 Physicians were singled out as the one group of clinicians that needed to lead106 or be actively involved in changes,86 especially when physician behaviors could create inefficiencies.84 In one project, physicians were recruited as champions to help spread the word to other physicians about the critical role of patient safety, to make patient safety a key aspect of all leadership and medical management meetings and strategies.85

Team leaders and the composition of the team were also important. Team leaders who emphasized efforts offline to help build and improve relationships were found to be necessary for team success.83, 93 These teams needed a dedicated team leader who would have a significant amount of time to put into the project.84 While the leader was not identified in the majority of reports reviewed for this paper, the team on one project was co-chaired by a physician and an administrator.83 Not only did the type and ability of team leaders affect outcomes; the visibility of the initiative throughout the organization also depended upon having visible champions.100 Multidisciplinary teams needed to understand the numerous steps involved in quality improvement and that there were many opportunities for error, which essentially enabled teams to prioritize the critical items to improve within a complex process and took out some of the subjectivity from the analysis. The multidisciplinary structure of teams allowed members to identify each step from their own professional practice perspective, to anticipate and overcome potential barriers, to generate diverse ideas, and to engage in good discussion and deliberation, which together ultimately promoted team building.100, 107 In two of the studies, FMEA/HFMEA was found to minimize group biases by benefiting from the diversity within the multidisciplinary composition of the team and enabling the team to focus on a structured outline of the goals that needed to be accomplished.107, 108

Teams needed to be prepared and enabled to meet the demands of the quality initiatives with ongoing education, weekly debriefings, review of problems solved and principles applied,84 and ongoing monitoring and feedback opportunities.92, 95 Education and training of staff80, 95, 101, 104 and leadership80 about the current problem, quality improvement tools, the planned change in practice intervention, and updates as the project progressed were key strategies.92 Training was an ongoing process91 that needed to focus on skill deficits82 and needed to be revised as lessons were learned and data were analyzed during the implementation of the project.109 The assumption could not be made that senior staff or leadership would not need training.105 Furthermore, if the team had no experience with the quality tools or with successfully creating change, an additional resource could have been a consultant or someone to facilitate the advanced knowledge involved in quality improvement techniques.106 Another consideration was using a model that intervened at the hospital-community interface, coupled with an education program.97

The influence of teamwork processes enabled those within the team to improve relationships across departments.89 Particular attention needed to be given to effective team building,110 actively following the impact of using the rapid-cycle (PDSA) model, meeting frequently, and monitoring progress using outcome data analysis at least on a monthly basis.86 Effective teamwork and communication, information transfer, coordination among multiple hospital departments and caregivers, and changes to hospital organization culture were considered essential elements of team effectiveness.86 Yet teamwork was dampened when team members had difficulty engaging fully because of competing workloads (e.g., working double shifts).97 Better understanding of each other’s role is an important project outcome and provides a basis for continuing the development of other practices to improve outcomes.97 The work of teams was motivated through continual sharing of progress and success and celebration of achievements.87

Teamwork can have many advantages, but only a few were discussed in the reports reviewed. Teams were seen as being able to increase the scope of knowledge, improve communication across disciplines, and facilitate learning about the problem.111 Teams were also found to be proactive,91 integrating tools that improve both the technical processes and organizational relationships,83 and to work together to understand the current situation, define the problem, pathways, tasks, and connections, as well as to develop a multidisciplinary action plan.59 But teamwork was not necessarily an easy process. Group work was seen as difficult for some and time consuming,111 and problems arose when everyone wanted their way,97 which delayed convergence toward a consensus on actions. Team members needed to learn how to work with a group and deal with group dynamics, confronting peers, conflict resolution, and addressing behaviors that are detrimental.111

As suggested by Berwick,112 the leaders of the quality improvement initiatives in this review found that successful initiatives needed to simplify;96, 104 standardize;104 stratify to determine effects; improve auditory communication patterns; support communication against the authority gradient;96 use defaults properly; automate cautiously;96 use affordance and natural mapping (e.g., design processes and equipment so that the easiest thing to do is the right thing to do); respect limits of vigilance and attention;96 and encourage reporting of near hits, errors, and hazardous conditions.96 Through the revision and standardization of policies and procedures, many of these initiatives were able to effectively realize the benefit of making the new process easier than the old and decrease the effect of human error associated with limited vigilance and attention.78, 80–82, 90–92, 94, 96, 102, 103, 113, 114

Simplification and standardization were found to be effective as a forcing function by decreasing reliance on individualized decisionmaking. Several initiatives standardized medication ordering and administration protocols,78, 87, 101, 103, 106–108, 109, 114–116 realizing improvements in patient outcomes, nurse efficiency, and effectiveness.103, 106, 108, 109, 114–116 One initiative used a standardized form for blood product ordering.94 Four initiatives improved pain assessment and management by using standardized metrics and assessment tools.80, 93, 100, 117 In all of these initiatives, simplification and standardization were effective strategies.

Related to simplification and standardization is the potential benefit of using information technology to implement checks, defaults, and automation to improve quality and reduce errors, in large part by embedding forcing functions to remove the possibility of errors.96, 106 The effects of human error could be mitigated by using necessary redundancy, such as double-checking for certain types of errors; this was seen as engaging the knowledge and abilities of two skilled practitioners61, 101 and was used successfully to reduce errors associated with dosing.78 Information technology was successfully used to (1) decrease the opportunity for human error through automation;61 (2) standardize medication concentrations78 and dosing using computer-enabled calculations,115, 116 standardized protocols,101 and order clarity;116 (3) assist caregivers in providing quality care using alerts and reminders; (4) improve medication safety (e.g., implementing bar coding and computerized provider order entry); and (5) track performance through database integration and indicator monitoring. Often workflow and procedures needed to be revised to keep pace with technology.78 Using technology implied that organizations were committed to investing in technology to enable improvement,85 but for two initiatives, the lack of adequate resources for data collection impacted analysis and evaluation of the initiative.93, 97
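
As a toy illustration of a forcing function embedded in order entry (the drug, dose limits, and function below are invented, not drawn from any cited initiative), a hard stop that refuses an out-of-range dose removes the opportunity for a dosing error instead of relying on clinician vigilance:

```python
# Toy forcing function for computerized order entry: a hard stop that
# rejects out-of-range doses outright rather than merely warning.
DOSE_LIMITS_MG = {"examplamycin": (50, 500)}  # hypothetical drug: (min, max)

def enter_order(drug, dose_mg):
    low, high = DOSE_LIMITS_MG[drug]
    if not low <= dose_mg <= high:
        # Hard stop: the order cannot proceed without separate review.
        raise ValueError(
            f"{drug} {dose_mg} mg outside allowed range {low}-{high} mg; "
            "order blocked pending pharmacist review"
        )
    return f"order accepted: {drug} {dose_mg} mg"

print(enter_order("examplamycin", 250))   # in range -> accepted
try:
    enter_order("examplamycin", 5000)     # out of range -> blocked
except ValueError as err:
    print(err)
```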

Data and information were needed to understand the root causes of errors and near errors,99 to understand the magnitude of adverse events,106 to track and monitor performance,84, 118 and to assess the impact of the initiatives.61 Reporting of near misses, errors, and hazardous conditions needs to be encouraged.96 In part, this is because error reporting is generally low, is associated with organizational culture,106 and can be biased, which will taint results.102 Organizations that do not prioritize reporting or do not strongly emphasize a culture of safety may tend not to report errors that harm patients or near misses (see Chapter 35. “Evidence Reporting and Disclosure”). Using and analyzing data was viewed as critical, yet some team members and staff may have benefited from education on how to effectively analyze and display findings.106 Giving staff feedback through a transparent process39 of reporting findings82 was viewed as a useful trigger that brought patient safety to the forefront of the hospital.107 It follows, then, that not having data, whether because they were not reported or not collected, made statistical analysis of the initiative’s impact115 or assessment of its cost-benefit ratio impossible.108 As such, multi-organizational collaborations should have a common database.98

The meaning of data can be better understood by using measures and benchmarks. Repeated measurements were found to be useful for monitoring progress,118 but only when there was a clear metric for measuring the degree of success.83 Measures could also be used as a strategy to involve more clinicians and deepen their interest, especially physicians. Using objective, broader, and better measures was viewed as being important for marking progress, and provided a basis for “a call to action” and celebration.106 When measures of care processes were used, it was asserted that there was a need to demonstrate the relationship between specific changes to care processes and outcomes.61

When multiple measures were used, along with better documentation of care, it was easier to assess the impact of the initiative on patient outcomes.93 Investigators from one initiative put forth the notion that hospital administrators should encourage more evaluations of initiatives and that the evaluations should focus on comprehensive models that assess patient outcomes, patient satisfaction, and cost effectiveness.114 The assessment of outcomes can be enhanced by setting realistic goals, not unrealistic goals such as 100 percent change,119 and by comparing organizational results to recognized State, regional, and national benchmarks.61, 88

The cost of the initiative was viewed as an important factor in the potential for improvement, even when the adverse effects of current processes were considered as necessitating rapid change.106 Because of this, it is important to implement changes that are readily feasible106 and can be implemented with minimal disruption of practice activities.99 It is also important to consider the potential of replicating the initiative in other units or at other sites.99 One strategy to improve the chances of replication is to standardize processes, which will most likely incur some cost.106 In some respects, the faster small problems were resolved, the faster improvements could be replicated throughout the entire system.84, 106 Recommendations that did not incur costs or had low costs and could be demonstrated to be effective were implemented expeditiously.93, 107 A couple of investigators stated that their interventions decreased costs and patients’ length of stay,103 but did not present any data to verify those statements. It was also purported that the costs associated with change will be recouped either in return on investment or in reduced patient risk (and thus reduced liability costs).61

Ensuring that those implementing the initiative receive education is critical. There were several examples of this. Two initiatives that targeted pain management found that educating staff on pain management guidelines and protocols for improving chronic pain assessment and management improved staff understanding, assessment and documentation, patient and family satisfaction, and pain management.80, 93 Another initiative educated all staff nurses on intravenous (IV) site care and assessment, as well as assessment of central lines, and realized improved patient satisfaction and reduced complications and costs.109

Despite the benefits afforded by the initiatives, there were many challenges that were identified in implementing the various initiatives:

  • Lack of time and resources made it difficult to implement the initiative well.82

  • Some physicians would not accept the new protocol and thwarted implementation until they had confidence in the tool.103

  • Clear expectations were lacking.86

  • Hospital leadership was not adequately engaged.86

  • There was insufficient emphasis on importance and use of measures.86

  • The number and type of collaborative staffing was insufficient.86

  • The time required for nurses and other staff to implement the changes was underestimated.120

  • The extent to which differences in patient severity accounted for results could not be evaluated because severity of illness was not measured.89

  • Improvements associated with each individual PDSA cycle could not be evaluated.89

  • The full impact on the costs of care, including fixed costs for overhead, could not be evaluated.89

  • The influence of factors such as fatigue, distraction, and time pressures was not considered.82

  • Improvements may have been due to the Hawthorne effect rather than to the initiative itself.118

  • Many factors were interrelated and correlated.96

  • There was a lack of generalizability because of small sample size.93, 119

  • Addressing some of the problems created others (e.g., implementing computerized physician order entry (CPOE)).110

  • Targets (e.g., 100 percent of admissions) may have been too ambitious, making the demanded service improvements difficult to achieve.119

Despite the aforementioned challenges, many investigators found that it was important to persevere and stay focused because introducing new processes can be difficult,84, 100 but the reward of quality improvement is worth the effort.84 Implementing quality improvement initiatives was considered time consuming, tedious, and difficult for people who are very action oriented; it required an extensive investment of resources (i.e., time, money, and energy);94 and it involved trial and error to improve the process.91 Given these and other challenges, it was also important to celebrate the victories.84

Consideration was also given to the desired objective of sustaining the changes after the implementation phase of the initiative ended.105 Investigators asserted that improving quality through initiatives needed to be considered integral to the larger, organizationwide, ongoing process of improvement. Influential factors attributed to the success of the initiatives were effecting practice changes that could be easily used at the bedside;82 using simple communication strategies;88 maximizing project visibility, which could sustain the momentum for change;100 establishing a culture of safety; and strengthening the organizational and technological infrastructure.121 However, there were opposing viewpoints about the importance of spreading the steps involved in creating specific changes (possibly by forcing changes into the redesign of processes), rather than relying only on adapting best practices.106, 121 Another factor was the importance of generating enthusiasm about embracing change through a combination of collaboration (both internal and external)103 and healthy competition. Collaboratives could also be a vehicle for encouraging the use of and learning from evidence-based practice and rapid-cycle improvement, as well as for identifying and gaining consensus on potentially better practices.86, 98

Quality tools used to define and assess problems with health care were seen as being helpful in prioritizing quality and safety problems99 and focusing on systems,98 not individuals. The various tools were used to address errors and growing costs88 and to change provider practices.117 Several of the initiatives used more than one of the quality improvement tools, such as beginning with root-cause analysis then using either Six Sigma, Toyota Production System/Lean, or Plan-Do-Study-Act to implement change in processes. Almost every initiative included in this analysis performed some type of pretesting/pilot testing.92, 99 Investigators and leaders of several initiatives reported advantages of using specific types of quality tools. These are discussed as follows:

Root-cause analysis was reported to be useful to assess reported errors/incidents and differentiate between active and latent errors, to identify need for changes to policies and procedures, and to serve as a basis to suggest system changes, including improving communication of risk.82, 96, 102, 105

Six Sigma/Toyota Production System was reported to have been successfully used to decrease defects/variations59, 61, 81 and operating costs81 and to improve outcomes in a variety of health care settings and for a variety of processes.61, 88 Six Sigma was found to be a detailed process that clearly differentiated between the causes of variation and outcome measures of process.61 One of the advantages of using Six Sigma was that it made work-arounds and rework difficult, because the root causes of the preimplementation processes were targeted.59, 88 Additionally, investigators reported that the more teams worked with this strategy, the better they became at implementing it and the more effective the results.84 Yet it was noted that using this strategy effectively required a substantial commitment of leadership time and resources, and its use was associated with improved patient safety, lowered costs, and increased job satisfaction.84 Six Sigma was also an important strategy for problem-solving and continuous improvement; communicating clearly about the problem; guiding the implementation process; and producing results in a clear, concise, and objective way.59

Plan-Do-Study-Act (PDSA) was used by the majority of initiatives included in this analysis to implement initiatives gradually, while improving them as needed. The rapid-cycle aspect of PDSA began with piloting a single new process, followed by examining results and responding to what was learned by problem-solving and making adjustments, after which the next PDSA cycle would be initiated. The majority of quality improvement efforts using PDSA found greater success using a series of small and rapid cycles to achieve the goals for the intervention, because implementing the initiative gradually allowed the team to make changes early in the process80 and not get distracted or sidetracked by every detail and too many unknowns.87, 119, 122 The ability of the team to successfully use the PDSA process was improved by providing instruction and training on the use of PDSA cycles, using feedback on the results of the baseline measurements,118 meeting regularly,120 and increasing the team’s effectiveness by collaborating with others, including patients and families,80 to achieve a common goal.87 Conversely, some teams experienced difficulty in using rapid-cycle change, collecting data, and constructing run charts,86 and one team reported that applying simple rules in PDSA cycles may have been more successful in a complex system.93

Failure modes and effects analysis (FMEA) was used to avoid adverse events and to improve or maintain the quality of care.123 FMEA was used prospectively to identify potential areas of failure94 and to assess how a process would perform at the desired pace of change,115 and retrospectively to characterize the safety of a process by identifying potential areas of failure and learning about the process from the staff's point of view.94 Using a flow chart of the process before beginning the analysis helped the team focus and work from the same document.94 Information learned from FMEA was used to provide data for prioritizing improvement strategies, to serve as a benchmark for improvement efforts,116 to educate and provide a rationale for diffusing these practice changes to other settings,115 and to increase the team's ability to facilitate change across all services and departments within the hospital.124 Using FMEA facilitated systematic error management, which was important to good clinical care in complex processes and complex settings and was dependent upon a multidisciplinary approach, integrated incident and error reporting, decision support, standardization of terminology, and education of caregivers.116
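
As one concrete way FMEA output can feed prioritization, traditional FMEA assigns each failure mode a risk priority number (RPN): the product of severity, occurrence, and detection ratings, each on a 1-10 scale. The sketch below ranks a few failure modes by RPN; the failure modes and ratings are invented for illustration only.

```python
# Hypothetical failure modes for an IV-medication administration process.
# Each is rated 1-10 for severity, occurrence, and detection
# (higher detection score = harder to detect), per conventional FMEA scoring.
failure_modes = [
    ("Wrong infusion rate programmed", 9, 4, 6),
    ("Look-alike drug selected",        8, 3, 5),
    ("Pump alarm silenced and ignored", 7, 2, 8),
]

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk priority number: the standard FMEA prioritization score."""
    return severity * occurrence * detection

# Rank failure modes so the team addresses the highest risks first.
for name, s, o, d in sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True):
    print(f"RPN {rpn(s, o, d):4d}  {name}")
```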

Health care failure modes and effects analysis (HFMEA) was used to provide more detailed analysis of smaller processes, resulting in more specific recommendations, as well as of larger processes. HFMEA was viewed as a valid tool for proactive analysis in hospitals, facilitating a very thorough analysis of vulnerabilities (i.e., failure modes) before adverse events occurred.108 The tool was considered valuable for identifying the multifactorial nature of most errors108 and the potential risk for errors,111 but was seen as time-consuming.107 Initiatives that used HFMEA could minimize group biases through the multidisciplinary composition of the team78, 108, 115 and could facilitate teamwork by providing a step-by-step process,107 but these initiatives required a paradigm shift for many.111
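
For readers comparing the two tools: HFMEA, as published by the VA National Center for Patient Safety, replaces the RPN with a hazard score (severity rated 1-4 multiplied by probability rated 1-4) and a decision tree in which high-scoring failure modes and single-point weaknesses proceed to corrective action. The sketch below illustrates only the scoring step, under the common rule of thumb that scores of 8 or more warrant action; the failure modes and ratings are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int      # 1 (minor) .. 4 (catastrophic)
    probability: int   # 1 (remote) .. 4 (frequent)
    single_point_weakness: bool = False

    @property
    def hazard_score(self) -> int:
        """HFMEA hazard score: severity x probability (range 1-16)."""
        return self.severity * self.probability

    def needs_action(self) -> bool:
        # Rule of thumb from the HFMEA decision tree: act on scores >= 8,
        # or on any single-point weakness regardless of score.
        return self.hazard_score >= 8 or self.single_point_weakness

# Hypothetical failure modes from a blood-transfusion process analysis.
modes = [
    FailureMode("Patient ID band missing", severity=4, probability=2),
    FailureMode("Verbal-only verification", severity=3, probability=2,
                single_point_weakness=True),
]
for m in modes:
    print(m.name, m.hazard_score, "ACTION" if m.needs_action() else "monitor")
```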

From the improvement strategies and projects assessed in this review, several themes emerged from successful initiatives that nurses can use to guide quality improvement efforts. The strength of the following practice implications is tied to the methodological rigor and generalizability of the strategies and projects behind them:

  1. The importance of having strong leadership commitment and support cannot be overstated. Leaders need to empower staff, be actively involved, and continuously drive quality improvement. Without the commitment and support of senior-level leadership, even the best-intended projects are at great risk of failure. Champions of the quality initiative and of quality improvement need to be present throughout the organization, but especially in leadership positions and on the team.

  2. A culture of safety that rewards improvement and is driven to improve quality is important. Such a culture is needed to support a quality infrastructure with the resources and human capital required for successfully improving quality.

  3. Quality improvement teams need to have the right stakeholders involved.

  4. Because of the complexity of health care, multidisciplinary teams and strategies are essential. Multidisciplinary teams from participating centers/units need to work closely together, taking advantage of communication strategies such as face-to-face meetings, conference calls, and dedicated e-mail listservs, and using the guidance of trained facilitators and expert faculty throughout the process of implementing change initiatives when possible.

  5. Quality improvement teams and stakeholders need to understand the problem and its root causes. There must be consensus on the definition of the problem; to this end, a clearly defined and universally agreed-upon metric is essential. This agreement is as crucial to the success of any improvement effort as the validity of the data itself.

  6. Use a proven, methodologically sound approach without being distracted by the jargon used in quality improvement. Using clear models, terms, and processes is critical, especially because many of the quality tools are interrelated; using only one tool will not produce successful results.

  7. Standardizing care processes, and ensuring that everyone uses those standards, should make processes more efficient and effective and improve organizational and patient outcomes.

  8. Evidence-based practice can facilitate ongoing quality improvement efforts.

  9. Implementation plans need to be flexible enough to adapt to needed changes as they arise.

  10. Efforts to change practice and improve the quality of care can have multiple purposes, including redesigning care processes to maximize efficiency and effectiveness, improving customer satisfaction, improving patient outcomes, and improving organizational climate.

  11. Appropriate use of technology can improve team functioning, foster collaboration, reduce human error, and improve patient safety.

  12. Efforts need to have sufficient resources, including protected staff time.

  13. Continually collect and analyze data and communicate results on critical indicators across the organization. The ultimate goal of assessing and monitoring quality is to use findings to assess performance and define other areas needing improvement (see the control-chart sketch after this list).

  14. Change takes time, so it is important to stay focused and persevere.
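
As a concrete companion to implication 13, one common way to analyze a continually collected indicator is an individuals (XmR) control chart, which estimates process variation from the average moving range and flags points outside three-sigma limits for investigation. The sketch below shows the limit calculation; the indicator and monthly values are hypothetical.

```python
from statistics import mean

# Hypothetical monthly values of a critical indicator
# (e.g., central-line infections per 1,000 line-days).
values = [2.1, 1.8, 2.4, 2.0, 1.9, 2.6, 2.2, 1.7, 2.3, 2.0]

# Individuals (XmR) chart: estimate sigma from the average moving range.
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
center = mean(values)
sigma_hat = mean(moving_ranges) / 1.128  # d2 constant for subgroups of size 2

ucl = center + 3 * sigma_hat
lcl = max(center - 3 * sigma_hat, 0.0)  # a rate cannot fall below zero

print(f"center={center:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
for month, v in enumerate(values, 1):
    if v > ucl or v < lcl:
        print(f"Month {month}: {v} is outside the control limits; investigate")
```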

Given the complexity of health care, assessing quality improvement is a dynamic and challenging area. The body of knowledge in this area is growing slowly, which may reflect the continuing dilemma over whether a given quality improvement initiative is just that, or whether it meets the definition of research and employs methodological rigor, even when it meets the requirements for publication. Various quality improvement methods have been used since Donabedian's seminal publication in 1966,27 but only recently has health care quality improvement used the Six Sigma methodology and published findings; when it has, Six Sigma has been applied only to single, somewhat isolated components of a larger system, making organizational learning and generalizability difficult. Because of the longstanding importance of quality improvement, particularly as driven by external sources (e.g., CMS and the Joint Commission) in the past few years, many quality improvement efforts within organizations have taken place or are currently in process; these may not have been published, and therefore were not captured in this review, and may not have warranted publication in the peer-reviewed literature. With this in mind, researchers, leaders, and clinicians will need to define what should be considered generalizable and publishable in the peer-reviewed literature to move knowledge of quality improvement methods and interventions forward.

While the impact of many of the quality improvement projects included in this analysis was described in terms of clinical outcomes, functional outcomes, patient satisfaction, staff satisfaction, and readiness to change, cost and utilization outcomes and their measurement are also important in quality improvement efforts, especially when variation occurs. Many questions remain unanswered. Some key areas are offered for consideration:

  • How can quality improvement efforts recognize the needs of patients, insurers, regulators, and staff and still be successful?

  • What is the best method to identify priorities for improvement and meet the competing needs of stakeholders?

  • What is the threshold of variation that must be attained to produce consistently desired results?

  • How can a bottom-up approach to changing clinical practice be successful if senior leadership is not supportive or the organizational culture does not support change?

In planning quality improvement initiatives or research, researchers should use a conceptual model to guide their work, which the aforementioned quality tools can facilitate. To generalize empirical findings from quality improvement initiatives, more consideration should be given to increasing sample size by collaborating with other organizations and providers. We need a better understanding of which tools work best, either alone or in combination with other tools. It is likely that mixed methods, including nonresearch methods, will offer a better understanding of the complexity of quality improvement science. We also know very little about how tailoring implementation interventions contributes to process and patient outcomes, or which steps are most effective across intervention strategies. Lastly, we do not know which strategies or combinations of strategies work for whom and in what context, why they work in some settings or cases and not in others, or the mechanisms by which they work.

Whatever the acronym of the method (e.g., TQM, CQI) or tool used (e.g., FMEA or Six Sigma), the essential point is that quality improvement is a dynamic process that often employs more than one tool. Quality improvement requires five essential elements for success: fostering and sustaining a culture of change and safety, developing and clarifying an understanding of the problem, involving key stakeholders, testing change strategies, and continuously monitoring performance and reporting findings to sustain the change.

To identify quality improvement efforts for potential inclusion in this systematic review, PubMed and CINAHL were searched from 1997 to the present using the following key words and terms: "Failure Modes and Effects Analysis/FMEA," "Root Cause Analysis/RCA," "Six Sigma," "Toyota Production System/Lean," and "Plan Do Study Act/PDSA." This search retrieved 438 articles. Inclusion criteria were reported processes involving nursing; projects/research using methods such as FMEA, RCA, Six Sigma, Lean, or PDSA; qualitative or quantitative analyses; and reported patient outcomes. Projects and research were excluded if they did not involve nursing on the improvement team, did not provide sufficient information to describe the process used and the outcomes realized, did not directly involve nursing in the patient/study outcomes, or were set in a developing country. Findings from the projects and research included in the final analysis were grouped into common themes related to applied quality improvement.