How To Think About Evidence When Deciding Whether To Adopt an Innovation


By Brian S. Mittman, PhD, U.S. Department of Veterans Affairs and Kaiser Permanente Southern California, Innovations Exchange Editorial Board member; John Øvretveit, PhD, The Karolinska Institute, Innovations Exchange Expert Panel member; Paul Plsek, MS, Paul E. Plsek & Associates, Innovations Exchange Editorial Board member; Susanne Salem-Schatz, ScD, HealthCare Quality Initiatives

Endorsed by the other members of the Editorial Board of the AHRQ Health Care Innovations Exchange: Debbie Chang, MPH, Nemours; Tamra Minnier, RN, MS, FACHE, University of Pittsburgh Medical Center; Veronica Nieva, PhD (Editor-in-Chief), Westat; Herbert Smitherman, Jr., MD, MPH, FACP, Wayne State University School of Medicine.


Health care leaders are increasingly interested in locating and implementing care delivery and policy innovations that will yield improvements in performance. This trend has been accelerated by Federal health care reform and related economic pressures, and by consumer and payer expectations for improved quality.

One challenge in deciding whether to implement an innovation is the need to assess its likely value and benefits in a local health system or delivery setting. The evidence-based medicine movement has highlighted the need to use rigorous evidence of effectiveness when selecting and implementing clinical treatments such as drugs and devices. Evidence-based medicine involves an assessment of the strength of evidence of previous research to determine its value for clinical decisionmaking. This generally means assessing how certain we can be that the results of a study or series of studies are valid and accurate, and that the treatment (rather than other factors) produced the observed effects. The strength of evidence is typically based on the number of studies and the quality of the research design and methods—the evidence hierarchy1—used in each study.

However, this approach for grading the strength of evidence of effectiveness has been criticized as unhelpful when applied to research evaluating complex social interventions, including health care delivery and policy innovations.2 Such research involves multiple, interacting, adaptable intervention components that target a complex array of behaviors of individuals and organizations.3 Although decisionmakers need guidance in assessing the strength of evidence of research evaluating such interventions, the usual methods for making this assessment, involving the standard hierarchy of research designs commonly used in evidence-based medicine, have limitations and may even be misleading.

In this perspective article, we examine issues regarding the interpretation and use of effectiveness evidence about care delivery and policy innovations, and outline an alternative approach to thinking about the information needed when making decisions about the adoption of such innovations.

The Issues


Regulatory and clinical decisions to approve or use an innovative drug or device generally rely on the findings and strength of evidence of previous research evaluating the innovation. Rigorous evidence (generally understood to mean a number of randomized controlled trials of the same intervention, with high levels of internal validity) is viewed as highly predictive of the future effectiveness of an innovation, because clinical interventions are generally fixed and stable, and are assumed to have stable and robust effects across settings and time.

Although this assumption is often reasonable for clinical treatments, it is less valid for complex social interventions such as innovations in health care delivery and policy (see Table 1). Research has found that the effects of these innovations are much more highly variable, and are dependent on their context and how they are implemented and adapted. For example, a “gold standard” randomized controlled trial of a new information technology–supported patient self-management innovation may provide high-quality evidence of its effectiveness in the settings studied, but does not offer useful information regarding its likely effectiveness in another setting with different reimbursement schemes, information technology resources and expertise, or other staff and resources necessary to implement and support the program. These variations in contextual factors and implementation processes make generalization of research findings more difficult, and limit the external validity of any individual study.

Furthermore, although the effects of a clinical intervention typically involve a limited number of variables that are easy to measure, such as patient characteristics and therapeutic dosage, the effectiveness of care delivery and policy interventions is likely to be more influenced by many variables that can be difficult to quantify. Those variables include organizational factors (e.g., formal and informal policies, procedures, culture, leadership, budget sufficiency, and staff capacity and expertise) and local and regional external factors (market conditions, professional norms, and external stakeholder engagement and expectations).4 The effects of these factors can often dominate the main effect of the innovation, thereby reducing the value and relevance of findings from previous research designed to estimate that main effect. For example, if an organization that is considering the adoption of an innovation has weaker leadership, culture, budgets, or staff capacity than organizations involved in previous research evaluating the innovation, that organization is likely to see significantly weaker benefits from the innovation than did past adopters. Accordingly, an adoption decision in this organization that is based primarily on past experience and research evidence from other organizations will probably lead to disappointing results.

Other considerations raise additional questions about the value of standard randomized controlled trial designs for evaluating complex social interventions. For example, these interventions are often altered as they are implemented to take account of local constraints and resources, and for other reasons. These adaptations, which may be beneficial or harmful, are different from the ways in which clinical treatments are modified for individual patients. Most clinical interventions (such as drugs and devices) are largely fixed, and those that can be modified are typically evaluated in a manner that maximizes fidelity and minimizes adaptations. Complex social interventions, in contrast, can usually be adapted to match local conditions and constraints, often leading to better outcomes. Because previous research evaluating these innovations generally limits adaptations (to maximize internal validity of the research design), this research offers limited guidance to decisionmakers interested in adapting innovations to increase their suitability and effectiveness in new settings.

How do we assess whether an innovation will be effective in our service, setting, or area?

How can decisionmakers determine whether—and how—to adopt, implement, and manage an innovation informed by others’ experience and research? A simple evidence rating based on the standard hierarchy of evidence provides only limited information regarding the likely effectiveness of complex social interventions. Instead, health care organizations and policy leaders who are considering whether to adopt an innovation in a setting that differs from those studied in the past need to apply a more comprehensive and nuanced approach to evidence. A decisionmaker’s interest in implementing an innovation should be based on an estimate of the likelihood of success or benefit of the innovation in that decisionmaker’s setting. The likelihood of success is based on a range of factors, including:

  • The magnitude of benefit (relative to cost) achieved by previous adopters, and the “robustness” or consistency of that benefit across a wide range of settings.
  • The degree of confidence that a similarly high benefit can be achieved by the potential adopter. Specific information that can support this assessment may include:
    • The internal validity of reported estimates of past benefit, to support confidence that the benefits were due to the innovation. (Traditionally, this has been labeled strength of evidence.)
    • The availability of rich data from past studies describing key contextual and implementation factors (such as organizational resources), thereby permitting the potential adopter to assess similarity of context and to identify requirements for success.
  • The availability of detailed guidance to enable the potential adopter to use effective implementation processes and to adapt the innovation to enhance benefits. This includes the need for guidance in measuring and monitoring results, and for refining and adapting the innovation or its implementation in response to evidence of poor outcomes (i.e., outcomes that are less favorable than those reported by past adopters).
The Innovations Exchange provides A Decisionmaker’s Guide to Adopting Innovations, which offers a partial guide to the role of evidence in adoption decisionmaking. Innovation adopters should supplement that document with additional guidance that reflects recent research findings about other factors likely to influence the complicated process of adopting a service delivery or policy innovation. Adopters must recognize that the effectiveness of an innovation is likely to vary considerably from site to site, and will be influenced by factors such as:
  • The characteristics, resources, and capabilities of the adopting organization
  • The adoption and implementation process and how it is managed
  • Management activities designed to change the organization and its culture to better meet the requirements of the innovation
  • Adaptation and incremental refinement of the innovation based on ongoing collection of evaluation data
Research has provided only limited guidance about specific factors to consider. This is unfortunate, because it is likely that some factors are more important than others for successful implementation and adaptation of different innovations. For example, the key factors influencing success in implementing a falls prevention bundle in a nursing home will be different from those that are important in implementing a physician-targeted computerized decision support system in a group practice.

With these considerations in mind, the potential adopter should seek to identify innovations that have good potential based on past experience (ideally from multiple studies across diverse settings), and select an innovation that is likely to be a good fit for the adopting organization (based on the organization’s past experience). The adopter should then focus on organizational factors that are thought to support successful adoption, and create a systematic way to monitor, adapt, and refine the innovation based on its actual, demonstrated operation and results within the organization.

Conclusions

The current strength of evidence rating provided by the Innovations Exchange represents only one of many key factors to consider in adoption decisionmaking. Research on complex social interventions shows that decisionmakers need to consider other factors concerning the context and implementation of a care delivery or policy innovation in a specific setting, based on information provided in the innovation profile and other research. We have noted some of the factors to consider, but we encourage additional inquiry to generate insights to better guide health care organizations in selecting, adopting, and successfully implementing service delivery and policy innovations.

Many decisions about the adoption of an innovation are likely to draw on local knowledge about locally available skills and attitudes towards the innovation. Organizations will need to develop their own implementation approaches based on past evidence of innovation benefits, and then use ongoing evaluation to monitor and refine the innovation to achieve desired outcomes. We encourage all users of the Innovations Exchange to share their own experiences with innovation adoption, using the comments tab that is available on every innovation profile. Such comments could provide valuable insights for others who are considering how they might implement innovations in new settings.

About the Authors

Brian S. Mittman, PhD, is past director (and currently senior adviser) of the VA Center for Implementation Practice and Research Support, and a senior research scientist at the Kaiser Permanente Southern California Department of Research and Evaluation. He is a member of the Editorial Board of the AHRQ Health Care Innovations Exchange.

John Øvretveit, PhD, is Professor of Health Innovation, Implementation, and Evaluation, and Director of Research at the Medical Management Centre, Karolinska Institutet, Stockholm, Sweden. He is a member of the Expert Panel of the AHRQ Health Care Innovations Exchange.

Paul Plsek, MS, Paul E. Plsek & Associates, is an internationally recognized consultant on innovation in complex organizations. He is a member of the Editorial Board of the AHRQ Health Care Innovations Exchange.

Susanne Salem-Schatz, ScD, is Principal, HealthCare Quality Initiatives.

Disclosure Statement: The authors reported having no financial interests or business/professional affiliations relevant to the work described in this article.

Table 1: Characteristics of Simple Versus Complex Social Interventions

Key Feature or Dimension | Simple Intervention | Complex Social Intervention
Number of components | One or very few | Many
Intervention components | Largely fixed, stable | Highly variable and adaptable across sites and across time
Target of intervention | Human physiology; individual behaviors of clinicians or patients | Multiple behaviors of individuals and/or institutions
Outcome(s) of interest | One or very few | Many
Degree of influence of contextual, external factors | Low | High

Footnotes

1 Atkins D, Best D, Briss PA, et al. Grading quality of evidence and strength of recommendations. BMJ. 2004;328(7454):1490. [PubMed]
2 Pawson R, Greenhalgh T, Harvey G, et al. Realist review—a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005;10 Suppl 1:21-34. [PubMed]
3 Craig P, Dieppe P, Macintyre S, et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655. [PubMed]
4 Shekelle PG, Pronovost PJ, Wachter RM, et al. Advancing the science of patient safety. Ann Intern Med. 2011;154:693-6. [PubMed]


 

Last updated: June 19, 2013.