November 2009
Learning From Disappointment: When Your Innovation Falls Short
Listen to the Audio File

View Slides

Read Transcript

Panel Slides

Web Event Host: Judi Consalvo, AHRQ Center for Outcomes and Evidence (Slides 1-8)

Moderator: Paul E. Plsek, MS, Innovations Exchange Editorial Advisory Board member (Slides 9-14)

Innovators:

Dennis M. Manning, MD, Mayo Clinic (Slides 15-37)

Cathleen Colón-Emeric, MD, MHS, Duke University Medical Center (Slides 38-61)

Slide 1

Learning from Disappointment: When Your Innovation Falls Short

A Public Webinar
November 17, 2009
2:00 – 3:30 PM (ET)

Slide 2

What Is the Health Care Innovations Exchange?

Searchable database of service innovations

  • Includes successes and attempts
  • Wide variety of sources, including unpublished materials
  • Vetted for effectiveness and applicability to patient care delivery
  • Categorized for ease of use: extensive browse and search functions
  • Innovators’ stories and lessons learned
  • Expert commentaries

Learning opportunities

  • Learning Networks: A chance to work with others to address shared concerns
  • Educational content
  • Web Events featuring innovators, experts, and adopters

Slide 3

Participatory Learning Events

January 2010: Chat on Change
March 2010: Webinar – “Ensuring Cultural Competence Across Care Settings”
Future Chats and Webinars: What would be most useful to you?

Slide 4

Using the Audio Broadcast Feature

You have joined an event using audio broadcasting. You will see a dialog box at the upper-left of your screen. This dialog box controls the audio being transmitted to your computer speakers.

You will not need to dial into the teleconference to hear the audio. However, if you are experiencing challenges with the audio, you can get the teleconference info by clicking the ‘Request Phone’ icon on the Participants panel.

Slide 5

Chatting

Type your comments or questions in the chat box and click on send.

Slide 6

Need Help?

No sound from computer speakers? Notify the WebEx moderator via chat.

Trouble with your connection or slides not moving? Log out and log back in.

Slide 7

Today’s Topic: Learning from Disappointment: When Your Innovation Falls Short

Moderator: Paul Plsek

Slide 8

Questions or Comments

Contact us: info@innovations.ahrq.gov
Subscribe to receive e-mail updates: http://innovations.ahrq.gov/contact_us.aspx

Slide 9

Learning From Innovation Attempts

Paul Plsek

Consultant on innovation and complex systems
Chair of Innovation, Virginia Mason Medical Center, Seattle
Director, NHS Academy for Large-Scale Change, UK
Member, Editorial Board, AHRQ Innovation Exchange
paulplsek@DirectedCreativity.com

Slide 10

'Failure' is an integral part of learning and innovation...

"I have not failed. I've just found 10,000 ways that won't work."
Thomas Alva Edison

"If it fails, admit it frankly and try another. But above all, try something."
Franklin D. Roosevelt

Slide 11

Attempts, scientific method, and quality improvement

  • Essence of scientific method is the effort to disprove a hypothesis
  • Learning from 'failed' experiments is highly regarded
  • But... 'Unsuccessful' research is underreported in literature
  • Improvement science relies on the Shewhart/Deming Plan, Do, Study, Act (PDSA) cycle
  • 'Study' leads to insight
  • But... Failure to reflect adequately on plans and 'failed' attempts is common

Slide 12

Innovation Funnel

Graphic illustrating the 'Innovation Funnel': from 100 ideas generated, only 10 are harvested, 5 are developed, 2 are tested, and only 1 idea is implemented. Numbers are based on typical ratios across a variety of industries.
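
To make the funnel arithmetic concrete, here is a minimal Python sketch that applies the typical cross-industry stage counts named above (100 generated, 10 harvested, 5 developed, 2 tested, 1 implemented) and reports the implied stage-to-stage survival rates; the numbers are illustrative ratios, not data from any particular organization.

```python
# Illustrative only: stage counts are the typical cross-industry ratios cited
# on this slide (100 generated -> 10 harvested -> 5 developed -> 2 tested ->
# 1 implemented), not measured data from any particular organization.
stages = [("Generated", 100), ("Harvested", 10),
          ("Developed", 5), ("Tested", 2), ("Implemented", 1)]

for (name_a, n_a), (name_b, n_b) in zip(stages, stages[1:]):
    print(f"{name_a} -> {name_b}: {n_b / n_a:.0%} of ideas survive this stage")

# Overall, roughly 1 in 100 raw ideas reaches implementation.
print(f"Overall yield: {stages[-1][1] / stages[0][1]:.0%}")
```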

Slide 13

Learning from innovation attempts

Outcome equals Idea plus Context plus Observation

  • Bad idea? Good and bad components in a multifaceted idea?
  • Unsupportive or inappropriate context?
  • Sub-optimal observation (measurement) method?

Reflective practice...

  • What did we learn about the idea, context and observation method?
  • What might we do differently?
  • What is the next step for this idea?

Issues: Organizational attention and resources

Slide 14

Web Event Agenda

Cathleen Colón-Emeric, MD, MHSc:
Voluntary Osteoporosis Educational Modules for Nursing Home Physicians and Staff

Dennis Manning, MD:
Education and Reminder Card for Those on Multiple Medications

Discussion

Slide 15

Durable Display at Discharge (3D): Two decade quest for a better medication regimen-reminder tool

Dennis M. Manning MD FACP FACC FHM
Research Chair, Hospital Internal Medicine
Mayo Clinic Rochester

Assistant Professor of Medicine
Mayo Clinic College of Medicine
November 17, 2009

Slide 16

Problem: In solo private practice (Geriatric & CV Med), 1985-2001, the author encountered the occasional outpatient (or family)

  • totally bewildered by a
  • complicated medication regimen
  • usually >> 6 Medications (some new)
  • Low Health Literacy

Slide 17

Innovation 1985-2001: Personalized medication reminder card

Asked printer to design & imprint a grid onto white cardboard (500 @ < $100) w/ columns:

  • Medication (name & unit strength)
  • Appearance of Pill (a space to which one tablet can be affixed by clear adhesive tape)
  • Reason (indication)
  • 5 time intervals (7 AM and 1,3,6,11 PM)

Slide 18

="“The Personalized Medication Reminder Card” chart listing various medications being taken with columns included for “MEDICATION”, “APPEARANCE OF PILL”, “REASON”, “7AM”, “1PM”, “3PM”, “6PM”, and “11PM”. Example: The medication “Metoprolol 50 mg” is listed as being administered for “Blood Pressure + Heart”, “1 and _” pills are to be taken at “7AM” and “6PM”. Courtesy of Dennis M. Manning M.D."

Slide 19

Results: Author’s testimony “Never failed to elicit immediate gratitude, sense of relief, and profession of better understanding”

  • Level of Evidence (EBM): Class X “Anecdotage”
  • Advantage: Cheap, available now
  • Disadvantages: MD time (4-8 min) & Legibility

Slide 20

2003: Mayo Clinic
Patient Safety Committee

  • Authors DMM (experience in PA practice) & ARW (experience in Health Services Research in Florida)
  • Studying dilemmas of patient-family understanding of complicated medication regimens

Slide 21

Background

  • Patient satisfaction with Discharge a national issue
  • Discharge medication understanding is a satisfaction and a safety issue
  • Hospitalized patients are commonly discharged with a daunting regimen of several medications – some new
  • In addition to personalized teaching and literature, patients often need an individualized written list of prescribed medications
  • For the past 8 years, Mayo Clinic Rochester nurses have employed a Discharge Summary-linked tool: Patient Medication Worksheet (PMW)

Slide 22

Hospital Discharge – Patient Medication Worksheet

="“Hospital Discharge – Patient Medication Worksheet” chart listing various medications being taken with columns including “AM – Morning”, “PM – Afternoon”, “PM – Evening”, “AM – Evening”, “Special Instructions” with respective hour slots. Example: the medication “Centrum silver one tablet by mouth daily” is listed as being administered at “7” under “AM – Morning”.

Slide 23

2004 Innovation: Upgraded version of Durable Display

  • Formal appearance
  • Printed
  • Added:
    • Column: Comments & Cautions (900 Meds in db)
    • Row: Medications You Should No Longer Take

Slide 24

Durable Display at Discharge

“Durable Display at Discharge” chart listing various medications with columns including “Medication”, “Display”, “Purpose”, “Time to Take Medications” (with respective time slots “Morning”, “Noon”, “After Noon”, “Evening”, and “Bed Time”), and “Comments and Cautions”. Example: the medication “WARFARIN (COUMADIN) 2.5 MG TAB” is being administered as a “Blood Thinner”, “1 and _ pills” are to be taken in the “Evening” and patients are to “Report any unusual bruising or bleeding right away, keep diet consistent”.

Slide 25

Barriers to Implementation

  • IS & IT resources & other priorities
  • Cost of programmer time
  • 3D generation time (5-20 min depending on programming improvements) by RN, MD or RPh
  • No EBM-basis (Key committees: published data?)
  • ARW scholarly query: “Looks nice, but what if it backfires & makes Patient more confused?”

Slide 26

Hypotheses: Compared to PMW subjects, 3D subjects will demonstrate:

  • Higher satisfaction
  • Greater understanding of medications on a 3 question “Pop Quiz”
    • How many times per day do you take (med X)?
    • Regarding (med Y) what special instruction, caution or comment was given to you?
    • What is the reason for which you take (med Z)?
  • Lower self-reported errors

Slide 27

Quality Improvement Report

Screenshot of article titled “3D: a tool for medication discharge education” with authors “Dennis M Manning, John G O’Meara, Arthur R Williams, Ahmed Rahman, Karyl J Tammel, Danica Myhre, Lisa C Carter”. Background, Methods, Results, and Conclusions of the article are summarized and depicted in the screenshot.

Slide 28

Methods

  • After IRB approval, 302 consenting adult patients discharged from general medical units
  • with 4 scheduled medications
  • Randomized to PMW vs. 3D
  • Excluded: unable to hear over phone; unable to read; non-English speaking; nursing home patients
  • Queried at 7-14 days via telephone (Mayo Survey Research Center personnel blinded to hypotheses); satisfaction via 5-point Likert scale and understanding via personalized 3-query Pop Quiz
  • Comparisons with Mann-Whitney Test (see the sketch below)
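
As a rough illustration of the comparison named in the last bullet, the sketch below runs SciPy's Mann-Whitney U test on invented pop-quiz scores (number of questions answered correctly, 0-3) for the two arms; the data and variable names are hypothetical, not the trial's actual dataset.

```python
# Hypothetical illustration of the Mann-Whitney comparison described above.
# Scores are the number of "Pop Quiz" questions answered correctly (0-3);
# these example lists are invented, not the trial's data.
from scipy.stats import mannwhitneyu

pmw_scores = [0, 1, 1, 2, 2, 2, 2, 3, 1, 2]   # PMW arm (hypothetical)
d3_scores  = [1, 2, 2, 2, 3, 3, 2, 3, 3, 2]   # 3D arm (hypothetical)

# Two-sided test of whether one group tends to score higher than the other.
stat, p_value = mannwhitneyu(d3_scores, pmw_scores, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
```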

Slide 29

Phone Survey (3D=76; PMW=59) Results: No difference in

  • Demographics (Gender and Age)
  • Time for RN to complete medication education (8 minutes)
  • Satisfaction
  • Self-reported medication errors

Slide 30

Results: Greater understanding of medications in 3D cohort (Mann-Whitney Test: p < 0.03; gamma = 0.32)

Bar graph illustrating better understanding of medications in subjects in the 3D cohort versus subjects using the Patient Medication Worksheet (PMW). Understanding was measured using a 3-query Pop Quiz. An estimated 3 subjects answered all queries incorrectly when using the PMW, about 18 subjects answered 1 of 3 queries correctly, about 34 answered 2 of 3 correctly, and fewer than 5 answered all 3 correctly. In the 3D cohort, an estimated 2 subjects answered all queries incorrectly, about 20 answered 1 of 3 correctly, about 36 answered 2 of 3 correctly, and up to 18 answered all 3 correctly.

Slide 31

Conclusions & Future

  • 3D does not improve patient satisfaction, nor self-reported errors, compared with PMW
  • 3D appears to promote greater understanding of prescribed medications
  • Further study:
    • Patients with > 8 medications?
    • On return encounters, if Staff time (required for Med Reconciliation) diminished?

Slide 32

Internal Impact of RCT Publication

  • 3D endorsed by:
    • Nursing Practice Comm
    • Patient Education Comm
  • IS & IT Leadership: many projects in queue. 3D does not blend w/ current platform & thus not a high priority

Slide 33

Lessons from Failure (to implement) & Next Steps

  • If current tool is pretty good (PMW)… ergo little “delta”
  • If subjects had > 8 Meds, perhaps more “delta”
  • A smallish “delta” does not an implementation make!
  • Hosp Discharge studies: difficult to retain subjects, by phone query alone
  • Compatibility w/ IS platform is key
  • Redirect target audience? Perhaps explore outside industry (Retail Pharmacy chain) - may be interested in development for competitive edge in marketplace

Slide 34

Unexpected positive findings

  • When showing 3D around campus, many Staff were highly enthusiastic about the tool for their use and workflow
  • Pre-Operative Evaluation (POE) Clinic staffers particularly wished that patients would bring the 3D tool to their Pre-Op visit, where meticulous Med Recon is challenging
  • Hypothesis: 3D format preferred for Staff comprehension & time-savings?

Slide 35

Recommendations to Others

  • IS/IT resources are limited, and may be committed long in advance
  • IS/IT compatibility is key for e-tool
  • Human Factors & Ergonomics is a helpful scientific discipline
  • Inexpensive, low-tech 3D tool suffices
  • High-tech 3D versions are hard to make but we may find some motivated vendor in future
  • When you get positive feedback, regroup w/ supporters and try again...Persevere!

Slide 36

Selected References

  • Esposito L. The effects of medication education on adherence to medication regimens in an elderly population. J Adv Nurs 1995;21:935-943.
  • Grymonpre R. Medication reminder card for the elderly. Can J Hosp Pharm 1991;44(2):55-62.
  • Holloway A. Patient knowledge and information concerning medication on discharge from hospital. J Adv Nurs 1996;24(6):1169-1174.
  • Markey B. Medication discharge planning for the elderly. Patient Educ Couns 1987;9(3):241-249.
  • Martens KH. An ethnographic study of the process of medication discharge education (MDE). J Adv Nurs 1998;27:341-348.
  • Masoudi FA, et al. The complexity and cost of drug regimens of older patients hospitalized with heart failure in the United States, 1998-2001. Arch Intern Med 2005;165:2069-2076.
  • National Council on Patient Information and Education (NCPIE) and Agency for Healthcare Research and Quality (AHRQ), US Dept of Health and Human Services. Your Medicine: Play It Safe. 2003. AHRQ Pub 03-0019. http://www.ahrq.gov/consumer/safemeds/safemeds.pdf
  • Schneider J, et al. A medication discharge planning program: measuring the effect on readmissions. Clin Nurs Res 1993;2(1):41-53.

Slide 37

Discussion

Durable Display at Discharge (3D): Two decade quest for a better medication regimen-reminder tool

Dennis M. Manning MD FACP FACC FHM
Research Chair, Hospital Internal Medicine
Mayo Clinic Rochester

Assistant Professor of Medicine
Mayo Clinic College of Medicine

Slide 38

S. P. O. F. Secondary Prevention of Osteoporotic Fractures in Long Term Care Facilities

Cathleen Colón-Emeric, MD, MHS for the SPOF Collaborators

Slide 39

Background

  • 60-90% of nursing home residents have osteoporosis; fracture rate 13/100 person-years
  • 60-75% of hip fracture patients spend time in a nursing home
  • Only 1/3 of residents with known osteoporosis or hip fracture receive treatment

Slide 40

Study Aims

  • To develop a multi-modal osteoporosis intervention targeted to medical providers, nurses, and administrators
  • To test whether the intervention changes prescription of fracture prevention therapies in intervention homes compared to control homes receiving a delayed intervention

Slide 41

Facility Selection and Randomization

  • Nursing homes in NC or AZ with >10 eligible residents
    • Osteoporosis diagnosis OR hip fracture in last 180 d
    • Ambulatory, no life limiting diagnoses
    • Length of stay >4 weeks
  • Clusters of homes with the same MD, blocked by state, were randomized into the early or delayed intervention arm (see the sketch below)
  • Goal was to enroll 128 homes; 67/249 eligible nursing homes enrolled
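
The sketch below illustrates, under simplifying assumptions, the allocation scheme described above: homes sharing a physician are grouped into clusters, and clusters are assigned to the early or delayed arm within state blocks. The home and physician data and the function shown are invented for illustration and are not the study's actual randomization code.

```python
import random
from collections import defaultdict

# Invented example data: (home_id, physician_id, state)
homes = [
    ("NH01", "MD-A", "NC"), ("NH02", "MD-A", "NC"), ("NH03", "MD-B", "NC"),
    ("NH04", "MD-C", "AZ"), ("NH05", "MD-C", "AZ"), ("NH06", "MD-D", "AZ"),
]

def randomize_clusters(homes, seed=2004):
    """Assign physician-level clusters of homes to arms, blocked by state."""
    clusters = defaultdict(list)  # (state, physician) -> homes in that cluster
    for home_id, md, state in homes:
        clusters[(state, md)].append(home_id)

    by_state = defaultdict(list)  # state -> list of clusters
    for (state, _md), members in clusters.items():
        by_state[state].append(members)

    rng = random.Random(seed)
    assignment = {}
    for state_clusters in by_state.values():
        rng.shuffle(state_clusters)
        # Alternate arms within each state block to keep the arms balanced.
        for i, members in enumerate(state_clusters):
            arm = "early" if i % 2 == 0 else "delayed"
            assignment.update({home_id: arm for home_id in members})
    return assignment

print(randomize_clusters(homes))
```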

Slide 42

Intervention

  • Educational Modules
    • Case Based
    • Medical staff, nursing staff
    • Paper or Internet, links to evidence and tools
    • CME/CEU credit
    • Focused on identifying risk factors, osteoporosis evaluation, fracture prevention strategies

Slide 43

SPOF

Screenshot of Secondary Prevention of Osteoporotic Fractures (SPOF) webpage, tool to help identify risk factors, provide osteoporosis evaluation, and fracture prevention strategies. Patient Case Question example is shown with associated answer selections.

Slide 44

Intervention

  • Audit and Feedback
    • Provided to Administrator
    • Compare home’s performance to peers on quality measures for osteoporosis
    • Provided suggested target performance based on the 90th percentile “Achievable Benchmark of Care (ABCs)”
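
As a rough sketch of how such a peer-comparison report might be assembled, the code below ranks invented facility treatment rates and uses the 90th percentile as a simplified stand-in for the Achievable Benchmark of Care target (the real ABC calculation is more involved); the column names and data are assumptions, not the project's actual reporting code.

```python
# Hypothetical sketch of an audit-and-feedback calculation: compare each
# home's treatment rate to its peers and to a 90th-percentile benchmark.
import pandas as pd

# Invented example data: proportion of eligible residents receiving any
# pharmaceutical fracture protection, by facility.
rates = pd.Series(
    {"Facility 1": 0.18, "Facility 2": 0.10, "Facility 3": 0.26,
     "Facility 4": 0.47, "Facility 5": 0.55, "Facility 6": 0.31},
    name="pharm_protection_rate",
)

benchmark = rates.quantile(0.90)        # simplified "ABC"-style target
percentile_rank = rates.rank(pct=True)  # each home's standing among peers

report = pd.DataFrame({
    "rate": rates,
    "percentile": (percentile_rank * 100).round(),
    "suggested_target": round(float(benchmark), 2),
})
print(report.sort_values("rate"))
```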

Slide 45

Sample Audit and Feedback Report

Sample Audit and Feedback Report. Bar graph illustrating sample audit and feedback report. Various nursing home facilities are ranked in amount of Pharmaceutical Protection provided for residents with known Osteoporosis or Fractures. 0.47 Pharmaceutical Protection is placed in the 75th percentile, 0.26 Pharmaceutical Protection is the median, with 0.18 Pharmaceutical Protection in the 25th percentile.

Slide 46

Intervention

  • Osteoporosis Teleconferences
    • Focused on designing Osteoporosis QI strategies
    • Medical and Nursing Staff
    • Audiotaped, CD sent to all MDs, Directors of Nursing
  • Academic Detailing
    • State opinion leaders called MD offices
    • Encourage participation, answer questions

Slide 47

Study Timeline

Timeline illustrating Colón-Emeric’s study: a pre-intervention period was held from January to June 2004 in which a baseline medical records abstraction was conducted, followed by the creation of a control group and an intervention group. From January to September 2005 the intervention group received an audit and feedback; educational modules were received by nursing homes, teleconferences on osteoporosis prevention strategies were held, and reminder faxes, calls, and letters were sent. A follow-up medical records abstraction was conducted.

Slide 48

Data Collection

  • Data sources
    • Chart, abstracted by RNs blinded to Rx group
    • Minimum Data Set
  • Main outcome = Any Fracture Protection
    • Osteoporosis medication except Calcium/Vitamin D
    • Or Hip protectors
  • Secondary measures = Osteoporosis evaluation variables, Calcium/Vitamin D

Slide 49

Analysis

  • Facility level
  • Test change in proportion of residents receiving any fracture protection in intervention vs. control facilities
    • Multivariable GEE models to account for nesting (a hedged sketch follows this list)
    • Test time*group interaction
  • Power
    • 80% power to detect a 17% change
    • 95% power to detect a 20% change
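
A hedged sketch of what the facility-clustered GEE analysis described above might look like, using statsmodels with invented column names and toy data (any_protection, period, group, facility_id); it is not the study's actual analysis code.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented example rows: one row per resident-period with an indicator for
# any fracture protection, the period, the study arm, and the facility.
df = pd.DataFrame({
    "any_protection": [0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1],
    "period": ["pre", "post"] * 6,
    "group": ["intervention"] * 6 + ["control"] * 6,
    "facility_id": ["F1", "F1", "F2", "F2", "F3", "F3",
                    "F4", "F4", "F5", "F5", "F6", "F6"],
})

# Logistic GEE with an exchangeable working correlation, clustered on
# facility; the period:group interaction estimates differential
# pre-to-post change between intervention and control facilities.
model = smf.gee(
    "any_protection ~ period * group",
    groups="facility_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```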

Slide 50

Selected Baseline Characteristics

                     Intervention   Control
Facilities           n=34           n=33
  For profit (%)     76             82
  Bed size           110            109
Residents            n=293          n=313
  Women (%)          85             87
  Black (%)*         5.5            2.2
  Age (mean)*        83             86
  Previous fx (%)*   17             25
  Dysphagia (%)*     15             21
  Tobacco (%)*       25             17

* Statistically significant difference between groups

Slide 51

During the Intervention Period…

  • Low provider participation in intervention group (n=33 homes)
    • RN modules – 4 homes
    • MD modules – 7 homes
    • Academic detailing - 12 homes
    • Teleconferences – 12 homes
  • To bolster engagement
    • CD of teleconference
    • Additional post cards, calls
    • Additional audit and feedback dissemination

Slide 52

Baseline Treatment Rates

Pie chart representing fracture protection treatments received by nursing home residents: 25% received no therapy, 36% received calcium only, 19% received a bisphosphonate, 15% received calcitonin, and 4% received another medication.

Slide 53

Baseline Treatment Rates

Bar graph illustrating the change in the proportion of residents receiving fracture protection treatments from the pre- to the post-intervention period: BMD testing decreased in both the intervention and control groups; vitamin D levels changed by 0-1% in both groups; calcium prescribing increased by about 8% in the control group and 10% in the intervention group; bisphosphonate prescribing increased by only 1% in the control group versus 7% in the intervention group; calcitonin use decreased in both groups; and the composite any fracture protection measure increased by only 1% in the control group versus 7% in the intervention group.

Slide 54

Effect of Provider Participation

                                % Change in Any Fracture Prevention      OR (95% CI)
                                Participation +     Participation -
Modules:
  Nurse Module                  20                  2                    5.2 (1.0-20.0)
  MD Module                     13                  2                    4.5 (1.9-12.0)
MD Contact:
  Academic Detailer             12                  4                    4.5 (1.1-18.2)

Slide 55

Results Summary

  • Small, non-significant change in osteoporosis management
    • Power limited by low recruitment
    • Low participation by physicians, nurses and NH administrators
  • Significant improvement when MD or RN engaged in the intervention

Slide 56

Why did the innovation not result in the expected change?

  • Traditional QI approaches may not be effective in nursing homes
  • Need to find better ways to engage providers
  • Unaddressed barriers: length of stay, cost, regulatory oversight

Slide 57

Lessons Learned

  • What works in one clinical setting may not work in another
    • Consider patient population, staff roles and training, payment issues, computer access, competing programs and issues
  • Even within settings, each site is unique
    • Conduct feasibility pilot tests in a “challenging” site

Slide 58

Lessons Learned

  • Implementation research requires patience and flexibility
    • Design protocol with latitude to adapt intervention in mid-study if needed
    • Monitor participation, consider interim analysis
    • Maintain methodologic rigor

Slide 59

Next Steps

  • Systems interventions
    • Electronic guideline-based order sets
    • Prompts to link providers to order sets
    • “Opt-out” automatic intervention not requiring individual provider behavior change

Screenshot of electronic guideline-based order set; provides lists of laboratory evaluations, pharmacologic treatments, and other treatments for implementation.

Slide 60

Collaborators and Funding

  • Kenneth Lyles
  • Kenneth Saag
  • Deborah Levine
  • Jeffrey Curtis
  • Paul House
  • Anna Schenck
  • Mary Fermazin
  • Aiyuan Xie
  • Joel Gorospe
  • Kristi Oliver
  • Funded by grants from the Alliance for Bone Health, Merck, CMS 500-02-AZ02 AZ0020
  • Faculty support Paul B. Beeson AG024787

Slide 61

    Discussion

    S. P. O. F. Secondary Prevention of Osteoporotic Fractures in Long Term Care Facilities.
    Cathleen Colón-Emeric, MD, MHS for the SPOF Collaborators

Transcript

Judi Consalvo (AHRQ Host): Good afternoon. On behalf of the Agency for Healthcare Research and Quality, I’d like to welcome you to our Webinar entitled, Learning from Disappointment: When Your Innovation Falls Short.

My name is Judi Consalvo, and I’m a program analyst in AHRQ’s Center for Outcomes and Evidence. We’re very excited about today’s topic, and glad to see that you share our enthusiasm. We will be polling you in a few minutes to get a better feel for who has joined us today, and as you can see from your screen, we are doing things a little differently during this Webinar.

    To foster even greater exchange, we invite you to chat with each other, and with our panelists about your mutual interests and attempts. Now more on our new chat feature in a minute.

    What is the Health Care Innovations Exchange? Since some of you may be new to AHRQ’s Health Care Innovations Exchange, I’ll take just a minute to give you an overview before I introduce today’s moderator.

    The Health Care Innovations Exchange is a comprehensive program designed to accelerate the development and adoption of innovations in health care delivery. This program supports the Agency’s mission to improve the safety, effectiveness, patient centeredness, timeliness, efficiency, and equity of care, with a particular emphasis on reducing disparities in health care and health among racial, ethnic, and socioeconomic groups.

    The Innovations Exchange has the following components:

    • Searchable innovations. These are profiles of successful and attempted innovations that describe the innovative activity, its impact, how the innovator developed and implemented it, and other useful information for deciding whether to adopt the innovation.
      We now have over 350 profiles within our database, and new profiles and content are added every two weeks. We have searchable quality tools. The Innovation Exchange presently hosts over 1400 practical tools that can help you assess, measure, promote, and improve the quality of health care. New tools are also added every two weeks, and we have learning opportunities.
    • Many resources describe the process of innovation and adoption, and ways to enhance your organization’s receptivity to innovative approaches to care. Resources include expert commentary, articles and perspectives, and adoption guides.
    • Networking opportunities. You can interact with innovators and organizations that have adopted innovations to learn new approaches to delivering care and developing effective strategies, and share information.
      Hosting comments on specific innovations is one way to connect with innovators. Types of comments include asking questions or responding to questions about how an innovation works, and mentioning additional resources and lessons learned from adopting, implementing, and sustaining an innovation. Next slide.

This Webinar is part of a series of participatory learning events and Webinars to support you in developing and adopting innovations in health care delivery. We invite you to take a look at archived materials from our most recent Webinar on the implementation of the Arizona Medical Information Exchange. It can be found on our website at www.innovations.ahrq.gov.

Our next event, in January, will be part of our new Chats on Change, which are online conversations that spotlight a prominent issue in health care delivery that is very much in the news. And in March, we will have another Webinar on a topic that is always of interest to innovators. The Webinar will be entitled, Ensuring Cultural Competence Across Care Settings. We hope you will join us for all of these events. We also welcome your thoughts on other topics we could address with you.

    At the end of today’s event, your computer will automatically take you to a brief evaluation form. Please be sure to complete the form as your comments will help us to plan future events that meet your needs. You can also e-mail your comments and ideas to us at info@innovations.ahrq.gov.

    Now before I turn this over to our moderator, I’d like to give our speakers a sense of who we have out in the audience today. Please answer the polling question you will see on your screen.

Okay, would you describe yourself as an innovator, a physician, a nurse, another clinician, a policy maker, or a health administrator? Whatever fits for you, whatever the appropriate category is, we’d appreciate it if you would fill that out, and we’ll get back to that in a little while.

    So while we gather your responses, I want to first mention the audio broadcast feature. You are listening to this Webinar through your computer speakers. If you are experiencing technical issues, you can access teleconference information by clicking on the Request Phone icon on the participants’ panel.

I would also like to talk a bit more about our new chat feature. Please use it throughout the Webinar to share insights, comments, suggestions, and questions with our panelists and with each other. We will publish the chat as part of our transcript of this Webinar.

So let me also reiterate a few ground rules. We really look forward to a lively exchange. However, we do reserve the right to edit or delete inappropriate comments. Lastly, endorsement of commercial products is not permitted. Next slide.

    While we don’t anticipate any technical problems, I’d like to give you a few tips in case you experience any difficulty.

First, if you have no sound coming through your computer speakers, please notify the moderator via the chat feature. If you have any trouble with the slides, or with your connection to the Webinar, try pressing F5 to refresh your screen. You can also click on Help, at the top of your screen, or send a note to the moderator using the chat feature, and someone will get back to you.

We are recording this event so that anyone who couldn’t make it today, or who needs to leave the Webinar early, can listen to the recording or read the transcript. You will be able to find links to a downloadable recording, the slides, and the transcript on the AHRQ Health Care Innovations Exchange website in just a few weeks. In fact, if you’d like to download the slides for today’s presentation, you can find them on our website now at www.innovations.ahrq.gov.

All right. So let’s see if we can look at your responses to my earlier question. Okay. We’ve designed this Webinar to be useful to a broad spectrum of participants, but it’s helpful to know who we’re really talking to. From what I can see, 34% of you are in the nursing category; other, 33%; researchers, 16%; health administrators, 10%; and it looks like physicians and policy makers are at 3%. So this is great. Thank you. We have a great cross section here. Next slide.

    So we proudly present this Webinar because we appreciate the importance of innovation. We recognize that innovations may not succeed from the start, but we do believe there is much to be learned from the attempt.

    With that very brief introduction, I’d like to introduce our moderator for today’s discussion. He is Paul Plsek. Paul is an internationally recognized consultant on innovation in complex organizations. A former research engineer at Bell Laboratories, and Director of Corporate Quality Planning at AT&T, he now operates his own consulting practice.

Paul is the developer of the concept of directed creativity. His work can be described as helping organizations think better. Paul is also an active member of the Expert Panel for the Innovations Exchange. We are extremely pleased to have him with us today to guide us through this topic. Paul?

    Paul Plsek (Moderator): Well thank you, Judi, and welcome everyone from around the country, and perhaps around the world. I’m speaking to you from Seattle where the weather’s absolutely gorgeous. I hope the weather is as nice wherever you are as it is here.

    As Judi said, I play a variety of roles around innovation and change in systems, and work in the US, and do a lot of work in the UK as well, and it’s really great to see the interest in innovation in health care.

    There’s always been a lot of innovation in health care, and it’s, but it’s been particularly focused on kind of pharmaceuticals and devices, and those sorts of things. We often kind of take 21st century technology though, and put it in a sort of a 1960’s, 1970’s, service delivery, and so it’s great to see people really thinking about quality improvement and innovation. How do we innovate in the way we deliver services, not just in the content of those services. If I could have my next slide, Stephanie.

Failure is the topic of today, and we just really want to emphasize that word because no one likes to talk about failure; but actually, failure is the best sign that you’re actually innovating. If you’re doing things and you’ve never failed, you’re probably not pushing the envelope very far.

Another quote that I could have put up here on this slide is the famous one from Thomas Watson, the founder of IBM, who once said, “The fastest way to increase your success is to double your rate of failure.” Every time you fail, you have a chance to learn what doesn’t work, as Thomas Edison did. Next slide.

    It’s also the notion of attempting, failing, and learning. It is really central, isn’t it, to the scientific method and to quality improvement in a pragmatic sense. The essence of a scientific method is to try to disprove something, isn’t it? That we have a null hypothesis.

Learning from failures is really highly regarded, but socially it doesn’t seem as acceptable, does it, because we know from a variety of studies that unsuccessful research is underreported in the literature. The same is true in the pragmatic approach that quality improvement brings: Plan, Do, Study, Act cycles, where the study is about learning from what you did and what you planned. It’s about having some insight, and it’s often said that you can learn more from what doesn’t work than from what does work. And yet quality improvement practitioners will often say that one of the hardest things to get people to do is the S in the PDSA cycle, to actually study and reflect on things that didn’t go as you wished them to. So I think our topic here today is right in line with what it ought to be. It’s just that it’s not as socially acceptable as it should be.

When you think of innovation in other industries, on my next slide, what’s often thought of is a sort of a funnel in which failure plays a really important role. These are typical numbers that go across a variety of industries in the innovation literature, but for an idea that’s implemented as a new product in the marketplace, or a new service, there are generally about 100 ideas that were written down on a brainstorming flipchart sheet somewhere. That hundred-to-one ratio is a pretty doable number across a variety of things.

    But even if you take that 100 ideas on a flipchart sheet and say let’s just pick the top ten, of those, there’s probably about a 50% sort of dropout rate when you really start to think about which ones can we develop in a practical sort of way, which ones make sense for us. And then as you develop the idea, something that originally looked good on paper, turns out to be not so good when you really stop and think about it, and maybe out of those five ideas, less than half will actually go on to be tested.

    And then it’s commonly accepted in other industries that of the ideas that you test of the new products, the new services, the new business models, the new ways of doing work, a 50% dropout rate, a 50% failure rate is not uncommon. If you’re really pushing the envelope and trying something new, you should expect that about half of those things won’t succeed in the traditional sense.

    What’s important is to pause and to learn. They’re opportunities for learning. As Thomas Edison said in that initial quote on the first slide, he learned 10,000 ways not to build the light bulb until he found the right one.

    The stage gate that’s commonly talked about in other industries, is about how do you apply resources in a successive sort of way, putting more and more resources on the best ideas as they come out. But the lesson from this slide, the point that I want you to take away from it, is that 50% failure rates are fairly common in innovation processes in a variety of other industries, and we shouldn’t be surprised at all to find that that’s what we might have in a health care setting.

    So how do we go about learning from innovation attempts? On the next slide, please. It’s important to understand that when you have an outcome that you might be tempted to call a failure, that outcome is the culmination of the idea itself, the context in which you were trying to implement the idea, and, in fact, the way you tried to measure or judge the outcome, the observation technique. So if you’re going to learn from innovation attempts, there’s a variety of questions that you might ask. If I didn’t get the outcome I wanted, is it because it’s a bad idea? Yeah, maybe, but maybe it’s an idea that’s kind of multifaceted, that has some good parts to it and some bad parts to it.

    When you think of quality improvement work and service innovation, service delivery innovation, nearly everything you do is a multifaceted idea. I mean, you might think you’re just changing one thing, I’ve developed a checklist for something, and that’s my intervention. I’ll try the checklist and I’ll study its impact. But it’s not just the checklist. It’s the training that goes along with the checklist for the staff to be able to use it properly. It’s the – there’s a whole bunch of other factors there. And so it’s useful to kind of reflect on maybe some of the components of the idea are worth moving on to a next stage while I abandon some of the other ones.

How about context? Did I end up being in an unsupportive or inappropriate context? It’s a good idea, it just didn’t work in the setting I was in. Maybe it was even things totally beyond your control, like a change in funding sources, or a sudden crisis in the organization around budgets that kind of took away the leadership attention that you had on your idea. Maybe it’s a good idea; we just need to wait and try it again in a better study. And importantly, it’s important to understand that you might just not have had a sensitive enough observation or measurement technique to determine whether it was a good enough idea.

    Often when we’re trying innovative things, we have unexpected or unintended consequences that we didn’t think about when we set up our measurement or observation system to begin with. So don’t discard an idea simply because it doesn’t give you the outcome that you expected it to.

    When you stop and think about the idea itself, the context, and the observation, what do we learn about the idea, what do we learn about a context, what do we learn about how to observe, what might we do differently. It’s the A in that PDSA cycle. What action are we going to take on the study that we just did, based on the intervention. What’s our next step.

Of course there may be all sorts of issues around having to rekindle the organization’s attention. It can sometimes be difficult to go back with a second round of an idea that they perceive didn’t go so well the first time, but there are lots of other issues that I think we’ll uncover as we get into our presentations.

    Well that’s just a little bit about innovation itself, and kind of how it’s viewed in a variety of other industries.

    My role in this conference is to kind of moderate, to introduce our two speakers, and then to kind of serve as your voice in asking them questions. That Chat window that’s on the right hand side of your screen is something that I want you to input into. Down at the bottom there’s a little area where you can type into, and then send to all panelists, and we’ll see those questions. If you don’t ask any questions, then you’re going to be left with whatever I’m interested in, in talking to them when the presentations are over.

    Please, please, please, throughout the presentation, send in your questions through that chat window, and I’ll be moderating them and will introduce them at the appropriate place.

    We have two wonderful speakers, and two great examples of attempts that are documented on the AHRQ Innovation Exchange.

Dr. Dennis Manning is Research Chair of the Hospital Internal Medicine Division at the Mayo Clinic in Rochester. He has served as Director of Quality and Patient Safety in his organization, and is on the faculty of the Institute for Healthcare Improvement, and he’ll be telling us about some work that he did with reminder cards for people on multiple meds.

Our first speaker, though, is Cathleen Colón-Emeric. Cathleen is an Associate Professor of Medicine at Duke University Medical Center, and her research focuses on translating evidence-based guidelines for geriatric care into clinical practice in skilled nursing facilities, a particularly important kind of context to work in that is often different, and she’ll be talking to us about some work that she’s done with voluntary osteoporosis education modules for nursing home physicians and staff.

    Let me remind you, please, chat, chat in your questions. We’ll stop after each presentation, just to ask some specific questions around the content of that one, and then we’ll also save some time at the end to have some general discussion about what they’ve learned from the attempt at innovation. But with that, let me turn over to Cathleen, and…

Cathleen Colón-Emeric (Innovator): Thanks very much. I’m excited to be able to share with you our study today. The Secondary Prevention of Osteoporotic Fractures in long-term care study was really a big collaboration between investigators at Duke University, at the University of Alabama, and the state quality improvement organizations in North Carolina and Arizona. Next slide please.

    As a little bit of background, between 60 and 90% of nursing home residents have osteoporosis, and that results in one of the highest fracture rates in any population—about 13 fractures per 100 person years in a nursing home.

In addition to this, about 60 to 75% of folks after a hip fracture will spend at least some time in a nursing facility, receiving rehabilitation, and this offers a prime opportunity to initiate secondary fracture prevention strategies in that population. Despite this prime population, however, only about a third of the residents with known osteoporosis or a recent hip fracture receive any sort of osteoporosis treatment in nursing homes. Next slide please.

    So the aim of this study was to develop a multimodal osteoporosis intervention that was targeted to medical providers, nurses and administrators, and to test whether the intervention changed prescription of fracture prevention therapies in intervention homes compared to control homes who received a delayed intervention. Next slide please.

    We invited all the nursing homes in North Carolina or Arizona with at least ten eligible residents, to participate in the study. The residents had to have either a diagnosis of osteoporosis already established, or a hip fracture within the last 180 days. They had to be ambulatory or transfer independent, and they couldn’t have any life limiting diagnoses that would preclude treating them with osteoporosis therapies. We also wanted them to have a length of stay of at least four weeks, so they had an opportunity to be evaluated and treated for osteoporosis.

Now, because a single physician may see patients in multiple nursing homes, we actually randomized clusters of homes with the same physician into either an early or delayed intervention arm, to try to minimize contamination. We also block-randomized by state so that there were an equal number of homes in the intervention and control arms in both North Carolina and Arizona.

    Our original goal was to enroll 128 homes, and we were actually powered at that level to detect a change in fracture rates in the nursing homes. Unfortunately, despite multiple calls and visits and letters and arm twisting, we were only able to enroll 67 out of 249 eligible nursing facilities into the study. Now, for that reason, we changed our primary outcome measure from fracture outcome to a quality indicator outcome, which I’ll describe a little bit later. Next slide please.

Our intervention consisted of a multifaceted, very typical quality improvement intervention, but one that had been shown to be effective in ambulatory outpatient settings. One component of that intervention was educational modules. These were case based, and there were separate modules, one targeted for the medical staff and one targeted for the nursing staff. They could be completed on paper or over the Internet, and the Internet version contained links to the evidence and to various tools, for example, fall prevention tool kits.

Physicians and nurses received continuing education credit for completing the modules, and the content really focused on identifying osteoporosis risk factors, doing an osteoporosis diagnostic evaluation, and evidence-based fracture prevention strategies. Next slide please.

Here’s a screenshot showing you one of the educational pieces from our educational module for the physician group. Next slide please.

Another aspect of our intervention included audit and feedback reports, which were provided to all of the nursing home administrators. These compared that particular home’s performance to their peers on a variety of quality measures for osteoporosis care, and we also provided a suggested target performance for that facility, based on the performance of the 90th percentile home in their peer group, something called the achievable benchmark of care, or ABC measure. Next slide please.

On this slide you can see a sample audit and feedback report that we gave to the administrators. We would tell them, for example, that they were facility number 2, that currently ten percent of their residents were receiving any pharmaceutical prevention for osteoporosis, and that put them below the 20th percentile compared to their peers. Their goal performance would be about 50% of their residents receiving any pharmaceutical protection. Next slide please.

    We also did two osteoporosis teleconferences that were focused on designing osteoporosis quality improvement strategies for the nursing home. These were for combined medical and nursing staff together, and we did eventually audiotape that and send CDs of the audio conferences to all the physicians and directors of nursing in the intervention group.

And finally, we did academic detailing. State opinion leaders called all the physicians’ offices in the intervention group, encouraging their participation in the project and answering questions about the study and about osteoporosis treatment in nursing home residents in general. Next slide please.

Our study timeline is here. We had a six-month pre-intervention period where we did a baseline medical records abstraction to see their baseline performance on the osteoporosis quality indicators. We had a nine-month intervention period, and you can see multiple contact points doing the audit and feedback, educational modules, teleconferences, academic detailing sessions, and lots of reminders, faxes, calls, and letters sent. We then did a follow-up medical records abstraction for the preceding six months, and then finally we repeated the intervention for the control arm. Next slide please.

Our data were collected by nurses who were blinded to the treatment assignment. They abstracted the residents’ medical records, and also used the Minimum Data Set, which is a standardized data collection tool that’s used by all nursing homes in the country on all of their residents.

Our main outcome measure was a composite measure that we called any fracture protection, and that included the prescription of an osteoporosis medication other than calcium and vitamin D, or the use of external hip protectors to try to prevent hip fractures. We felt that calcium and vitamin D alone were necessary, but not sufficient, to protect against fractures.

Our secondary measures included osteoporosis evaluation variables, things like whether they ordered a DXA scan or checked a vitamin D level, and prescription of calcium and vitamin D supplements. Next slide please.

Our analysis was at the facility level, and we were looking at the change in the proportion of residents receiving any fracture protection in the intervention compared to the control facilities. We used multivariable GEE models and we tested time-by-group interactions, and although we didn’t have the power to look for fracture changes as we had originally intended, we had about an 80% power to detect a 17% improvement in any fracture prevention in the intervention homes, and a 95% power to detect a 20% change, and we felt that these were clinically significant improvements in performance. Next slide please.

Here I’m showing you some baseline characteristics of our intervention and control facilities. There were several statistically significant differences. The two that I would particularly point out to you were that more patients in the control facilities had a history of prior fracture (25% compared to 17%), and more residents in the control homes had dysphagia, which might limit some of the osteoporosis medicines that they could receive (21% compared to 15%). We did correct for these imbalances in our analysis. Next slide please.

So while we were merrily going about our intervention period, we realized that we had very little provider participation in the intervention group. Of the 33 homes that were in the intervention group, only four homes had one or more nurses complete the nursing educational module, and only seven homes had one or more physicians complete the MD module.

We were only able to directly communicate with a physician in 12 homes through the academic detailing sessions, and only 12 homes had at least one participant in the teleconferences. So fairly minimal participation.

To try to bolster engagement, we actually added to our protocol, sending out the CD of the teleconference, as I described earlier, and we added a couple of additional rounds of postcards and phone calls to the administrator, director of nursing, and physician, to try to encourage their participation. And finally, we did one extra round of audit and feedback dissemination to the nursing home administrators than we had originally planned. Next slide please.

The baseline treatment rates had lots of room for improvement. Recall again that these are very high-risk patients who have known osteoporosis or a recent fracture. Twenty-five percent of them were on no therapy at all; 36% were getting calcium and vitamin D only; about 19% were on a bisphosphonate, which is probably the standard of care for this group; 15% were on calcitonin; and 4% were on another osteoporosis medication. There was tremendous variability within the homes, with some homes having almost 80% of their patients on some fracture protection, and others having zero percent of their patients on any fracture protection. Next slide please.

And this slide shows our main results. To orient you, the vertical axis is the percent change in the proportion of residents receiving any fracture protection from the pre to the post period. The light blue bars are the intervention homes, and the dark blue bars are the control homes. So if there was zero change, there was really no difference between the pre and post periods, and you can see that there was really no change for bone mineral density testing, vitamin D levels, or prescription of calcitonin in either the intervention or control facilities.

    Both intervention and control facilities improved to a similar extent in providing calcium and vitamin D prescriptions. The intervention homes improved about 8% more than the control homes in providing bisphosphonates and any fracture protection, but this was not statistically significant, and it’s probably not a clinically significant or a clinically meaningful improvement in osteoporosis care either. Next slide please.

Because we had such limited provider participation, we decided in a post hoc analysis to look back to see if we could detect any impact of whether or not participating made a difference in osteoporosis care. So here in the plus column, we’re looking at homes in which at least one member of the staff participated in the intervention: at least one nurse completed the nurse module, at least one physician completed the physician module, and so on.

In the minus column, we’re comparing them to the control facilities, plus the intervention facilities where there was no participation in that aspect of the intervention, and you can see that those homes that actually did participate in the intervention were significantly more likely to improve their any fracture prevention therapy than those homes that didn’t have participation, with an odds ratio of four to five. There was a twenty percent improvement in those that had at least one nurse complete the nurse module, compared to a two percent change in those that didn’t. So it looked as though if you could engage the providers in the intervention, they were much more likely to improve their care, compared to those that we couldn’t engage. Next slide please.

    So in summary, what we saw were small non-significant changes in osteoporosis management in our intervention facilities. Although our power was limited by low recruitment of the homes to begin with, we really felt that it was the low participation by physicians, nurses, and nursing home administrators, that was the main driver of our unsuccessful attempt.

We did see significant improvement when a physician or a nurse engaged in the intervention, but recall that these folks who are engaging in the intervention are likely to be quite different than the providers who aren’t engaging, and it’s not entirely clear whether it was our intervention or their motivation which really led to the significant improvement. Next slide please.

    So why did our innovation not result in the change that we expected, based on its use in other practice settings? Well, there are a couple of options, a couple of reasons why that may have been so. One is that the traditional quality improvement approaches that we used may not be effective in the nursing home setting. They haven’t been extensively tested in that setting, and it’s a very different environment and different system than an outpatient clinic or an inpatient hospital.

    Another explanation is that we might need to find better ways to engage providers. Our intervention might be fine, but the engagement is the really key step. And finally, it’s clear that there were unaddressed barriers to providing osteoporosis care that our intervention didn’t address.

We know from previous research about osteoporosis care in nursing homes that short length of stay of the patients, particularly the post hip fracture patients, may limit a provider’s willingness and opportunity to provide osteoporosis prescriptions. The cost of the medications may be a barrier, particularly if the nursing home has to foot the bill for that cost on their Medicare patients. And finally, regulatory oversight is a big issue in nursing homes, especially on the number of medications, and when you’re talking about osteoporosis, you’re talking about adding at least calcium, vitamin D, and another medication. So that’s an additional three pills to their already large medication burden, and they get scored on that by the state, so that’s another issue that we weren’t able to address with our innovation. Next slide please.

    So we learned a lot from this innovation attempt, and I’ll share some of that with you.

    First, we learned that what works in one clinical setting may not work in another, and you can’t take it for granted without doing appropriate testing. We need to consider patient population differences, staff roles and training, payment issues, whether there’s computer access for staff, and what competing programs and issues that system is facing as you’re trying to adapt an innovation that works in one setting to another setting.

    We also learned that even within the nursing home setting, each site is unique, and I believe we made the mistake of conducting some of our feasibility pilot testing in an academically affiliated, very friendly, very motivated, very well resourced nursing facility where they were able to do all of the things we asked them to just fine. We probably would have been better off conducting our pilot test in a more challenging, more real-world site, where we would have encountered, before we launched the whole study, some of the issues that we found when we rolled out our intervention. Next slide please.

    Oh, here we go. And then finally, we learned that implementation research requires quite a lot of patience, and probably more flexibility than other types of research. Because you don’t always know a priori what sort of barriers you’re going to encounter as you move into real-world clinical settings, I think it behooves us to design our protocols with enough latitude to be able to adapt our interventions mid-study if needed: to add some extra contacts, to add an extra educational session, and so forth.

    You also need to closely monitor participation as you’re going along, and I think, in retrospect, we would have been wise to have an interim analysis built into our study plan, so that we could see midway what the likely end result of the study was, and perhaps change the protocol, or stop the study early when it looked like it was going to be futile to continue.

    Of course, the downside of this is that it requires additional sample size to be able to do that interim analysis. The flip side of being flexible in your implementation research is that you do need to make sure you’re maintaining methodologic rigor and carefully monitoring treatment fidelity: how many points of contact you’re getting, what the dose of the intervention is, if you will, in your facility as you go along. And carefully document all your decisions as you’re making them in your study log. Next slide please.
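    As a rough sketch of what an interim futility check can look like, here is one way to estimate conditional power by simulation, assuming a two-arm design with a binary facility-level outcome. Everything here, including the counts, rates, and the futility threshold mentioned in the comment, is hypothetical and not drawn from the study.

        # Hypothetical sketch of an interim futility check via simulated conditional power.
        # None of the numbers below come from the study described above.
        import math
        import random

        def conditional_power(n_per_arm, interim_ctrl, interim_trt,
                              assumed_p_ctrl, assumed_p_trt,
                              n_sims=5000, z_crit=1.96):
            """Estimate the chance of a significant final two-proportion z-test,
            given the interim outcomes so far and assumed true event rates."""
            remaining = n_per_arm - len(interim_ctrl)
            wins = 0
            for _ in range(n_sims):
                ctrl = list(interim_ctrl) + [random.random() < assumed_p_ctrl for _ in range(remaining)]
                trt = list(interim_trt) + [random.random() < assumed_p_trt for _ in range(remaining)]
                p_c, p_t = sum(ctrl) / n_per_arm, sum(trt) / n_per_arm
                pooled = (sum(ctrl) + sum(trt)) / (2 * n_per_arm)
                se = math.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
                if se > 0 and (p_t - p_c) / se > z_crit:
                    wins += 1
            return wins / n_sims

        # Hypothetical interim data: 30 of a planned 60 facilities per arm observed so far.
        random.seed(0)
        interim_ctrl = [random.random() < 0.10 for _ in range(30)]
        interim_trt = [random.random() < 0.12 for _ in range(30)]
        cp = conditional_power(60, interim_ctrl, interim_trt, 0.10, 0.12)
        print(f"Estimated conditional power: {cp:.2f}")  # a very low value (e.g., < 0.2) suggests futility

    In a real trial this kind of rule would be pre-specified, and, as the speaker notes, the interim look has to be accounted for in the sample size.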

    So based on our unsuccessful innovation attempt, we really tried to move toward more systems interventions. Rather than trying to change individual provider behavior, which we found very, very challenging in the nursing home setting, we wanted to implement interventions that made it easy for providers to do the right thing, and harder for them to do the wrong thing.

    One thing we tried was to develop electronic order entry sets for osteoporosis care, based on clinical practice guidelines. We implemented these in several VA nursing homes that use electronic order entry, and we found that the providers liked them, but they tended not to use them very often because there wasn’t really a prompt; they had to remember that the patient had osteoporosis or a fracture. So we’re in the process of looking at linking these order sets to prompts via e-mails or other view alerts for their high-risk patients.

    And finally, we’re looking at having certain osteoporosis evaluation and treatment happen automatically in high risk patients through a nurse practitioner intervention when they trigger for high risk, based on a fracture or another event, allowing the provider to opt out if they choose to, but having it happen in the absence of the provider having to act on their own. Next slide please.
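    To make the opt-out idea concrete, here is a hypothetical sketch of that kind of trigger logic; the event names, order names, and function below are invented for illustration and are not the VA system’s actual implementation.

        # Hypothetical sketch of the opt-out pattern described above: a high-risk event
        # automatically queues default osteoporosis orders unless the provider opts out.
        from dataclasses import dataclass, field

        HIGH_RISK_EVENTS = {"hip fracture", "vertebral fracture", "fall with injury"}

        @dataclass
        class Patient:
            name: str
            events: list = field(default_factory=list)
            provider_opted_out: bool = False

        def queue_default_orders(patient: Patient) -> list:
            """Return default orders for high-risk patients unless the provider opts out."""
            if patient.provider_opted_out:
                return []
            if any(event in HIGH_RISK_EVENTS for event in patient.events):
                return ["calcium + vitamin D", "osteoporosis medication review", "fall risk assessment"]
            return []

        # A patient who triggers on a fracture gets the default orders automatically.
        print(queue_default_orders(Patient("example patient", ["hip fracture"])))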

    In this slide I’d like to acknowledge our many collaborators on this project, and our funding sources, and thank you for your attention today.

    Paul Plsek: Thank you, Cathleen. That was fantastic. There have been a couple of specific sort of context questions that have been chatted in. Maybe we should deal with those. There’s a couple of other general questions that I think I’ll save for after Dennis’ presentation, because I think most of you have similar thoughts on this.

    A context question someone asks: what is the fall risk assessment? How did you identify those at highest risk for falls and fractures?

    Cathleen Colón-Emeric: Yeah, that’s a very good question. So falls and fracture prevention are very tightly linked, as you know. In nursing home residents, most fractures result from a fall. And so in our educational modules we did, particularly for the nurses, spend a fair bit of time talking about the fall risk assessment and linking it to the practice guidelines for fall prevention in nursing homes. So that was the primary way we tried to address fall risk in our intervention.

    Paul Plsek: And so it was in the educational modules that…

    Cathleen Colón-Emeric: It was in the educational modules, specifically for the nursing staff.

    Paul Plsek: Yeah, yeah. And, of course, not all the nurses actually completed the educational module, right?

    Cathleen Colón-Emeric: That’s right.

    Paul Plsek: And another specific question someone asks: were there any incentives, monetary or otherwise, or any particular signed agreements to participate?

    Cathleen Colón-Emeric: Yeah, great questions. So all of the nursing home administrators and directors of nursing signed an agreement to participate in the study. We did not have the funding available, unfortunately, to provide any sort of incentive, either for the homes as a group or for the staff as individuals, to complete the module, and that approach has been very successful in other studies. So that would certainly be something to consider.

    On the other hand, I don’t know that that’s a realistic option for real life practice outside of a research study, so I think just providing incentives may not be the only answer here.

    Paul Plsek: Yes. But the, but the presence of the signed agreements didn’t seem to make that much difference?

    Cathleen Colón-Emeric: It didn’t seem to make that much difference. They were willing to let us come in and contact them, and give them the materials, but they didn’t seem to spend a lot of energy promoting the intervention among their staff or their physicians.

    Paul Plsek: Yeah. Okay. And a third kind of specific question before we move on to the next presentation, and then we’ll have a more general chat after Dennis’. Can you say just a little bit more about your next steps? How did you choose them? What was the link between what you learned and the specific choices that you made on your next steps slide?

    Cathleen Colón-Emeric: All right. Well based on your sort of discussion of, you know, either the context or the idea or the evaluation, we really felt that in our study it was the context that was the major barrier for our innovation being successful. There’s just so many competing demands on the nursing home staff’s attention.

    The physicians are there for such a short amount of time. Many of them have, you know, primary care offices or other practices, and spend a relatively small amount of time and energy in the nursing home. It just seemed like trying to continue to pursue individual clinician behavior change was going to be too challenging in this particular context. So that’s why we really felt we needed to shift our innovation, our idea if you will, to something that was more of a systemwide intervention and didn’t require multiple providers to change their behavior.

    Paul Plsek: Okay. Great. Well thank you. That’s a very interesting study, and thanks for sharing that with us. It was interesting to hear your voice as you told it; at points I detected a sigh or two here and there.

    Cathleen Colón-Emeric: We heaved lots of sighs over the study, but we did learn a lot.

    Paul Plsek: That’s great. And you – it’s fine to sigh, as long as you’re learning something.

    Cathleen Colón-Emeric: That’s right.

    Paul Plsek: All right. Well we’ll come back to you if you can stay on the line, with…

    Cathleen Colón-Emeric: Sure.

    Paul Plsek: as we bring in Dennis.

    Dennis, can you share with us the work that you’ve done with patient medication reminder tools?

    Dennis Manning (Innovator): Sure, Paul. Thank you. This is Dennis Manning at Mayo Clinic, Rochester, speaking on behalf of my colleagues Karyl Tammel, Clinical Nurse Specialist, and John O’Meara, PharmD. We had some extensive discussions about the optimal outcome that we desire, which is the patient’s understanding of increasingly complicated multi-drug regimens.

    As you know, our patient education colleagues call this self-efficacy: when a person really understands their medications and the complicated regimen that they might be on. Our idea was that a better, or optimal, reminder card had not yet been invented or developed, and we wanted to work on that.

    For some of us, on the next slide, this quest goes back almost 20 years. When I was in practice in geriatrics and cardiology, there were many people who had multiple medications, but about once a week I’d encounter a patient, or a family member of a patient, who was totally bewildered by a complicated regimen, usually more than six medications. The person might have had low health literacy, but either they expressed doubt, or they just gave you that unmistakable look: I’m confused here. Next please.

    The innovation at that time was very low tech, and this was a solo practice, so I was able to implement it without any difficulty after I had a printer print up about 500 of these grids on 8-1/2 x 11 cardboard cards, with columns and rows that I would write on so that the patient would understand. Next please.

    This is a picture of one of these medication reminder cards, where I hand wrote the medications for the patient while they were in the office. There’s the reason for the medication, that is, the indication; the strength of the tablet or pill; and also how many units they should take at what intervals in the day.

    It also afforded a column where the patient could either describe the medication in their own words, like “green pills,” if they wanted to write that down, or actually affix one of the medications onto that slot with Scotch tape. That seemed to diminish the angst of the person, the patient or family member, and many people brought these back into the office on subsequent visits. Next please.

    So the evidence for this was just the personal testimony, or anecdote, of the persons for whom I charted this out, and as I say, this was probably less than two percent of my practice, the ones I found to be confused by their regimen. The advantage was that it was cheap and available. The disadvantage was that it took about five minutes of my time. Next please.

    Fast forward to coming to Mayo Clinic: I found some colleagues on the Patient Safety Committee who expressed similar interest in patients’ understanding of complicated regimens.

    On the next slide: as you know, discharge from the hospital, and care transitions generally, have become a national safety hazard, and at that point the literature was attesting to the confusion that occasionally occurs and the hazards that such misunderstandings can occasion.

    At that point Mayo Clinic already had about ten years of experience with a tool that was already in use, the patient medication worksheet, which you’ll see on the next slide.

    This medication worksheet is available to the nurse when the patient is discharged from the hospital. Because the discharge summary is always completed at least just before discharge, the nurse can choose to generate it by electronically selecting the worksheet; the medications on the left side are automatically filled in by the computer, and all the nurse needs to do is check a few boxes for the suggested times the patient might take them. The form can then be printed out relatively quickly.

    It seemed to our colleagues, though, that this wasn’t perhaps the optimal tool, and that maybe we ought to incorporate some of the elements of the previous tool that I had shown you, so that we could make an even better one. Next please.

    So John O’Meara put together a database of about 900 medications, with the comments and cautions that you might want to include, so that we could make (next slide) a somewhat more formal, personalized medication reminder card that we call the Durable Display at Discharge, nicknamed 3-D. 3-D is also a pun, because the card becomes almost three dimensional when a person elects to affix one of the pills in the display column there.

    This also has a larger font, which from a human factors standpoint was desirable, and the font is sans serif so it’s as clear to read as possible. It occasionally has the trade name, if that’s relevant or the patient knows the drug by that name, and the unit dose. It also shows how many of those units one takes at a certain interval.

    It affords the possibility of having the indication there, and that’s amendable by the person who’s editing the Durable Display. The comments or cautions come in automatically from this 900-medication database, and as you can see, only the ones that were the most frequent or the most significant were selected, the ones you might want to warn the patient about or have them know about.

    In addition, we put in a reminder about medications you should no longer take, the reconciliation piece, and there was a date on it. Next please.

    So there were IT and IS issues, because our current EMR does not accommodate this sort of thing; it would take a lot of programmer time to develop these. There also wasn’t any published basis for claiming that this was better, and our colleague who is an epidemiologist said, well, it looks nice, but how do we know this won’t backfire and make the patient even more confused? A question only an epidemiologist could love. Next slide.

    So we decided to do a randomized study comparing the current tool at Mayo Clinic, the patient medication worksheet, with the 3-D. Patients were called back after their discharge by trained surveyors and were asked about their satisfaction with the tool, asked about their understanding of their medications in a three-question pop quiz, and asked about self-reported errors (next please), and we published this in Quality and Safety in Health Care. Next please.

    We included people who had more than four medications; in fact, they had an average of nine medications. They were phoned back, and the following slide shows the results. Next slide please.

    There was no difference in age or gender between the groups, and the nurse completed the task in about the same amount of time as with the previous tool. Satisfaction was high for both tools, with no significant difference between the groups. Self-reported errors were also low, but again with no statistically significant difference.

    It should be noted that although we had almost 300 patients randomized into this, there was 40% dropout, patients we could not get in touch with, who were too sick, or who had other difficulties, and one quarter of the remaining patients could not remember either card having been given to them, so we were left with a far lower number in our study. Next please.

    One positive finding was a statistically significant greater understanding of their medications: more people who received the 3-D got the quiz answers correct than people who received the standard tool. Next.
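    As a small illustration of the kind of comparison being described, a two-proportion test contrasts the share of correct quiz answers in the two arms; the counts below are invented for illustration and are not the published study’s data.

        # Illustrative only: hypothetical counts, not the published study's data.
        import math

        def two_proportion_z(correct_a, n_a, correct_b, n_b):
            """z statistic for the difference between two proportions, using a pooled SE."""
            p_a, p_b = correct_a / n_a, correct_b / n_b
            pooled = (correct_a + correct_b) / (n_a + n_b)
            se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
            return (p_a - p_b) / se

        # Hypothetical: 52 of 70 patients with the 3-D card answered correctly,
        # versus 38 of 68 patients with the standard worksheet.
        z = two_proportion_z(52, 70, 38, 68)
        print(f"z = {z:.2f}")  # |z| > 1.96 corresponds to two-sided p < 0.05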

    So we were surprised that we couldn’t measure, at least with these instruments, the difference that we thought existed in the quality of the two tools, in either self-reported errors or patient satisfaction, although these might have been fairly gross measures. It did appear that the 3-D promoted a little better understanding of the prescribed medications. Next please.

    The 3-D tool, when it was shown around along with the study results, was endorsed by the Nursing Practice Committee and the Patient Education Committee, but there are many other projects in the IT and IS queue, the tool does not blend with our current platform, and it is not currently a hot priority. So in a certain way, we have so far failed to change institutional, or perhaps national, practice (next please), but there were many lessons learned along the way.

    First of all, the current tool is probably pretty good; ergo, maybe there’s only a little delta. Now, the patients actually had an average of eight or nine medications, but the subjects weren’t selected for their bewilderment. They were just selected on the basis of having more than four medicines and a willingness to participate. So perhaps this wasn’t the most grateful population, compared with my previous practice context, where I only trotted it out for the two percent who were really bewildered. But we also learned that a smallish delta does not an implementation make.

    We also learned that it is difficult to retain subjects after hospital discharge by phone query alone; we had not paid or reimbursed the patients for their time or trouble. We also learned that compatibility with your current IS platform is key, and we wondered whether other target audiences might be more interested in such a tool. Next please.

    We had some very unexpected findings. When we showed the 3-D tool around on campus, we found some staff who were very enthusiastic about the tool for their own use and workload. For example, at a preoperative evaluation clinic where anesthesiologists, nurses, and respiratory therapists prepare patients for next-day surgery, they said, we really want this tool. We would like patients to bring this to us so that our medication reconciliation challenges can be eased, saving time and increasing comprehension. Next please.

    So what we would recommend, based on this, is to remember that IT and IS resources are limited and may be committed long in advance, and that compatibility with your platform is key.

    We also learned that attention to human factors and ergonomics is a very helpful scientific discipline, and that’s what we tried to attend to in this work.

    We also learned, and I’ve learned in the past, that inexpensive and low-tech 3-D tools, or something like them, can suffice. High-tech versions are hard to make, but we may find the motivation, and manufacturers, for them in the future. And we found that when you get positive feedback, regroup, and have the support of like-minded colleagues, sometimes you do have a good idea; it’s just that you need a different context or observation to further the idea. Next.

    So thanks for your attention.

    Paul Plsek: I thank you, Dennis. That was great. A couple of specific questions that have been chatted in, and then we’ll move into a general discussion about failure.

    A specific question: if the hospital patient is set for discharge on, say, Wednesday morning at 11:00 AM, when should the nurse be giving the patient the Durable Display at Discharge? What’s the timing?

    Dennis Manning: Well, we found that the optimum would be at least a few hours before discharge, say 9:00 o’clock, but even better than that would be the previous afternoon or evening, because then, Paul, this would allow for a dialogue back and forth long before that final moment, and perhaps the tool needs to be amended, or reconciled; the patient says, no, I have 80-milligram tablets of Lasix, not 20 milligrams. So the earlier the better, maybe even a day ahead, if the anticipated day and time of discharge is set and known.

    Paul Plsek: Yes. So it’s a tool for creating greater understanding, and just having the tool by itself may not create greater understanding. It’s the conversation, it’s the reflection, and those sorts of things. Well, what we know about kind of human cognition, right?

    Dennis Manning: Right.

    Paul Plsek: Yeah, yeah. Did – you had a column in the example that you showed, that was a display column, and you said the patients could actually affix a medication to it. Kind of how many people do that? Might they make a mistake? Might they put the wrong tablet in there and then the tool is actually reinforcing the wrong behavior? Why not put a picture, a computer generated picture of the med there?

    Dennis Manning: Right. Great question. It could be that a picture could be put in by a manufacturer of this tool; we didn’t have the capability of doing that. But we also wondered about the supplier of a medicine: let’s say it was a generic, it might look different from one supplier to the next, and we might not be able to anticipate which picture should go in there.

    But it is true that we’re at the mercy of the person placing the right medication, and if they didn’t, if they put the red pill when it’s really the green pill, they would be systematically making an error. That is something that is mitigated if the patient brings it back in, or if somebody who is mentally capable of doing this sets it up in the first place, perhaps at the pharmacist’s office or by checking with the pharmacist, or by double checking with a family member that they got it right.

    We don’t know, in the discharge study, how many patients actually employed it in that way. I do know that when I used these low-tech ones in the office, most people did put medications on it and seemed quite willing to waste a pill in order to promote their understanding, and I was able to check that they got the right medication on there if they brought them with them.

    Paul Plsek: Yeah. And of course, without any tool at all – you know, I know lots of people do this – they just kind of dump all their pills into a single bottle and try to remember which one is which…

    Dennis Manning: Right.

    Paul Plsek: …that’s the default approach.

    Dennis Manning: Fraught with hazards.

    Paul Plsek: Yeah. And one final detail question for you before we bring Cathleen back in for a general discussion. There was something that seemed maybe a little contradictory: you showed a greater understanding of the medications, but not any change in patient satisfaction. Do you want to comment on that?

    Dennis Manning: Yeah, right. The satisfaction was high for both tools. On a Likert scale of one to five, it was four point something for both tools, so we couldn’t measure a difference there. They liked them both; they were given something, and generally, they liked them both. And the same with the errors: there were very few errors reported, so no difference. But the pop quiz was actually an objective quiz, not the patient’s subjective assessment. So I think that was just a bit more objective, and it was a different domain of measurement.

    Paul Plsek: Yeah. Great. Well, Cathleen, unmute your phone and come back on, and let me encourage all of the participants, please type in some questions. In the Chat window you can select to send to all panelists in the Send To box, type your question in the text area below the Send To box, and just hit return, and it comes in.

    We’ve had a few from people, and we’ll keep going, but keep adding, please, your comments and questions.

    And so with Cathleen back on the line, and kind of remembering Cathleen’s sighs at various points in her presentation, and Dennis, it seems like you’ve been active on this for nearly two decades, I wonder how many times you’ve sighed. You mentioned in your last slide the role of perseverance, and I wondered if both of you could comment a little bit: what’s the role of personal perseverance if you’re going to be an innovator, because you have…

    Dennis Manning: Both things. You know, when I was using this in private practice and I pulled this thing out, there was a lot of feedback right away from the patient or the family member, and they said thank you. I mean, there was real, honest feedback that you’re solving a problem for an individual.

    When we’re trying to change institutions and institutional practices, and we’re trying to influence a national dialogue, that’s a little more daunting a challenge, but certainly both can be satisfying. The feedback is different, and we need the help of our colleagues to stay energized to continue the innovation in the long run.

    Paul Plsek: Yeah. Cathleen, you want to comment on it?

    Cathleen Colón-Emeric: I’m sorry. I think you have to go in with the attitude that you’re going to learn from whatever happens, and you’re going to take the successes and build on them, and you’re going to figure out why things didn’t work out the way you expected and make the necessary changes, and keep on going.

    Paul Plsek: Yeah. And so what’s the role of kind of keeping up the morale of a larger team? Did…

    Dennis Manning: Well, you mentioned, Paul, the reflective practice, and that’s where, you know, after this study, several of us got together and said, well, now what do we think about these results? We think it’s a good idea. Why didn’t it work quite as dramatically as we thought it would? Yet, with each other’s feedback, we’re still somewhat convinced.

    I mean, one possibility is that you get feedback and everybody says, oh, it was a dumb idea, we ought to just discard it. But no, others observed, look, I’m not sure we’re testing this in the right context; maybe we need a different one. So in a multidisciplinary approach you get the feedback of other people’s understanding, which is greater than your own, and the group comes to mutual support when there seems to be a nugget of a good idea.

    Paul Plsek: Do you have a similar experience, Cathleen? Are you working with a group of collaborators, and how does that play out in the group when things don’t turn out as you hoped they would?

    Cathleen Colón-Emeric: Yeah. I think that the multidisciplinary nature, and the getting diverse perspectives from people with different backgrounds and different experiences, is really helpful because, you know, I’m a physician, and then the nurse on the team doesn’t think like I do, and so it’s incredibly important to have her input into the process.

    By the same token, it’s also helpful to go back to the folks whose behavior you were trying to change and talk to them about what happened, maybe through a focus group or a survey, or even just informally, to hear from the front line what their experience was with what you were trying to do, and, from their perspective, what should be done differently.

    Paul Plsek: Yeah. I’m doing some research in the UK, looking at the development of a good instrument to assess the culture of innovation within an organization or a team, and one of the dimensions that stands out when you look at research across a variety of industries is relationships: a sense of teamness and a valuing of diverse ways of thinking, both in the generation of the ideas and, of course, in the processing of the learning that might occur from an attempt that didn’t go the way you’d like it to.

    Cathleen, can you talk a little bit about – I know there are quite a number of people on the line who have come on because they work in the nursing home industry. I wonder if you could comment a little bit on the context there. What are some of the possible differences between nursing home systems and other health care systems that might have an impact?

    You said at the beginning that the interventions that you chose had been shown to be effective in ambulatory care, and yet they didn’t seem to have the same effect in nursing homes. Can you talk a little bit about that very specific context?

    Cathleen Colón-Emeric: Sure. I think there are a number of relevant differences. One, the patient population is a little bit different. They tend to be older, they tend to be frailer, they tend to have more co-morbidities. More of them are close to the end of their life, and so it actually may not be reasonable to expect that all patients in nursing facilities are treated for their osteoporosis. On the other hand, there are lots of folks who would still benefit. So dealing with competing co-morbidities in a patient is an issue.

    I think there are staff differences that are very important to consider as well. They’re not called nursing homes for nothing. The nurses are there all the time, and the physicians flit in and out in a very brief amount of time, and so any intervention absolutely has to include the nursing staff as key players, and the administrators of the nursing home as key players as well. Whereas in an ambulatory care practice, an intervention would typically target primarily the physician staff.

    I think a third difference is some of the systems pressures that are unique, or at least more prominent, in nursing homes: some of the regulations that I mentioned. Nursing homes are the most highly regulated health care setting. There are the payment issues; for some of their patients they receive a capitated payment, so they may have a disincentive to prescribe expensive medications or do a lot of expensive testing. And there are a lot of other competing demands on their attention: the lawsuits that go on, the state surveys that go on, and so forth. It makes it a challenging environment, and it requires modifications to the innovation.

    Paul Plsek: Any kind of general—either one of you, Dennis or Cathleen—any kind of general advice that you’d give to others on the line about how to assess context, how to kind of look at a setting? I mean, you’re both in very different settings, but in general kind of thoughts about how you approach that if you’re trying to make something happen?

    Dennis Manning: Well, for me, I certainly realized that I had come from a solo private practice to one of the largest academic medical centers in the world, and that sort of change just hits you upside the head. I mean, that’s a total change. But certainly, like-minded people who are addressing the same patient-centered outcome problems are sometimes as enthused as you are, if you have an outcome that is very important to everyone. The ideas may be a little different, but remember that the context at a given institution depends on its size and complexity, and, as you mentioned, I think, on organizational attention and resources.

    How you get organizational attention and resources is a big question at every large organization, and in our context it goes through a committee, a committee scrutinizing published data, so you have to get the thing into the literature. And that takes a different timeframe; you can’t think in terms of days or weeks. You’ve got to be thinking more in months and years, because those cycles go a little bit slower.

    Paul Plsek: And so a focus on kind of how do you secure resources is an important thing to assess in a context as you go into it.

    Dennis Manning: Right, exactly. And those resources are precious. There are many other good competing ideas in different domains. What will work in some hospitals, or some academic hospitals, won’t work in other ones, so figuring out what works in yours, and getting the multidisciplinary group’s and leadership’s feedback about what it would take to bring this idea to fruition, is locally very different, I think, in different contexts.

    Paul Plsek: Yeah, yeah. And that…

    Cathleen Colón-Emeric: Another point to make here, I think, is that it’s important, early on in the process of developing your innovation or intervention, to get input and buy-in from your target users. So we conducted focus groups with directors of nursing and with medical directors of nursing homes as we were developing our modules, our teleconferences, and our other interventions, to make sure that we were not just telling them what we felt they needed to know, but also covering what they wanted to know, answering their questions, and trying to address some of the barriers that they identified up front.

    Obviously we still had issues with getting people to participate, even once we tried to address that. So that isn’t the whole answer, but I think that that’s an important point as well.

    Paul Plsek: Yeah, yeah. Kind of related to the issue of understanding how the resources are playing out: as people signed up for this conference, they were asked if they had any questions, even before they heard the presentations. There were a variety, and I’ve been asking a few of those, but the one that I thought was particularly interesting is a rather simple question: how do you report negative findings to grantors? And somebody else framed that as, have you learned any elegant ways of saying that you failed?

    Either one of you want to try to field that question, how do you report it to grantors if you’re on the research grant and it hasn’t quite worked out the way you hoped it would?

    Dennis Manning: Well, I think it’s some of the things that you’ve identified, Paul: you have to identify what lessons were learned, either in this domain or lessons that could be applied somewhere else, and that would assuage some of the pain of the failure of the main hypothesis. That would seem to me to be the approach, although I don’t have a large experience with external funding. Maybe Cathleen does.

    Cathleen Colón-Emeric: Yeah. I think this is what one of my collaborators calls trying to make a silk purse out of the sow’s ear with your data, and it is challenging. I think it comes back to the process that Paul was talking about at the beginning: really carefully reflecting on your results and trying to understand what they mean and what you’ve learned from them. And sometimes what you’ve learned from them is actually as important, or maybe even more important, than if you’d had a positive result. And I think reporting that in good faith is helpful.

    Sometimes you can do secondary, post hoc analyses to try to explore what happened. Are there subgroups where it worked and subgroups where it didn’t work? Did participation matter, as we looked at in our data? So there are some secondary analyses that may help you make sense of your data and move the field forward.

    Paul Plsek: Yeah, and I think we do need to go back to the essence of the scientific method as you do this, and it is a shame that we have to talk about it with euphemisms like trying to make a silk purse out of a sow’s ear. In fact, it is the essence of breakthrough and learning and progress in science that we learn from failures; it’s just as important to pick up those lessons learned. It’s not being a spin doctor, or any of those pejorative terms; it’s just part and parcel of what needs to happen. But I guess it is that kind of social pressure that leads to the often reported finding that unsuccessful research is underreported in the literature.

    Dennis Manning: Paul, I have a question, a couple of questions from the colleagues here.

    Paul Plsek: Sure.

    Dennis Manning: Can I do them? One of them is, why are some patients not using the tool, meaning the 3-D tool, and that’s a great question. It goes to what Cathleen was saying: maybe we ought to ask the folks. And I don’t think we really did ask all the patients who weren’t using the tool, well, why didn’t you use it, or why don’t you remember the tool at all? I think that goes partly to the fact that we were applying it here to all patients who signed up and had a number of medications, whereas in my former practice I always trotted this tool out when a person was definitely bewildered. That’s a more select population. So I think when you apply a tool or a remedy on a blanket basis, maybe you’re not getting quite as much uptake of the idea.

    Paul Plsek: Good.

    Dennis Manning: There was another question: if the tool has a lot of face validity but there’s a smallish delta, might we measure provider satisfaction? I think that’s a great idea. That’s one of the side learnings that came out of this. What we think we’ve discovered is the format in which providers want to see medications: they want to see the indications.

    I mean, when I get a patient from a neurologist and there are three medicines that are strange to me, I’m not sure whether they’re on Mysoline for seizures or for some other indication, and you’re trying to guess. If you had that indication right next to it, and the comments or cautions, all of us would be edified, because we’re all generalists with respect to other people’s specialties.

    Paul Plsek: Yeah. Cathleen, I wonder, have you had any specific questions chatted into you? I’ve been trying to monitor those that have been chatting into me, but obviously Dennis has gotten a couple that I haven’t seen. Do you have any?

    Cathleen Colón-Emeric: Doesn’t look like I do.

    Paul Plsek: All right. Dennis, in your presentation you talked about a published paper, so just getting back on the thread of publishing and reporting things: you did have a paper that was published in Quality and Safety in Health Care. It wasn’t clear – were you actually able to report your negative findings in that, or was that…

    Dennis Manning: Yes.

    Paul Plsek: …In a paper?

    Dennis Manning: Yes. It was published actually under the section called Quality Improvement Report, which I think, you know, allowed us a broader range with this. I think the amount of dropout we had in the study is why it perhaps didn’t get a larger play, but it certainly did get published, and it did report results, some of which were negative and one of which was positive. So it at least got into print, which at least gets you into other people’s conversations, and that’s where you go back to the context.

    I mean, if we didn’t have something in print, I wouldn’t have anything to walk into any committee with. If it doesn’t come out in a peer-reviewed journal, it pretty much doesn’t exist to the audience at academic medical centers.

    Paul Plsek: Yeah, yeah. That’s certainly the Mayo culture that you’re speaking of.

    Dennis Manning: And it doesn’t come in your mailbox.

    Paul Plsek: Cathleen, have you published any of these results, or have you had any interaction with editors on trying to put a paper together on this?

    Cathleen Colón-Emeric: We were actually able to publish ours in the American Journal of Medicine, and they actually didn’t express concerns that it was a “negative trial.” I think – we tried very hard not to try to spin it as a positive trial with our participation data. We tried to be very honest and say that this didn’t work like we expected it to, although if we could get people to participate, we did see a difference, and they were very open to that.

    Paul Plsek: Yeah. So is it your suggestion, at least from a sample of two, that it’s not journal editors who are responsible for the underreporting of negative findings, but authors themselves who don’t actually give it a go? Hmm.

    Cathleen Colón-Emeric: I think you can find a home for it, and I think you should be honest in reporting your real findings and your primary outcome. I often see these articles published and I kind of wonder, gee, was that really their primary outcome to start with, or did they have a negative primary outcome and they’re reporting one of their secondary outcomes as the primary? So I encourage people to be honest, because you really can learn just as much from these unsuccessful findings as you can from the successful ones.

    Paul Plsek: Yeah. And if nothing else, for those of you on the call who may also have an innovation attempt, certainly the AHRQ Innovation Exchange is gladly accepting those, and we’re writing them up. We think it’s part of a contribution to innovation in general in health care.

    So, if we were going to summarize this, we’ve got the title slide up. For each of you, Cathleen and Dennis: what one or two points, in summary, would you make from the title here, Learning from Disappointment: When Your Innovation Falls Short? What are the one or two most important things you’d like people to remember from this? Cathleen?

    Cathleen Colón-Emeric: I think implementation research is an ongoing process, and with every study we take a step. It’s not a step forward or backward, but a step that you keep building on, and with the additional knowledge gained from each attempt, I think eventually we’re going to reach the top of that ladder. So I’d just encourage people not to get discouraged and quit, but to keep on going.

    Paul Plsek: Yeah. Dennis?

    Dennis Manning: I’d say, learn and persevere. Absorb the reflections from your multidisciplinary colleagues, and good ideas might be picked up in ways you never expected. For example, through the AHRQ website, we’ve now had interest from the AMA Professional Standards Group, who are interested in our tool. We never would have expected that from the publication alone. So other avenues and other ideas might surface if you’re still willing to dialogue with individuals, and if your team is still convinced that there’s a nugget of a good idea. Persevere.

    Paul Plsek: Yeah, yeah. And so, Dennis, stay on here and answer this question for me. How does it feel to be one of the featured presenters in a seminar called Learning from Disappointment? What was the experience like for you personally, talking about it like this?

    Dennis Manning: Well, it’s a little humbling, but it’s also great. It feels great to be a part of this, and it feels great to have some of this work selected by AHRQ. Learning from disappointment is part of PDSA cycles, and I agree with Tom Watson of IBM as far as what we need to do to find better solutions.

    Paul Plsek: Yeah. Cathleen, how did it feel to be a featured speaker on a conference call called Learning from Disappointment?

    Cathleen Colón-Emeric: Well I got a lot of gentle ribbing from my colleagues about it, but I think it’s an important process, and I appreciate AHRQ’s attention to the topic, and appreciation that “negative findings” aren’t necessarily worthless. In fact, they have a lot to teach us.

    Paul Plsek: Yeah. Great. And I’ve got to say, it’s been really fun interacting with the two of you as we planned this, and certainly we went unscripted into this last 30 minutes or so, not knowing what sort of questions would be chatted in, and I thank you for sticking with me and playing along. But with that, let me turn it back over. I think it’s Judi who is going to wrap this up. Is that correct?

    Judi Consalvo: Yes, it is, and I’m sorry to see this come to a close. It’s been great and interesting. Cathleen and Dennis, we have to thank you for sharing your experiences with us. You’ve imparted a lot of information to our audience today. And Paul, we appreciate your knowledge and innovation, and thank you for moderating what I feel, and AHRQ feels too, is a very worthwhile discussion. As you’ve just mentioned, we have your innovations up on our Innovations Exchange site, and again, we hope all of our audience will go visit us and check this out.

    And to the audience, thank you for joining us and staying with us, and we hope this has been a learning experience for you too. We value your feedback, and hope you can spend a few minutes completing the evaluation that’s about to appear on your screen. And again, you can also contact us at any time at info@innovations.ahrq.gov. Thank you.