April 2009
Improving Innovations: How to Make Data Work for You
Webinar Host: Judi Consalvo, AHRQ Center for Outcomes and Evidence
Moderator: Brian Mittman, Director of the VA Center for Implementation Practice and Research Support
Innovator: Michele Campbell, Christiana Care Health System

View Slides

Listen to Audio file

Read Transcript
Panel Slides

Webinar Host: Judi Consalvo, AHRQ Center for Outcomes and Evidence (Slides 1–6)

Moderator: Brian Mittman, Director of the VA Center for Implementation Practice and Research Support (Slides 7–9)

Innovator: Michele Campbell, Christiana Care Health System (Slides 10–21)

Experts:

Debra Rog, Westat (Slides 22–27)

Eugene Nelson, Dartmouth-Hitchcock Medical Center and The Dartmouth Institute at Dartmouth Medical School (Slides 28–47)

Slide 1


Improving Innovations: How to Make Data Work for You

A Public Webinar
April 2009

Slide 2

What Is the Health Care Innovations Exchange?

Searchable database of service innovations

  • Includes successes and attempts
  • Wide variety of sources, including unpublished materials
  • Vetted for effectiveness and applicability to patient care delivery
  • Categorized for ease of use: extensive browse and search functions
  • Innovators’ stories and lessons learned
  • Expert commentaries

Learning opportunities

  • Learning Networks: A chance to work with others to address shared concerns
  • Educational content
  • Web Events featuring innovators, experts, and adopters

Slide 3

Webinar Series

  • January 2009 -- Engaging Stakeholders: How to Obtain and Retain Buy-in for Your Innovations
    • Archived materials available at http://www.innovations.ahrq.gov/resources/resources.aspx
  • June 2009 – Learning from Disappointment: When Your Innovation Falls Short
  • Future Webinars: What would be most useful to you?

Slide 4

Submitting Questions

  • By phone: Dial *1 to contact the operator, who will open your line.
  • By Web: Submit through the “Questions” button in the top right-hand corner of the screen.

Slide 5

Need Help?

  • No sound from computer speakers? Join us by phone:
    • 1-(877) 705-6008
  • Trouble with the slides or your connection?
    • Press F5 to refresh your screen.
    • Log out and log back in.
  • Other problems?
    • Click “help” button.
    • Use Q&A feature to ask for help.
    • If on the phone, dial *0 (star-zero).

Slide 6

Today’s Topic: Improving Innovations

  • How to identify the data you need
  • How to gather the data you need
  • How to interpret your data

Slide 7

Participants

  • AHRQ Webinar host
    Judi Consalvo, AHRQ Center for Outcomes and Evidence
  • Moderator
    Brian Mittman, VA/UCLA/RAND Center for the Study of Healthcare Provider Behavior
  • Innovators
    Michele Campbell, Christiana Care Health System
  • Experts
    Debra Rog, Westat
    Eugene Nelson, Dartmouth-Hitchcock Medical Center & The Dartmouth Institute at Dartmouth Medical School
  • Audience members

Slide 8

Structure of the Webinar

  • Overview & data collection issues
    Brian Mittman
  • Innovation description
    Michele Campbell
  • Expert commentary
    Debra Rog
    Eugene Nelson
  • Moderator’s comments
  • Open Q&A
  • Wrap-up

Slide 9

Key Issues in Data Collection

  • Purposes of collecting data
    • Understanding the issue
    • Developing the innovation
    • Evaluating progress
  • Available resources
  • Quantitative and qualitative data

Slide 10

Building a Patient Safety Mentor Program

Michele Campbell, RN, MSM, CPHQ, FABC
Corporate Director
Patient Safety and Accreditation
Christiana Care Health System

Slide 11

Impetus for Safety Mentor Program

  • IOM Report
  • Nonpunitive response to error
  • Improvements made as a result of reporting
  • Reluctance to report errors
  • Volume and severity of events and near misses

Slide 12

Goals: Safety Mentor Program

  • Empower frontline staff to serve as ambassadors
  • Encourage peer-to-peer feedback and communication
  • Enhance and promote error reporting including near misses
  • Facilitate learning

Slide 13

Design of the Safety Mentor Program

  • Formulate goals
  • Gain organizational buy-in
  • Define safety mentor role
  • Identify educational and training needs
  • Determine frequency and content of meetings
  • Develop and implement data collection plan/tools
  • Plan how to evaluate innovation

Slide 14

Considerations for Adopters

  • Select mentors carefully
  • Consider protected time for data collection
  • Act on front-line input
  • Will it Work Here? A Decisionmaker’s Guide to Adopting Innovations
    http://www.innovations.ahrq.gov/resources/resources.aspx

Slide 15

Validation Of Our Success

Total Events Reported

Slide 16

Validation Of Our Success

  • Improved reporting of medication-related near misses

Slide 17

Validation Of Our Success

  • Fewer events with major outcomes
  • Improvements in safety culture
    • A dramatic decline in fear of disciplinary action
    • Perception of improved patient safety and learning

Slide 18

Other Uses Of Quantitative and Qualitative Data

  • Observations
    Documentation
    Interview questions
  • Ease of completion and navigation
  • Agenda items
    Improvements and suggestions
  • Qualitative feedback on safety project design and strategies

Slide 19

Lessons Learned

  • Assess baseline data to evaluate success
  • Selection of a culture survey instrument is an important strategic decision
  • Resources affect the selection of measures
  • Insight and perceptions from safety mentors promote learning
  • Recognize that a culture of safety is local, multidimensional, and still evolving
  • Sharing data at the local level and organizational level can drive improvements

Slide 20

Limitations

  • A variety of culture survey instruments were used
  • Paper collection process
  • Staff skills and understanding affected data integrity
  • Real-time peer-to-peer feedback is dependent on comfort level of staff
  • Turnover of front-line staff in the safety mentor role slowed the pace of progress

Slide 21

Next Steps in Our Journey

  • Enhance onboarding and formalize recognition
  • Implement Fair and Just Culture concepts
  • Utilize the results from our 2009 (AHRQ) Hospital Survey on Patient Safety Culture to assess progress
  • Define frequency of measures for future validation of our success

Slide 22

Evaluation: A Broad View

Debra Rog, Ph.D.
Associate Director
Westat

Slide 23

Take A Broad View of Evaluation

  • Evaluation once thought synonymous with outcome studies
  • Now viewed as broad set of approaches and techniques for a range of purposes and questions
  • Internal evaluation begins with a ‘self-critical’ posture

Slide 24

Questions Evaluation Can Address

  • Is a new program – innovation – needed?
  • How should it be designed?
  • Is the program being implemented as expected?
  • What are the program outcomes? Is it having the desired effects?
  • What are the costs of the program in relation to its effectiveness and benefits?

Slide 25

Is a Program/Innovation Needed?

  • Specific questions that can be addressed:
    • What is the nature of the problem?
    • What is its scope?
    • What are the needs of the people being served?
  • Evaluation and information approaches:
    • Needs assessments
    • Monitoring data
    • Survey data

Slide 26

How Should The Program Be Designed?

  • Specific questions that can be addressed:
    • What are the goals we are trying to reach? The outcomes we would like it to achieve?
    • What approaches/activities can link to these goals and outcomes? What are feasible?
  • Evaluation and information approaches:
    • Stakeholder assessment
    • Logic models
    • Feasibility study
    • Evaluability assessment

Slide 27

Is the Program Being Implemented as Expected?

  • Specific questions that can be addressed:
    • Is the program being implemented as designed?
    • Is it being implemented with “fidelity”?
    • What are the costs in implementing the program?
  • Evaluation and information approaches:
    • Monitoring data (on process)
    • Fidelity assessment
    • Cost study

Slide 28

Collecting Data in Busy Clinical Settings & Cascading Metrics

Eugene C. Nelson, DSc, MPH
Director, Quality Administration
Dartmouth-Hitchcock Medical Center
Professor, Clinical Leadership & Improvement
The Dartmouth Institute at Dartmouth Medical School

Slide 29

“In God we trust. All others bring data.”
W. E. Deming

Slide 30

Purpose of Using Data & Measuring

The purpose of measuring is to answer critical questions and to guide intelligent action.

    Slide 31

    Principles for Collecting Data in Busy Clinical Settings

    • Seek usefulness, not perfection, in the measurement.
    • Use a balanced set of process, outcome, and cost measures.
    • Keep measurement simple; think big, but start small.
    • Use qualitative and quantitative data.
    • Write down the operational definitions of measures.
    • Measure small, representative samples.
    • Build measurement into daily work.
    • Develop a measurement team.

    Slide 32

    A Logical Approach to Planning Data Collection and Measurement

    • Write down the CRITICAL QUESTIONS that must be answered.
    • Design DUMMY DATA DISPLAYS that will be used to answer your critical questions.
    • Make a LIST OF VARIABLES that must be collected (to fill in the dummy data displays) and write down conceptual and operational definitions for each one.
    • Write a SIMPLE PROTOCOL and follow it.

    Slide 33

    How many days can we go without having an infant infected? (Nosocomial sepsis, ICN, DHMC)

    Slide 34

    How many days can we go between code alerts? (CCHMC, A6 South)

    Slide 35

    Cascading Metrics: Connecting the Front Office with the Front Line

    Slide 36

    Levels of Health System: IOM Chasm Report Chain of Effect

    • Patient
    • Physician
    • Clinical Unit / Microsystem
    • Clinical Service Line / Mesosystem
    • Health System / Macrosystem

    Slide 37

    Building a Cascading System of Measures

    • Patient & Physician
    • Micro: Clinical Units
    • Meso: Service Lines
    • Macro

    Slide 38

    A Cascading Set of Strategic Measures

    • Adverse Event Rate
    • Nosocomial Infections
    • Catheter Related

    Slide 39

    Clinical System Improvement (CSI) Team

    Slide 40

    CSI Inpatient Unit Level Quarterly Quality Dashboard

    Slide 41

    CSI Inpatient Unit Level Quarterly Quality Dashboard

    Slide 42

    Cincinnati Children's Hospital Medical Center - High Reliability Unit Innovation Team Monthly Report

    Slide 43

    Key Points

    • Use systematic approach to build data collection into busy clinical settings to answer critical questions.
    • Consider using cascading metrics to promote alignment of work at different levels of the organization & to link strategy with execution, improvement & innovation.

    Slide 44

    References & Resources

    • Nelson EC, Godfrey MM, Batalden PB, Berry SA, Bothe Jr AE, McKinley KE, Melin CN, Muething SE, Moore LG, Wasson JH, Nolan TW: Clinical Microsystems Series: Clinical Microsystems Part 1. The Building Blocks of Health Systems. Joint Commission Journal on Quality and Patient Safety, 34(7); 367-378, July 2008.
    • Martin LA, Nelson EC, Lloyd RC, Nolan TW. Whole System Measures. IHI Innovations Series white paper. Cambridge, Massachusetts: Institute for Healthcare Improvement; 2007.
    • Nelson EC, Batalden PB, Lazar J: Practice-Based Learning and Improvement: A Clinical Improvement Action Guide, Second Edition, Joint Commission Resources, 2007.
    • Nelson EC, Batalden PB, Godfrey M: Quality by Design: A Clinical Microsystems Approach. Jossey-Bass, 2007.
    • Nelson EC, Splaine ME, Godfrey MM, Kahn V, Hess AR, Batalden PB, Plume SK: Using Data to Improve Medical Practice by Measuring Processes and Outcomes of Care. The Joint Commission Journal on Quality Improvement, 26(12):667-685, December 2000.
    • Nelson EC, Splaine ME, Batalden PB, Plume SK: Building Measurement and Data Collection into Medical Practice. Annals of Internal Medicine, 128 (6):460-466, March 15, 1998.
    • www.clinicalmicrosystem.org

    Slide 45

    Take-Away Points

    • Goal should not be “perfect” data collection.
    • Collect the best data you can within constraints of available resources.
    • Use your data for multiple purposes.

    Slide 46

    Submitting Questions

    • By phone: Dial *1 to contact the operator, who will open your line.
    • By Web: Submit through the “Questions” button in the top right-hand corner of the screen.

    Slide 47

    Questions or Comments

    • Contact us: info@innovations.ahrq.gov
    • Subscribe to receive e-mail updates: http://innovations.ahrq.gov/contact_us.aspx
    • If you are interested in a smaller, more informal follow-up discussion on this topic, please email us at info@innovations.ahrq.gov


    Audio file (MP3 10 MB)
    Transcript

    Operator

    Greetings and welcome to Improving Innovations: How to Make Data Work for You. At this time all participants are in a listen-only mode. A brief question and answer session will follow the formal presentation. If anyone should require operator assistance during the conference, please press star 0 on your telephone keypad. As a reminder this conference is being recorded. It is now my pleasure to introduce your host Ms. Judi Consalvo. You may begin.

    Judi Consalvo – AHRQ Center for Outcomes and Evidence – Program Analyst

    Good afternoon, and good morning to maybe some of you. On behalf of the Agency for Healthcare Research and Quality, I'd like to welcome you to our webinar on Improving Innovations: How to Make Data Work for You. I'm Judi Consalvo, a Program Analyst for the AHRQ Center for Outcomes and Evidence. We're very excited today about our topic and glad to see that you share our enthusiasm. We had a record number of registrants for today's event and we'll be polling you in a few minutes to get a better feel for who has joined us today. Since some of you may be new to the Innovations Exchange, I'll just take a minute to give you an overview before I introduce today's moderator.

    The Innovations Exchange is a comprehensive program intended to accelerate the development and adoption of innovations in healthcare delivery. This program supports the agency's mission to improve the quality, safety, efficiency and effectiveness of health care for all Americans, with a particular emphasis on reducing disparities in health care and health among racial, ethnic, and socioeconomic groups. The Innovations Exchange has several components:

    Searchable descriptions of a wide range of innovations, including those that are successful and attempted, which provide information on the innovative activity, its impact, how the innovator developed and implemented it, and other information that you can use when deciding whether to adopt the innovation. In some cases, there is a story behind the story or an expert commentary that highlights an innovation's nuances, importance, and applicability.

    Learning networks. Through the learning networks, you can connect with innovators and other adopter organizations to learn new approaches to delivering care, develop effective strategies and share information.

    And we have educational resources. This site offers you a variety of resources designed to help you learn about the process of innovation, adoption, and the steps you can take to make your organization more receptive to innovative approaches to care. These resources include written materials and opportunities to participate in webcasts and discussion groups.

    We are constantly expanding the Innovations Exchange with new topics and new ideas. We put out a publication every two weeks, which focuses on a particular topic area and includes new innovations, attempts and tools.

    Presently we have 308 profiled attempts and 1,398 tools. And so you can see we are striving to build a content-rich resource for you.

    Now this webinar is the second in a series of Webinars we are planning to support you in developing and adopting innovations in healthcare delivery.

    You can check out archived materials from our last webinar on Obtaining Buy-In from Stakeholders on our Website www.innovations.ahrq.gov. Our next Webinar planned for June is on learning from disappointment; these are innovations that couldn't be implemented, couldn't be sustained or had unexpected negative consequences. Often you can learn a tremendous amount from what went wrong and what people would do differently. We hope you will join us for that webinar, too.

    We would welcome your thoughts on other topics you want us to address. At the end of today's event, your computer will automatically take you to a brief evaluation form. Please be sure to complete the form as your comments will really help us to plan future events that meet your needs.

    You can also e-mail your comments and ideas to us at info@innovations.ahrq.gov.

    Okay. So before I turn this over to our moderator, I'd like to give our speakers a sense of who we have out in the audience today.

    Please answer the polling question you see now on your screen. Would you describe yourself as an innovator, a potential adopter, a researcher, a policymaker, or other?

    So while we gather your responses, I want to clarify how we're handling questions. We would very much like to hear from you and we'll open the phones after the presentations are over so that you can ask questions of any of our speakers. You may want to jot down your questions for the speakers since we won't open the phones until they're all done.

    You're also welcome to send us your questions at any time by using the Q&A feature on the website for this presentation. Just type in your question and click on Send. While we don't anticipate any technical problems, I'd like to give you a few tips in case you experience any.

    First, if you experience any difficulty with the sound coming through your computer speakers, you can always join us by telephone. The telephone number is 1-877- 445-9761. The passcode is 319097. These numbers are always available on the right-hand side of your screen.

    If you have any trouble with the slides or your connection to the webinar, try pressing F-5 to refresh your screen. You can also click on the help button or send a note using the Q&A feature and someone will get back to you.

    If you are listening by phone, you can also press star 1 on your phone for assistance. As we mentioned earlier, we are recording this event so that anyone who couldn't make it today or needs to leave the webinar early can listen to the recording or read the transcript. You'll be able to find links to a downloadable recording, the slides, and a transcript on the Health Care Innovations Exchange Website in a few weeks. In fact, if you'd like to download the slides for today's presentations, you can find them on our Website now at www.innovations.ahrq.gov.

    Now let's look at your response to my earlier question. We've designed this webinar to be useful to a broad spectrum of participants. But it's helpful to know who we're really talking to.

    Okay. So as far as innovators, there are 74 of you on the line. A potential adopter, 67. A researcher, 67. A policymaker came in at 28. And there are 105 of you who have described yourselves as other.

    So, with our case study and commentators today, we are going to focus on three important questions. How can you identify the data that you need? How do you gather that data? How do you interpret your data?

    So during today's webinar we'll be hearing from experts in using this data to evaluate and improve innovations in healthcare delivery. No matter what you are trying to achieve, it is critical to learn how to harness data to assess what's working and perhaps more importantly, what's not working so you can determine what to do next.

    With this brief introduction I'd like to introduce our moderator for today's discussion, Brian Mittman. He's the Co-Editor-in-Chief of the journal Implementation Science and Director of the VA Center for Implementation Practice and Research Support in Los Angeles. He's also an active member of the Editorial Board for the Innovations Exchange. We are very pleased to have him with us today to guide us through this important topic.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Thank you, Judi. For the attendees I'd like to give a brief overview of the agenda before we launch into the content of the webinar. The participants are listed on the screen--Judi from AHRQ Center for Outcomes and Evidence provided a brief introduction to me. I serve in Los Angeles as Director of the new VA Center for Implementation Practice and Research Support. Our featured presenter is the innovator Michele Campbell.

    I'll provide a brief introduction and I'll introduce the two expert commentators, Debra Rog and Gene Nelson, and we'll spend about 30 minutes in open discussion, in which we hope to hear from you and address your questions.

    Before providing my brief overview, what I'd like to do is provide a summary of the structure of the webinar. I will begin with an overview of the key issues in data collection. We will then have about a 15 minute presentation by Michele Campbell describing the innovation or the case study that will serve as the focus of our discussion. Each of the expert commentators will speak for about 10 minutes and I will wrap up with just a few final comments. We will then open up for the 30 minutes of questions and answers and then provide final wrap-up.

    Let me introduce some of the key issues in data collection by asking you to think about three distinct purposes for collecting data as part of an innovation or innovative activity. First of all, there's a need to collect data to understand the issue or the problem that you are attempting to address.

    This is also important in demonstrating the need for improvement. Often we are not motivated to address a problem and take the need for innovation seriously unless we see good data that indicates that there are quality gaps or other problems in performance that require some attention. So, one of the key purposes for collecting data again is to demonstrate the problem and the need for innovation.

    Secondly, to develop the innovation. And there are two aspects of this purpose for data collection which is first, the need to diagnose the causes of the problem—to understand the root causes in what is leading to the performance problems. Data can be very helpful in conducting that diagnosis.

    There's also a need to use data to target the innovation to the context or the setting and again the specific causes of the problem. And finally, the more traditional use of data is for evaluating and monitoring progress and refining the innovation. Data allow us to see how we are doing, to understand if the innovation seems to be addressing the underlying causes of the problem, and allows us to fine tune or refine the innovation in a way that ideally would make it more successful and more effective.

    And the speaker as well as the commentators will comment and address each of these three broad categories or purposes for collecting data.

    For those of you who are interested in exploring these issues further after the webinar, I would encourage you to seek out some of the program evaluation textbooks. The field of program evaluation for the most part offers the best most comprehensive most useful guidance in collecting data and using data. So any program evaluation textbook is likely to be useful. There are, of course, a number of program evaluation courses, experts and evaluation websites that you can turn to as well.

    I would also encourage you to examine the innovations adoption guide posted on the Innovations Exchange Website, which addresses issues of data collection and evaluation.

    Now before introducing the speaker we'd like you to let us know if you would be interested in a post-conference follow-up. And at the end of the seminar, I will post this question again. The Innovations Exchange would like to offer an opportunity for more detailed discussion of the issues that we addressed today. We are talking about a one-hour conference call that would be hosted by the Exchange at a time to be determined.

    Those who participate in the call would have an opportunity to discuss and ask questions about evaluating their own innovations and share ideas and experiences with each other. And we'd like to gauge interest in this kind of networking opportunity.

    If you would be interested in participating, we'd ask you to please answer the question on your screen. If you are not interested, no need to respond. And again we will post this question at the end of the webinar today to allow you a second opportunity to indicate your interest.

    And finally, if you have specific questions or topics you'd like us to cover please feel free to jot them down in the box so we can be better prepared to meet your needs during the proposed follow-up session.

    The follow-up call would be facilitated by an expert in these issues rather than today's presenters. So please be sure to ask any questions you would have for the presenters while you're on the call today.

    So with that background and introduction, let me then turn to our first speaker. Michele Campbell. Michele is Corporate Director for Patient Safety and Accreditation at Christiana Care Health System in Delaware. Michele has over 15 years of experience in nursing leadership and quality and patient safety and presents nationally on applications of high reliability concepts. She's one of the lead developers of the patient safety mentor program that serves as a focus of today's webinar. So with that introduction, Michele, I'd like to turn the platform over to you.

    Michele Campbell-Christiana Care Health System – Corporate Director for Patient Safety and Accreditation

    Good afternoon. The impetus for building the safety mentor program of course is the landmark Institute of Medicine report To Err Is Human, which certainly prompted not only Christiana Care but also hospitals across the nation to take a look at and evaluate the existing patient safety efforts in our organizations.

    In addition, in 2001, we administered an internally developed survey to assess our patient safety culture. The opportunities we found at that point were related to non-punitive response to error and also our staff telling us they weren't sure we were making improvements as a result of their reporting.

    To further elicit the insights and perceptions of the staff we held focus groups. We learned that they were really reluctant to report errors.

    And there were a variety of reasons for that. Primarily, they felt that reporting error was difficult to navigate in our current web reporting system. But most importantly, they expressed a fear of disciplinary action as a result of reporting.

    We looked at our Safety First Learning Report data, which is our event reporting system, and made a conscious decision that we wanted to promote more reporting through that system.

    So, the goals of our safety mentor program became first and foremost to empower the front line staff to serve as ambassadors. We wanted to really engage them at the front line not just for patient safety but infection control.

    Secondly, we wanted to encourage peer-to-peer feedback and communication. We wanted the staff to be able to instruct their colleagues on safety issues and provide that real time feedback. Third, we wanted to enhance and promote error reporting including near misses with the hope that we would mitigate harm to our patients.

    Finally, we also wanted to look at facilitating learning. Not only from our errors but from sharing our lessons learned and working with storytelling.

    To design the program, as with any quality improvement program, we first performed a literature search. We were quite impressed with the Clarion Safe Passage Program, which really aimed at engagement of the front line staff. And with that, we formulated the goals.

    In our organization, in terms of gaining organizational buy-in, we identified who our key stakeholders were. The board, our leadership, our patient safety committee members, and most importantly our department heads. We wanted to have their buy in since we were asking them to appoint a patient safety mentor.

    We did have a focus group that got together and defined the role description for the patient safety mentor. It included responsibilities, requirements, meetings and the composition of what the safety mentor program would look like.

    Currently we have 75 front line staff including nursing, respiratory therapists, radiologists, laboratories, phlebotomists, and people from environmental services including laundry infection control. All the disciplines at the front line have safety mentors.

    We then looked at what would be the educational and training needs of the safety mentors and created a fairly robust resource guide as well as tools and website for them. We did a first all-day educational session as well.

    We then looked at some infrastructure issues determining the frequency and the content of the meetings. We currently hold meetings every other month and we have a fairly good attendance there. The topics of the meetings will include presentations from senior leaders as well as our patient safety officer, or teams that have presented lessons learned from root causes.

    But more importantly, we have a large amount of time devoted to dialogue—where we share the concerns, ideas, challenges and the strategies from the front line staff. We also talked about data sharing at that meeting.

    In addition, we developed and implemented data collection tools, consistent with our goal for peer-to-peer feedback. We decided to implement a safe practice behavior monitor, which I'll talk about later. But it's really aimed at assessing our performance and safety practices of the national patient safety goals.

    And then we thought about how do we plan to evaluate this innovation? What would be relevant and important to our organization? What would be accessible and feasible?

    For those of you who are adopters, I have some recommendations based on our selection of mentors. I would advise selecting the mentors carefully. Think about those informal leaders at the front line level who have the ability to communicate in a non-threatening way and a strong desire to learn. We also considered and implemented protected time for data collection. We actually have budgeted into the project that the safety mentors will attend the meeting and conduct any data collection on protected time so they're not worried about their staff assignment.

    And most importantly, act on the front line input. As Lucian Leape and Berwick talked about in their report five years after To Err Is Human, we've learned that the most important stakeholder group in our patient safety journey is the alert, mobilized front line staff.

    Another resource that you might find invaluable was released in September of 2008, titled Will It Work Here? A Decisionmaker's Guide to Adopting Innovations. And the link is included here.

    This guide has four modules and will help you decide whether this innovation fits within your organization. You are asked: Does the innovation align with or further your organizational goals? Should we do this? Can we do this? And how can we do this?

    In terms of validation of our success, we looked at our total events reported. The concept of the safety mentor program was first conceived in a focus group in 2003, but fully implemented in May of 2004.

    And we have had a 17% increase in reporting of our total events. In addition, we have also had improved reporting of medication error-related near misses, at 107%. May I have the next slide, please?

    As I mentioned we have 107% increase in reporting of medication near misses. Those are events that were corrected before they reached the patient. In addition, we've had fewer serious adverse events--about an 8% decrease in events being reported with major outcomes. In terms of our safety culture, there is a dramatic decline in the fear of disciplinary action and a perception of improved patient safety and learning.
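The percentage figures Michele cites are simple before-and-after comparisons of reporting counts. A minimal sketch of that arithmetic in Python, with made-up baseline and follow-up counts rather than Christiana Care's actual figures:

```python
# Percent change from a baseline count to a follow-up count, as used to
# summarize reporting trends. The counts below are hypothetical, not
# Christiana Care's actual figures.

def percent_change(baseline: float, follow_up: float) -> float:
    """Return the percent change from baseline to follow_up."""
    return (follow_up - baseline) / baseline * 100.0

# Hypothetical annual counts before and after the safety mentor program.
total_events = percent_change(baseline=2000, follow_up=2340)        # about +17%
near_misses = percent_change(baseline=150, follow_up=310)           # about +107%
major_outcome_events = percent_change(baseline=50, follow_up=46)    # about -8%

print(f"Total events reported: {total_events:+.0f}%")
print(f"Medication near misses reported: {near_misses:+.1f}%")
print(f"Events with major outcomes: {major_outcome_events:+.0f}%")
```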

    In terms of other uses of quantitative and qualitative data, I'm not going to go into all four of these aspects, but, I will talk a little bit about safety practice behavior monitoring. We also did use data to drive our improvements related to our safety first learning report. We also looked at effectiveness of our safety mentor meetings and also the qualitative feedback from the actual safety mentors in terms of any safety project design or strategy that we implement.

    In terms of the safe practice behavior monitoring, this was consistent with our goal for the front line staff--to be able to provide peer-to-peer feedback using our safe practice behavior monitoring tool. It consists of observations, for example of hand washing; documentation, maybe related to universal protocol; and an interview question about, perhaps, how they would actually respond to or receive a critical test result over the phone.

    And this data was not only collected by the safety mentors, but then reported at the patient level, at the clinical department level, at a team level, and at a clinical service line level, rolled up to the reporting vice president, as well as to the health system's board performance improvement committee.

    Next slide. Some of the lessons we learned. Certainly measurement is always an essential component of a project. But it's not always planned as part of the work project. It was important to us to assess baseline data, to evaluate our success, and regularly assess our performance. A very important strategic decision was to select the culture survey instrument in terms of measuring our progress for patient safety.

    Many factors would impact on that. Does it meet with your evaluation of your goals? Is it meaningful for your organization? Does it align with your priorities? Is it relevant to your front line staff?

    Certainly, resources had an impact on the selection of measures in terms of accessibility and feasibility. But most importantly, we learned that the safety mentors' insights and perceptions promote a rich learning environment.

    We also recognize that our safety culture journey is local. It's multi-dimensional. It's still evolving. And as I mentioned previously, sharing data at the micro level and the macro system level can really drive improvements and promote linkages to your strategies.

    Some of the limitations in our innovation were related to the variety of patient safety culture instruments used. We did have some paper surveys also used early on in 2001.

    Staff skills and understanding affected the data integrity. We had 75 different people collecting data at the unit level using the data definitions.

    It was really important to focus on data integrity; in fact, the first six months of our meetings were focused on it. What we also learned is that real-time peer-to-peer feedback is really dependent on the comfort level of the staff. And you may even have to use coaching.

    In addition, our pace of progress was affected primarily by turnover of our front line staff who were the safety mentors. Our organization had very low turnover rates, but what we found is we chose those informal leaders that absolutely were getting promoted to management positions.

    For our next steps in our journey, we want to enhance the "On Boarding" and formalize more recognition for our safety mentors. We've assessed our progress against 2009 results from the hospital survey on patient safety culture. And, we want to focus our efforts on concepts of a fair and just culture and look at how we determine and define frequency of our measures for future validation of our success.

    And this concludes my portion of the presentation. Thank you.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Thank you Michele. I'd like to point out to the attendees of today's webinar that additional details on the patient safety mentor program are available on the Health Care Innovations Exchange website, including links to publications. We will, of course, take questions starting in about half an hour and we'll have an opportunity to ask Michele to provide more details as desired.

    But before turning to the Q&A, I'd like to introduce our first commentator and ask her to give her remarks. Debra Rog is an evaluation expert, an Associate Director with Westat, and also Vice President of the Rockville Institute. Debra has over 25 years of experience in program evaluation and applied research and she currently serves as president of the American Evaluation Association. Debra.

    Debra Rog – Associate Director – Westat

    Hi. Thank you. Let me begin by applauding Michele and her team for integrating evaluation into their organization. I think it gives a really rich example of using evaluation to guide the development of a program solution to a problem and to continue to monitor its implementation and outcome.

    I also want to applaud AHRQ for hosting this webinar and having us really show how evaluation and data can be used to maximize the delivery of programs.

    What I believe Michele's example highlights is the value of taking a broader view of evaluation, especially when you incorporate it into internal programming and internal decision making.

    For many folks, evaluation is synonymous with outcome studies, and what I believe Michele's example shows, and what I will spend my remarks showing, is how evaluation really is a broad set of approaches and techniques for a range of purposes and questions. You can integrate it throughout the life of a program and before a program is there.

    In many ways, for program providers and others, you can think of internal evaluation from a self-critical posture. How can we do things better, what is the nature of this problem, how can we learn more systematically about a problem, guide a solution to that, and keep looking at the solution to see if it's targeting that problem.

    I think today's case study really illustrates that self-critical posture.

    Next slide. When I talk about a broad view of evaluation, I think of a number of kinds of questions that evaluation can address. Again, many of these came up in the case study. Is a new program, an innovation, needed? Is there a problem we need to fix? How can we learn more about that problem? Once we learn about the problem, the phenomenon, if you will, that's out there, then how can we design the solution, using data and analysis to help us understand it?

    If we have a program underway or we develop that program, is it being implemented as we expected?

    What's the process of implementation? How can we understand the program? Once a program is up and going, what are its outcomes? This is what we more traditionally think of as evaluation. Is the program working? Is it effective? Is it having the desired effect? Is it not only having short-term outcomes, but are those leading to longer-term outcomes? Are they leading to some impact that we desire?

    And then, finally, what I think is often important to policymakers and I think we do this a little less than in some areas, which is what are the costs of the program? And in particular, what are the costs in relation to the effectiveness and benefits of a program?
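One rough way to make that last question concrete is to relate annual program cost to a measured effect. The figures below are entirely hypothetical and only illustrate the arithmetic:

```python
# Relating program cost to a measured effect: a toy cost-effectiveness ratio.
# Both numbers are hypothetical.
annual_program_cost = 60_000   # e.g., protected time, training, materials
events_prevented = 12          # estimated events with major outcomes avoided per year

cost_per_event_prevented = annual_program_cost / events_prevented
print(f"Cost per event prevented: ${cost_per_event_prevented:,.0f}")
```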

    What I want to do in this short time that I have is delve into the first three questions that I laid out there, because these are areas that programs don’t think as much about and also what you can integrate a little bit more readily into your service delivery. Within an organization, I think as Michele noted, you can use information in evaluation to spark a program and understand a problem better. What's the nature of the problem?

    In Michele’s case study, we're looking at safety issues.

    In the work that I do, I do a lot of research on homelessness. More health needs of homeless people have been identified; homeless people go into hospitals, get discharged back out into the shelters, and need to be re-hospitalized because they have no place to recuperate. And a number of communities, almost at the same time as they were trying to understand the nature of this problem, were collecting more data on what the problem is, what its scope is, and what the needs of the people are, in order to decide what we need to do.

    We can do needs assessments. They were monitoring data from hospitals, looking at the extent to which homeless people were coming back into the hospitals. There were surveys of people in homeless shelters and other areas. Again, to understand what kind of intervention was needed.

    In this particular area, the providers decided something called respite care, where you provide a place to recuperate, may be helpful as a solution to this type of problem.

    Next slide. Evaluation can also help in terms of actually developing the program. How should a program be designed? What specific questions can providers collect data on and what are the goals that we're trying to reach? What are the outcomes we'd like to achieve? What kinds of approaches or activities could best lead to these goals and outcomes? What's feasible given the resources we have, given the context we're in, and given the kinds of staffing we have in our program?

    Again in developing the safety mentor program, Michele highlighted a number of things they did. First one that I don't even have on my slide but I would put on there is literature review, going to the literature. Seeing what models are out there. Assessing the stakeholders, what are the views of the folks that are involved in your program?

    Using logic models--that allows a visual display of the kinds of needs that you want to focus on, the kinds of activities that you're doing, and how they'd relate to outcomes, and lets you see whether they logically link together. Doing feasibility studies--when you have a model in mind, looking more intently at whether and how it would be feasible to implement within your organization.

    And finally a tool with a long name--evaluability assessment--a tool which is often used when programs are up and running to see if they're robust enough, to see if they have internal integrity to actually be evaluated.

    An evaluability assessment could be used to develop initial programs to make them robust. Finally, the last area that I would highlight for programs that are underway is: are they being implemented as expected? Are the activities as expected? Is it implemented with fidelity, which means integrity?

    If there's a particular model you're using, is that model in place? Have there been changes that had to be made to that model? That's particularly important if you're doing it in multiple sites, because you may have several different organizations in different places, and you're looking to see whether that particular program is being implemented the same way or being implemented according to a particular model.

    What are the costs of implementing the program? How are resources being used?

    All of this is very important information for making sure you can make mid-course corrections and that you're understanding what exactly you're doing and whether you're on track to achieve outcomes. You can assess implementation by monitoring data that you have internal to a program, by having specific assessments of fidelity, looking at each of these program features and the extent to which they are there, and then by looking at costs and analyzing those costs.

    I hope I've given you a few examples of how evaluation can play a broader role. I think it's best if we think of program services and evaluation not as separate, but as integrated: an integrated enterprise where evaluation is yet another tool to help foster good programs and health service delivery. Thank you.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Thank you, Debra. With Debra's very nice overview of the value and the purposes and the role of evaluation, we'd like to turn now to a related set of issues, and that is the details of how to collect data, how to analyze it, and so on. And to comment on these issues, I'd like to introduce Dr. Eugene Nelson. Gene is an internationally respected expert in quality outcomes and measurement. He teaches in the graduate program at the Dartmouth Center for the Evaluative Clinical Sciences and leads performance improvement program work in the Dartmouth-Hitchcock delivery system. He's a recipient of the Joint Commission's Ernest A. Codman lifetime achievement award for the use of outcome measurement. Gene.

    Eugene Nelson – Dartmouth Center for the Evaluative Clinical Sciences – Graduate Program Faculty

    Thank you, Brian. I think Michele's case study and the title of today's program, Improving Innovations: How to Make Data Work for You, raise many important issues. And I'm going to focus on just two. The first issue is collecting data in busy clinical settings: how do we think about that? How do we go about doing that? And the second has to do with what I'll term cascading metrics: how we can connect the work and the observations being done on the front lines with the front office.

    Dr. Deming used to say: In God we trust, all others bring data.

    Bringing the data in, or using evaluation the way that Debra just suggested, is critically important. It provides you the opportunity to learn, to create a learning system, and to know how things are going and why they're going that way, and whether the efforts to change are making a difference.

    So, on this issue of in God we trust, all others bring data, we might ask: Well, what's the purpose of using data and measuring? And I think of it in this way: the purpose of measuring is to answer critical questions and to guide intelligent action. So we wish to use our data collection efforts to answer our critical questions and to guide intelligent action.

    In the next slide I'd like to just briefly touch on the principles for collecting data in busy clinical settings. The first principle is to seek usefulness, not perfection, in measurement. It's literally true that all measures have error in them, and the point is to have measures that are sufficiently accurate for the purpose at hand. The second principle is to consider using a balanced set of measures. You might wish to have a small set that looks at process, outcomes, and costs. If we push down here in a system, it might pop up there, so we might want to have a primary measure and a counter measure.

    So think about a balance. Third is to keep measurement simple: the KISS principle. We like to think big but actually to start small. So we're thinking about all of what may be important, but we're starting in a very targeted and small way.

    The fourth principle is to use both qualitative data as well as quantitative data. The quantitative data is especially useful to tell us did it make a difference with respect to innovations. The qualitative information is especially useful in telling us under what conditions, why and how did this difference occur or not occur.

    So both quantitative and qualitative data are extremely important in creating that learning system.

    The fifth is quite specific: write down operational definitions of measures so that you have the opportunity to collect information in a consistent way across locations and over time, so that the measures can be trustworthy.

    The sixth principle has to do with sampling. Sometimes the system is throwing off data continuously and we don't need to take a sample; the data can be used on 100 percent of instances. That's often not the case, however.

    And so if you can't get a 100 percent representative sample, then consider using small representative samples to get a sense of what is happening, especially what is happening over time.
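A minimal sketch of two of these principles, a written operational definition and a small representative sample, using a hypothetical set of event reports (the field names and categories are invented for illustration):

```python
# Sketch of two principles from the list above: a written operational definition
# and a small representative sample. The record fields and categories are
# invented for illustration.
import random

def is_near_miss(event: dict) -> bool:
    """Operational definition (example): a medication event report that was
    intercepted before reaching the patient."""
    return event["category"] == "medication" and not event["reached_patient"]

# Hypothetical event reports thrown off by daily work.
events = [
    {"id": i,
     "category": random.choice(["medication", "fall", "lab"]),
     "reached_patient": random.random() < 0.4}
    for i in range(500)
]

# Review a small representative sample (say, 30 reports) rather than auditing all 500.
sample = random.sample(events, k=30)
near_miss_share = sum(is_near_miss(e) for e in sample) / len(sample)
print(f"Estimated near-miss share in the sample: {near_miss_share:.0%}")
```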

    Seventh is to build the measurement into daily work. The work flows are throwing off information and data and what you'd like to be able to do is to understand the work flows and the potential data flows to make the data flow out of the work as efficiently as possible.

    And the last principle, number eight, is to develop a measurement team. It's often helpful to have a group of people and perhaps a captain to look at this issue of using measurement in your local context.

    The next set of pointers has to do with a logical approach to planning data collection and measurement. Oftentimes we're in the position of needing to collect our own data and the point is to do this very efficiently, and not waste time and effort. And this four-step process has been shown to be quite useful. The first is to write down the critical question that must be answered. Put it in black and white. What's the critical question that we must answer here?

    And then to do a bit of time travel and say: Okay. Now we've started our innovation. We've run it for a period of time and now we're looking at our data display that will tell us if and when and how strongly the innovation is working. So you'd actually make a make believe or dummy data display showing perhaps what you hope will happen. It's your statement of local proof.

    And then, having done that, you step back, you look at it, and you say: now what's the minimum list of variables that I need to collect to complete this data display with sufficient accuracy and timeliness for us to learn from? That list of variables allows you to make an operational definition, and it's a focused effort not on collecting many things but only on the things that are needed to create the data display that answers the critical question that you started with. And then finally, to localize it, you write a simple protocol, if you will a data collection recipe, of who collects what, when, where, and how, and how they post it, such that the data display starts to, if you will, light up or be put up in your location.
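As a small illustration of this four-step planning approach, the sketch below uses a hypothetical unit and placeholder numbers; the critical question, variable list, and dummy display are assumptions for illustration, not anything from the Christiana Care program:

```python
# Sketch of the four-step approach: critical question, dummy data display,
# minimum variable list, simple protocol. Unit, variables, and numbers are
# placeholders, not real data.

# Step 1: write down the critical question.
CRITICAL_QUESTION = "Can Unit A increase near-miss reporting over the next three months?"

# Step 2: a dummy data display: the table we hope to fill in, shown here with
# made-up values standing in for data not yet collected.
dummy_rows = [("Jan", 4, 40), ("Feb", 6, 44), ("Mar", 9, 47)]

# Step 3: the minimum list of variables, each with an operational definition.
VARIABLES = {
    "month": "calendar month in which the report was filed",
    "near_miss_count": "medication reports coded as intercepted before reaching the patient",
    "reports_total": "all event reports filed by Unit A staff that month",
}

print(CRITICAL_QUESTION)
for name, definition in VARIABLES.items():
    print(f"  {name}: {definition}")
print(f"{'Month':<6}{'Near misses':>12}{'All reports':>13}")
for month, near_misses, total in dummy_rows:
    print(f"{month:<6}{near_misses:>12}{total:>13}")

# Step 4: the simple protocol (who collects what, when, where, and how) would be
# a short written recipe kept next to this display.
```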

    The next slide then shows one example of data collection put to use. It comes from the Dartmouth Intensive Care Nursery. Here the intensive care nursery, caring for low-birth-weight infants, wanted to dramatically improve their nosocomial sepsis rate. In a NICU with a census of approximately 30 babies, they would have a new infection every few days, and what you're seeing is the time trend of the number of days between infections being just a few days, four or five days on average.

    And then we see an arrow coming down, a task force was formed, interdisciplinary intensive care nursery, part of a larger learning community, and they started to analyze the local causes of sepsis and to make changes. As you can see, the number of days between infections started to increase and then something extraordinary happened.

    They went over 200 days without having a baby infected. Over half a year without having a baby infected where the convention had been an infection every few days. An extraordinary change.

    The point is to have this data display used locally to answer the critical question: Can we reduce or eliminate nosocomial sepsis in our intensive care nursery?
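The "days between infections" display is easy to derive from a simple list of event dates. A brief sketch with made-up dates:

```python
# "Days between infections" computed from a list of event dates, the kind of
# display described for the intensive care nursery. The dates are made up.
from datetime import date

infection_dates = [date(2004, 1, 3), date(2004, 1, 8), date(2004, 1, 12),
                   date(2004, 2, 2), date(2004, 9, 1)]  # hypothetical

gaps = [(later - earlier).days
        for earlier, later in zip(infection_dates, infection_dates[1:])]
print("Days between infections:", gaps)                 # [5, 4, 21, 212]
print("Longest run without an infection:", max(gaps), "days")
```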

    The next example, which I won't spend much time on, is a similar issue. It's from Cincinnati Children's, the A6 South unit, and their question is: how many days can we go between code alerts, where a child in our unit is starting to crash?

    And what you see here, in the yellow boxes, are different interventions that were taken and looking at the days between code alerts. And, a quite exemplary improvement.

    So next let me switch now to the second topic and it has to do with cascading metrics in connecting the front office with the front line.

    Next slide. In a health system, there are many different levels of action. So in the IOM report (I was fortunate to serve on the committee), we had a chain of effect that went like this: patient, and then physician, and then clinical unit or clinical team, and then clinical service line, and then the health system. And we viewed that as patient, microsystem, mesosystem, macrosystem.

    The next slide then starts to show the idea of literally connecting the dots and moving from small dots to big dots (meso) so at the level of the patient and the physician, we have the small dots of, let's say, adverse event rate or harm rate or mortality rate.

    And we can make observations at the lowest level, patient and physician, aggregated up to the next level of dot. The micro system or unit level. Aggregate that up to the service line level. And get finally to the whole system level, the level of Cincinnati Children's or Dartmouth Hitchcock, the macro level to look at the big dots.

    The point here is that the whole system can be no better than the small systems that contribute to it, and you'd like to have your measurement system start at the most disaggregated level and aggregate up.

    Next slide. Another issue on this cascading metrics idea is that a large health system, for example Christiana, may wish to minimize its adverse event rate, a big dot measure. Then, starting to think in terms of causes and effects: a major source of adverse events is nosocomial infections. Take that down a level: a major source of nosocomial infections is catheter-related infections. Take it down a level, now literally in the organizational chart: what's the rate of catheter-related infections in the ICU, the NICU, for each nursing unit? Start collecting information at the front-line unit level to look at catheter-related infections that contribute to nosocomial infections that contribute to adverse event rates or harm.
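A minimal sketch of that cascading rollup, with invented units, infection counts, and central-line days: unit-level rates aggregate to service-line rates and then to a system-level rate.

```python
# Cascading metrics sketch: unit-level counts roll up to service-line and
# system-level rates. Units, infection counts, and central-line days are invented.
from collections import defaultdict

# (service_line, unit, catheter_related_infections, central_line_days)
unit_data = [
    ("Critical Care", "ICU",  3, 1200),
    ("Critical Care", "NICU", 1,  900),
    ("Medicine",      "A6",   2,  600),
    ("Medicine",      "B4",   0,  450),
]

def rate_per_1000(infections, line_days):
    """Catheter-related infections per 1,000 central-line days."""
    return infections / line_days * 1000 if line_days else 0.0

service_totals = defaultdict(lambda: [0, 0])
for service, unit, infections, days in unit_data:
    print(f"{unit:<5} {rate_per_1000(infections, days):5.2f} per 1,000 line-days")
    service_totals[service][0] += infections
    service_totals[service][1] += days

system_infections = sum(i for i, _ in service_totals.values())
system_days = sum(d for _, d in service_totals.values())
for service, (infections, days) in service_totals.items():
    print(f"{service:<13} {rate_per_1000(infections, days):5.2f} per 1,000 line-days")
print(f"{'System':<13} {rate_per_1000(system_infections, system_days):5.2f} per 1,000 line-days")
```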

    The next slide is a specific case example that builds on these ideas. It comes from Cincinnati Children's, where for almost a decade now they've been taking these ideas that I've just been putting forward and making use of them.

    This shows you, in a sense, their organizational chart for planning an improvement system and having a data system to support it. At the top we have the patient and the family; in the middle, the meso-level service lines, the inpatient and outpatient teams, ED, peri-op, et cetera; and then the system as a whole.

    The next slide starts to show the scorecard, the instrument panel, that looks at all of the inpatient care units against the organizational goals. The first set of goals has to do with patient safety, and the columns represent individual inpatient clinical units. The color code is green, meaning at the target level or above; yellow, meaning not there yet but approaching; and red, meaning in trouble.
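
    A minimal sketch of that kind of green/yellow/red coding, with invented measures, targets, and an assumed 10% warning band (the actual thresholds are not described here), might look like this in Python:

```python
def color_code(value: float, target: float, warning_band: float = 0.10,
               higher_is_better: bool = True) -> str:
    """Green = at target or better; yellow = within the warning band; red = worse."""
    gap = (value - target) if higher_is_better else (target - value)
    if gap >= 0:
        return "green"
    if abs(gap) <= warning_band * target:
        return "yellow"
    return "red"

# Hand-hygiene compliance (higher is better) and an infection rate (lower is better).
print(color_code(0.96, target=0.95))                        # green
print(color_code(0.90, target=0.95))                        # yellow
print(color_code(2.4, target=1.5, higher_is_better=False))  # red
```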

    And so we can look across and between the units and see how this inpatient meso system is working. Then, moving ahead, let's focus on A6 South, a particular clinical unit. Next slide.

    We're going to take it down another level. Next slide. This is A6 South, and what we'll see here is a front line, unit-level instrument panel that feeds into the overall instrument panel. If you look at it carefully, you would see that some of those important success measures are actually collected locally by staff as part of their work assignments and their care and work flow routines. Others of these measures come, if you will, automatically from the enriched information environment they have been able to create.

    Then at the bottom you'll see the idea of the measurement team. There are captains, a physician and a nurse, and the measurement and improvement team members are listed.

    And this is updated on a monthly basis. So that's a case study example. To summarize the key points: when we think down, we want a systematic approach that builds data collection into busy front line clinical settings and is designed to answer critical questions. And when we think up, we want to develop a cascading system of metrics that connects work at different levels of the organization and can literally link the mission and vision of the organization to strategic intent, to strategic objectives, to what's happening at the front line.

    So I'd like to thank you for giving me the opportunity to share these two commentary points and look forward to the Q&A.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Thank you. This is Brian Mittman again. I'd like to thank all three presenters for their comments. What I'd like to do is spend a couple of minutes summarizing some key take-away points, provide instructions for posing questions, and then open up the question-and-answer period.

    First of all, I think it's clear from the case studies as well as both sets of comments that our goal is not perfect data collection; in fact, there's no such thing as perfect data collection. There are clearly some trade-offs. There are trade-offs between the quantity and quality of data: in many instances it's much better to collect and use a broad range of data despite some inaccuracies or imperfections. There are trade-offs between timeliness and validity.

    For example, case mix adjustment and other ways of enhancing the accuracy of data often come at a cost of time, and it may be more useful to have estimates very early than more accurate data at a future point in time. There are issues related to standardization, or consistency with data collected by other organizations outside. And there is a trade-off between collecting primary data and making use of internal, readily available data that might be easier to use despite, again, some lack of consistency with outside data sources and therefore an inability to draw comparisons.

    No matter what, though, I think it's important to emphasize the need to assess the degree of bias or inaccuracy. It is often useful to use estimates or other somewhat dirty data sources, provided you know something about how dirty they are and what the inaccuracies are. So devoting some time and effort to that validation or assessment is important.

    It's also important, of course, to collect the best data that you can within the constraints of available resources. Through the case study and the commentaries we saw examples of both qualitative and quantitative data, and again the need to collect data on processes as well as outcomes or impacts. I hope some of these issues come up again during the question-and-answer period.

    And finally, to reinforce the issue of multiple purposes for collecting data, the issues that Deborah Rog presented: the need to use data to identify problems and document the need for improvement; the need to use data to diagnose and understand the causes of those problems and plan improvement strategies; to monitor improvement and refine the innovation or the improvement strategy; and, again, to do so with a focus on outcomes and impacts as well as processes and contextual factors.

    What I'd like to do now is ask you to provide input regarding the value of, and your interest in, a follow-up session. Again, for those who may have joined us late, the Innovations Exchange is interested in offering an opportunity for more detailed discussion of the issues raised in the webinar. This would take the form of a one-hour open conference call sponsored by the Exchange and would allow you to ask questions about your own evaluations and to share ideas and experiences with each other.

    If this is something that you'd be interested in participating in, please answer the question on the screen.

    While you are doing that, and while we are opening the lines, let me repeat the instructions that Judi provided during the opening of the webinar for submitting questions. You have two options for submitting questions for discussion. First, by phone: if you hit star 1 on your keypad, the operator will come on, connect you to the webinar, and allow you to ask your question live.

    You can also, however, send questions directly using the Q&A feature on your screen. Just type in the question and click on send. We've received a number of questions from you and we'll make every effort to get to as many of them as possible.

    So let me first ask if anyone is on the phone; what I'd like to do is alternate between phone and web-submitted questions. While we're getting the phone connection straightened out, let me pose the first question that was submitted to Michele, our innovation presenter. And that is: how did you collect the data? Did you use paper or electronic systems? Did you collect the data yourselves, or did you employ an outside entity to collect the data for you?

    Michele Campbell - Christiana Care Health System – Corporate Director for Patient Safety and Accreditation

    Let me first talk about the patient safety culture data. We initially used paper as well as a web link for the patient safety culture surveys that we did. Most recently, in 2009, we did only an electronic web link, with a 59% response rate. Having said that, as we're going through some of our data right now, we realize that there may have been some limitations in areas such as environmental services or laundry because we did not offer a paper tool for the patient safety culture survey.

    As for the second question, in terms of the safe practice behavior monitors, we have over 100 different departments collecting data on the national patient safety goals. Right now, that is all paper, and five of our medical practices are testing a web-based form for that data. It has been fairly resource intensive; as the national patient safety goals grow, the data collection grows. But our sole purpose was not data collection; it was also the peer-to-peer feedback. We are up to about 13 or 14 questions on the safe practice behavior monitoring tools. We specifically developed one for inpatient departments, one for outpatient departments, one for the diagnostic testing area, and one for the operating room.

    And the tool itself has the actual data definitions integrated into it. So I hope that answers your question. We end up doing a fair amount of data entry into an Access database and then report that back out in terms of our safe practice behavior monitoring.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Thank you. Gene, let me ask you to comment on these questions as well from a broader perspective: using paper data versus electronic, and having internal staff collect data versus outside staff. And could you also comment briefly on the issue of inter-rater reliability? If you have large numbers of data collectors, how would that be handled?

    Eugene Nelson – Dartmouth Center for the Clinical Evaluative Sciences – Teaches, Graduate Program

    Deming used to say that people who plot their own points understand their data. So I think there is great value in having local data collection done in very simple formats, checklists, and runs of data, where every infection-free day becomes an X on a chart that keeps going up. That is extraordinarily helpful. Oftentimes today, as indicated, Excel spreadsheets can help us go a bit further and are widely available.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    And there's a follow-up question, Michele, that I'd like to pose back to you. Given that most clinicians and staff are not trained in measurement and data collection, how would you suggest an organization get started? What did you do in your program to train your staff in collecting data?

    Michele Campbell - Christiana Care Health System – Corporate Director for Patient Safety and Accreditation

    We started very small. We had a focus group that developed the data collection tool with individuals from our data acquisition and measurement department, as well as some individuals from the patient safety work, in about five areas. They tested the tool, gave us feedback on what was working, what was not working, and what would make it simpler. And, as I said, for the first six months of our program we really spent a fair amount of time at the meetings; we were actually breaking into focus groups and talking about the data integrity issues and how important it was to understand what marking "yes" meant versus "no." Once we tested the questions, sometimes we worked at the level of a single question, and sometimes we tested the whole tool in a small set of four to five units. Then we did some rapid cycles extending it out beyond those five areas: we went to 10, then to one hospital, then to the second hospital. So that's how we tested the data collection tool.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Deborah, I'd like to pose a related question to you. We talked about the difference between internal and external data collection, and of course there are trade-offs between external and internal evaluation. On the specific point of data collection, though, to what extent are there blended approaches where, for example, an evaluation might be conducted by an outside group but rely on internally collected data? And if so, are any of the issues related to accuracy, reliability, and so on different?

    Deborah Rog - Associate Director - Westat

    Sure, I think in some of the best situations you have that blending. I've done a number of evaluations where providers are collecting data from the individuals they serve; they're collecting assessment data. We may work with them to develop the tools, and we may work with them to develop strategies for assessing reliability.

    Interestingly, as you were talking, and as Gene was responding before, it struck me that sometimes the qualitative data are the data best collected by external folks. When you're looking at situations where you want some objectivity or independence, or even a perception of it, people who are asked questions by someone they know well may give a certain kind of answer. So in those situations, or when a particular skill level is required, it may be best to have external folks collecting the data.

    But in many ways you can blend the approaches. And for some of the data that you want on a very frequent basis, it makes good sense from an efficiency standpoint to have collection built into the program.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Okay. I have a long queue of questions that have been submitted electronically, but let me pause and ask the operator whether there are any questions that anyone would like to pose live during the call.

    Operator

    There are currently no questions in the queue.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Okay. Let me turn to the next question, then. And that is: how can one be sure to make decisions based on data at the right time if one month my data are low and the next month higher? When do I act: quarterly, at six months, sooner? Deborah, let me ask you to address that question first, and then Gene.

    Deborah Rog - Associate Director - Westat

    The common answer that I have is: it depends. It depends on the kind of data. If you've got a safety issue and a very high rate of problems, I would react quickly. But for many other things you're looking for trends and looking to see if a pattern continues. You may also, at the same time, be assessing what could have affected that trend. If you have something that's skyrocketing, you might collect some informal information about other things going on in the environment that may explain the blip in the data.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Gene, any additional comments?

    Eugene Nelson – Dartmouth Center for the Clinical Evaluative Sciences – Teaches, Graduate Program

    Yes. In my home Dartmouth system, and in many that I work with, a primary skill for improvers and innovators is statistical process control: learning the logic behind making a run chart, turning that into a statistical process control chart, plotting the data as rapidly as possible, as close to real time as possible, and then using the known tests, shifts in the data, runs of points going up or going down, or points out of statistical control, more than three standard deviations from the established mean.

    So use data as close to real time as possible and plot data over time. Learning SPC conventions really can be mastered by front line staff, and it is extremely helpful for answering the questions of when up is really up, when down is really down, and when there is strong evidence that a substantial change in the system has occurred and is being sustained.
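
    A simplified sketch of that basic logic, with invented monthly counts (note that real SPC charts such as XmR or c-charts compute their limits somewhat differently), might look like this in Python:

```python
import statistics

# Hypothetical monthly event counts showing a downward shift near the end.
monthly_counts = [12, 9, 11, 13, 10, 12, 8, 7, 6, 7, 5, 6, 6, 5, 4, 5]

mean = statistics.mean(monthly_counts)
sigma = statistics.pstdev(monthly_counts)
upper, lower = mean + 3 * sigma, mean - 3 * sigma

for i, value in enumerate(monthly_counts, start=1):
    flags = []
    if value > upper or value < lower:
        flags.append("outside 3-sigma limits")
    # Shift test: this point and the seven before it all on the same side of the mean.
    window = monthly_counts[max(0, i - 8):i]
    if len(window) == 8 and (all(v < mean for v in window) or all(v > mean for v in window)):
        flags.append("run of 8 on one side of the mean")
    print(f"Month {i:2d}: {value:3d}  {'; '.join(flags)}")
```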

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Michele, did these questions arise in your work with the patient safety mentor program?

    Michele Campbell - Christiana Care Health System – Corporate Director for Patient Safety and Accreditation

    Yes, at the team level. When we started collecting the data for the safe practice behavior monitoring, remember, we did it at the local level, giving feedback each month to the departments. But as we stratified the data at the team level, we did do some control charting in some of the teams related to the efforts there.

    But we have not used statistical control charting consistently back at the nursing unit level, really because of what was relevant to the staff. But at the team level, for monitoring, say, our VAP rates or bloodstream infections, we absolutely do use it.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    We've received a couple of related questions having to do with multiple levels and the concept of a cascade of measures. Gene, this question is directed to you. In creating cascades of measures, how do you balance the need for senior leadership or governance to set priorities and strategic directions, with big dot measures that drive downward to the front line, against a bottom-up approach that starts from what you're measuring at the micro level?

    Eugene Nelson – Dartmouth Center for the Clinical Evaluative Sciences – Teaches, Graduate Program

    It's a great question. I think in general you do want and need senior leadership to set the large goals, for example for mortality, freedom from harm, or excellence in patient experience, and to start to conceive of a way of measuring those at the organizational level while creating knowledge at the front line level. At the front line you have two jobs: to do your work and to improve your work, and to improve it in a way that contributes measurably to the larger system's goals. You also want to play catch ball, which is the term we use, so that there is a dialogue between the front office and the front line, let's say A6 South, about what the goals are for the next 12 months. It may be that there are two and two: two goals coming from the system at large and two improvement or safety goals coming from that front line unit. That way you get both bottom up and top down, and you focus on what matters, what's being measured, and what's being done to improve the process and move those measures.

    So the idea is that senior leaders set the primary objectives and the primary measures, try to establish the cascade, and play catch ball so that the issues of importance at both levels of the organization are addressed and heard.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Before I pose the next question, I'd like to ask you to go ahead and complete the evaluation; in a minute you will see the post-webinar evaluation on your screen. We encourage you to provide feedback. It's very useful to the Westat team as well as to the presenters.

    Let me then turn to the next question. Gene, I'll ask you to comment on this first. That is, how would you handle development of indicators and benchmarks when you have both electronic and paper data collection and charting available?

    Eugene Nelson – Dartmouth Center for the Clinical Evaluative Sciences – Teaches, Graduate Program

    Good question. I think the starting point is this issue of: what's the question that must be answered, what's the dummy data display, and what are the essential variables to collect? Can we get them electronically? If we can, and they're sufficiently accurate and sufficiently timely, terrific; use it, go for it. And certainly over time you want to improve your electronic, automated information environment to become richer.

    However, if you're in the other zone, which is common, where the data have to be collected by paper, by hand, at the front line as work occurs, then try to think minimalist: what is the minimum number of variables that need to be collected, and at what periodicity, to sketch out with sufficient accuracy the current zone of performance and how it is moving over time?

    So it's getting that figured out and then translating it into front line job descriptions and task activities, so that at the end of the shift this person is responsible for doing that piece of data collection on this important indicator.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Okay. Thank you. The next question I'll ask Michele to address first, and then I'll ask the two commentators to speak as well. And that is: do you foresee using the data collected for quality improvement at the individual health service level in any comparative manner, comparing your data against other acute care organizations or against any national benchmarks that might be available?

    Michele Campbell - Christiana Care Health System – Corporate Director for Patient Safety and Accreditation

    Very good question. Most of our benchmarking has really been internal, in terms of competition among our clinical departments. For example, in our nursing department we have 59 nursing units, and we share that data mostly for internal benchmarking by service line, by vice president, by type of unit. But it's a great question, because if you look at the data for the safe practice behavior monitoring, it really is almost perfection, so how many is too many? It would be an interesting project for a next innovator.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Gene, could you comment as well on the use of data collected for internal quality improvement purposes for external comparison?

    Eugene Nelson – Dartmouth Center for the Clinical Evaluative Sciences – Teaches, Graduate Program

    On using internal quality improvement data for external comparisons: there are many ways to get into mischief in the internal improvement versus external judgment world.

    However, I think we would like to focus on having the goal be continual improvement in all locations, and to have the data flow off the work so it can inform performance at the local level, then, as I mentioned before, be aggregated up to the whole system level, and then be reported out publicly. That's the most efficient way to think about data systems. At my home institution, Dartmouth-Hitchcock, if you go to our website you can view many different service lines with cost data, safety data, clinical outcomes, and patient perceptions reported out, and the basis of the reporting is internal improvement: knowing how we're doing and how to get better. So I like the idea of internal improvement producing the data, the data being used rigorously and vigorously, and then rolling that out, because our patients, their families, our communities, and the public deserve to know what's happening under the hood within our system.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Deb, let me give you an opportunity to respond if you'd like to and then I'd like to ask you a couple of questions about qualitative and process data. But first any thoughts on the issue of internal QI versus external comparisons in the use of data?

    Deborah Rog - Associate Director - Westat

    I would hitchhike on what Gene said. I totally support that view, and I think the devil does get into the details. The data can be very helpful for an internal purpose, but there may be measurement differences and other things that differ once you start to make external comparisons, and that may detract from the purpose at hand, which is really to use the data internally to guide decision making and any service improvement or other actions. So I would agree with Gene.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Okay. Now I have a pair of questions which would probably require a good hour and a half to respond to, but let me ask if you could speak briefly.

    And that is if you could talk a bit more about collecting and analyzing qualitative data and tracking and analyzing the process of change or processes of care rather than the more quantitative data that we typically use to analyze impacts and outcomes. And, Deb, if you could begin.

    Deborah Rog - Associate Director - Westat

    Well, I'll start with qualitative data. I think qualitative data can be exceptionally useful, and it can really help put some flesh on the bones of the quantitative data. Using the two in tandem is often the way you want to go, and there are a variety of ways you can collect the information: through focus groups, through interviews. Again, you have to be concerned with who is collecting the information and whether there's any way that person elicits nonobjective information from others. But other than that, there are a lot of different ways to do it. We also have a number of tools available to help us organize qualitative data.

    There are now qualitative software packages that are quite robust and extensive; they can support complex analysis, but they can also simply be used to help us organize qualitative data.

    As you start to collect the same type of information from 10 or more sources, it can become complex. So these qualitative packages, NVivo is one and ATLAS.ti is another, can be useful in helping you organize that information.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Michele, were there specific points in your program where you did rely on qualitative data?

    Michele Campbell - Christiana Care Health System – Corporate Director for Patient Safety and Accreditation

    I would say that in terms of qualitative data, we are certainly looking at the results from the patient safety culture survey, not only from the perspective of the themes or the composite scores but also in light of the goals of our program. So as we communicate the results back from our recent survey, we will look at that qualitative data in terms of giving feedback to the safety mentors and to the organization, to say: the staff is telling us we're on the right track, and our patients are telling us we're getting there, too.

    So we do utilize it that way in terms of themes and composites.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Gene, any additional comments in terms of the role of qualitative data and the kinds of data collection activities that you had presented?

    Eugene Nelson – Dartmouth Center for the Clinical Evaluative Sciences – Teaches, Graduate Program

    In addition to interviews and focus group interviews, direct, deep observation of the current state or the new state is an extremely valuable source of qualitative information. One of the values of qualitative information is that it can help you know what works where, why, and under what conditions. That is extremely valuable as you try to spread innovation across the system or move an innovation from one system to another, because if we took, for example, the safety mentor program and transplanted it to 50 other hospitals, it would generally work quite well in some, not well in others, and somewhere in between in the rest. To learn what's behind that, you acquire the qualitative information to understand what works where and under what circumstances.

    I would like to add that with the patient safety culture assessment, once you have the quantitative data, the work of really improving has only just begun. In the culture debriefing sessions, I think you do need to work with the staff to review the results and select an item that's relevant to them, to their experience, and to their clinical area.

    And it's important for them to envision what they feel the ideal might look like for the particular process they select.

    And then I think getting to the action step: what would the staff in the clinical area say could move this in a more positive direction? So that's how we have used the qualitative sessions. It's also a facilitator-led session, and that's important.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Gene, I wanted to follow up on something you said at the very end of your response, what I would call the issue of surprises: an innovation that has been highly effective in other settings is implemented in a new setting, and the adopters discover a very different level of effectiveness and need to understand why.

    And, again, that's obviously one of the key motivations for collecting data and doing evaluation. My question, though, is whether an organization should prepare for that and put in place data collection efforts to provide the kinds of insights that would be needed to understand why something is not working in its setting when it worked elsewhere.

    Or do you initially set up essentially a monitoring system, and if things are going well you're fine; if they're not, then you sit down and begin to think about what some of the reasons might be and what sorts of refinements would be needed?

    Eugene Nelson – Dartmouth Center for the Clinical Evaluative Sciences – Teaches, Graduate Program

    Great question. I think ideally what you would like is a monitoring system that gets down to the different front line unit levels. If things are going extraordinarily well across the board, then the issue becomes: are we at the theoretical maximum? Should we be? If we aren't but should be, what is theoretically possible with respect to safety and avoidance of harm, for example, and what are our ideas for closing that gap?

    So I can see that even if you're hitting initial targets, if you aren't at the theoretical maximum, you may wish to dig further using qualitative information.

    And of course, it's important to know why something is sustained here and not sustained there, or why there are differential levels of success: high performing on this intervention, low performing on that one, and what's behind that.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Okay. Any additional comments, Deborah, on this issue of what works when, why and for whom?

    Deborah Rog - Associate Director - Westat

    I think it just raises the point that data alone don't do it for you; it's about having an analytic posture and having the questions set up front. The one situation you don't want is collecting a lot of data without knowing exactly how you want to use them and how they map to your questions. So I would just say it comes back to having a keen set of questions and making sure you're collecting data that are targeted to them.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Okay. And we have a couple more minutes for questions before I turn the podium back over to Judi to wrap up. But let me again pause and ask the operator if there are any questions from participants via phone.

    Operator

    No, we apparently have no questions on the phone line.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Let me pose another question, then, to Michele. And that is about the web-based form for data collection: is it a tool you've developed institutionally, or is it a vendor-offered tool that you've used?

    Michele Campbell - Christiana Care Health System – Corporate Director for Patient Safety and Accreditation

    The web-based tool that we are currently piloting has been internally developed. The data that we collect beyond the web actually sit in a database that is commercially developed. But the web tool itself that we're testing has been developed internally, and it's in five physician practices.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Okay. Let me ask Michele as presenter and the two commentators whether there are any other thoughts that any of you would like to add before we close.

    Eugene Nelson – Dartmouth Center for the Clinical Evaluative Sciences – Teaches, Graduate Program

    This is Gene. I think one of the really important opening comments Michele made had to do with how essential it is to involve the front line leadership, the front line informal leaders. And I think one of our challenges is to connect the front line with the front office, both with respect to moving important measures and making important accomplishments, and with respect to having measurement and knowledge at all levels of the system. So really getting down to that front line micro level is extremely important, because the whole system's results are produced at the sharp end, at the front line.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Thank you. And any final comments, Michele or Deborah, and then we'll ask Judi to conclude for us.

    Deborah Rog - Associate Director - Westat

    This is Deborah. I would just say, to really bring home the issue of building it in: don't think of evaluation as a separate activity or data as a separate activity. Really see how it can be built into daily work, as Gene said, and think of it as an integrated activity, so that it's a tool for you, not a burden on the side.

    Brian Mittman – Director of the VA Center for Implementation Practice and Research Support

    Okay. So let me again thank all three presenters for your contributions as well as the participants who submitted questions and I'll turn the platform over to Judi.

    Judi Consalvo – AHRQ Center for Outcomes and Evidence – Program Analyst

    Thank you, Brian. I'm afraid we're now out of time, and I have to bring this webinar to a close. It's been a wonderful experience, and I want to thank everyone who submitted questions, which prompted some in-depth and valuable discussion and this rich exchange of information. Thank you. Thank you all.

    I want to mention again to the audience that we value your feedback and hope you can spend some time completing the evaluation that's either on your screen or about to appear on your screen. If not, you can also contact us at any time at info@innovations.ahrq.gov.

    On behalf of AHRQ, thank you again to Michele, Gene, Deborah and Brian. And to all of our participants.

    Operator

    Thank you, ladies and gentlemen. This does conclude today’s teleconference. You may disconnect your lines at this time. Thank you very much for your participation. Have a wonderful day.