Is Safe Healthcare Rocket Science? Applying NASA principles to healthcare.
NASA for Healthcare: An Interview with James Bagian, MD, ME, Physician, Engineer, and Astronaut
USA April 27, 2017
Yisrael Safeek: Welcome, Dr. Bagian. You have had a distinguished career as a NASA astronaut, flight surgeon, engineer, anesthesiologist, private pilot, and Air Force-qualified freefall parachutist. As a founding member of the Department of Defense's Committee on Tactical Combat Casualty Care, your work in pre-hospital trauma care has substantially reduced the mortality of service members who suffer battlefield wounds. You are also among the world's premier patient safety experts, having been selected in 1998 by the VA to establish the National Center for Patient Safety and becoming its first director. You developed and implemented an innovative national program, aimed at protecting patients from hospital-based harm, at all VA hospitals. That program is the benchmark for patient safety in hospitals worldwide and earned the Innovations in American Government Award in 2001 from the John F. Kennedy School of Government at Harvard University. You are currently the Director of the Center for Healthcare Engineering and Patient Safety at the University of Michigan.

When you were selected to become a Space Shuttle astronaut, you were, at the time, the youngest person ever selected. During your career with NASA, you logged 337 hours of spaceflight over two missions, STS-29 (in 1989) and STS-40 (in 1991). After leaving NASA in 1995, you were elected a member of both the National Academy of Engineering and the Institute of Medicine, now called the National Academy of Medicine. You took part in the planning and provision of emergency medical and rescue support for the first six Shuttle flights, and later flew aboard Discovery in 1989 and Columbia in 1991. NASA used to say, "flying the shuttle is like flying a 727 to Disney World." Is that really the case?

James Bagian: Okay, yes, you are right. Back in 1985, if I recall, NASA's public affairs, the people who do the public communications for NASA, had said that. In fact, the head of our office brought up this public affairs characterization at one of our office meetings, saying that if flying the Shuttle was like flying a 727 to Disney World, nobody would be going to Disney World. This drew laughter from those in attendance because the risk of flying on the Shuttle was so much higher than flying on a commercial airliner.

Now, the decision to accept or take a risk is a comparative thing. It always comes down to what the reward is for the risk you take. There is risk in everything we do, whether it is in healthcare, or flying a Shuttle, or an airplane, or riding a bicycle down the street, but we have to weigh it against the countervailing advantage or benefit. At that time, NASA wasn't good about transparent communication of the risk and what the value was of taking that risk. I currently serve on the NASA Aerospace Safety Advisory Panel, and in our annual reports to NASA and Congress we talk quite a bit about the importance of transparent communication of risk and the value of taking that risk. Everything we do, whether we think about it that way or not, comes down to that type of decision. People, especially leadership, should be taking or accepting risk in an open and transparent way, so that it is understood why the decision is made to take a particular risk.

YS: NASA's seamless continuum from the ground, through the atmosphere, into space was interrupted by the 1986 Challenger space shuttle accident. You and your crew were supposed to be on the Challenger, but got switched out a few months before the fatal mission. Did you witness the accident?
JB: Yes. We had been assigned; our mission was 61D. It was the first Spacelab Life Sciences mission, and we originally were supposed to be the first ones off pad 39B. About five months prior to that, NASA made a decision that 51L (the Challenger mishap crew) was going to fly their payload first. As a result, since my mission was pushed back, I remained at the Kennedy Space Center to support crews getting ready. We were the ones tasked by the Astronaut Office to assist in making sure the vehicles checked out and were ready for the crew to fly. So, yes, I was there that day, at Road Block L if I recall correctly, which is the closest place to the launch pad. I was there when it occurred.

Aircraft were not allowed into the area of the Challenger mishap immediately after the accident because there was still debris falling for about an hour after the breakup. Once we were cleared to fly into the area, I flew aboard a helicopter and we mapped where we saw the debris to provide information that would help with the investigation into the cause of the Challenger mishap. I was part of the accident investigation. I did the first dive to confirm where it had been reported that debris from the crew cabin had been located. On that dive, I confirmed the location of the Challenger crew cabin and brought up the first remains of the crew. I was subsequently part of the whole process of the accident investigation that determined what happened to the crew and caused their deaths. Our ultimate goal was to understand the things we could learn from this mishap so that, if we had another mishap like this in the future, we would be less likely to lose the crew. That led to the development of the escape system we used on the Shuttle. I was subsequently part of designing and implementing the escape system for the Shuttle.
YS: You were tasked by NASA to design, develop, and test a new shuttle escape system so shuttle crews could survive future disasters. Was that similar to military ejection seats?

JB: No, it's not an ejection seat. We looked at a number of options, including ejection seats, but we actually went to a manual bailout system. We designed the system so that if another mishap similar to the Challenger mishap occurred, the crew would have a good chance of surviving. We felt that this was important because our investigation of the Challenger mishap had conclusively shown that the crew were strapped into their seats and still alive for over two minutes after the initial Challenger breakup, and were killed when the crew module hit the water. This indicated that an escape system could have been of value.

YS: You said that the word fault is the "F-word" in medicine. What do you mean by that?

JB: Well, when something untoward occurs, the first thing that is often done, and I think many of us have witnessed it and maybe have done it ourselves, is to look for someone to blame. We say, "Whose fault was that?" And when you do that, especially in a leadership position, people want to answer your question. They want to attach a name to it. That implies that if we just figure out who is at fault and we correct them in some way, we punish them, we educate them, we train them, then everything will be okay. But things aren't that simple. First of all, it's hard to say you can really correct any person; in reality, probably the one you come closest to successfully correcting is yourself, not others. If we think that simply identifying the person who made a mistake, and correcting that person, will make the world okay, that's a fallacy. The fact is that there are usually many people who have made similar mistakes in the past and are likely to do so in the future. So, focusing attention exclusively on the person associated with the event is unlikely to prevent the myriad other people who may be involved in similar events in the future from having the same undesirable outcome. A much more productive and proactive approach is to intervene at a more systems-based level that will prevent or mitigate the risk not only for the individual case under consideration, but for the greater population at large. We will have a true preventive impact. Once an adverse event has occurred, we can't undo it. What we can do is try to mitigate or eliminate the risk of similar cases occurring in the future, and thus benefit a greater number of patients in a more effective and sustainable way.

YS: How does healthcare compare to engineering and aeronautics when it comes to dealing with human errors?

JB: Well, I think, first of all, when we started the patient safety program at the VA we didn't use the word error at all because error, once again, goes to blaming and determining fault. Error implies a human has done something wrong, and it distracts us from understanding the real underlying causes.
That then leads people down the logic train to say, “Let’s find that person, let’s find that guilty person - the person who we’re going to blame,” rather than looking at the systems-based causes and contributing factors to understand how to prevent or mitigate the risk of occurrence in the future. So, we didn’t use the word error.
YS: The Roman philosopher Cicero coined the very familiar statement, "To err is human, but to persevere in error is only the act of a fool!" You advocate "eliminating patient harm" rather than "eliminating human errors."

JB: Exactly, because first of all, when you state that there was an error, that's based on a post hoc analysis, after you've seen the result. This retrospective analysis and classification that an error has occurred is not particularly useful to the caregiver, who doesn't have the ability to foretell the future. When you say there's an error, it makes it a personal thing, so people tend to go to "who's at fault, who's to blame," and that is a distraction from determining the underlying systems-based causes of the adverse event. Secondly, using the engineering approach, your goal should be what you desire the future state to be. The patient does not care whether the caregiver makes an error; the patient cares about whether they got the care they wanted to get. It is a fallacy to assume that an error always results in harm to the patient, since in well-designed systems that have fault-tolerant properties, individual errors or mistakes may occur, but the system as a whole still does not result in harm to the patient. It is naïve to rely on human performance ever being perfect. So, it is imperative to design care systems so that even when individual components of the system are imperfect, the system as a whole still renders safe care. The goal for patient safety that we set for ourselves was that no patient would be inadvertently harmed while under our care. We specifically didn't say that we wanted to eliminate all errors, since that is not possible and is not the real goal from the patients' perspective.

YS: So, this brings up the question: why are so many resources expended monitoring medical errors?

JB: Well, first of all, I think one reason for the level of expenditure on monitoring is that people and organizations don't always consider how the information from the reports will be put to use in preventing or reducing future patient harm, and whether the effort expended in collecting those reports is justified by the downstream gains. Just as it is important to carefully decide which tests are needed to appropriately diagnose and treat a patient, rather than indiscriminately ordering all of the studies that are available, it is also important to decide which factors should be monitored so that they provide information that helps achieve our goals related to patient safety. Unfortunately, this is not always done. Said another way, you want to have a rational way to determine if "the juice is worth the squeeze."

YS: The NASA Aviation Safety Reporting System (ASRS) was established in 1976 following a significant aircraft accident. You implemented NASA's Patient Safety Reporting System (PSRS) across all 150-plus VA hospitals. Why that program?

JB: We were concerned, before we even rolled out our initial patient safety program in November of 1999, that our employees would be reluctant to report because of fear of the potential adverse impact reporting issues would have on them or their colleagues. We understood from a safety culture survey we had performed that people's major concern was not being punished (though you hear people talk about that), losing privileges, or being sued. Instead, the number one thing was shame. Forty-nine percent said that they would be ashamed if anybody knew they made a mistake. Well, that's a very strong word, ashamed.
We designed an internal reporting system that would insulate people from the fear of shame, but we were also concerned that people might not want to report internally because they were afraid the internal reporting system might be used against them, despite our promises to the contrary. So we contracted with NASA to operate a parallel patient safety reporting system that would address the fear that some staff might have that reports would be used in a punitive fashion.

YS: How is PSRS similar to the highly successful Aviation Safety Reporting System (ASRS)?

JB: The Aviation Safety Reporting System (ASRS) is not anonymous. It's de-identified, so people are initially known when they report to NASA. But as soon as NASA gets the information they need from the reporter, NASA makes it anonymous; they take all the information that could result in the identification of a reporter out of the record. The problem is that when they release what they found to the public, they have to anonymize it so much that it can lose vital specificity and the ability to take definitive action. Ideally, you would like to have people report with identities so you can go back and talk to them, and you know with great specificity what needs to be done. You have a greater understanding and a better exactness about how to intervene.

Now, we also had a contract with NASA to run a system just like the Aviation Safety Reporting System, so people, if they didn't trust our internal reporting system, could report to NASA. We would get similar sanitized reports, so we would still know something happened, but we would have no way to determine who reported the event. NASA didn't have the ability to do a full investigation; they just relayed what was reported to them. Based on the experience in aviation, where aircrew were much more comfortable reporting to NASA as opposed to their own company, we thought that NASA would get most of the patient safety reports as opposed to our internal reporting system. This turned out not to be the case. We literally had a thousand times more reports, three orders of magnitude more, to our internal system than to the NASA system, which showed that people trusted us and our internal reporting system. We proved that we could be trusted, and we took action on what was reported. In just the first 10 months of operation, our internal reporting system saw a 30-fold increase in reporting compared to the legacy system; that's nearly a three-thousand-percent increase. Over the next 10 years, we saw reporting go up approximately four to five percent per year, every year after that. This sustained increase in reporting demonstrated not only that the system was trusted not to shame anyone, but that problems that were identified resulted in action, and that the effort to report was worthwhile from the reporters' perspective.

YS: Despite many similarities between health care and aviation, event reporting systems have not been well received in health care. Studies have shown that many physicians are reluctant to participate in programs to report medical errors. Should there be a national PSRS for all US hospitals? Should it be mandatory?

JB: Okay, well, the issue of mandatory reporting came up in the very first hearing in the U.S. Senate after the IOM report came out back in 1999. In January of 2000, the Senate held a hearing, and I was invited to testify about this.
One of the questions Senator Specter asked me was why I was quoted in the New York Times as being against mandatory reporting. I repeated to him a statement I had heard from Dr. Charles Billings, who was one of the founders of NASA's ASRS system. Dr. Billings had said that "in the final analysis, all reporting is voluntary"; people only report what they care to report once they weigh the relative advantages of reporting or not reporting. I said that there was no such thing as a mandatory reporting system, if what you mean by mandatory is that everybody will report what you require them to report. Instituting a mandatory reporting system essentially criminalizes failure to report, if you detect it. If you make it mandatory and people don't report, then they're culpable and can be blamed, but that doesn't mean they will report. People report what they care to report, and they weigh the risk of reporting against the risk of getting caught not reporting. They have to say, "Well, what if I report and I get punished or my reputation is damaged? What's the risk if I don't report, and will I get caught?" People do that analysis in their minds and decide whether it is worthwhile. So, why would the government criminalize non-reporting if doing so doesn't mean everybody will report? It comes down to the reason you want people to report, which is safety improvement. It is not for counting; it is to identify vulnerabilities, which are then prioritized to determine if further action is required.

YS: It's been 10 years for the WHO Safe Surgery Saves Lives program and the Safe Surgery Checklist. Why are surgical errors still happening, and are checklists over-rated?

JB: We were using checklists throughout the VA in the operating rooms to guide prebriefings and debriefings well before the WHO checklist. We used checklists as a cognitive aid to guide our teams in our medical team-training initiative. This checklist-guided team-training showed an eighteen percent reduction in annual mortality. The checklist is just a cognitive aid to make sure we don't miss things. It is not a to-do list to be used in a robotic, mindless manner. We used the checklist to guide the briefing, so you make sure you don't miss topics. It facilitates the team having a shared mental model concerning what they are about to undertake and how they will prepare themselves to deal with the variety of challenges they may encounter.

First, in a surgical checklist, if you do it well, the most important thing is that you have this conversation. And we always say the very last thing you should do in the briefing is go around from the lowest in the hierarchy to the highest and ask, "What are you most concerned about in this upcoming procedure?" Not, "Are you concerned?" "Are you concerned?" is a yes-or-no question. Instead you say, "What are you most concerned about?" You would be surprised what you hear. Often, you hear nothing, but sometimes you hear something vital; for example, we had a third-year medical student say, "I thought this patient was going to have their left knee operated on, and I see we've marked and prepped and draped the right knee," which averted a tragic outcome.

In the beginning, people think they need to have a special checklist for every type of surgery, and we have found that is not necessary. If you do an appropriate generic one, it will apply to all, because it allows the latitude for the team to cover the important information. You have to trust people to use their heads; a checklist is not an excuse from thinking. It's a memory jogger so you don't overlook something in your haste or distraction. I think many people don't understand how to use checklists in healthcare, and they view a checklist as a compliance tool to document that they have done certain things. While the use of a checklist certainly can increase safety and compliance with policies, it is not itself a recordkeeping tool to document compliance, and organizations that use it in that manner are missing the point.
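As a concrete illustration of the generic, conversation-guiding briefing checklist Dr. Bagian describes, here is a minimal sketch in Python. The topic prompts and team roles below are hypothetical examples, not the VA's actual checklist; the one element taken directly from the interview is the closing open-ended question, asked of every team member from the most junior to the most senior.

```python
# Minimal, illustrative sketch of a generic pre-procedure briefing guide:
# topic prompts to discuss, not boxes to tick. The topics and roles below
# are assumptions for illustration, not the VA's actual briefing checklist.

BRIEFING_TOPICS = [
    "Confirm patient identity, procedure, and site/side marking",
    "Review relevant history, allergies, and anticipated blood loss",
    "Confirm equipment, implants, and imaging are available",
    "Agree on the plan and contingencies for likely complications",
]

# The deliberate last step from the interview: an open-ended go-around,
# from the lowest in the hierarchy to the highest, so concerns surface.
FINAL_QUESTION = "What are you most concerned about in this upcoming procedure?"


def run_briefing(team_by_seniority: list[str]) -> None:
    """Walk the topic prompts, then ask every member the open-ended question."""
    for topic in BRIEFING_TOPICS:
        print(f"Discuss: {topic}")
    for member in team_by_seniority:  # most junior member speaks first
        print(f"{member}: {FINAL_QUESTION}")


run_briefing(["medical student", "circulating nurse", "anesthesiologist", "surgeon"])
```

Note that the sketch encodes topics to discuss rather than items to check off, matching the distinction drawn above between a cognitive aid and a compliance record.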
YS: That's powerful, very powerful. We hear often of high reliability healthcare, a concept pushed by The Joint Commission, but not assessed through its survey process, as evidenced by errors in accredited facilities. How can NASA principles be applied for high reliability hospital accreditation?

JB: High reliability organizations are preoccupied with failure: with understanding how things could go wrong and being prepared to deal with those situations before they ever occur. They concentrate on where they fall short. They're always asking, "How can I do it better, even if I had a successful flight?" or, in the case of medicine, "The patient did okay, but how can I do it better in the future?" Many organizations instead prefer to concentrate on what they did well and what awards they have received, rather than on how they can do better. In many cases, they don't embrace hearing critical comments that can and should be used to achieve a constructive objective. If there's criticism, high reliability organizations put their ego aside to examine what they can learn that will help them do better. They examine whether there's a grain of truth that they, or more importantly their patients, could benefit from. Instead, in many traditional healthcare organizations we see a preoccupation with celebrating success. You don't see airlines celebrating when they've landed the plane without crashing. The crew doesn't high-five each other and say, "Well, we successfully landed once again." They expect to do it that well. So, they want to constantly be talking about the ways they could have done better: maybe we're doing okay today, but how could we do better? They look at close calls to learn, rather than wait for catastrophes before seeking ways to improve. In healthcare, most places don't even take reports of close calls. Of the places that do, most don't have a defined, transparent, explicit way to do risk-based prioritization, to determine whether they should do a full analysis of a close call in which nobody was severely hurt, as high reliability organizations do. High reliability organizations are preoccupied with failure and the things that can cause failure, and that is what they talk about all of the time.
YS: We all know the elephant in the room is "healthcare induced harm." So, how do you eat an elephant?

JB: [Laughs] One bite at a time. I think the key is that you want the culture to change, and the culture is the product of what your organization does every day. Culture is the product of what you do: your beliefs and your actions. You don't achieve that by telling everybody we're changing our culture; we do it by providing people tools and structures that shape their behaviors and attitudes so that they respond to challenges in a manner that is consistent with the values and goals of the organization. We didn't tell the frontline we were doing anything differently when we put this program in place. I was careful that we made no promises to the frontline. All our conversations were with the executive leadership at the hospitals and at headquarters. I did that deliberately, because I said, "If we tell the field, the people, nurses, physicians, that things are going to change, and they don't change, they'll say, once again, another broken promise from top management." Instead, we had them continue to report as they did before, but they saw that the response to the reports was different than in the past. We wanted to let our actions speak for us rather than make promises. And that's what we did.

And then, like I say, in the first 10 months we saw a 30-fold increase in reporting, because people saw, "Whoa, that's going on?" They reported an issue and things got better, instead of the report disappearing into a black hole. People saw that the same report that in the past resulted in nothing being done now resulted in action being taken that was in the best interest of the patient, and that was fair and not punitive from the staff perspective. Before, nothing got done about it: there was no investigation, or the investigation was punitive, so it didn't help anything. So, we saw a culture change, not because we said we were going to change the culture, but because we gave them tools, tools that were better for them and the patients, and that still got the job done. This motivated them to want to use the tools; they're not doing it because we said they had to, but because they wanted to.

For example, one tool was the SAC (Safety Assessment Code) risk prioritization grid. If they followed that prioritization grid, then they did everything the Joint Commission wanted them to do and more, everything the FDA wanted them to do and more, and other regulatory requirements were addressed as well. So, instead of having to look at three or four different places, they go to one place and it's all addressed for them. Our staff thought, "Wow, you just made my job easier, not harder." People like that, right? We deliberately followed a servant leader model: we were going to provide tools that made it easier for the people doing the job to care for the patient. We wanted to be as unobtrusive as possible and to have them view these initiatives as of value for them. From their perspective; not from the [VA] Secretary's perspective, not from my perspective, but from their perspective. So, we gave them tools that worked for them, literally reducing the hours per RCA (root cause analysis), and that resulted in problems getting fixed. Providing tools and a system that works changes the culture. So did the fact that leadership, when people pointed out problems, didn't say, "Whose fault is this?" And I use this comment with leadership: I said, fault is the "F-word" in medicine.
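To make the kind of risk-based prioritization Dr. Bagian describes concrete, here is a minimal sketch, in Python, of a SAC-style severity-by-probability grid. The severity and probability category names follow the VA's published Safety Assessment Code, but the numeric scores in the grid and the rule that a top score mandates an RCA are illustrative assumptions for this sketch, not the actual NCPS matrix.

```python
# Minimal, illustrative sketch of a SAC-style (Safety Assessment Code)
# severity-by-probability grid, in the spirit of the tool described above.
# Category names follow the VA's published SAC; the numeric scores and the
# "top score mandates an RCA" rule are assumptions for illustration only.

SEVERITIES = ["catastrophic", "major", "moderate", "minor"]
PROBABILITIES = ["frequent", "occasional", "uncommon", "remote"]

# Rows ordered by severity (worst first), columns by probability
# (most likely first). Scores: 3 = highest priority, 1 = lowest.
SAC_GRID = [
    [3, 3, 3, 3],  # catastrophic (assumed: always top priority)
    [3, 2, 2, 2],  # major (assumed scores)
    [2, 1, 1, 1],  # moderate (assumed scores)
    [1, 1, 1, 1],  # minor (assumed scores)
]


def sac_score(severity: str, probability: str) -> int:
    """Return the 1-3 priority score for a reported event or close call."""
    row = SEVERITIES.index(severity.lower())
    col = PROBABILITIES.index(probability.lower())
    return SAC_GRID[row][col]


def requires_rca(severity: str, probability: str) -> bool:
    """Assumed triage rule: the highest score mandates a root cause analysis."""
    return sac_score(severity, probability) == 3


# A close call that could have been catastrophic still scores top priority,
# which is how close calls get analyzed before a catastrophe ever occurs.
print(sac_score("catastrophic", "remote"))   # -> 3
print(requires_rca("moderate", "uncommon"))  # -> False
```

The design point is that the triage rule is explicit and auditable: every report, including a close call in which nobody was hurt, is scored the same transparent way, which is what allows one prioritization scheme to address several regulators' requirements at once.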
YS: NASA's spinoff programs include the Psychomotor Vigilance Task, a fatigue test. What's the role of the Psychomotor Vigilance Task for medical trainees?

JB: I can't comment substantively about that test. However, I think this whole idea of using a test to determine fitness for duty is where you want to be, rather than regulating duty hours, which is a blunt instrument at best. Limiting continuous hours of duty at some level is not totally unreasonable, but really, I would say the Holy Grail is having a good, reliable fitness-for-duty check, and I just don't know if the NASA one has been shown to be capable of doing that for healthcare. But that's certainly the direction in which I believe we should go.

YS: You are the first and, so far, the only person of Armenian descent to have been in space. What does that feel like?

JB: I'm honored to have had that opportunity and I'm proud of my heritage, but I didn't do it alone. I was part of a team that worked together to get the job done the best that they could.

YS: Dr. Bagian, on behalf of The SafeCare Group, I thank you for all you have done on planet Earth and in space to improve the safety of healthcare for all mankind.