Do Safety Differently
Sidney Dekker and Todd Conklin
Highlights & Annotations
Sapir hypothesis.[2] Whorf-Sapir says that language is relative and that meanings for words don’t exist in the words themselves; meanings exist in the people who use the words.
Ref. 5558-A
The second topic in Do Safety Differently tackles the worst-kept secret on the plant floor. There is a huge difference between how managers think work is being performed and how work is actually being performed. Because work is filled with complex conditions, it seldom happens the way it was planned and proceduralized. Why this is surprising to managers is almost a mystery. There is very little mystery among the people who get work done—in all types of conditions, daily.
Ref. 715F-B
The challenge of this difference between work as imagined and work as done is not about a disobedient workforce. Rather, this difference is more of an outcome of a disobedient work environment—if work environments can be disobedient. But the world has a way of not being compliant with the way we thought that world worked. There are surprises, complexities, unanticipated situations, sudden breakages, and always plenty of ambiguities. Is it this, or is it that? Should I wear this piece of protective equipment for this task, or should I not? Am I close enough, or not?
Ref. A0C1-C
We saw a procedure once that instructed workers to apply a ‘light coat of grease’ to a particular screw-nut assembly. But what is a ‘light coat’? That is a judgment call. Judgment calls like that rely on interpretation, on professionalism. But the ‘lightness’ of the coat might well depend on who is doing the lubrication; on who trained or showed the person who is now applying the coating. Work as imagined says ‘apply a light coat of grease.’
Ref. 721D-D
We know (as you know) that when it comes to the performance of work, the map is not the terrain.[3] Although it may seem obvious to you that there is a difference between work as done and work as imagined, this realization may come as a shock to the organization. However,
Ref. 3536-E
What we think is a much more valuable way to understand this information is how your organization learns about the gap between work planning and work doing. If the work done is not as you imagined, Learn! This difference in both practice and perception is an important opportunity to ask questions differently. One of the best ways to ask questions differently is to use the expertise that currently exists in the organization. That expertise exists in your workers. Who knows better about how work is done than the people who do the work?
Ref. B18D-F
Is the compliance burden too high? It is time for us to declutter our organization’s safety bureaucracy. We live in a world where many of the complications in performing work are self-inflicted, and nobody openly talks about this phenomenon. Here is what we know: to reduce operational bureaucracy, we must first recognize that our organizations are cluttered with rules and expectations that often provide no more value than compliance with the rule or expectation itself.
Ref. E764-G
The futurist Jerry Pournelle[4] coined a phrase for this cluttering of our systems. Pournelle called it “the iron law of bureaucracy” and described the organizational outcome like this: the bureaucracy introduced into our organization by our organization will always strive to protect itself by demanding compliance with the bureaucracy itself. All this is presented to build the case that if we want fewer bureaucratic difficulties in our work sites, we must first admit that those difficulties are of our own creation.
Ref. 9FA8-H
Safety is having the capacity to make things go well.[5]
Ref. 7DCA-I
You may think that this is obvious. After all, if you don’t have the capacity to make things go well—in your teams, in your people, in your processes, in your designs—how can you be safe?
Ref. E883-J
with this:
Not having had any bad outcomes doesn’t mean that you’re safe. It just means that you haven’t had any bad outcomes.
Indeed, the absence of negative outcomes doesn’t automatically imply the presence of positive capacities. It could be due to luck, or to smart counting (see next bullet).
You can help your run of no bad outcomes by calling bad outcomes something else (by putting people on ‘suitable duties,’ for instance) or by allowing your people to underreport.
Most things go well, rather than badly. Much more goes well than goes wrong in your organization. So, if you’re focusing your safety efforts on those few things that go wrong, you’re only using a tiny portion of the data available about how your operations are doing.
Ref. 0284-K
“So, help me understand this,” the Ph.D. student said, “you make efforts to improve safety by focusing on the few negative events—the sorts of things you don’t want to have—and then you try not to do that again? That is like trying to understand how to have a happy and healthy marriage for the rest of your days by focusing on a few cases of divorce or domestic violence. As if those few negative instances are going to tell you what you do need to do to make your marriage happy and healthy. If you want to understand how to have a happy and healthy marriage for the rest of your life, isn’t it much smarter to study happy and healthy marriages and learn from those? It would seem so obvious indeed. If you want to become safe and stay safe, isn’t it smarter to find out what you should be doing, rather than investing most of your resources into figuring out (from the past) what to avoid (in the future)?”
Ref. A3AC-L
parachute on your back). In activities such as these, the relationship between injuries and fatalities tends to be straightforward.[6] The way you get injured is much the same as how you might die. It follows that if you have more injuries, you will probably also have more fatalities. One predicts the other. One is simply an extension of the other. One can even help explain the other.
Ref. 2319-M
The fact that a model suggests this to you doesn’t always make it so, of course. Fair enough: in unsafe systems, injuries or incidents and accidents tend to be caused by the same sequence of events. And the difference between them is only in how far that sequence reaches. But in otherwise already safe organizations, that is no longer the case.
Ref. 0550-N
For sure, lots of organizations have believed for the longest time that if they can prevent incidents and injuries, then they can prevent accidents as well. You may even have been told that if you can prevent unsafe behaviors, you can prevent injuries, incidents and accidents.
Ref. 43F2-O
an organization wanting to do something about its safety, it sounds like an attractive (and not so expensive) idea. Because all you need to do to make your organization safe is tell people on the frontlines that they need to behave safely. You can launch a campaign, telling them to care more, to try harder. You can even sanction the behaviors you don’t want to see and reward those that you do like to see. Other than putting up some posters, you won’t have to do much around the workplace—no design changes, no structural investments.
Ref. A2BB-P
course, we asked David what the LGI was. He looked at us and then said, “It’s the Looking-Good-Index.” How right he was. A singular focus on metrics can function as a decoy, taking organizational attention away from the build-up of risks and a possible drift into failure in other areas. Underlying risks can then be left to grow, misconstrued or unnoticed, as has been recognized by thinkers in organizational safety since the 1970s.[11] LTI is a great example of organizations and boards counting what they can count, but not looking at what counts.
Ref. 3DE2-Q
cases of accident or injury, workers were deemed to be the cause (Heinrich called it ‘man failure,’ an early label for ‘human error’). So, if you wonder where the figure 80% human error comes from, here you have it. It came from those who’d want to avoid blaming themselves or their systems (which may sound familiar, of course).
Ref. E8B4-R
But there was no way for Heinrich to verify any of this, and no data to confirm a proportional existence of unsafe behaviors, because there was no such data. Insurance claims get made when there is injury or damage. No claims get made if there’s nothing to claim. So,
Ref. 75F7-S
14] You’ll have to start doing some other things. If you go gang-busters on trying to prevent every little thing from going wrong—like organizations do when they declare ‘zero harm’—you are likely to create a greater accident and fatality risk, just like what happened in LaPorte, TX, in the example above. Let’s look at this issue now.
Ref. AF7A-T
Of course, pursuing zero harm is a necessary and noble commitment. But trying to run the safety of a company with such a policy can quickly become a bit absurd, and lead to adverse effects.
Ref. 8A2C-U
In control engineering terms, this is called the ‘fundamental regulator paradox.’ It says that if you regulate a machine so well that it bends your key data stream toward zero, then you’ll soon have nothing left to regulate the machine on. You start to fly blind. You won’t know what it’s doing, and what you need to do. Until it’s too late.
Ref. 2F06-V
That’s exactly what happens when we try to ‘regulate’ the safety of our operations by steering outcomes toward zero. When you get there, or even when you are close to it, what are you using to inform your safe running of the operation? Just keep doing the
Ref. CE94-W
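The regulator paradox described in the two highlights above can be illustrated with a toy simulation. This sketch is not from the book, and every number in it is invented for illustration: latent risk drifts upward over time, surfaced incidents are the only feedback signal available to the ‘regulator,’ and each surfaced incident triggers a correction. When outcomes are held at zero, the feedback signal vanishes and latent risk grows unchecked.

```python
import random

random.seed(42)

def simulate(steps=200, suppress=False):
    """Toy model of the 'fundamental regulator paradox'.

    Latent risk drifts upward each step. Surfaced incidents are the
    only feedback the regulator gets; each one triggers a correction.
    With suppress=True, outcomes are 'regulated' to zero and the
    feedback signal disappears."""
    risk = 0.05                            # latent incident probability per step
    incidents = 0
    for _ in range(steps):
        risk += 0.002                      # risk slowly drifts upward
        surfaced = random.random() < risk  # does an incident surface?
        if suppress:
            surfaced = False               # outcomes held at zero
        if surfaced:
            incidents += 1
            risk = max(0.05, risk - 0.05)  # feedback: correction after incident
    return incidents, risk

with_feedback = simulate(suppress=False)    # some incidents, but risk stays bounded
without_feedback = simulate(suppress=True)  # zero incidents, yet risk climbs to ~0.45
```

With feedback intact, incidents occur but latent risk stays near its floor; with outcomes suppressed to zero, the record looks perfect while the underlying risk only grows—the ‘flying blind’ the highlight warns about.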
There were four fatal accidents among companies with ‘zero’ safety policies. There were zero fatal accidents among companies without ‘zero’ safety policies.
Ref. 2689-X
unsafe behaviors stops hearing about other hazards as well. It can create a climate of what safety consultant Corrie Pitzer calls ‘risk secrecy’, in which knowledge of hazards doesn’t travel to the right places, and in which injuries are under-reported and incidents remain hidden.
Ref. FBAA-Y
As a commitment, zero is fine. As a policy, particularly one with incentives and rewards around it, it is unsafe. Research has shown that paying bonuses for low numbers of incidents or injuries can be quite dangerous. One prominent safety researcher calls these kinds of bonuses or incentives ‘Risky Rewards’.[16]
Ref. C4EB-Z
air their concerns. He gradually managed to build an environment of trust, of psychological safety.[17] He wanted to show his people that bad news was welcome with him—he needed to hear it if safety was to be improved and assured. It worked. After about half a year, the number
Ref. 55C7-A
attract the ire of their supervisor. A new Ice Age of safety secrecy descended on the company. The safety metrics started
Ref. 6AC3-B
Safety as the capacity to make things go well
The major shift to make is this: stop seeing safety as the absence of negative outcomes. And, if you are a safety professional or a leader, stop seeing your job as trying to prevent (or rename) those bad outcomes just so your numbers look good. Instead, start seeing safety as the presence of capacities that make things go well. And see your job as identifying and enhancing those capacities.
Ref. 2E20-C
The question that most organizations yearn to have answered, though, is this: what is going to take the place of their long-held and easily communicated LTIs or total recordable injury frequency rate? As Thomas Kuhn pointed out, people are unwilling to relinquish a paradigm—despite all its faults—if there is no plausible, viable alternative to take its place.
Ref. 67A2-D
few years back, one of us was working, together with some students, with a large health authority, which employed some 25,000 people. The patient safety statistics were dire, if typical: one in thirteen of the patients who walked (or were carried) through the doors to receive care were hurt in the process of receiving that care. 1 in 13, or 7%. These numbers weren’t unique, of course.
Ref. 8CDB-E
Workarounds
Shortcuts
Violations
Guidelines not followed
Errors and miscalculations
Unfindable people or medical instruments
Unreliable measurements
User-unfriendly technologies
Organizational frustrations
Supervisory shortcomings
Ref. 8997-F
seemed an intuitive and straightforward list. It was also a list that still firmly belonged to Heinrich’s era in our understanding of safety: that of the person as the weakest link, of the ‘human factor’ as a set of mental and moral deficiencies that only great systems and stringent supervision can meaningfully guard against. In that sort of logic, we’ve got great systems and solid procedures—it’s just those people who are unreliable or non-compliant:
Ref. 99F8-G
People are the problem to control
We need to find out what people did wrong
We write or enforce more rules
We tell everyone to try harder
Ref. 9858-H
Many organizational strategies, to the extent that you can call them that, were indeed organized around these very premises. Poster campaigns that reminded people of particular risks they needed to be aware of, for instance. Or strict surveillance and compliance monitoring with respect to certain ‘zero-tolerance’ or ‘red-rule’ activities (e.g., hand hygiene, drug administration protocols). Or a ‘just culture’ process that got those lower on the medical competence hierarchy more frequently ‘just-cultured’ (code for suspended, demoted, dismissed, fired) than those with more power in the system. Or some miserably measly attention
Ref. 80C0-I
time in the hospitals of the authority to find out what happened when things went well, when there was no evidence of adverse events or patient harm.
Ref. 8BD9-J
that should otherwise have been telling us something quite different. But it turned out that in the twelve cases that went well, that did not result in an adverse event or patient harm, everybody had found:
Workarounds
Shortcuts
Violations
Guidelines not followed
Errors and miscalculations
Unfindable people or medical instruments
Unreliable measurements
User-unfriendly technologies
Organizational frustrations
Supervisory shortcomings
Ref. 32CF-K
didn’t seem to make a difference! These things showed up all the time, whether the outcome was good or bad. It should not come as a surprise. Research reminds us of ‘the banality of accidents:’ the interior life of organizations is always messy, only partially well-coordinated and full of adaptations, nuances, sacrifices and work that is done in ways that are quite different from any idealized image of…
Ref. A493-L
This means that focusing on people as a problem to control—increasing surveillance, compliance and sanctioning—does little to reduce the number of bad outcomes. But if these things don’t make a difference between what goes well and what goes wrong, then what does? We were still left with a relatively stable piece of data: one in thirteen went wrong and kept going wrong. What explained the difference if it wasn’t the absence of negative things (violations, shortcuts, workarounds, and so forth)? This is not…
Ref. EAC5-M
went well, we found more of the following than in the one that didn’t go so well: Diversity of opinion and the possibility to voice dissent. Diversity comes in a variety of ways, but professional diversity (e.g., compared to gender and racial diversity) is the most…
Ref. B946-N
voicing dissent can be difficult. It is much easier to shut up than to speak up. I was reminded of Ray Dalio, CEO of a large investment fund, who has fired people for not disagreeing with him. He said to his employees: You are not…
Ref. 5AF6-O
Keeping a discussion on risk alive and not taking past success as a guarantee for safety. In complex systems, past results are no assurance for the same outcome today, because things may have subtly shifted and changed. Even in…
Ref. F129-P
bypass surgery of the day), repetition doesn’t mean replicability or reliability: the need to be poised to adapt is ever-present. Making this explicit in briefings, toolboxes or other pre-job conversations that address the subtleties and…
Ref. CAA6-Q
Deference to expertise. Deference to expertise is generally deemed critical for maintaining safety. Signals of potential danger, after all, and of a gradual drift into failure, can be missed by those who are not familiar with the…
Ref. 3898-R
sharp end, rather than the one who sits at the blunt end somewhere, is a recommendation that comes from High-Reliability Theory as well. Expertise doesn’t mean only front-line people. The size and complexity of some operations can require a collation of engineering, operational and organizational expertise, but high-reliability organizations push decision…
Ref. 42B1-S
voice concerns. In her work on medical teams, too, the presence of such capacities was much more predictive of good outcomes than the absence of…
Ref. 5E38-T
made in organizational research, and also in the sociological postmortems of big accidents, is that the totality of intelligence required to foresee bad things is often present in an organization but…
Ref. AA77-U
Don’t wait for audits or inspections to improve. This is one that quality guru Deming found as well. If the team or organization waited for an audit or an inspection to discover failed parts or processes, they were way behind the curve. After all, you cannot inspect safety or quality into a process: the people who do the process create safety—every day. Subtle, uncelebrated expressions of expertise are rife (a paper cup on the flap handle of a big jet; the
Ref. 997C-V
plant control room, to know which is which; the home-tinkered redesigned crash cart in a hospital ward). These are among the kinds of improvements and ways in which workers ‘finish the design’ of their systems so that error traps are eliminated and things go well rather than badly.
Ref. CE84-W
that take evident pride in the products of their work (and the workmanship that makes it so) tended to end up with more good results. What can an organization do to support this? They can start by enabling their workers to do what they want to do and need to do, by removing unnecessary constraints and decluttering the bureaucracy
Ref. CC8A-X
The list above is not so much a set of conclusions, but a set of hypotheses. Are these starting points for you and your organization to identify some of the capacities that make things go well? We reckon they are. How would you enhance those capacities?
Ref. F05C-Y
Safety metrics can amount to a ‘Looking Good Index’ (or LGI). Who in your organization is trying to (make whom) look good, and for which stakeholders or what purposes? Does your organization measure or otherwise track the presence of capacities that make things go well? If not, what are the obstacles to them doing so?
Ref. E2B6-Z
you want to know how work is being done, whom do you ask? If you said any answer other than the people who do the work, then you should step away from this book. Or perhaps take a deep breath and
Ref. 4E1F-A
somehow come around to the insight that the real experts on how work is done in your facility are your workers. And that is just a start to what they know and all the stuff they can tell you. You’ll be amazed once you start listening without judgment.
Ref. D86E-B
Traditionally, organizations audit for compliance. Organizations actively and aggressively seek deviations from prescribed work. Organizations observe workers doing their work to identify “risky behaviors”. Organizations walk-down work practices while holding the appropriate procedure in hand, checking each step with the most serious intentions. Our organizations act like some type of combination of a workplace anthropologist
Ref. F8ED-C
The idea that work is happening the way work is imagined is overly simplistic. It denies the reality that the world of work is a world filled with uncertainty, variability, and constantly changing organizational priorities and operational goals. Performing work is not nearly as predictable as organizations desire work to be – and the act of wanting work to be predictable does not make the work stable or the statement true.
Ref. 35EA-D
There is a difference between work being done the way the organization imagines it being done and work as it is actually done. This difference is normal, and the better (and sooner) the organization understands and embraces this difference, the better the organization will function as an effective and reliable facility.
Ref. 7956-E
day at the work site is the same and that every procedure is complete and encompasses all potential operational complications. But we know, deep in our soul, that every day at our worksites is markedly different from the previous one or the next one. And that no procedure is ever complete enough to actually do work. If you have ever done any type of work at all, these facts become quickly apparent.
Ref. AAC3-F
happens in complex systems. Workers must therefore be more adaptive than obedient. The work being accomplished
Ref. 2D5B-G
you successful at doing the work you do is the worker’s ability to be responsive to the almost unlimited amount of variation that exists daily. This worker responsiveness is awesome to watch – your organization’s workforce is quite amazing when all these factors are considered.
Ref. 4AB2-H
Uncertainty is (and always has been) Uncertain
Ref. 7214-I
Given the presence of uncertainty and variability in the performance of work as a reality, our discussion is better focused on what an organization should do differently to best cope with an uncertain world. There is no need to further describe operational variability – operational variability is not the problem.
Ref. 0DC6-J
The world’s leading experts in how work is being done in your organization already are on your payroll. You have within the walls of your facility the opportunity to know all there is to know about how work is being done. This information is well within your grasp; all you must do is ask the workers to tell you how the work is being done.
Ref. B966-K
are many reasons for organizations not recognizing the expertise and information available to them. This information is always within the organization’s grasp, bought and paid for by the organization that employs the workers. Many of these reasons are discussed in the earlier chapters of this book.
Ref. 77CA-L
organization on a daily, hourly and minute-by-minute basis. Building a strong organizational culture is like owning a puppy: success is a constant effort filled with progress and failure, you are never finished with the work, and you will have to clean up many messes left on the floor.
Ref. FFCE-M
The boss turned to us and said these fortunate words, “I wish there was a way we could just bring everybody in a room, shut the door, and ask them what we should learn from this?” It was at that time we uttered these words to the boss, “Why can’t we just do that?” He told us to make it happen and that is just what we did. Little did we know, that would be the start of what
Ref. EEBC-N
high-level overview of what a learning team does when you are interested in understanding something about your operations is where our discussion will begin. When you have some type of operational curiosity happening in your organization, ask a group of workers to help you do three things:
Define the problem
Craft some potential solutions
Try the potential solutions out – micro-experiment.
Ref. B606-O
We have learned that the most important ingredient to effective operational learning is in the actual crafting of the question to be asked – good questions always are foundational to generating good answers. Too often, our analysis is based upon a flawed understanding of the problem at hand.
Ref. DE87-P
Solutions are fun and sexy and we have been taught our whole working lives to generate answers fast and effectively. That idea may be wrong; our zeal to solve problems quickly often means that we have not done sufficient analysis and effective problem
Ref. F8E2-Q
If we solve the wrong problem, we will generate the wrong corrective action. Many organizations have very effective corrective action programs that fix the wrong things well. That doesn’t mean your organization is bad at solving problems, but probably does indicate your organization is not doing enough to
Ref. D57E-R
with a gap between meeting one and meeting two – Find a place to meet and schedule two meetings a day or two apart to best prepare the group to both identify and solve the improvement target. The use of two meetings is almost entirely logistical – having two meetings allows the group to separate problem identification from solution generation. As we have discussed earlier, the biggest enemy of problem identification is the need to solve the problem immediately. Having two meetings makes it easy to simply put all solution ideas on the second day. This is a surprisingly simple way to keep the solution bias
Ref. EF3A-S
Micro-experiment these solutions in a safe-to-learn, safe-to-fail environment – One of the most beneficial aspects of learning teams is the ability to prototype solutions on a small scale, collect data about the prototype and then move to more effective and sustainable solutions. To allow testing to happen with any hope of success the organization has to make it both a safe-to-learn
Ref. 310D-T
should surprise no one that the work the organization imagines is happening, is not the work that is being done. Our problem is not to fix the gap between organizational planning and work control and the actual work. Our opportunity is to become better at learning how work is done on a normal day with regular people doing their daily work.
Ref. BCCE-U
Change how you define what you want
Change how you learn from yourself and others
Change how you respond to failure and success.
Ref. ECB2-V
In this case study, you can see this event as a success; a tank lost containment and the process safety design was ready and able to manage the loss of containment to a secondary containment system with zero loss of product to the environment.
Ref. 9FAA-W
Investigations learn; Corrective actions fix
We ask organizations across the globe why they do investigations.
Ref. 9BC4-X
The answer is often the same, “to prevent re-occurrence.” That answer is wrong. Investigations don’t change work control, investigations don’t fix broken equipment, investigations don’t remove at-risk behaviors and investigations are definitely not corrective actions.
Ref. 364A-Y
simple. Therefore, talking about a ‘root cause’ unfairly and misleadingly builds a false sense of hope that the problems that caused the accident will be simple to understand and simple to remove and fix. This is never true in our experience. Investigations learn many, many things while trying to describe how the event happened. Every contextual factor we don’t discuss to support the idea of a ‘root cause’ could be a vital piece of information needed to point the organization towards
Ref. 0B41-Z
Don’t let an accounting system or some type of corporate record-keeping system dictate what you will learn as an organization. Far too many organizations are held captive to their administrative record-keeping process. Learning systems that have those terrible ‘pull-down menus of causes’ limit the way the organization learns and, more seriously, what the organization will learn. These menus allow for trending and tracking of cause-codes but do not allow the organization to learn the complex nature of how the event happened.
Ref. AEE1-A
trending is much more a part of traditional safety and not a function of doing safety in a different way. We don’t investigate trend data to predict the next…
Ref. 9E1E-B
The most remarkable part of software systems that limit effective, context-rich data reporting in order to optimize for data-trending is that these limitations are entirely self-inflicted. Regulators don’t ask for this information. Investigators do like this process. The intention, being able to trend causal factors and then predict the future to prevent the next event, is understandable and even desirable. The…
Ref. E82B-C
Our traditional investigations seem to have mostly been done to determine who failed and to…
Ref. CDA6-D
Investigations were not seen as places to learn new information about the organization’s work processes and practices. Investigations tended to seek the place where some type of deviation from expected behavior or process was supposed to happen and then…
Ref. 0F6D-E
paperwork; the right way in the right format. Few, if any, organizations revisited investigations after the investigation was completed for either learning value…
Ref. F1FF-F
spend a lot of time thinking about what this meant for actually doing the work. When you deliberately limit the amount of information that can be discussed about context, local rationale and work mindset, it is no surprise that our…
Ref. F0BC-G
Investigating and learning from events in a different way has opened up the scope of the events. Where once we were almost entirely limited to looking deeply into the event, our teams are now much more likely to look out from the event and determine the entire context of the work environment to understand the complex nature of the work being done, and specifically, the event in question. Suddenly, asking the local rationale question, “what was going on that made these workers…
Ref. A5B7-H
Moving from asking, “Who failed?” to the much more important question of “What failed?” is a seismic shift in thinking in the investigation world.…
Ref. A8C3-I
Learning differently allows for worker error. Knowing workers are not perfect, that mistakes happen all the time while doing both successful work and while…
Ref. 70BD-J
Things go wrong all the time in our daily operations. In most instances, workers detect and correct problems in real-time. Failure happens all the time; all the components for an unwanted outcome live in your organizational system and processes as a part of daily work. An organization can learn from typical work much better than from waiting for an event.
Ref. B9B9-K
Events are the unexpected combination of normal work contexts. Don’t look for some type of special deviation to explain why an event happened. Instead, look at the work as it is done when it does not fail. Learn how this work is done when it does not fail to better understand the conditions present when the work fails.
Ref. F856-L
Investigations learn/Corrective actions fix. There is a huge difference between learning and fixing. Learning always must happen before fixing. Too often organizations do investigations to fix problems. Investigating to fix problems will ensure the organization will not learn enough about the work to truly understand what happened. It is much better to see the corrective actions as a product of the learning
Ref. 67F8-M
Your organization must learn before it can act.
Ref. 4B35-N
Investigations answer the “how” question, not the “why” question. When an event happens, there is a desperate need to answer the “why” question.
Ref. D5BD-O
“why something bad happened.” There is a caution here: the mysterious “why” question is much less useful to the organization than the very practical and informative “how” question. “How”
Ref. 975D-P
allows the organization to move beyond individual motivation and to focus more on the complex conditions that had to exist for the failure to happen. Stick with “how.”
Ref. A99F-Q
One of the best motivations for doing Safety Differently is the opportunity to change the organization’s approach to event learning. Deliberately learning the contextual factors present in an event by changing the actual event learning foundation will help the organization understand an event, and more importantly the event context, in a more effective way. The opportunity to seek the richer question of how the event happened offers the organization a more complete understanding of the event.
Ref. E871-R
event learning technique does not need to be used only to understand a failure.
Ref. 3502-S
If you were to ask your own organization why it does investigations, what do you think the answer would be? Should you try to change that?
Ref. 4367-T
Why is thinking in terms of ‘root causes’ not very helpful? In what sense
Ref. 3920-U
easy to write more rules. In a study of hospital wards, colleagues at Macquarie University found that nurses—on average—need to follow 600 policies every day. That’s a lot of policies. When they asked nurses if they could recite some of those policies back to them, they got a lot of blank stares. On average, nurses were able to describe between
Ref. B60A-V
Safety and rule clutter has a way of building up around any job, particularly safety-critical jobs. In the US, there has been such a swelling of rules, guidelines, protocols, prescriptions, procedures and policies for administering anesthesia that there are currently some four million documents. Somebody did the math: it takes about 2,000 years to read it all! And then you haven’t even trained as a doctor yet.[23]
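The “somebody did the math” claim is easy to sanity-check. Here is a back-of-the-envelope sketch (the page count per document and reading pace are our own assumptions, not figures from the book):

```python
# Rough check of the "about 2,000 years to read it all" claim for the
# ~4 million anesthesia-related documents mentioned in the text.
DOCUMENTS = 4_000_000
PAGES_PER_DOC = 10      # assumption: average document length
PAGES_PER_DAY = 50      # assumption: a diligent full-time reader
DAYS_PER_YEAR = 365

total_pages = DOCUMENTS * PAGES_PER_DOC
years_to_read = total_pages / (PAGES_PER_DAY * DAYS_PER_YEAR)
print(round(years_to_read))  # ≈ 2192 years
```

Even with generous assumptions about reading speed, the result lands in the same ballpark as the book’s figure, which is the point: no practitioner can ever absorb this much guidance.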
Ref. 7D7D-W
Remember some of Amalberti’s data from the first chapter. If yours is an unsafe industry or activity, where the chance of a fatality or serious injury or incident is one in a thousand, then writing more operational rules can still increase your safety. But by the time the chance of badly or fatally injuring someone is down to one in 100,000, more rules stop having much, if any, effect.[24] The system of writing more rules, Amalberti says, becomes purely additive. It adds more rules to the system, but it offers nothing in return—except more clutter and a bigger compliance apparatus to implement, monitor, audit and control (so of course, somebody is
Ref. E5A3-X
Also, for every new rule added, an old one seldom gets taken out. As said, it’s easy to write more rules. It’s really difficult to scrap them. Organizational activities and accountabilities abound that all—either regularly or ad hoc—offer many opportunities for the addition of rules. There is typically no similar set of activities and accountabilities for reducing the number of rules. And, of course, there can be quite a bit of anxiety around getting rid of rules, which you don’t see when rules get added.
Ref. C777-Y
But safety clutter can be dangerous. It can compound and add risks by making things less transparent. It can muddy the waters by making critical issues less obvious, by creating decoy phenomena that get everybody worried while real trouble
Ref. 5C66-Z
And, as said, it can suck up time and resources without adding anything of value, while distracting people from what they should be looking at. The research on disasters shows plenty
Ref. 334D-A
In 2008, two years before the Macondo (or Deepwater Horizon) well blowout, BP warned that it had “too many risk processes” which had become “too complicated and cumbersome to effectively manage.”[26] You may have experienced unreasonable
Ref. 7B8E-B
So, we’ll do a couple of things in this chapter. First, we try to get our heads around ‘safety clutter.’ What is it exactly? Then we dive into the reasons for increasing clutter. What are the various kitchens in which compliance clutter is cooked up, and what’s going on there? Where does all this stuff come from, and why? And finally, of course, we will look at some ways in which you can start safely decluttering.
Ref. 615B-C
intervals as a form of monitoring. Redundancy and monitoring, used appropriately, can sometimes reduce risk, but only under tightly scripted conditions.[28]
Ref. A65D-D
But clutter can take many other forms. Perhaps the most ridiculous clutter is also the most irritating for those who prefer to use common sense, who want to be taken seriously, and who want to get work done. This sort of clutter comes from role and rule creep: the gradual spreading of safety rules or symbols that
Ref. E81B-E
safety group is given too much sway over what happens on the operational front-end, then there is a risk of—what sociologists call—bureaucratic entrepreneurism.
Ref. 9DCC-F
feel newly empowered can impose rules on others, which at the same time gives them more to do, more authority to do it, and an aura of inevitability around it all. They can even claim that what they are doing is both ethical and necessary, and that those who disagree are not taking safety seriously.
#concept
Ref. 9175-G
You can see this in another form of clutter too: over-specification. This is quite a popular way to extend safety clutter. It often connects to recording and accountability requirements, which are intended to demonstrate that certain activities or steps are done.
Ref. BB2E-H
“The largest source of growth in rules and regulations is the private sector. We tend to blame the government for bureaucracy’s drag on our productivity, but the dollars locked up by businesses in complying with self-imposed red tape are double those associated with government regulations.”[29]
Ref. 6A4F-I
the government is not the problem. On average, three out of every five rules are made up and self-imposed by organizations; only two of the five can be traced back to a regulation or government requirement, this same study found. In some sectors, it’s much worse than that. Finance is one of them. Healthcare is another: 85% of its compliance demands are ones the industry has produced itself—related to how it bills, accounts, distributes, responsibilizes, trains and checks.[30] One ICU doctor in Texas told us that she could easily fill each 12-hour shift with 16 hours of paperwork and compliance activities. (We couldn’t get the math to work out on that one either,
Ref. F52B-J
There is a link with the government. But it probably isn’t what you think. Interestingly, we have seen the rise in safety clutter and compliance burdens precisely because of deregulation.[31] This may seem counterintuitive, but think about it. Since the 1990s, there has been a shift from compliance-based regulation in many industries to risk- or performance-based regulation. Under such a regime, the government no longer comes in regularly to check, with its people, whether you are compliant with every little specification, rule and regulation that it had on the books for you. With the increasing complexity and sophistication of many technologies, the government
Ref. FED9-K
probably no longer even has the in-house expertise to do that well. Instead, you are more on your own, and now you have to demonstrate to the government that you know your risks and that you have them under control. How many
Ref. CFE1-L
requirement is the same as what we see in some freshmen students. When asked a question on a test, they will throw everything at the professor they’ve ever learned in the course (never mind word limits). They do this just to make sure that they’re sort of compliant with expectations and hope that what’s needed to answer the question is buried in there, somewhere. Organizations similarly overcompensate—richly.
Ref. E376-M
Freedom in a frame
Freedom-in-a-frame means that you give your people a framework within which to work (framed by rules or boundaries that you jointly develop and agree on), but within which you give them the freedom and discretion to do their work in the way they see fit. This is a kind of discretionary space, a space that can be filled only by an individual human. This is a final space in which the organization does leave people freedom of choice (to use this tool or not, to launch or not, to go to open surgery or not, to fire or not, to continue an approach or not). It is a space filled with ambiguity, uncertainty and moral choices.
Ref. AD1E-N
Freedom in a frame acknowledges and deploys the kind of professional autonomy and trust that allows people to know the boundaries of their roles and authority, yet encourages self-sufficiency, adaptive capacity, interpretive discretion and local innovation.
Ref. 83B9-O
Why might you find more of this in government jobs? Reasons that have been mentioned include the slightly looser focus on results and money (and thus on accounting and accountability), as well as the typically less precarious employment relations.[35] Kaufman’s book The Forest Ranger from 1960 describes how 792 semi-autonomous forest rangers—each with jurisdiction over vast swaths of federal land—were able to make reasonably consistent decisions about grazing rights, timber harvest, fire protection, and scores of other necessary choices regarding the use of public resources. Kaufman captures the public culture in which: Rangers internalized certain common professional values;
Ref. D24A-P
There is a fascinating and little-known historical footnote to this. To achieve these results, the US Forest Service had been inspired by Prussian methods of administration. Of course, the Prussians are commonly (or stereotypically) seen as perhaps the most rigorous and inflexible of Germans. But that’s true only on the surface. In his instructions to commanders, Field Marshal von Moltke wrote in 1869: Don’t order more than necessary, and avoid planning beyond the situation you can foresee; subordinates are justified in modifying or even changing the task assigned, as long as it supports the higher commander’s intent (he called this Auftragstaktik, or ‘assignment tactics’); look for those with Verantwortungsfreudigkeit, who have a willingness and a joy at taking responsibility for others and for the work they need to jointly accomplish. They like taking ownership and take pride in doing so.
Ref. 3ACA-Q