Knot Tangle

Last year I spoke at the excellent safety event organised by Líder Aviação in Rio de Janeiro. My question was: “Is SMS enough to make our organisations safer?” [Update: I gave a similar talk at EBASCON 2017 in Munich and made a more elaborate resource page with further explanation and links here.]

This question came from two major organisational accidents that I saw unfold from up close, and which made me think about how effective our organisations really are at using all that safety data and actually doing something with it.
More resources can be found here.

Transcript of the talk:

Is SMS enough to make our organisation safe?

This might seem like a strange question coming from someone who has worked for 15 years in safety management systems: I have been developing them, running them, and afterwards teaching about them all over the world.

One of the reasons it says “Safety Risk Management Coach” and not “consultant” is rooted in two experiences I had this year.
There were two major organisational accidents, and they made me think about what we’re trying to achieve with SMS and whether it is really working.

I’d like to share the first major organisational accident with you, because I think there are some very relevant points we can all recognise.
This was a small manufacturing operation that was making aircraft parts for major aircraft manufacturers.
We were called in because this organisation had major quality problems.
In fact, their quality certification had been revoked, so obviously it was a big economic threat.
We came in the first week, we looked around and saw the documentation but also the people on the floor.
We saw the workstations, we sat in on the management meetings, and it was clear that the organisation as a whole was failing.
It was no wonder they lost their quality certification; our question actually was: “How come this was a surprise?” Because these things don’t just fall out of a blue sky: they develop over time, so year after year there had been plenty of opportunities to intervene before the quality certification was lost. They should have been aware that something was going wrong.
So that was the first week.
The second week I came back to the site and was told: no, don’t go to the factory, go to the hotel.
Okay… I came to the hotel and heard there had been a major fire; in fact, the critical production process had burned down…

It had burned down for the second time in five years. So…
…a strange situation, where you have an organisation that had already suffered the same accident five years earlier.

Stranger still, three weeks or so before the accident, a safety inspection had mentioned that certain measures needed to be taken to guarantee fire safety.
So knowledge was not the issue. The organisation knew its risk, it knew what it had to do: the risk mitigation was no mystery.
But somehow it wasn’t capable of converting that data into action!
That was the first big wake-up call for me.
The second big wake-up call was that I was involved with the Germanwings accident: I was a volunteer, one of many, close to a thousand, who helped out in the aftermath of the accident.
One thing that stuck out for me and made a big impression: I was in the crisis centre when the news came that it was a suicide by the pilot, a murder-suicide if you will.
Now, on a human level that disbelief you see on the faces is understandable. Of course, nobody wants to believe that a professional in our community is capable of doing such an act.
But at an organisational level you have to ask the question: “Why was this a surprise?”
It had happened before. Many times, actually. In 2014 an Ethiopian airliner was hijacked by the first officer, who decided to land in Geneva and ask for political asylum. He could just as well have crashed it into London, Paris or any of the major European cities.
Before that, Linhas Aéreas de Moçambique had a fatal accident in which 33 people died, very likely a captain’s suicide.

So how come, when we know that these things can happen, the industry in general and Lufthansa in particular hadn’t been trying to mitigate this?
There was no risk mitigation strategy before it happened to them!
Now, afterwards there were obviously a lot of knee-jerk reactions, you might call them, of people trying to mitigate the risk of pilot suicide.
Now, you might ask the question “Are these risk mitigations really effective?”.
Now all of a sudden we have the two-person cockpit rule.
Okay, so that might avoid that kind of scenario where one guy gets locked out of the cockpit.
But there has been a violent attempt by a jumpseater on FedEx, where a jumpseater attacked both pilots of a FedEx aircraft with a crash axe.
That didn’t turn into an accident, but it could have.
So what we are now doing as risk mitigation can actually increase the risk, because all of a sudden we have cabin crew locked in the cockpit.
And what about the annual psychiatric evaluation that we are now making mandatory for pilots?
Now, depression is a very strange thing, and it seems, as far as I can tell from the Internet, that about 8 to 12% of the population will suffer from depression at some point in their lives.
That means that among pilots, who have a very high-stress lifestyle, at any given moment quite a few people will be suffering from depression.
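As a back-of-the-envelope illustration of what that lifetime figure means for a workforce (the pilot headcount below is an invented example; the 8–12% range is the one quoted above):

```python
# Rough arithmetic on the 8-12% lifetime prevalence quoted in the talk.
# The pilot headcount is a hypothetical example, not real data.
pilots = 5_000  # hypothetical airline pilot workforce

for prevalence in (0.08, 0.12):
    affected = pilots * prevalence
    print(f"{prevalence:.0%} lifetime prevalence -> "
          f"~{affected:.0f} of {pilots} pilots affected at some point")
```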
Now we have random drug testing to detect antidepressants.
So now we’re forcing maybe a significant part of our pilot population to give up the very thing that helps keep these conditions under control.
“Is that an effective risk mitigation action?” I ask you.

So these things made me wonder: “Is SMS enough to keep an organisation safe?”

I´d like to talk about where we are now.
Everybody in this room probably has an SMS and is working to improve it.
So there are lots of databases, there is software, there are just culture policies (hanging over the head of the receptionist, probably, so they can be audited when clients and authorities come by), and we’re counting the reports we are getting.
So lots of bureaucratic activity.
But we have to ask the question: “What are we using it for?”
What is this system that we are building? It is, in the end, nothing but a tool. Is it useful?
Now, I think I have to ask a relevant question here: we are in lovely Copacabana, where you see lots of people with very beautiful bodies.
I want a straight and honest answer, and maybe we should have had a confidential voting machine here: “How many people in this room admit to having bought exercise equipment at some point in their lives?”
Hands up please. Okay … the rest of you don’t want to admit it.
How many of you have used that exercise equipment more than 100 times … not me!
So, we have this idea that if we just get the right tool, it will solve all our problems.

I have been sitting where you are sitting many times, listening to people talk about safety management systems: how it’s going to solve aaaallll our operational safety problems, how we just have to get the safety data and everything will get sorted.
Now, that kind of irritated me: when I came back to my company, that wasn’t really what I was seeing.
We got all the data all right, and we thought we knew what was going on and we told our managers and they said “no”.
Okay, then we tried again the next time: “NO!!”
This could go on for quite some time.
So all this stuff about getting safety data and making the organisation safer didn’t quite seem to be as automatic as they told us.
So with this talk I´d like to talk about the fact that tools are only useful if you use them… and use them with the right intention…

So… the purpose of SMS, in the simplest definition you can give, is basically: “allocate resources to reduce risk”.
So your SMS needs to change the operation for the better.
I found a very good article by Bill Voss of the Flight Safety Foundation, where he is a bit critical about the role of SMS, in the same vein as I am. He asks: “We made it so complicated and such a big bureaucracy. But what is it doing for you?”

So he asks four questions which I think are very relevant to what I’d like to discuss today.
If you take anything from this talk, take these four questions and use them in your organisation. They will give you an indication of whether your SMS is working or not.

So, the first question: “What is most likely going to cause your next accident or incident?”
Now, the “most likely” here carries a big load, which we will talk about later; we are very bad at judging likelihood.
Then the second question, which is very important as well, and which we’ve touched on in the other presentations:
“How do you know that this is what is most likely going to cause your next accident or major incident?”
Then the third, and the most important one, which really sums up risk management: “What are you doing about it?”
If you can’t express “This is our problem and this is what we are doing about it”; you’re not doing safety management.
You have what Sidney Dekker calls a self-referential bureaucratic system, which basically exists for its own purpose.

It generates a lot of paperwork about safety, but it is not converting that paperwork and that data into something that is effectively making us safer.
And then the fourth and very important question is: “Is what you are doing working?”
It is basically about safety assurance.
So, I think these four questions are very important in having a reality check about our safety management systems.
We get so caught up in passing the next audit and ticking that checklist.
But can we really ask and answer these four questions? And more importantly, can everybody in your organisation answer these four questions the same way? Can you answer them?
That´s what safety management systems are supposed to do: they are supposed to give the company the same image, the same picture of your operational risk.
So I’ve been asking variations of these four questions all over the world, in many different organisations, and the answers are very revealing.
When you talk to the CEO and then you talk to the guy pushing the aircraft back, you can get surprisingly different answers to these four questions.
That’s a great indication that something is wrong.

So I see some common problem areas around safety risk management:
First of all, the understanding of risk: I will go into detail about how our brains are really working against us when it comes to understanding risk. We have to take that into account when we’re trying to analyse risk, because the brain convinces us that we understand what we’re talking about when we really do not.
Then, understanding the reality of our operations: we are getting great tools like LOSA that will help with that.
And, of course, just culture is supposed to help people send in the right reports, but we have to ask the question: “Are our reports really reflecting the source of our next accident?” I’ve seen organisations that were very happy to tell me that they had hundreds of reports about safety issues basically equivalent to not wearing high-visibility vests.

Explain to me how you can correlate a hundred people not wearing a high-visibility vest to the risk of crashing a helicopter into an oil platform.
I don’t think you can make any correlation, and we have to be very sceptical if the SMS is trying to sell it to us as a victory that we’re getting more and more of these high-frequency but low-impact, low-consequence kinds of reports.

And then most importantly: “Taking effective action”
It is good, like I said in my stories, that we know about a risk, but if we don’t take action we’re not making ourselves any safer.

So to start off with, “the understanding of risk”.
There are three main areas that I think are relevant.
First, understanding complicated versus complex;
second, human factors and risk;
and third, our own mental limitations.

Some people call this complex but actually it is complicated.

Actually, this is what we love, isn’t it? We are technical people; this is why we got into the business. We love complicated stuff.
We loved taking our mother’s alarm clock apart and then putting it back together, with a few pieces left over.

Complicated means that there may be many parts and they interact.
But the interactions are linear, predictable and knowable.
We can perfectly model this engine in every aspect of how it operates, which implies that there is one best way of operating it.
Now, growing up as engineers and pilots or technical people: this is how we view the world: as complicated.
But in fact we deal with aviation organisations, which are not complicated! They are complex! And not only complex, they are socio-technically complex.

What does that mean basically?
This is an example of a complex socio-technical system: the Syrian civil war.
Now, nobody here can pretend to understand this entire conflict, and that is the problem with complex systems.
They are so big that no one person can know everything about them.
You can know a bit about it, the situation in Syria versus the rest, maybe, but you cannot possibly know all the interactions between all these areas.
So one of the properties of a complex socio-technical system is the constant dynamic interaction between all the actors:
between systems, between technology, between people and other systems.

So the other thing is that it’s unknowable.
This is something our brain has a very difficult time understanding. We think we understand; we think that what we see is all there is. I will talk about that later.
It is also non-linear, that means that in a complex system a very small input can give a huge output.
This conflict supposedly started (which is obviously sense-making after the fact) with one man setting himself on fire, which started the Arab Spring, which then started the revolution against the dictator.
So a relatively small input can give a large output.
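A minimal sketch of that non-linearity, using the logistic map, a textbook toy model of a chaotic system (my illustration, not from the talk): two inputs that differ by one part in a million end up in completely different places.

```python
# Non-linearity illustrated with the logistic map (a standard toy
# model): a tiny difference in input produces a huge difference in
# output after a few dozen steps.
def logistic(x, r=3.9):
    return r * x * (1 - x)

a, b = 0.200000, 0.200001  # inputs differing by one part in a million
for _ in range(40):
    a, b = logistic(a), logistic(b)

print(f"after 40 steps: a = {a:.4f}, b = {b:.4f}")  # wildly different
```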
This is different from our assumptions about complicated things, because there we assume that if things went bad in a big way, like this engine exploding, something big must have happened, like a bird going in and making a fan blade separate.
So that’s how we think the world works.

While in fact the interaction between various components can mean that no single component failed: you can have a failure without any broken component! That is also very difficult for us to compute, because our brains like simple explanations for difficult problems.

So our tendency then as technical people is to go down and inwards. What we like to do is to go down to the component level and see, like with an engine, what went wrong.

The corollary of that, of course, is that once we find a faulty component, we just need to swap it out and then we’ve fixed our system!
Brilliant.
The problem with complex socio-technical systems is that the component failure was probably a result of the relationships between various other components.

So if you just fix the end component, the one that ultimately gave up,
you didn’t fix anything regarding the relationships!
And those can make another component in your system fail. This is one of the reasons why we need to ask WHY things happen, and not just WHAT happened or WHO did something wrong; ultimately those are not useful questions.
Even if we figure the WHAT or WHO out, if we don’t understand WHY, we are not able to put in interventions that will prevent the next complex failure in the organisation.

So, another slide; between the four questions and this one, I think you’re well on your way to understanding risk.
Why are there goldfish here? I think this perfectly represents what safety professionals should be: goldfish jumping from one bowl into another.

What is local rationality? Basically, local rationality is a psychological concept that says: “people do what makes sense to them, given their understanding, their goals and focus.”

Now, everybody has a different understanding of their situation, different goals in the situation, and a different focus.

Basically it’s like asking a goldfish “What is a goldfish bowl?”
The goldfish does not know it is in there!
Only once you start comparing goldfish bowls to each other can you really see what their points of view of the world are.

And it is very important for safety people to recognise that, while after an accident it’s pretty clear what happened, and we can make this neat “chain of events” and make sense in our heads of how it happened…

BEFORE the accident, everybody is in their own goldfish bowl, with limited & local understanding of the world!
Until we understand what their goals, understanding and focus were at the time of the accident, we can’t make sense of it at a system level.

This is a hypnosis experiment… (can you click okay?)

So the object of the exercise is exactly what Rod was saying this morning. It’s pretty neat, because it’s a nice illusion. Here we go.
Look at the centre dot, and keep looking at it please; there is a test afterwards… Keep looking at the centre… this is where the hypnosis gets you… okay…
Who saw all those dots disappear on the periphery? Probably depends on your viewpoint. This is a perfect metaphor for our focus.
It’s what Rod was talking about this morning: the human eye and brain. Our perception of reality is not just what our eyes see, but how our brain interprets those signals. In this exercise you can see that if you focus on one thing, you start to lose awareness of the other dots; they never went away!
But your eyes don’t see them any more.

That’s a perfect metaphor also for our brain, once we are focused on one thing we start losing sight of others.

So the trick of risk management and understanding risk is that we have to pull back a bit!
We can’t focus too closely on the problem, because we start losing sight of everything around it.
So this is why a good safety investigation needs to look beyond this chain of events.
This is Reason’s Swiss cheese model, which is useful for understanding that an accident isn’t due to a single component in the system.

But the danger of this kind of representation of accidents is that it shows only one potential pathway.

So if we understand, through our accident investigation, what the various steps in the accident were, our assumption becomes: “Okay, now we just have to plug the holes backwards and we have mitigated the risk!”
But risk management is not the reverse of accident investigation!
It looks more like this.
You have various pathways that can lead to an accident.

So you have to make sure that you pull back and look not at the one pathway, but at the several pathways.
Because you can be focused on just solving one problem, and you could argue this was the case with Germanwings.
Whether Germanwings was actually an accident is debatable, but it was a failure of our system.
It was enabled by 9/11, through another risk mitigation, the locked cockpit door, which allowed this to happen.
So we have to be very careful when we’re trying to mitigate risk.

If we just locally apply risk mitigations based on this kind of narrative, all these mitigations add complexity to our systems.
And not only is complexity confusing, it’s expensive!
Every risk mitigation that we apply to the system costs resources: it costs time, it costs money, and it costs people mental effort.

So when we were talking about wrong-deck landings: it is all very well to talk about technology, which might solve the wrong-deck landing issue, but what is the cost of that?
What is the attention cost of that?
We are making an already complex cockpit, more complex!
So we have to really look at the system, not just at the very end of the of the chain.

So… those are the consequences of complexity: our knowledge and understanding of the system is limited, and it is local.
So the defence against that, as a safety professional, is to talk to a lot of people.

If we only talk to the people closest to the accident, we do not have a systemic understanding of what happened.

This may be a controversial statement, but the behaviour of people, human error, is a consequence of the complexity of the system, and not its cause.

Now, maybe this is true, maybe it is not.
But it helps you think about complexity differently.

If we think about just culture in terms of moralistic judgement: “That guy made an error, that guy is the guilty one!”,
we are not really helping ourselves, because: what can we change in the system? After firing that guy, what is going to prevent the next guy from doing the very same thing?
So we have to look at behavior as a consequence of the complexity of the system and not as a cause.
We have to realise that small actions can have big consequences; we will talk later about violations and practical drift. This is another thing that our brain doesn’t really get.
It is like with the engine, which is really complicated: “It’s very solid, so for the engine to go wrong, something big must happen, like a bird strike.”
Well, in a complex system that is not necessarily the case!

There are also mental limitations, which are very well understood in science. A very good book is “Thinking, Fast and Slow” by Daniel Kahneman, a psychologist who won the Nobel Prize in economics, precisely because his major field is decision-making under uncertainty.
Does that sound familiar for risk management?
So his findings, and he has done lots and lots of experiments, say basically that there are two ways the brain thinks.

System one, which is intuitive and makes quick judgements, is obviously there for our survival instinct, so we can act quickly.

But in the modern world, system one actually tricks us quite a lot. System two is more deliberate and more rational (not entirely rational; we think it is, but it’s not).

And the tricky part is that the things system one assumes, system two doesn’t necessarily recognise as shortcuts.
So to give you one example. There’s the famous Linda experiment: I will try to repeat it here.

So if I tell you about Linda that she’s a young woman, she went to university. She wears glasses, she’s politically active and she goes to rallies a lot.
Now I ask you the question: “Is it more likely that Linda is a bank teller, or that she is a feminist and a bank teller?”
Hands up if you are betting she’s a feminist and a bank teller.

…You’ve heard this one…

In experiments, the brain assumes from the description that she is a feminist; but we said feminist AND bank teller. Statistically, she is always more likely to be (just) a bank teller than a feminist AND a bank teller.
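A minimal numerical sketch of why the conjunction can never be the better bet (the probabilities below are invented purely for illustration):

```python
# Conjunction rule: P(A and B) <= P(A), whatever the numbers are.
# These probabilities are made up for illustration only.
p_teller = 0.05                   # P(Linda is a bank teller)
p_feminist_given_teller = 0.60    # P(feminist | bank teller), generous
p_both = p_teller * p_feminist_given_teller

print(f"P(bank teller)              = {p_teller:.3f}")
print(f"P(feminist AND bank teller) = {p_both:.3f}")
assert p_both <= p_teller  # holds for any choice of probabilities
```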
This is one of the shortcuts we take in operations as well.
Our brain makes assumptions based on what we do; we model our operations on what we’ve experienced before.
So maybe here we find an explanation for Germanwings: never before, in our experience, had a pilot committed suicide like this.
So in our brains we didn’t really consider it a possibility, while in fact, statistically, it was a proven possibility.

So heuristics are basically shortcuts in day-to-day life.
Our brain is constantly trying to make sense of the world, and it looks for patterns even when there is no pattern to be found.
A famous example is the stock market: people think they can predict the stock market, while in fact it’s mostly random; our brain thinks it can detect patterns where there are none.

Another consequence is that we have a lot of mental biases.
One of the biggest is the hindsight bias,
which is a huge obstacle to good accident investigation.
Hindsight bias, what in America they call “Monday morning quarterbacking”,
is basically looking at a situation with all the data at hand, projecting yourself into that situation, and thinking: “They should have done this, they should have done that.”

But in fact, during that situation, we are back to local rationality: the people in the accident scenario did not have all that information available!
They were making decisions under uncertainty.
Our brain tells us that that doesn’t matter, and this is the tricky bit about biases!
Even when you know you have one, you’re still convinced that you understand!

There’s lots more to be understood about this, and we could talk about it for hours, but I’ve put a page with a lot of resources on my website, SRMCoach.eu/rio. There are lots of very good links to lose yourself in for hours.

So, we have to deal with that socio-technical complexity.

A good start is to build non-technical skills in our safety professionals: basically the agility and adaptability to handle complex situations.
There is also the assumption in risk assessments that risk is static, while risk can’t be static, because our organisations are dynamic and complex: they are constantly changing.

So we can’t have static risk assessments! What does that mean?
Well, many times you see a risk assessment where a safety case is made, approved, and then put in a drawer for five years, never to be touched again.

But in the meantime, our environment and operational context are constantly changing. We have new people on board, new managers, new technology, and the original safety case might no longer be valid.

So we have to be very careful when we make safety risk assessments, that they are up-to-date.

We also need to understand the psychology of risk: when we ask our professionals how likely something is, quite possibly they are telling us things that are of no value at all.
When you ask an expert “Is it likely or not?”, the answer will largely depend on whether that expert has actually encountered that situation in his own past.
So, understand how heuristics play a role in the risk assessment phase.

But ultimately it comes down to critical thinking, asking really good questions, and this is deceptively simple.
It’s difficult to ask good questions, because to ask good questions you have to take other perspectives. To do that you sometimes need to physically go elsewhere.
The worst safety professional will be the one who is stuck in his cubicle looking at manuals.

Because you will not understand the complexity of day-to-day operations and will not ask the right questions.
So your safety professionals should, as much as possible, be in contact with a very wide diversity of people to understand their local rationality.
And the biggest thing that they can bring to your organisation, is to start building a system picture of your organisation.

And that’s the S that is often forgotten! It’s safety management SYSTEM.
A systematic way to manage risk, and that systematic way requires a system view of your operation.
So moving on…
Is this your SMS? If your SMS is telling you “there is no problem, we are in great shape” while in fact you’re not, you might be in trouble.
Because not only is this useless. It’s worse than useless!

It creates a false sense of security.
If your SMS says: “We have all risks under control, and we get reports, so our people are telling us; we would know if something went wrong.”
Those were the famous last words of Piper Alpha, for instance.
The manager of the platform said he was comfortable that everything was okay, because nobody had told him otherwise.

So your SMS needs to reflect the reality of your operations.

There are three basic concepts that are very important:

“Work as imagined versus work as done”, “Just culture” and “successful investigations”

So, first, that “work as imagined versus work as done”.
Work as imagined is basically what our manuals say the operation should be doing.
Your procedure, which says to go from A to B, is continuously under pressure from the day-to-day challenges of your operation.

There is always not enough time, there are manpower issues, there might be spare parts issues, there might be bad weather.

So all of these things interact and put pressure on that “work as imagined”.
So, how many of the pilots here have done exactly the same flight twice?
…nobody of course!
Because our operation is constantly dynamically changing, everything is different, so there’s no way that the procedure can cover every eventuality.
Even worse: different procedures can conflict with each other!
And, I’m sure George will talk about this later, when we talk about violations: people think of violations as a bad thing.

But sometimes violations are the only way to get the job done!
We are quite hypocritical about that as an industry.

One example is the work-to-rule strike in Europe: usually the French, when they don’t like something about the work, call a work-to-rule strike.

So they’re not really striking, but by following every rule in the book, everything goes slower.
What does that tell you about that system?

Aren’t they supposed to always follow every rule in the book?
How come that once they are doing that everything goes slower?
Well that’s the work as done: in reality nobody can follow all the rules because our rules look like this.

Have you ever done an exercise like this in your organisation: take every manual, every rule, every personnel and operations manual, every instruction ever created that people are supposed to follow, and put it all in one pile?
What was the result?
We are very much increasing the complexity of our systems, to the point where we have to ask ourselves: “Is our organisation still knowable for the people working in it?”
I think this is a big role for our safety management systems.
We have to start looking at reducing this pile because in it lies complexity.
All these manuals say different things, subtly different things, and sometimes rules got in there for very stupid reasons.
So we have to be critical towards all our procedures as well, because they might not only be hindering the operation, they may be actively creating complexity and confusion.

Like we said, adding defences: what people usually reach for as a defence against a risk is creating a new rule.
So in this situation, if you know about something like suicidal pilots and you create a new rule, that comes on top of all these other rules. Do you think it’s going to be effective?
How do you know that is going to be effective?
Have you started to look at the operation and see how many people are following all these rules?
So you have to understand that extra defences are not always the answer, if we are only looking at single-scenario events.

This is a test; there will be scoring afterwards! It is about seeing who is a good investigator.

….

Did anybody see something in this video?
How many changes did you see?
You were all thinking I was going to ask you “Whodunnit?”
But that wasn’t the question! The question was: “ How many changes happened?”
Did anybody see more than five changes in this video?
Nobody saw any changes? Interesting!
And I thought you were all very observant investigators!
Nobody more than five, so I can stop already: there were 21 changes!
….

This was an advertisement for road safety in London, I believe. Some of you may know it.
It’s a great example of how we think that what we’re looking at, is what is really happening.

But that depends largely on the quality of the questions we ask. If we ask only one question, “Whodunnit?”, we’re so focused on that, that we are not looking at the complexity of our operation.

And I think this ties in with just culture. Just culture is not only there to protect the value of our safety reports and to make people feel comfortable reporting; we also have to understand that just culture is essential to make sure we don’t go into the knee-jerk reactions our brain tricks us into. Those reactions are really not effective for changing and improving our operation.

So, a few questions about your just culture in your operation:
“Have you explained why you need a just culture to every level of the organisation?”
Because obviously people hear about just culture, but if they don’t understand why it is so necessary, you will get situations where, at the supervisor level, yes, they saw the memo about just culture, but the next time somebody screws up the supervisor is going to be rather irate, as supervisors tend to get.
And even just a good bollocking can be perceived as a punishment. So if not every level in your organisation is totally on board with the philosophy of just culture, it’s not going to happen.

The other thing you have to make sure of is that your people understand HOW just culture works:
it is not just a matter of having a policy posted on a wall.

You also have to have competent investigators who understand human factors,
who are trained to look beyond the whodunnit question and can look at all the things that were different in the operation,
and who understand the performance-influencing factors George is no doubt going to talk about.

So the other question:
“Is everybody applying it consistently?”
Because it’s nice to have a just culture in flight operations, but if engineers and mechanics are not getting the same treatment: you still don’t have a just culture in your organisation.

Another big question is: “Who reviews the events?”
Who gets to judge what is acceptable and what is not?
Are you asking “What is acceptable?” of people who know the operation, who know the messy details of the day-to-day job, who have been in that kind of situation themselves?
They can judge whether what the guy did was in fact necessary or not.
Because if we don’t, we are trying to hold our people to a standard that is imaginary:
the work-as-imagined that’s in all the manuals. In fact, ask ten people: “Could you have done the same thing and gotten the job done following the procedure?”
And nine will say: “NO, I don’t have the time!”
One example: on an Airbus A320 turnaround, we’re asking the mechanic to do 60 separate maintenance tasks in 30 minutes!
Now, under normal conditions that’s feasible.
But if one thing goes wrong, what happens to all the other tasks?
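As a rough sketch of that time budget (the five-minute snag below is an invented number, just to show how the arithmetic squeezes):

```python
# Time-budget arithmetic for the A320 turnaround example:
# 60 tasks in a 30-minute window. The 5-minute snag is hypothetical.
tasks, window_min = 60, 30
print(f"budget per task: {window_min * 60 / tasks:.0f} seconds")  # 30 s each

snag_min = 5  # one task hits a problem and eats 5 of the 30 minutes
remaining_s = (window_min - snag_min) * 60
print(f"after the snag: {remaining_s / (tasks - 1):.1f} s per remaining task")
```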

So we have to be realistic about the pressures that our people are under day-to-day in our operation.

So: “WHO reviews events?”
Please make sure that the review phase of accident or incident investigations involves the people who actually do the operating, so there can be a reasonable judgement of what is going on.

Then, successful investigations: we have to start with the end in mind.
What’s the purpose of a safety investigation?

It is to produce an effective recommendation that will make your operation safer!

If your safety investigation is not doing this, it is basically useless!

If the conclusion of your safety investigation is “yes it was human error, move on”: you have wasted everybody’s time and money.

Because you cannot learn from that and you cannot change your operation successfully from that.

So successful safety investigations require competent investigators, who know about these biases, who know about human factors, and who are trained to look at the system and not just at the individual situation.

So here we come again to the non-technical skills for safety professionals that we can bring in. And I think the main one, which we’ve been talking about this morning as well, is collaboration. Why is collaboration so important for safety professionals?

Because of complexity!
Our systems are so complex that one person cannot possibly know them; it would be sheer arrogance to think that we understand the whole system.
Every part of the organisation, and every part of the other organisations, has its own local rationality, and we have to put the goldfish bowls together to get a system-wide view of what happened and what is going to influence our safety performance.

So, collaboration: this event is a great example of that, and I hope that during the networking you exchange business cards on a regular basis.

Keep in touch and share your operational experience because that’s really what will make you more efficient.

Critical thinking again: critical thinking, not only about the system and asking good questions, but also about our investigations.

Are our investigations really getting to the root causes and contributing factors of the accident, or are we allowing our brain to settle for a plausible explanation?

Accessing and analysing information: obviously this is a key skill, but how many people are accessing, for instance, information from outside the company?

There is a wealth of information available, and I’m glad to see the HeliOffshore initiative, because that is a great opportunity to bundle common operational knowledge and help analyse the information.

But the problem we can face here is information overload.

So analysing information is a skill as well: information and data on their own, very much like FDM, are not enough.

We have to convert them into what they call actionable intelligence: something you can take to your manager with reasonable confidence that the recommendation you make will have a positive effect on your organisation.

And finally, curiosity and imagination: it might seem like a strange skill for safety professionals with a technical background.
But basically, if your system is complex and dynamic, many different scenarios can happen.

We have to be curious and look at not just what is plausible, but what is possible!
And that means that we have to stretch our comfort zone sometimes and we have to ask things about stuff we don’t know.

Because again, in a complex system we cannot possibly know everything.

So curiosity and asking questions are really the best defence against biases
and heuristics.

Like we said before, communication, communication, communication.
Now, communication is first and foremost about listening!

And this is a skill; when I went through my coaching training I saw that active listening is taken for granted, but it is so difficult to do!

You really have to train yourself to actively listen to what your people are telling you.
Again your brain starts to take shortcuts.

If you do not listen attentively during interviews, you are missing a lot, just like in the video: all kinds of clues that people give you in interviews go unnoticed, because we are in such a hurry to confirm our biases.
We forget to actively listen to what these people are really telling us.
Are they really telling us that this was no surprise to them, because they had reported it so many times and nothing happened?

What does that say about our system?

So all these things happen by listening and observing.

And observing: I’ve been hearing that there are great LOSA initiatives going on.
LOSA can be a fantastic tool to look at what’s really happening in our operation, provided of course we don’t go after individuals but we start to look at it from a systems perspective.

“Why are these people doing what they’re doing?”
“Why are they not following procedure?” And I guarantee, if you do LOSA, you WILL see people not following procedures.
But those might actually be defences, personal defences, that they have built up to not get in trouble.

So we can learn from that.
Because in the end, our people, most of the time, do a great job making a very complex system work; not always following the rules, but they make it work.

So we have to ask ourselves the question “What makes our system successful?” looking for factors that will provide and promote a robust and resilient organisation instead of just looking at the factors that will make it fail.

Because if we are only looking at what can fail, we can only see what we need to avoid.
But if we can look at what factors are contributing to success then we have a target!
If we do more of that, we can assume that we are making a more robust and resilient organisation.

So this observing is not just for failure. We have to observe for success as well!

It is critical that we understand what is needed to set up our people to succeed!
Not just not let them fail, which is quite different.

And feedback like we said this morning.

Feedback is the most underrated tool that you, the safety professional, have right now.

Feedback is really the way that you can steer the organisation towards more desirable behaviour, to get more background information and get more visibility of complex issues.

If you are just collecting reports and sending back a standard reply. “Thank you for your report”: THAT IS NOT FEEDBACK!

The person who receives that “thank you for your report” basically assumes it went into a garbage bin.
Feedback means that you tell these people what happened with the report. Even if you could not solve what was reported, they need to know why it was not possible, and why it is still useful to report!

Feedback is quite a complex issue, and it’s a skill that we can perfectly well learn.

But we have to be really aware of how a lack of feedback can kill the reporting system dead, just as much as a bad just culture can.

So, taking effective action.
And this is where I come to my story of converting myself into a coach. I have been certifying myself as a professional coach, which taught me a lot about psychological tools that I believe can be very, very useful for safety professionals as well.
What does a coach do? A coach, contrary to a consultant, does not tell you what to do.
A coach helps you clarify your situation, your personal goals and the system’s goals, and tries to work out: “What are your options?”
So, you notice the difference here: the coach doesn’t assume that he knows what needs to be done.

I think this is great for safety professionals as well as an attitude, that you go with an open mind to the people in your organisation and you listen to them and clarify what the problem is, what their goals are and what the options are to get there.

The second part of coaching is responsibility.
Before people take action, they need to have a sense of responsibility.
So as safety professionals we can clarify that responsibility as well, with our managers, our supervisors, and even individuals in the organisation.

And the third part is action and the action plan.

It is not enough just to say you will take an action.
You have to be very specific about what action you are going to take, how you are going to take it, when you are going to take it, and what the measure of success will be.
How many organisations are measuring their actions that way?

If you don’t measure how actions are implemented, you can’t measure afterwards if it is successful or not.

That’s question 4: “How do you know it works?”

Because you might assume, since the manager told you
“somebody will do that”, that it’s done.
If you don’t have specifics, there are many ways of “doing” something: it might be just ticking the box, sending an email to somebody else and assuming it’s done, or really going through the motions and giving a two-hour instruction to tell everybody how it should be done.
That’s a very wide range of “doing”, of “taking action”.
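A minimal sketch of what “being specific” about an action could look like as a record; the field names and example values here are invented for illustration, not a standard:

```python
# Hypothetical record for a safety action, capturing the what / how /
# when / measure-of-success specifics discussed above.
from dataclasses import dataclass
from datetime import date

@dataclass
class SafetyAction:
    what: str             # the concrete action to take
    how: str              # how it will be carried out
    owner: str            # who is responsible for it
    due: date             # when it must be done by
    success_measure: str  # how we will know it worked (question 4)

action = SafetyAction(
    what="Brief all ramp crews on the revised pushback procedure",
    how="Face-to-face briefing per shift, with a sign-off sheet",
    owner="Ramp operations supervisor",
    due=date(2017, 6, 30),
    success_measure="Observations show procedure followed on >90% of pushbacks",
)
print(action)
```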

Those three steps I think can help safety professionals approach the relationship with their management in a much more constructive way.

Where traditionally safety management people sometimes are perceived as people that bring bad news to the organisation, in this way we can be a more collaborative partner to our management team.

So the SMS itself will get used more as a tool because now it’s useful.
The biggest obstacle you can come across is this face during management meetings.
I’ve caused this kind of face.

It happens when we start bombarding our management team with useless information, data and bar charts that are not actionable. So we have to make sure that we address the “What’s in it for me?”
for the manager.

What’s in it for them? When we present safety data, how is it relevant for them? If we make it more relevant, you will automatically get more management commitment.
And this, I have to say, really irritates me in ICAO: they say “get management commitment” as if it’s as easy as going into a shop and getting a bag of it.
Management commitment takes a long time to earn and takes even more effort to maintain.

Because if management doesn’t see results from the SMS in their own area, they will lose commitment, and they are right to ask critical questions.
We are spending so much money on our SMS: “What is it doing for us?” And if you can’t answer that question, sooner or later you’re going to lose the support you need to get things done.

So, to avoid this face in meetings: please make sure that your safety meetings are relevant to the management team and staff members attending.

Another thing that’s relevant from coaching is that you have to understand motivation.
With just culture we talk a lot about the stick: about how punishment is not great for motivating people, that’s true.
But science has also fairly conclusively shown that “carrots”, incentives to do something, can be just as bad as sticks.
What carrots do, like the green dot, is focus attention.
That’s great if your people are doing a mechanistic task, like producing a hundred widgets: if you give them a reward for making a hundred and twenty widgets, no problem!
But it has been shown psychologically that if you make people do things for rewards, on tasks that require even a minimal cognitive load, putting incentives there is really negative and can reduce performance.

So what’s the alternative?
The alternative to this extrinsic, external motivation is intrinsic motivation.
I have put up a great video from Daniel Pink on my resource page which goes deep into that.
But basically intrinsic motivation is what any professional has when they are doing their job with, well, enthusiasm we might say.

It usually requires three parts:
that the person has autonomy, can self-direct;
has mastery: they know how to do the task; and that they know WHY they need to do the task, and for which purpose.

This is called self-determination theory.
I’ve written a blog article, which is on the resource page, explaining why, when we communicate about safety, we have to address these things.

If we don’t, if we take away autonomy from our people, automatically it will reduce motivation.

If we don’t explain to people why we want them to take a certain action or suddenly change their behaviour, we will not get the motivation.

So we need to be very clear in our communication, based on what science tells us about motivation.

Lots of things are said and done about goals and objectives, and the problem with the stick and carrot here is that this philosophy for setting goals and objectives can have very perverse effects.

You see it in oil and gas, for instance, with lost-time incidents; correct me if I’m wrong.

The division or platform gets bonuses based on their performance in terms of accidents with personal injuries that required people to stay away from work.

Now, what’s the perverse effect of that?
Because they get a bonus, nobody wants to tally up an accident or an injury, because it will affect their bonus.
So they will send people to the hospital with broken legs, along with a laptop, so they don’t have to count that injury as a “lost-time incident”.
So is that really the purpose?

So here we need to be very careful about setting goals and objectives that generate what Sidney Dekker calls “metrics against dependent variables”.

Safety, and things like “zero accidents”, are dependent variables: you cannot control them directly. So it’s a lot easier to fake the numbers to make things fit than to change something you don’t have control over.

So we have to be very careful when we are giving incentives. I’ve seen situations where people were getting small rewards for making safety reports, and obviously the organisation got safety reports: they were happy, the auditors were happy.

But the quality of the safety reports was really of the nature of “Mr X did not wear his high-vis vest”, so they were of no value to the safety management system for predicting accidents.

So the alternative here is for your organisation to really define a compelling vision.
And that vision should be positive, not negative.
I know a lot of organisations in this industry have a Target Zero goal, but if you think about it, zero accidents is like an anti-goal.
It’s something you don’t have any direct control over.
If you have Zero accidents: well, okay, I’ll take it.
But if you have an accident, you have failed, even though you didn’t do anything differently.
So that kind of goal does not incite action.
I would strongly advocate setting a more positive goal and looking at what we were talking about before, the Safety-II approach: what are the things that make our organisation successful?
Then we count things that give us a better chance of building a more robust and resilient organisation, and we don’t have to count negatives like accidents and incidents.

So, the skills to generate action: think about coaching instead of advising, because if you think about it, when you give advice you assume that you know the problem.
Instead, allow people to come up with their own solutions: ask the professionals who do the work to come back and give you solutions that you can facilitate to management.

Understand motivation, and effective communication and influence, which is really a skill that can be learned.
It is not about manipulation, but think about all the hard work you do to collect all that safety data.
If you then present that to your management team and you’re not able to convince them and you put them to sleep, well, you’ve lost months and months of effort!
So as a safety professional you have to understand how to effectively communicate and influence the management team so that you have an impact on the organisation, so your organisation will do something about risk.
That takes initiative and entrepreneurialism: basically, the spirit is that you start with small experiments and allow yourself to fail.
It is not necessarily a problem if your safety mitigations are not effective, but make sure you find out in time whether they are working, so you can change and adapt!

So this is where I’d like to leave it.
To come back to those four questions:

Make sure that everybody in your organisation can answer these questions the same way.
“What is most likely going to cause your next accident or incident?” “How do you know that?” Make sure that you’re not kidding yourself.

“What are you doing about it?”
And again, make sure you’re not kidding yourself: “Is it working?”
I’ve put up a resource page with lots of links: SRMcoach.eu/RIO
Thank you very much for your attention. I’ll be available for anybody who wants to discuss further.

About the Author
Jan Peeters


Jan is an experienced Safety practitioner who is always on the lookout to improve SMS and the management of safety. He coaches organisations and individuals in Safety Management.
