Why do near-misses go unreported? The 4Ds framework is the magnet for your haystack.


A tech nearly slipped off a ladder three times last month and said nothing. It was an eight-foot ladder, a two-minute sensor reset. Reporting it meant an hour of paperwork and the same safety lecture he'd already sat through a dozen times. He only mentioned it after someone else actually fell.
The incident report blamed "worker non-compliance," but missed the real question: why was anyone climbing that ladder three times a shift?
That story, which I picked up from a chat with one of our longest-standing clients, is a textbook opportunity to apply Human and Organizational Performance (HOP) principles in practice. Do we chastise the individual, or do we re-evaluate the system?
People like Conklin, Dekker, and Reason have been laying the groundwork for the HOP approach for decades. They taught us to look at "work as done" versus "work as imagined." But there has always been a gap between their ideas and what actually happens on the floor, even in organisations with safety managers who know better.
What does this mean? It means system designers tend to imagine that their system would work perfectly 'if only these imperfect humans could be cajoled into adhering to it properly!' Whether or not that's true in any particular case, the HOP approach tells designers to start by interrogating their own design rather than blaming the people in it. But that idea alone isn't enough, because the next question is: now what?
The breakthrough that finally stuck came from Jeff Lyth in recent years. He realized frontline crews don't talk in academic theory, so he boiled it down to four words people actually use: Dumb, Dangerous, Difficult, and Different (a reinvigoration of an old aphorism, “Don't do anything dumb, dangerous, or different!”, that traces back to the US Air Force, and who knows where before that).
I was talking with Brent Sutton recently; he's good at making this practical. His books *HOP Beginners Guide* and *4Ds for HOP and Learning Teams* closed the last inches of that gap in my mind (you can find them at learningteamscommunity.com/books). That's what finally pushed us at myosh to build an explicit implementation of the, I daresay evergreen, 4Ds paradigm.
This paradigm gives you an approachable mechanism for gathering qualitative data points on the rough edges of your systems.
Imagine I'm cooking a new dish. Do I write up a recipe, then follow it blindly before blaming the ingredients when the final result is a disaster? No, I'm not waiting for the final plate to know if I'm on track; I'm smelling the garlic to catch its moment of sweetness, probing the roast to see if it's hit temperature, watching the sauce until it coats the back of a spoon, listening for the sizzle that says the pan is ready. Each small signal is an in-process probe that lets me adjust before failure becomes permanent.
We built our new Operational Learning module around this mental framework. Here are some examples of what you'll see populating your system records when you ask the 4D questions as part of your safety reviews:
**Dumb**
A power company found its software engineers sitting around for hours waiting for an ageing company server to compile code iterations for testing, even though their more modern desktop machines had many times the CPU capacity of that bottleneck. Quick napkin math showed the developer time wasted each year far exceeded the cost of an overdue hardware upgrade. The server core was swapped out the next day for $1,500, saving both time and sanity.
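If you want to run that napkin math yourself, here's the shape of it as a quick Python sketch. Every figure below except the $1,500 upgrade is an assumption I've invented for illustration; plug in your own numbers.

```python
# Napkin math: idle developer time vs. a one-off hardware upgrade.
# All inputs except the upgrade cost are illustrative assumptions.
engineers = 5                # developers blocked by the slow build server
idle_hours_per_day = 1.5     # time spent waiting on compiles, per engineer
working_days_per_year = 230
hourly_cost = 70             # loaded cost of one engineer-hour, in dollars

annual_waste = engineers * idle_hours_per_day * working_days_per_year * hourly_cost
print(f"Developer time wasted per year: ${annual_waste:,.0f}")  # ~$120,750
print("One-off server upgrade:          $1,500")
```

Even with far more conservative assumptions, the upgrade pays for itself within a few months.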
**Dangerous**
A warehouse worker bypassed a forklift safety sensor daily because humid air kept triggering a faulty door. The alternative was a ten-minute detour that tanked his quota. Management wanted to discipline him; the HOP approach asked why production targets were fighting broken equipment. They fixed the door instead.
**Difficult**
A maintenance crew was torquing a bolt buried so deep in a machine that three guys had already hurt their backs doing it. Nobody requested the $800 specialty tool because "incidents were low" on the reporting widget. The conversation only changed when an apprentice sheepishly asked, "Why are we doing it this way?", because he had recently admired a suitable offset attachment at TKD. The 4Ds give veterans permission to see the difficulty they've learned to live with.
**Different**
A construction crew nearly had an accident with "equivalent" anchor bolts. Same spec, different coating. The new coating changed the friction just enough to make them snap under torque. Before, the crew wouldn't have flagged a supplier change. Now they do.
The software just makes sure observations and ideas are captured so the conversation leads somewhere: maintenance sees the "Difficult" tasks, procurement spots "Different" trends, safety handles the "Dangerous" hazards, and so forth. If you're a myosh user, you can add 4Ds Learning to your module stack now.
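If you're wondering what that routing amounts to conceptually, here's a minimal sketch in Python. To be clear, this is not myosh's actual code or data model; the team names and the category-to-team mapping are my own illustrative assumptions.

```python
# Conceptual sketch of 4Ds triage -- purely illustrative, not myosh's code.
# Each frontline observation gets tagged with one of the 4Ds, then routed
# to the team best placed to act on it.
from dataclasses import dataclass

ROUTES = {
    "Dumb":      "process-improvement",
    "Dangerous": "safety",
    "Difficult": "maintenance",
    "Different": "procurement",
}

@dataclass
class Observation:
    category: str  # one of the 4Ds
    note: str      # what the crew actually said

def route(obs: Observation) -> str:
    """Return the queue that should triage this observation."""
    return ROUTES.get(obs.category, "needs-review")

obs = Observation("Difficult", "Torquing the recessed bolt takes three people")
print(route(obs))  # -> maintenance
```

The point isn't the code; it's that a four-word vocabulary is structured enough to route automatically, yet plain enough for a crew to use at the end of a shift.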
But you don't need our tool to start testing this today. At the end of your next shift, ask your crew: "What felt Dumb, Dangerous, Difficult, or Different today?" Then listen without defending the system. Set aside perfect compliance and aim to hear the truth before someone gets hurt.
Adrian has been a Director at myosh for 20 years, overseeing the implementation of safety management software in various companies, from small firms to multinational corporations. His roles have included Training, Support, Development, Analysis, Project Management, and Account Management. Adrian’s experience provides him with extensive knowledge of health, safety, environment, and quality management, focusing on industry-specific needs. He also helps integrate the latest industry practices into myosh’s products by building relationships with experts and hosting educational webinars.