Engineering the Liminal
Why Your Data Doesn't Change Behavior
I’ve watched utilities spend millions on monitoring systems that nobody acts on. I’ve sat in operations centers where LiDAR data shows centimeter-level ground movement, SAR detects deformation through cloud cover, and IoT sensors stream real-time pressure and vibration data—yet field crews still patrol the same fixed routes they’ve run for twenty years.
The problem isn’t the technology. It’s what happens in the space between the alert and the action.
That space has a name: the liminal. From the Latin limen, meaning threshold, it describes the interval between one state and the next—the hallway between knowing and doing. And in thirty years of working with power companies, I’ve learned this: organizations that can’t engineer their liminal space can’t build resilience.
The Graveyard of Good Intentions
Every company I’ve worked with has the same problem, though few admit it plainly: they’ve engineered their detection systems brilliantly and their response processes adequately, but the threshold between them remains unengineered territory.
Risk analysts produce beautiful slope stability models that engineers review, approve, and file. GIS teams generate vegetation encroachment maps that never route to the crews who could clear the right-of-way. Operations dashboards light up red when rainfall hits exposed segments—but the field supervisor’s workflow doesn’t change, because the alert lives in a system they don’t open.
The data exists. The insights are valid. The need is urgent. But nothing happens.
This is the liminal gap—the unstructured, unowned, often invisible space between detection and intervention. It’s where insights go to die, not because they’re wrong, but because no one has engineered the handoff from awareness to action.
And here’s the uncomfortable truth: most organizations don’t even recognize the liminal as something you can engineer. They treat it as inevitable friction, the natural lag between systems. But it’s not natural. It’s a design choice, or more accurately, a design failure.
The Architecture of In-Between
The liminal isn’t empty space. It’s full of invisible decisions, manual handoffs, and organizational boundaries that we’ve simply accepted as permanent.
Someone sees an alert and decides whether it’s urgent. Someone else translates a technical risk into operational language. A third person escalates through the chain of command. A fourth schedules a meeting to review options. Eventually, if the urgency persists through these layers, a work order is created.
Each handoff is a threshold. Each threshold adds latency. And latency is the enemy of resilience.
The utilities that get this right have learned to collapse these thresholds, not by moving faster, but by engineering the liminal space so that crossing from detection to action becomes automatic.
Here’s what that looks like in practice:
When a slope stability model reaches a critical threshold, a work order is automatically generated in the CMMS. Not an email. Not a dashboard flag. An actual work order with asset location, priority code, and crew assignment logic. The liminal space between model prediction and field mobilization shrinks from weeks to minutes.
When vegetation density exceeds clearance standards, the trimming schedule is updated immediately. The system doesn’t wait for the quarterly planning cycle. It doesn’t generate a report for someone to review. It crosses the threshold from analysis to execution without human translation, because that translation has been pre-engineered into the workflow.
When weather patterns intersect with vulnerable infrastructure, the compliance system pre-populates the incident documentation. The liminal moment between event detection and regulatory response is engineered so tightly that if the event escalates, you’re not scrambling to reconstruct what you knew and when. The timeline is already logged because the threshold was designed to create the record automatically.
This isn’t visionary. It’s plumbing. Boring, essential plumbing that most utilities avoid because it requires forcing uncomfortable conversations between IT, operations, and engineering—three groups that rarely agree on data standards, let alone how to engineer the space between their systems.
The Liminal Audit
I use a simple test when I walk into a new client: Show me what happens when your best predictive model flags a critical risk.
Then I map every threshold between that flag and the field action. Every approval. Every translation. Every system boundary. Every manual handoff.
This is what I call the liminal audit—counting the number of thresholds someone or something must cross before action occurs.
If the answer involves someone exporting a report, emailing a PDF, or scheduling a meeting, you have an unengineered liminal space. If the answer involves manual re-entry of data into another system, your thresholds aren’t connected. If the answer is “it depends on who sees it,” you have liminal chaos—spaces with no design at all.
The organizations that get this right can trace a straight line from sensor reading to work order to field resolution—with timestamps, decision points, and accountability at every threshold. Not because they have better data, but because they’ve engineered the liminal space where decisions must happen.
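The audit itself can be captured as data: list every threshold on one alert-to-action path, then count the ones a person must carry across and the time spent dwelling at each. The threshold names and latencies below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Threshold:
    name: str
    manual: bool          # does a person have to carry it across?
    latency_hours: float  # typical dwell time at this threshold

# A hypothetical audit of one path from model flag to field action.
path = [
    Threshold("analyst triages alert", manual=True, latency_hours=8),
    Threshold("risk translated to ops language", manual=True, latency_hours=24),
    Threshold("escalation meeting scheduled", manual=True, latency_hours=72),
    Threshold("work order entered in CMMS", manual=True, latency_hours=16),
]

manual_count = sum(1 for t in path if t.manual)
total_latency = sum(t.latency_hours for t in path)
```

Four manual thresholds and five days of dwell time: that is the number the audit produces, and the number engineering the liminal is meant to drive toward zero.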
Designing Thresholds That Act
Engineering the liminal is systems design, not software deployment. It means deliberately architecting the space between states so that crossing from one to the next requires minimum human intervention.
Define trigger conditions with operational precision. Not “monitor slope movement,” but “when displacement rate exceeds 5mm/month for two consecutive readings, auto-generate Priority 2 inspection within 72 hours.” Make the threshold concrete enough that a system can execute the crossing without human interpretation. Every vague threshold is an unengineered liminal space.
Connect workflows across the threshold. Don’t give your field supervisors another system to check. Put the work order in the tool they already use. Don’t require the compliance team to access a GIS portal. Push the documentation into their existing reporting workflow. Every system boundary is a threshold. Minimize the thresholds that people must manually cross.
Close the feedback loop across time. When a crew inspects an asset, the observation should flow back into the risk model that flagged it. When maintenance resolves an issue, the system should recalibrate its prediction. The liminal space exists in time as well as workflow—the gap between action and learning. Most utilities leave this threshold completely open, discarding field intelligence that could refine their models.
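Closing that loop can be as simple as folding each inspection outcome back into the model's estimate. The update rule below, a running average over a prior, is deliberately simplistic and purely illustrative; the point is that the threshold between field observation and model state is crossed automatically.

```python
class RiskModel:
    """Toy failure-probability estimate that learns from inspections."""

    def __init__(self, prior_failure_prob: float):
        self.estimate = prior_failure_prob
        self.observations = 0

    def record_inspection(self, defect_found: bool) -> None:
        """Fold a crew observation back into the estimate, so the
        next prediction reflects what the field actually saw."""
        self.observations += 1
        outcome = 1.0 if defect_found else 0.0
        # Incremental mean: blends the prior with accumulating evidence.
        self.estimate += (outcome - self.estimate) / (self.observations + 1)

model = RiskModel(prior_failure_prob=0.30)
model.record_inspection(defect_found=False)
model.record_inspection(defect_found=False)
# clean inspections pull the estimate down; a found defect would raise it
```

Whether the real mechanism is Bayesian updating or model retraining matters less than the plumbing: the observation reaches the model without anyone exporting a spreadsheet.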
The utilities that do this well didn’t start with technology; they started with people, mapping every liminal space in their operation, every gap between knowing and doing, and then worked backward to engineer the thresholds that collapse those gaps.
What Resilience Actually Means Now
Twenty years ago, resilience meant building things strong enough to withstand failure. Today, it means engineering your liminal spaces tightly enough that failure doesn’t cascade.
The difference is temporal. It’s about how long you remain in the vulnerable state between detection and response. Every minute spent in that liminal space is a minute your system is aware of risk but unable to act on it. That’s not resilience. That’s exposure with documentation.
The utilities that survive the next decade won’t be the ones with the most sophisticated detection systems. They’ll be the ones who’ve compressed their liminal spaces—who’ve engineered every threshold between insight and intervention so tightly that detection and response feel like a single motion.
This isn’t about AI, digital twins, or any other buzzword. It’s about whether your organization has the discipline to identify every liminal space in your operation and deliberately design what happens there.
Because I can tell you from three decades in this industry: the failure point is never the sensor. It’s always the threshold. The handoff. The unengineered liminal space where awareness goes to wait.
The Question That Defines Maturity
Here’s what I ask every executive team: Where in your operation does someone see a problem, know what needs to happen, but lack the authority or tools to make it happen immediately?
That’s an unengineered liminal space. That’s where you’re losing time, money, and safety margin. That’s where resilience dies quietly while you’re generating reports about it.
The answer is usually uncomfortable. It involves admissions about organizational silos, legacy procurement decisions, and the gap between what your systems can do and what your people actually do with them. It means acknowledging that you’ve spent millions on detection and response, but nothing on engineering the space between them.
But that discomfort is the price of honesty. And honesty is the only starting point for real change.
The utilities that learn to engineer their liminal spaces—that treat thresholds as design problems rather than inevitable friction—will define the next era of grid reliability. The ones that don’t will continue to generate beautiful reports about the issues they can see but can’t cross the threshold to fix.
That’s the choice. Not whether you’ll have thresholds—you’ll always have thresholds—but whether you’ll engineer them.