Automation is usually presented as progress.
After years of working with systems that actually have consequences, I’ve learned that this isn’t always true.
If something can be done faster, more consistently, or without a human involved, the assumption is that it should be. Most of the time, that’s right. Automation removes friction and reduces obvious error.
But there’s a point where automation stops helping and starts getting in the way.
I’ve seen that point enough times now to recognise it early.

Automation is good at clear failures
Most systems are built for binary problems.
Something breaks.
A check fails.
An action is triggered.
In those situations, automation works extremely well. You want fast, predictable responses. No hesitation.
The problems that cause the most friction aren’t like that.
They’re slower.
Messier.
Everything is technically working, but the experience degrades. Things feel wrong before they’re clearly broken.
Automation struggles here, because ambiguity doesn’t fit neatly into rules.
Automation doesn’t remove decisions; it locks them in early
Whenever something is automated, a decision has already been made.
Someone has already decided:
- what matters
- what “bad” looks like
- when to intervene
- how the system should respond
Those decisions might be reasonable. They might even be correct most of the time.
But when reality shifts outside those assumptions, automation doesn’t pause or adapt. It keeps doing what it was designed to do.
Repeating the wrong action faster doesn’t make a system more reliable.
Where manual control is deliberate, not a failure
This shows up a lot in engineering work.
There are areas where automation looks ideal on paper, but in practice, having control matters more than speed.
Failover is one example. Automatic reactions sound sensible until you’re dealing with partial failures: systems under pressure rather than outright broken. Latency creeping up. Load behaving differently. Everything still technically healthy.
In those moments, reacting automatically can make things worse. You end up responding to symptoms instead of understanding causes.
Having the ability to pause, look, and decide when to move is often the more reliable option. Not because automation is bad, but because judgement matters when situations aren’t binary.
What these situations have in common is that reacting automatically feels decisive, but often removes the space needed to understand what’s actually happening.
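As a rough illustration of that split, here is a minimal Python sketch of a health check that only reacts automatically to unambiguous failures and hands the grey zone to a person. The thresholds, field names, and actions are invented for the example, not taken from any particular system.

```python
from dataclasses import dataclass

# Illustrative thresholds only; sensible values depend entirely on the service.
HARD_FAILURE_ERROR_RATE = 0.5   # half of requests failing: unambiguous
LATENCY_WARNING_MS = 800        # noticeably degraded, but not "down"


@dataclass
class HealthSample:
    reachable: bool
    error_rate: float       # fraction of requests failing, 0.0 to 1.0
    p99_latency_ms: float


def decide(sample: HealthSample) -> str:
    """Return an action: fail over automatically only for clear, binary failures."""
    # Clear failure: the automated path is fast, predictable, and safe.
    if not sample.reachable or sample.error_rate >= HARD_FAILURE_ERROR_RATE:
        return "failover"

    # Grey zone: still serving traffic, but degrading. Automating this risks
    # reacting to symptoms, so surface it for a human decision instead.
    if sample.p99_latency_ms >= LATENCY_WARNING_MS:
        return "alert-human"

    return "no-action"


if __name__ == "__main__":
    degraded = HealthSample(reachable=True, error_rate=0.02, p99_latency_ms=950)
    print(decide(degraded))  # -> "alert-human", not an automatic failover
```

The point of the sketch is the shape, not the numbers: the automated branch handles the binary case, and anything ambiguous is deliberately routed to a person rather than acted on.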
Timing matters too
Another place this shows up is in patching.
From the outside, monthly patching feels like something that should be completely automated. Same process, same schedule, every time.
In reality, the timing changes. Each month is different. Teams are in different states. Dependencies shift. Critical systems don’t all carry the same risk at the same time.
Automation handles the mechanics well. Humans still need to control the timing.
That combination of automated execution with human judgement around it is usually where things stay calm.
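One way to keep that split explicit is to make the automated steps refuse to run without a recorded human decision. The sketch below is a hypothetical Python wrapper: the patch command and flag names are placeholders, not a prescription for any particular tooling.

```python
import argparse
import subprocess
import sys

# Placeholder command; substitute whatever your patch tooling actually runs.
PATCH_COMMAND = ["apt-get", "upgrade", "-y"]


def main() -> int:
    parser = argparse.ArgumentParser(
        description="Run the routine patch steps, but only on explicit human approval."
    )
    # The mechanics are automated; the timing decision is not.
    parser.add_argument("--approved-by", help="person approving this patch window")
    parser.add_argument("--dry-run", action="store_true", help="show what would run")
    args = parser.parse_args()

    if not args.approved_by:
        print("No approval recorded for this patch window; nothing was changed.")
        return 1

    print(f"Patch window approved by {args.approved_by}.")
    if args.dry_run:
        print("Would run:", " ".join(PATCH_COMMAND))
        return 0

    # Automation handles the repeatable part once a human has said "now".
    return subprocess.run(PATCH_COMMAND).returncode


if __name__ == "__main__":
    sys.exit(main())
```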
Humans aren’t the weak link
There’s a common idea that the goal of a good system is to remove people entirely.
In practice, the most stable environments don’t do that. They move people away from repetitive execution and closer to interpretation.
Less reacting.
More observing.
Automation handles the routine. Humans handle the edge cases. That’s not inefficiency — it’s design.
This isn’t just a systems problem
The same pattern exists outside of work.
Rules work until they don’t.
Shortcuts help until they hide the real issue.
Automatic responses feel efficient until they’re misaligned.
Sometimes the right move isn’t to tighten the loop, but to slow it down.
Automation is a tool, not a goal
Used well, automation creates calm.
Used blindly, it creates movement without understanding.
The difference isn’t how advanced the system is.
It’s whether someone is still paying attention.
In the next few years, AI will automate more decisions we currently struggle with. That will only make the remaining human ones more important!