The headlines are predictable. They are always predictable. Whenever two pieces of multi-million-dollar aluminum come within a hair’s breadth of each other on a taxiway, the autopsy follows a scripted path. Investigators look at the "Runway Status Lights" or the "Airport Surface Detection Equipment, Model X" (ASDE-X). They ask why the siren didn't scream. They ask why the lights didn't turn red.
The standard media narrative is obsessed with a single, lazy question: Why did the safety system fail to send an alert?
That is the wrong question. It’s a question asked by people who believe that safety is a product you buy and install in a server rack. It’s a question that assumes a perfect algorithm can substitute for the messy, high-stakes reality of human spatial awareness.
The real problem isn't that the alert didn't fire. The real problem is that we’ve built an aviation infrastructure so fragile that we expect a computer to save us from 150-ton errors in judgment. If you are waiting for a dashboard to beep before you realize you’re about to clip a wing, you’ve already lost the fight.
The Myth of the Safety Net
Most people view runway safety systems like ASDE-X or the newer ASSC (Airport Surface Surveillance Capability) as a literal net. They think if a pilot or a controller makes a mistake, the software catches them.
I’ve spent years looking at the telemetry and the post-incident reports from near-misses at hubs like LaGuardia and JFK. Here is the uncomfortable truth: These systems are not nets. They are mirrors. They reflect the chaos of a congested airport back to the controllers with a slight delay and a massive amount of "noise."
When an alert doesn't fire, the knee-jerk reaction is to call for a software patch. But if you talk to the people in the tower, they’ll tell you about "nuisance alerts."
Imagine a scenario where your car’s collision warning went off every time you drove past a parked car, a mailbox, or a particularly tall blade of grass. You would turn it off within ten minutes. That is the fundamental tension in runway safety. If you tune the system to be sensitive enough to catch every possible "LaGuardia-style" incident, it triggers a thousand false alarms a day.
Controllers start to ignore the beeps. It’s called alarm fatigue. It’s a documented psychological phenomenon that kills more people in hospitals and cockpits than hardware failure ever will. By demanding "more alerts," the public is actually demanding a less safe environment.
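Here is a toy sketch of that tradeoff. Every number in it is an assumption; the trigger distances, the traffic volume, and the separation distributions are invented for illustration and have nothing to do with real ASDE-X logic. But the shape of the result is the whole argument:

```python
import random

random.seed(42)

def simulate(trigger_ft, n_movements=10_000, conflict_rate=0.0005):
    """Count caught conflicts, missed conflicts, and nuisance alerts
    for a given trigger distance. All distributions are made up."""
    caught = missed = nuisance = 0
    for _ in range(n_movements):
        is_conflict = random.random() < conflict_rate
        # Real conflicts close to short separations; routine traffic on a
        # cramped field still wanders near the trigger line all day long.
        sep_ft = random.uniform(0, 300) if is_conflict else random.uniform(200, 3_000)
        if sep_ft < trigger_ft:
            if is_conflict:
                caught += 1
            else:
                nuisance += 1
        elif is_conflict:
            missed += 1
    return caught, missed, nuisance

for trigger_ft in (250, 500, 1_000):
    caught, missed, nuisance = simulate(trigger_ft)
    print(f"trigger at {trigger_ft:>5,} ft: {caught} caught, "
          f"{missed} missed, {nuisance} nuisance alerts")
```

Tune it tight and you miss the one event that matters. Tune it loose and you train the tower to stop listening.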
The LaGuardia Bottleneck Logic
LaGuardia is a postage stamp with delusions of grandeur. It is one of the most complex, cramped, and unforgiving pieces of pavement on Earth. When you have planes moving at high speeds in such a tight geometric configuration, the window for a system to "detect" a conflict and "alert" a human is often shorter than the human’s reaction time.
Let’s look at the physics.
If Plane A is traveling at 15 knots on a taxiway and Plane B is landing at 140 knots, the rate of closure is immense. By the time the ASDE-X logic fuses the radar and transponder data, filters the ground clutter, and decides that a collision is "imminent" rather than just "close," the actual distance between the aircraft has shrunk by hundreds of feet.
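The back-of-the-envelope version, treating the two aircraft as converging head-on (the worst case) and the delay figures as illustrative assumptions:

```python
KT_TO_FPS = 1.68781  # 1 knot = 1.68781 feet per second

taxi_kt, landing_kt = 15, 140
closure_fps = (taxi_kt + landing_kt) * KT_TO_FPS  # ~262 ft/s when converging

# Assumed delay budget: surveillance processing plus human reaction time.
for delay_s in (1.0, 2.0, 3.0):
    print(f"{delay_s:.0f} s of delay burns {closure_fps * delay_s:,.0f} ft of separation")
```

Two seconds of combined delay erases more than 500 feet of separation.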
The math of a $100 million alert system is often defeated by two seconds of human hesitation.
We are pouring money into "improving" the alert logic when we should be looking at the structural absurdity of our airport layouts. We are trying to use 21st-century software to fix 1950s-era pavement geometry. It’s like trying to fix a crumbling bridge by installing a faster digital toll booth.
The Automation Paradox
The more "robust" (to use a word the industry loves) we make these systems, the worse our pilots and controllers become at basic situational awareness. This is the Automation Paradox.
When you tell a pilot, "Don't worry, the Runway Status Lights will turn red if it’s not safe to cross," you are subtly giving them permission to stop looking out the window. You are offloading their primary responsibility—visual separation—to a sensor array that is subject to rain fade, multipath interference, and software bugs.
I’ve seen this play out in flight simulators and real-world ops. When the tech is "always on," the human brain goes into a low-power state. We see "looking but not seeing." A pilot looks at the runway, sees another plane, but because the lights aren't red, their brain tells them it must be okay.
Safety isn't a feature. It’s a practice.
The obsession with "Why didn't the alert fire?" reinforces the dangerous idea that the alert is the primary source of truth. It isn't. The primary source of truth is the view through the windshield. If we continue to prioritize the digital over the visual, we are just waiting for the next "undetected" error to turn into a tragedy.
Stop Asking for More Data
The FAA and various investigative bodies love data. They love "Big Data." They think that if they can just ingest more transponder pings per second, they can create a "transparent" runway.
They are wrong.
More data creates more complexity. More complexity creates more "tight coupling." In systems theory, tight coupling means that a small failure in one part of the system (a sensor glitch) can propagate rapidly through the rest of the system with no way to stop it.
We don't need more data. We need more "slack."
Slack means more space between planes. Slack means simpler taxi instructions. Slack means accepting that LaGuardia cannot handle 1,200 flights a day without eroding a margin of safety that no computer can give back.
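To make the contrast concrete, here is a minimal sketch; every function name and number is invented, and a real surveillance pipeline is vastly more sophisticated, but the coupling structure is the point:

```python
# Tight coupling, in the sense used above: each stage consumes the previous
# stage's output directly, so one glitched reading flows straight through
# to the final decision with nothing in between to absorb it.

def sensor(true_sep_ft, glitch=False):
    # One bad return and the target "jumps": separation suddenly reads enormous.
    return 99_999 if glitch else true_sep_ft

def conflict_alert(sep_ft, threshold_ft=500):
    return sep_ft < threshold_ft

true_sep_ft = 300  # genuinely dangerous proximity
print(conflict_alert(sensor(true_sep_ft)))               # True: alert fires
print(conflict_alert(sensor(true_sep_ft, glitch=True)))  # False: system goes silent

# Slack is anything that decouples the stages. Here it is a filter that
# refuses to believe a reading wildly inconsistent with recent history.
def filtered(sep_ft, last_good_ft=302, max_jump_ft=1_000):
    return last_good_ft if abs(sep_ft - last_good_ft) > max_jump_ft else sep_ft

print(conflict_alert(filtered(sensor(true_sep_ft, glitch=True))))  # True again
```

In software, slack is a sanity check and a held value. On a runway, it is distance and time. Either way, it is the buffer the tightly coupled version doesn’t have.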
But "slack" is expensive. It costs the airlines money. It causes delays. So instead, the industry buys a new software update and calls it "progress."
The Accountability Shift
Investigating the "alert failure" is also a convenient way for everyone to avoid blame.
- The FAA can blame the software vendor.
- The vendor can blame the "environmental conditions" or "sensor interference."
- The airline can blame the airport infrastructure.
Nobody has to look at the pilot and say, "Why did you taxi onto a hot runway?" Nobody has to look at the controller and say, "Why were you juggling ten planes in a space designed for five?"
By focusing on the technology, we sanitize the error. We turn a human failure into a technical glitch. It’s a neat trick that keeps the stock prices stable, but it does nothing to prevent the next crash.
The Real Fix is Low-Tech
If you want to stop runway crashes, stop looking at the monitors in the tower.
Start by simplifying the signage. Start by redesigning taxiway intersections so they don't look like a bowl of spaghetti. Start by limiting the number of movements during peak hours so that controllers aren't pushed to the edge of their cognitive limits.
The industry won't do this. It’s too "inefficient."
They would rather spend $50 million on a "predictive AI alert system" that will inevitably fail when a sensor gets covered in snow or a transponder stops chirping. They would rather chase the ghost of a perfect algorithm than deal with the reality of human error in a crowded space.
We have reached the point of diminishing returns with airport surveillance technology. We aren't failing because the tech is bad; we are failing because we are asking the tech to do something it was never meant to do: eliminate the consequences of bad design and human fatigue.
Stop waiting for the alert. Look out the window.
The "safety system" didn't fail at LaGuardia. The system worked exactly as it was designed. It provided a false sense of security right up until the moment of impact.
Quit asking why the computer stayed silent and start asking why we’ve built a world where we’re too afraid to trust our own eyes.
The next time you’re sitting on a tarmac, look at the complexity around you. The lights, the sensors, the frantic radio chatter. It’s all a fragile shell. When it cracks—and it will crack—don't blame the code. Blame the people who thought the code was a substitute for common sense.
Stop looking for the "glitch" and start looking at the schedule.