#4: The Pilot-to-Aircraft Ratio Is a Trap
Your team may be spending millions of dollars answering the wrong question.
“An approximate answer to the right question is worth far more than the precise answer to the wrong question.” — John Tukey
Ask ten pilots how many unmanned aircraft one person can control, and you’ll get twenty answers. I’ve heard everything from “it depends” to confident declarations of specific numbers — often from people who’ve never flown them in combat. This is my attempt at a third option: the question itself is mostly wrong.
A quick definitional note before we go further. When I say “pilot,” I mean the person who owns the life-and-death consequences of what the aircraft does. If a fifth-gen fighter pilot is issuing intent to an AI wingman, that fighter pilot is the pilot. If it’s Ender’s Game and a four-star general is clicking approve on a targeting decision from a command post, the COCOM is the pilot. If this terminology becomes obsolete before this article does, substitute accordingly.
The Intuitive Frame and Why It Breaks
Earlier in my career, I thought about this the same way most people do: how many aircraft can one person manage simultaneously? It’s a clean, measurable, resource-planning question. Program managers love it. Acquisition officers put it on briefing slides. It feels like a constraint you can design a system around.
After more than 1,000 hours flying MQ-9s in the Middle East, it stopped feeling clean. An ISR mission might give you four hours of dead airspace over a desert compound where nothing moves, then ten seconds where everything happens at once. Close air support is the extreme version — a single missed radio call, a location update that doesn’t come through, and an innocent person dies. The aircraft count didn’t change between those two scenarios. The decision demand swung from near-zero to overwhelming in under a minute.
That observation broke my simple model.
Decision Density: The Better Proxy
The frame I now use is decision density — the number of human decisions the mission demands per unit time. It’s not a number I can hand you on a slide; it’s a variable, and it’s almost entirely mission dependent. Research coming out of the Air Force Research Lab has gradually shifted focus from “how many agents can a human manage” to “how can agents and humans work together effectively” — because the former treats the ratio as static when the actual bottleneck is cognitive load under pressure.
This matters because decision density is, in large part, software defined. Every major software revision on a CCA-class platform changes how much the pilot needs to babysit the automation — sometimes dramatically. The Air Force’s DASH experiment — Decision Advantage Sprint for Human-Machine Teaming — is specifically structured around the insight that the limiting factor isn’t aircraft count; it’s the speed and quality of human decision-making in high-tempo scenarios. Design a system around a fixed pilot-to-aircraft ratio today, and you may have precisely solved last year’s problem.
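To make the distinction concrete, here is a toy model in Python. Every number in it is invented for illustration: a pilot who can sustain a fixed number of decisions per minute, and a four-hour ISR profile that is quiet except for one overwhelming minute, like the compound watch described above.

```python
# Toy model of decision density vs. a static ratio. All numbers are
# invented for illustration and carry no operational meaning.

# Decision demand per aircraft, in decisions per minute, across a
# 4-hour (240-minute) mission: near-zero for 239 minutes, then a spike.
mission = [0.05] * 239 + [12.0]

pilot_capacity = 6.0  # decisions/minute one pilot can sustain (assumed)

avg_demand = sum(mission) / len(mission)
peak_demand = max(mission)

# Sizing by the average makes the ratio look generous; sizing by the
# peak shows the same pilot saturated by a fraction of one aircraft.
ratio_by_average = pilot_capacity / avg_demand
ratio_by_peak = pilot_capacity / peak_demand

print(f"average demand: {avg_demand:.3f}/min -> ratio {ratio_by_average:.0f}:1")
print(f"peak demand:    {peak_demand:.1f}/min -> ratio {ratio_by_peak:.1f}:1")
```

Sizing the system off the average would suggest one pilot could cover dozens of aircraft; sizing off the peak says that same pilot is overwhelmed by a single airframe in its worst minute. The gap between those two answers is the trap.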
What I Actually Saw in the YFQ-42A Program
I flew the YFQ-42A during its early test days. The honest version of what I observed: the aircraft ratio question was almost never the interesting one. The interesting question was always “what is this aircraft going to ask me to decide, and when?”
Some flight profiles had me barely in the loop — the automation was doing its job, I was monitoring. Other test cards had me making calls at a rate that would have been unsustainable with more than one airframe. Neither scenario maps cleanly to a ratio. And here’s the part that should keep acquisition officers up at night: a new software release could flip which profile you’re in. Every three to six months, a new rev will change the answer.
Two Factors That Actually Matter
If I were advising a board on this, I’d frame the answer around two variables rather than one number.
The first is timeline. Technology designed to win a conflict in 2027 looks entirely different from a 2035 or 2050 solution. Committing to a ratio now is fragile. Each architectural decision that treats it as fixed will need to be unwound with the next software rev.
The second is the moral weight of the decisions involved. This isn’t soft philosophy — it has hard implications for system design. Defense policy experts have consistently argued that regardless of what automation can technically handle, decisions with direct human life consequences require a human authority. That’s not a legal formality. It’s a load-bearing constraint that shapes workload, interface design, and latency requirements from the ground up. The USAF plans to spend more than $8.9 billion on CCA programs between FY2025 and FY2029 — and how they architect human decision authority in that spend will define the answer to this question more than any pilot’s intuition about aircraft counts.
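That constraint can be made concrete as a routing rule: automation may handle routine calls, but any decision flagged as having direct human-life consequences escalates to the human authority regardless of the automation's confidence. A minimal sketch, with all names and thresholds invented:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    automation_confidence: float  # 0.0-1.0, hypothetical scoring scheme
    affects_human_life: bool

AUTO_THRESHOLD = 0.95  # invented: confidence required for autonomous handling

def route(decision: Decision) -> str:
    """Return who owns this decision: 'human' or 'automation'."""
    # Load-bearing rule: life-consequence decisions always go to the
    # human authority, no matter how confident the automation is.
    if decision.affects_human_life:
        return "human"
    if decision.automation_confidence >= AUTO_THRESHOLD:
        return "automation"
    return "human"

# The first branch never defers to confidence:
route(Decision("weapons release authorization", 0.99, True))   # -> "human"
route(Decision("routine route deconfliction", 0.99, False))    # -> "automation"
```

The point of sketching it this way is that the first branch is what drives workload, interface, and latency requirements: every decision that hits it must reach a person fast enough for that person to actually decide.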
A Rough Taxonomy, Not a Formula
Since people will ask: here’s how I calibrate it by mission type.
Ocean surveillance, non-kinetic ISR, low-threat environments where the automation is primarily scanning and reporting: very high ratios are probably viable with the right design. Hundreds, possibly more. Decision density is low, and most system failures can be managed with time to spare.
Armed escort — loosely coupled to manned elements, standing by for potential engagement: somewhere in the range of ten to fifteen until the situation changes. The density can spike without warning, which sets the ceiling.
Tight formation with manned fighters, fully integrated into a kinetic engagement: one. Maybe.
These aren’t engineering specs. They’re calibrations, and they’ll move. Embracing “I don’t know my decision density” is a better use of resources than spending billions designing around a number that’s going to change with the next software patch.
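One way to treat calibrations like these is as data rather than doctrine. The sketch below simply restates the ranges from this section as a lookup table; the point of a table is that it can be revised with every software release:

```python
# Rough pilot-to-aircraft calibrations by mission type, restating the
# discussion above. Not engineering specs: expect every entry to move
# with the next software revision.
calibrations = {
    "ocean surveillance / non-kinetic ISR": "hundreds, possibly more (low decision density)",
    "armed escort, loosely coupled":        "10-15 (density can spike without warning)",
    "tight formation, kinetic engagement":  "1, maybe (density saturates one pilot)",
}

def lookup(mission_type: str) -> str:
    """Return the rough calibration, or admit we don't know yet."""
    return calibrations.get(mission_type, "unknown: measure decision density first")
```

The default branch is the honest one: for a mission profile not in the table, the right move is to measure decision density, not to guess a ratio.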
The right question isn’t how many planes. It’s how many decisions — and how fast.


