As the U.S. military races to adopt ever-larger amounts of increasingly advanced and increasingly autonomous AI, how do humans stay in control? The standard answer is that no machine can exercise lethal force without human approval — but this solution is as obvious as it is wrong.
By the time an AI asks its human overseer to approve or veto a specific strike, it is already far too late in the flow and tempo of human-machine teaming. Reserving only the final decision for a human leaves algorithms free to make consequential choices well before that point, from positioning forces to prioritizing targets, in ways that unacceptably constrain human options.
Yet requiring human approval for every intermediate step sacrifices the very speed and scope of capabilities that make AI so attractive in the first place. How, then, can we reconcile human control with machine speed?
The answer, we believe, requires embedding human preferences in the software itself. Instead of requiring an automated process to halt at some crucial point to request human input (slowing the AI while providing the human with only a narrow set of options), ideas such as a commander’s intent for an operation need to be systematically, deliberately and preemptively integrated within the algorithm.
Building this guidance in early represents a paradigm shift: every automated decision is bounded and guided by human choices, rather than human decisions being constrained and channeled by automated ones.
A Call For Clarity
It’s important to note at the outset that there is no widespread desire to create the sorts of scenarios popularized by fictional accounts such as The Terminator’s Skynet. Indeed, AI enthusiasts and skeptics alike agree that human beings need to be in control of unmanned weapons and military command systems. But there is much less consensus, or even clarity, on how to implement such control.
Despite significant investments, including an almost $55 billion funding request for the Defense Autonomous Warfare Group, and high-profile attention from Defense Secretary Pete Hegseth on down, the current landscape of U.S. military AI remains inchoate.
The extant menu of AI for military use spans automatic, semi-autonomous, autonomous, and increasingly agentic systems, but ambiguity persists in what these terms mean and how they are used. Human decision-makers can be “in the loop,” asked to approve or veto every significant action by the AI; “on the loop,” observing the AI but letting it make its own choices unless they choose, or are called, to intervene; or “near the loop,” whereby the AI functions within a distinct operating “niche” of machine systems and humans engage variably (e.g., moving “in” or “on” the loop as circumstances dictate).
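To make the distinctions concrete, here is a minimal sketch of the three oversight postures in Python. The enum, names, and gating logic below are our illustrative assumptions, not drawn from doctrine:

```python
from enum import Enum, auto

class OversightMode(Enum):
    """Illustrative human-oversight postures; the names are our own."""
    IN_THE_LOOP = auto()    # human must approve or veto each significant action
    ON_THE_LOOP = auto()    # human observes and may intervene; machine acts by default
    NEAR_THE_LOOP = auto()  # human engagement varies with circumstances

def requires_human_approval(mode: OversightMode,
                            action_is_significant: bool,
                            conditions_escalated: bool) -> bool:
    """Hypothetical gate deciding whether a proposed action halts for approval."""
    if mode is OversightMode.IN_THE_LOOP:
        return action_is_significant
    if mode is OversightMode.ON_THE_LOOP:
        return False  # human may veto asynchronously, but the machine proceeds
    # NEAR_THE_LOOP: collapses to "in the loop" only when circumstances dictate
    return conditions_escalated and action_is_significant
```

The point of the sketch is only that “near the loop” is not a fixed posture but a rule for moving between the other two.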
But there is no clear standard for how much involvement is enough to adequately and responsibly inform the human overseer, let alone to obtain their permission before an action is taken. That ambiguity about what constitutes sufficient human involvement reinforces the urgency of precise doctrine and clearly defined standards as the U.S. advances its military AI capabilities.
This point is brought into stark relief by Department of Defense Directive 3000.09 [PDF], which formally establishes the need for human involvement in any AI engagements entailing the use of lethal force, yet does not provide a doctrinal paradigm for how such involvement can and should be realized. We believe that a unified AI framework would address these challenges by establishing common principles and enforceable standards across the defense enterprise. Such a framework would ensure that AI systems are developed and deployed effectively and responsibly.
Simply put, the more autonomous these systems become, the greater the need for coherence. Whether in unmanned aerial systems, maritime platforms, or ground-based robotics, autonomy relies on robust data pipelines, validated algorithms, and clear rules of engagement. Above all, the extent and type of human involvement should be defined by doctrine. Ambiguity undermines trust in these systems. A unified framework provides the governance necessary to ensure that missions are executed safely, reliably, and responsibly.
Synthesized Command And Control
As autonomy scales and agentic architectures expand, traditional command and control models built on direct human interaction tend to lose the capacity to ensure coherence, accountability, and operational alignment. These models rest on the premise that authority is exercised through discrete human decisions at identifiable moments. That premise can fracture under conditions defined by speed, scale, and machine-driven adaptation.
To address this failure, we propose a model of Synthesized Command and Control (SYNTHComm) that defines authority as a continuously engineered property of the system. SYNTHComm shifts control from episodic intervention to persistent governance embedded within system logic. Authority becomes encoded, distributed, and executed across the architecture, shaping behavior from within rather than being applied externally through individual decisions.
SYNTHComm operationalizes intent by translating command authority into terms the machine can understand: structured constraints, weighting functions, and context-responsive control regimes that persist across execution. These elements shape system behavior in real time, ensuring alignment with mission objectives and rules of engagement across changing conditions. Human influence remains present through design, configuration, and accountability structures that define how decisions unfold.
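As a rough sketch of what translating command authority into machine-readable terms might look like, consider the following Python fragment. Every name and field here is our illustrative assumption, not an actual SYNTHComm interface:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CommandersIntent:
    """Hypothetical machine-readable encoding of commander's intent."""
    objective: str
    hard_constraints: tuple[str, ...]          # inviolable rules of engagement
    weights: dict[str, float] = field(default_factory=dict)  # soft priorities

intent = CommandersIntent(
    objective="secure corridor alpha",
    hard_constraints=(
        "no engagement without positive identification",
        "remain inside the designated operating area",
    ),
    weights={"force_protection": 0.6, "tempo": 0.3, "collateral_risk": -0.9},
)

def score(action_effects: dict[str, float], intent: CommandersIntent) -> float:
    """Weighted alignment of an action's predicted effects with intent."""
    return sum(intent.weights.get(k, 0.0) * v for k, v in action_effects.items())
```

In this sketch, hard constraints gate actions outright, while the weights persist across execution and shape every scoring decision, which is the sense in which human influence remains present through design and configuration rather than per-action approval.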
SYNTHComm employs a spectrum-based model of governance. Agentic behavior can be represented as a frequency spectrum, A(ƒ), where the distribution of frequencies reflects how a system balances stability and adaptability. Lower-frequency components — those which change less often and more slowly — capture enduring mission elements such as commander’s intent, policy constraints, and long-horizon objectives. Higher-frequency components — those which need to change rapidly and often — reflect responsive, time-sensitive adaptations to local conditions, uncertainty, and emerging opportunities.
SYNTHComm operates by introducing a weighting function, W(ƒ), which encodes structured authority, constraint, and contextual modulation across the agent’s behavioral spectrum. This function determines how strongly different classes of behavior are expressed as a function of mission priorities, acceptable risk, and operational conditions.
The application of W(ƒ) to the agent activity A(ƒ) produces the governed output:

S(ƒ) = W(ƒ) × A(ƒ)
Here, S(ƒ) represents the system’s realized behavior in practice (i.e. decisions and actions that have been shaped, filtered, and aligned through the imposed governance structure). As shown in Figure 1, low‑frequency components associated with stable objectives are preserved to maintain mission continuity, while higher‑frequency components corresponding to adaptive or reactive behavior are selectively attenuated or amplified in response to context.
This formulation enables control to emerge through spectral shaping of the decision space rather than through discrete commands. Thus, governance is applied continuously and proportionally, allowing adaptability to be constrained without suppressing system autonomy.
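To make the spectral shaping concrete, here is a minimal numerical sketch in Python. The signal, cutoff frequency, and attenuation factor are all invented for illustration and carry no doctrinal meaning: a toy “behavior” series combining a slow mission-level trend with fast reactive oscillation is transformed to the frequency domain as A(ƒ), multiplied by a weighting function W(ƒ) that preserves low frequencies and damps high ones, and transformed back as governed behavior.

```python
import numpy as np

# Toy behavior signal: a slow mission-level trend plus fast reactive adaptation.
t = np.linspace(0.0, 10.0, 512)                 # 10 "seconds" of agent activity
slow = np.sin(2 * np.pi * 0.2 * t)              # low frequency: enduring intent
fast = 0.4 * np.sin(2 * np.pi * 5.0 * t)        # high frequency: local reaction
behavior = slow + fast

# A(f): the agent's behavior in the frequency domain.
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
A_f = np.fft.rfft(behavior)

# W(f): preserve components below an (arbitrary) cutoff, attenuate faster ones.
# Attenuation is proportional, not total: autonomy is constrained, not suppressed.
cutoff_hz, attenuation = 1.0, 0.25
W_f = np.where(freqs < cutoff_hz, 1.0, attenuation)

# S(f) = W(f) x A(f): the governed spectrum, then back to the time domain.
S_f = W_f * A_f
governed = np.fft.irfft(S_f, n=t.size)
# The mission trend survives at full strength; reactive oscillation is damped to 25%.
```

In this sketch, governance acts as a multiplicative filter rather than a veto: high-frequency behavior still occurs, just at the amplitude the weighting permits, which is the selective attenuation the model describes.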
Figure 1. Spectrum model of control in SYNTHComm. Frequency (ƒ) represents variation in operational conditions, while amplitude denotes relative influence. A(ƒ) (autonomy), S(ƒ) (supervisory constraints), and W(ƒ) (contextual weighting) define a continuous control regime bounded by acceptable autonomy. As conditions become more dynamic, influence shifts from supervisory control to autonomous response, with W(ƒ) modulating alignment to mission objectives and rules of engagement.