Last month, defense startup Anduril announced that its Fury prototype, the YFQ‑44A “drone wingman,” had successfully completed its first flight. The U.S. Air Force’s Collaborative Combat Aircraft (CCA) program now has two flying prototypes (the other is General Atomics’ YFQ‑42A), and Anduril’s CEO has hailed the test as the launch of “a new paradigm” in air power. A fighter-sized unmanned aircraft, the YFQ‑44A can, according to Anduril, execute an entire mission plan independently, managing flight control, weapons targeting and even the return to base.
In practical terms, this means the YFQ‑44A flies largely on its own: Its onboard autonomy handles navigation, sensor analysis and “return-to-base” commands with minimal human input. The company emphasizes that a human operator remains responsible for oversight and can intervene or abort a mission, but does not micromanage every control input. As Jason Levin, Anduril’s Air Systems vice president, put it: The aircraft “executes a mission plan on its own ... and returns to land at the push of a button, all under the watchful eye of an operator ‘on the loop.’”
This semi‑autonomous mode is pitched as a middle ground between legacy remotely piloted drones and fully autonomous “killer robot” systems. For example, U.S. military surveys have found that service members trust AI weapons only when humans supervise them and the software is highly accurate, with a low false-positive rate. In other words, keeping a person “on the loop” addresses concerns about accountability and errant targeting: The human can override any unwanted action.
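The distinction is easiest to see as control flow. The sketch below is our own illustration, not Anduril’s software: an “in the loop” design blocks until a human approves each action, while an “on the loop” design executes by default unless the operator vetoes within a short window. Every name, value and timeout in it is a hypothetical assumption.

```python
import queue

# Illustrative sketch only: contrasts human-"in"-the-loop supervision
# (every action waits for explicit approval) with human-"on"-the-loop
# supervision (actions proceed unless vetoed in time). All identifiers
# and the veto window are invented for this example.

VETO_WINDOW_S = 2.0  # assumed time the operator has to override

def human_in_the_loop(action: str, approvals: queue.Queue):
    """Block until the operator explicitly approves or rejects the action."""
    decision = approvals.get()  # waits indefinitely for a human decision
    return action if decision == "approve" else None

def human_on_the_loop(action: str, vetoes: queue.Queue):
    """Execute by default; the operator may veto within a short window."""
    try:
        vetoes.get(timeout=VETO_WINDOW_S)  # any message counts as a veto
        return None                        # operator intervened: abort
    except queue.Empty:
        return action                      # no veto arrived: autonomy proceeds

if __name__ == "__main__":
    vetoes: queue.Queue = queue.Queue()
    # No veto arrives, so the on-the-loop supervisor lets the action run.
    print(human_on_the_loop("return_to_base", vetoes))  # -> return_to_base
    vetoes.put("abort")
    # A veto is already queued, so the next action is blocked.
    print(human_on_the_loop("engage_target", vetoes))   # -> None
```

The design choice matters for tempo: the “on the loop” pattern never stalls waiting for a human, which is exactly the speed advantage discussed below, at the cost of shifting the default from “ask first” to “act unless stopped.”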
Speed is another part of the pitch: In tests, U.S. Air Force AI tools planned attacks hundreds of times faster than human crews. Autonomous systems like the YFQ‑44A can process sensor data and respond in milliseconds, far quicker than any pilot, which could be decisive in fast-moving combat.
The drone is specifically meant to team with crewed fighters. As Levin explains, it is engineered “to enhance survivability, lethality, and mission effectiveness by teaming with crewed fighter aircraft or operating independently.” Think of it as a loyal wingman that carries extra weapons or sensors, coordinated by advanced networking.
These innovations have the potential to deliver the best of both worlds for autonomous weapons. On one hand, autonomy leverages modern AI to analyze the battlefield in real time. By crunching incoming data far faster than human pilots, an autonomous wingman can identify threats and targets in milliseconds. Some studies have reported that AI planning tools generated battle plans about 400 times faster than human users. This speed advantage means faster reactions to sudden changes, safer engagement envelopes and less cognitive overload for pilots.
On the other hand, retaining an “operator on the loop” means a human is still legally and ethically in control. Military experts stress that as AI systems get faster, human oversight remains crucial. As one U.S. Air Force general noted of AI planning tools, “There’s still going to have to be a human in the loop for the foreseeable future to make sure that (the plans) are all viable.” Keeping a person supervising the YFQ‑44A thus helps address the classic “who is responsible” dilemma of autonomous weapons: The human can intervene to prevent a “false positive” (mistaken targeting) or other unintended consequence.
This development sits squarely in the context of the U.S. military’s ongoing push to merge Silicon Valley innovation with warfare. Anduril itself was founded by former tech entrepreneurs and has become emblematic of the trend. Its rapid prototyping of systems – from Ghost surveillance drones and counter-drone interceptors to autonomous Sentry towers – shows how private tech firms are accelerating Defense Department capabilities. The U.S. Air Force treats the CCA program as part of its next-generation family of systems; crewed fighters (the F‑35, F‑15EX, even the F‑22) may soon fly alongside “drone wingmen” under common control. In fact, insiders expect that hundreds of these semi-autonomous drones could join the force in the coming decades.
Of course, this step also highlights the ongoing debate around autonomous weapons. Critics worry that ceding any combat decisions to AI risks accidental escalation or civilian harm. Proponents counter that intelligent machines can reduce collateral damage by making faster, more precise decisions than humans can manage under stress. As one analysis notes, critics “caution against heightened autonomy in war, citing the potential for abuse ... including civilian casualties,” while advocates argue AI can help maintain overmatch “more justly” than fallible human judgment alone would allow. The U.S. approach – semi-autonomy with human oversight – is an attempt to navigate that divide. It promises the tactical advantages of AI (speed, consistency, fatigue-free performance) without completely relinquishing the pilot’s ethical judgment. How well it balances those aims will be seen only in practice.
In sum, the YFQ‑44A’s first flight marks a significant moment in air warfare: a working example of “operator on the loop” in action. It shows Washington is charging ahead in the AI arms race, pushing the envelope of digitally enabled deterrence. As we noted previously, Silicon Valley tools are increasingly embedded in defense strategy. For Daily Sabah readers, this milestone underlines how the U.S. is reshaping its military doctrine around algorithms and autonomy. Whether one welcomes this change or not, the “drone wingman” test underscores a crucial point: The future of aerial warfare will be hybrid, pairing superhuman machines with humans still holding the reins. That delicate fusion of speed and supervision – the very essence of “on the loop” – is the core of this next step in the evolution of autonomous weapons in the international system.