Air Force Tests ‘Responsible’ AI Combat System


The Air Force has put artificial intelligence trained by machine learning in the pilot seat of a drone and said it did just fine. A government news release said the three-hour flight of the high-performance XQ-58A Valkyrie jet took place July 25 at the Eglin Test and Training Complex and was the culmination of a four-year collaboration between the Skyborg Vanguard and Autonomous Aircraft Experimentation programs. Researchers refined the algorithms with “millions of hours” of simulations and sorties in the drone and other platforms. The Valkyrie didn’t just take off, navigate and land. The Air Force Research Laboratory brainiacs threw some curveballs at it in flight and said it was able to handle them.

“The mission proved out a multi-layer safety framework on an AI/ML-flown uncrewed aircraft and demonstrated an AI/ML agent solving a tactically relevant ‘challenge problem’ during airborne operations,” said Col. Tucker Hamilton, DAF AI Test and Operations chief. “This sortie officially enables the ability to develop AI/ML agents that will execute modern air-to-air and air-to-surface skills that are immediately transferrable to other autonomy programs.” 

The test wasn’t just a demonstration of the technology; it was also a test of its responsible use. “AI will be a critical element to future warfighting and the speed at which we’re going to have to understand the operational picture and make decisions,” said Brig. Gen. Scott Cain, AFRL commander. “AI, Autonomous Operations, and Human-Machine Teaming continue to evolve at an unprecedented pace and we need the coordinated efforts of our government, academia and industry partners to keep pace.”

Russ Niles
Russ Niles is Editor-in-Chief of AVweb. He has been a pilot for 30 years and joined AVweb 22 years ago. He and his wife Marni live in southern British Columbia where they also operate a small winery.

  1. That stock photo, sure not Eglin AFB Florida. We haven’t had mountains and desert here in a long time…..maybe never.

  2. “Responsible AI” = Only kills random people 1% of the time, and then, only rarely out of anger? 99% not killing everyone seems pretty good! Let’s do it!

  3. ah, what the heck! they’re in the killing of people business anyway. so what’s a few collateral deaths? public policy in the US kills more people through lack of health care, education, housing…need I go on? just the bungled Covid response resulted in how many preventable illnesses and deaths??? Progress! (is not necessarily a good thing, it merely is moving forward)

  4. It looks like a cross between an F-35 and the Evel Knievel rocket from Snake River Canyon Days…proving that sarcasm can span decades in one fell swoop.