Autonomous systems are changing the future of warfare, but until our defense industry can start having the right conversations with the right people, it’s a battle we risk losing. To kickstart this crucial dialogue on autonomy and its role in national security, H4XLabs recently hosted a virtual panel with four key leaders representing both the government and commercial sectors. They offered valuable insights into the opportunities and challenges of the autonomous landscape. The panel, titled The Race for Autonomy, is the first in a series aimed at bridging the gaps and shortfalls in America’s collective understanding of this critical issue.
Autonomous systems will be key to any military force formation centered on the philosophy of ‘the small, the agile, and the many.’ Unmanned capabilities have rapidly changed the nature of the battlefield, and many unmanned vehicles, whether in the air, on land, or at sea, will soon leverage autonomous systems to operate independently. America’s adversaries continue to make significant investments in AI and autonomy, and without rapid innovation, the U.S. and its allies risk falling behind. The DoD must prioritize autonomous capabilities if it is to succeed on the battlefield in a software-centric, data-enabled world. But developing these new and dynamic capabilities will require the public sector to leverage vital private partnerships and commercial innovation. Currently, that discourse is stagnant, siloed, and fraught with bureaucracy.
In an effort to break down these crippling barriers, we turned to the following experts:
- Lt. Gen. Jack Shanahan (USAF, Retired), former Director of the Joint AI Center
- Christine Moon, Co-Founder and President of BlueSpace.ai
- Ikram Mansori, leading innovation and security expert
- Dr. Deji Coker, Autonomy Portfolio Manager, Office of Naval Research
Moon emphasized that military deployments of autonomous systems carry inherently unique requirements, and outlined the need for generalizable, scalable, and reliable autonomous systems for the military. She identified three key facets of military environments that make them distinct for autonomous systems:
- Any system would operate in latency-sensitive environments and would require fast reaction times due to the inherent circumstances of the battlefield
- The system will have to accommodate a lack of mapping data and situational awareness in foreign environments
- There is either not enough training data, or a lack of quality training data, to inform the system
Moon described BlueSpace’s disruptive “math- and physics-based approach” to its own autonomous systems – highlighting the fact that the commercial sector is already working on groundbreaking solutions to these difficulties.
Shanahan echoed Moon’s points: “Trust is more important than it’s ever been before. The future of autonomy is partnerships between public and private, not in an us vs. them way.”
He highlighted particular challenges that come with “edge cases” – situations for which autonomous systems lack the training data to respond. As long as those edge cases exist, systems will be limited despite rapid advancement, he said.
Coker built on this thread by arguing that autonomous systems need capabilities that mimic a human’s “gut feelings” – the ability people have to ingest training data over their lifetimes, then subconsciously assess likely outcomes as a result. It is imperative that autonomous systems operate similarly if they are to enable insight-driven actions, Coker said. As an example, he noted that the Navy currently has a gap in its ability to analyze and dissect copious amounts of data. Artificial intelligence and autonomy present the solution.
Central to all the panelists’ concerns was establishing trust – or justified confidence – in autonomous systems through a variety of means and deployments. Coker suggested focusing on virtual experimentation spaces to validate hypotheses and to more deliberately plan physical experimentation around autonomous systems. Rapidly and iteratively testing autonomous systems under a variety of conditions in the virtual space helps build understanding of how these systems might behave in non-deterministic conditions – thereby building user trust. Moon suggested that day-to-day user experience with autonomous systems, including through the medium of public transit, is key to building wider cultural trust. In essence, the key is demonstrative action. How do technologists show the public and stakeholders that autonomous systems actually work, and how they can beneficially fit into the lives of users?
All speakers emphasized dialogue – particularly among government, technologists, and academia, in addition to integrating global partners.
As Mansori noted, bureaucracy and policy have lagged years behind the actual technology. Building on Mansori’s point: at H4XLabs, we’ve observed that these conversations are often siloed within private industry or the public sector. It is imperative to foster constant, valuable conversations across sectors and industries.
Said Moon, “Solving autonomy is a very difficult problem – we solve it by dialogues like this.”
H4XLabs is building a dialogue around The Race for Autonomy. Look for new conversations in the near future.
Photo caption: Drone deployment test flight file image. U.S. Air Force/Public Domain