DARPA seeks proposals for Assured Neuro Symbolic Learning and Reasoning (ANSR) program.
Despite recent improvements to machine learning (ML) algorithms and assurance technologies, high levels of autonomy remain elusive.
The reasons are twofold. First, data-driven ML lacks transparency, interpretability, and robustness, and its computational and data demands are unsustainable. Second, traditional approaches to building intelligent applications and autonomous systems, which rely on knowledge representations and symbolic reasoning, can be assured but are not robust to the uncertainties encountered in the real world.
DARPA’s newest artificial intelligence (AI) program, Assured Neuro Symbolic Learning and Reasoning (ANSR), seeks to address these challenges by developing new hybrid (neuro-symbolic) AI algorithms that deeply integrate symbolic reasoning with data-driven learning to create robust, assured, and therefore trustworthy systems.
“Motivating new thinking and approaches in this space will help assure that autonomous systems will operate safely and perform as intended,” said Dr. Sandeep Neema, DARPA ANSR program manager. “This will be integral to trust, which is key to the Department of Defense’s successful adoption of autonomy.”
ANSR will explore diverse, hybrid architectures that can be seeded with prior knowledge, acquire both statistical and symbolic knowledge through learning, and adapt learned representations. The program includes demonstrations to evaluate hybrid AI techniques in relevant military use cases where assurance and autonomy are mission-critical. Specifically, selected teams will develop a common operating picture of a dynamic, dense urban environment using a fully autonomous system equipped with ANSR technologies. The AI would deliver insights to the warfighter that could help characterize friendly, adversarial, and neutral entities, the operating environment, and threat and safety corridors.