Securing Artificial Intelligence for Battlefield Effective Robustness (SABER)
This grant provides funding to U.S. organizations capable of handling classified information to develop and enhance security measures for AI systems used in military applications, focusing on identifying and mitigating vulnerabilities in battlefield technologies.
Description
The Defense Advanced Research Projects Agency (DARPA) is soliciting proposals under the Securing Artificial Intelligence for Battlefield Effective Robustness (SABER) program. This initiative, led by DARPA’s Information Innovation Office (I2O), focuses on enhancing the security and operational resilience of AI-enabled battlefield systems. The program seeks to establish a rigorous AI red teaming framework to evaluate vulnerabilities in AI-driven military applications and mitigate risks associated with adversarial AI threats. The SABER initiative will advance AI security operational test and evaluation (OT&E) by developing new techniques, tools, and procedures to assess and improve AI system robustness.
SABER aims to create an operational AI red teaming construct for AI-enabled battlefield systems, particularly autonomous ground and air vehicles. The program seeks to analyze AI attack vectors, including cyber threats, electronic warfare (EW), and adversarial physical manipulations, to better understand and mitigate the impact of AI vulnerabilities. The research performed under SABER will focus on integrating counter-AI technologies into an assessment framework and toolkit to conduct AI security OT&E exercises known as SABER-OpX.
The program will consist of multiple awards issued under procurement contracts or Other Transaction (OT) agreements. DARPA anticipates funding multiple teams under Technical Team 1 (TT1), which is subdivided into two research areas: TT1.1, covering AI attack effect techniques and tools, and TT1.2, covering the integration of those techniques into a unified AI red teaming framework. Participants will collaborate with government teams and DARPA-selected research performers to conduct AI security OT&E evaluations. The program spans 24 months and includes two nine-month testing phases, each concluding with operational evaluations. Proposers should plan for meetings and field evaluations at multiple U.S. locations.
Eligible applicants include U.S. entities with the capability to handle classified information at the SECRET level. Organizations must have at least three U.S. citizens with final SECRET clearances and demonstrate their ability to establish secure computing environments. Due to the sensitivity of the research, non-U.S. organizations and individuals are ineligible to apply. Universities, small businesses, and research institutions are encouraged to submit proposals, though Federally Funded Research and Development Centers (FFRDCs) and University Affiliated Research Centers (UARCs) are discouraged from applying.
Key deadlines for the program include an abstract submission deadline of March 31, 2025, with full proposals due by May 6, 2025. Submitting an abstract is strongly encouraged, as it allows proposers to receive DARPA feedback before preparing a full proposal. Proposers must comply with detailed security guidelines for classified and controlled unclassified information (CUI) submissions. Questions regarding the solicitation must be submitted by March 31, 2025. Award decisions will be based on scientific and technical merit, relevance to DARPA's mission, and cost realism.
For additional information and submission guidelines, applicants may contact the BAA Coordinator at SABER@darpa.mil or consult DARPA's submission portal for detailed proposer instructions. All classified proposal materials must be handled per DARPA security requirements, and classified addendum requests are due by March 31, 2025.