N241-011 TITLE: Generative Artificial Intelligence for Scenario Generation and Communications Analysis
OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Trusted AI and Autonomy
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.
OBJECTIVE: Develop the capability to rapidly generate high threat density scenarios with tactically representative red (adversary) threats that adapt in real time. Additionally, this effort will develop the capability to conduct automatic analysis of blue (friendly) communications to understand speed and accuracy of information exchange.
DESCRIPTION: As the carrier airwing of the future prepares for the high-end fight, the training paradigm will shift to almost exclusively Live, Virtual, Constructive (LVC) environments due to the expanded range capabilities of peer threat competitors and Operations Security (OPSEC) considerations. As a result, warfighters are able to train as they fight with higher-fidelity scenarios that more accurately represent red kill chains. This high-fidelity, data-rich environment provides unique opportunities for instructional strategies to better support end-to-end training and improve readiness. Specifically, LVC environments increase the amount of, and access to, data that can support improved scenario generation, performance assessment, and debrief when utilized appropriately. However, LVC training is not without its challenges. Developing these high-fidelity scenarios is resource intensive; the process can be cumbersome and labor intensive. Moreover, scenarios that do not contain significant variation may lose utility very quickly, as operators can begin to anticipate scenario outcomes after a few exposures. Consequently, a need exists for rapid generation of real-time, adaptive, high-fidelity scenarios.
Additional challenges lie in the assessment of performance. The carrier airwing of the future will rely on integrated tactics that require a level of coordination and information exchange across platforms that has not been required by past tactics. The complexity of coordination associated with integrated tactics necessitates a significant amount of voice communication across the different platforms to provide Situational Awareness (SA) and elicit decision-making. While communication is critical to cross-platform coordination and overall tactical execution, it remains one of the most challenging training objectives to meet during Air Defense events.
As such, this effort seeks to alleviate the identified challenges with scenario generation and performance assessment through the investigation of generative artificial intelligence (AI) (e.g., DALL-E, ChatGPT) or other forms of AI to support scenario generation and communications assessment. This SBIR effort shall focus on utilizing AI to learn from pilot-in-the-loop red threat behavior to rapidly generate constructive threat presentations that adapt to trainee behavior in a tactically feasible manner. Additionally, AI shall be applied to further the state of the science in communications analysis [Ref 6]. Specifically, AI shall support analysis of recorded blue communications and provide an initial assessment in terms of the accuracy of the words spoken (relative to ground truth) and the speed at which they are spoken. This analysis will include digesting communication recordings, assessing the quality of communications based on accuracy and speed, and then providing these results via automated debrief.
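For illustration, a minimal sketch of the intended accuracy and speed measures is shown below: it scores a single transcribed radio call against a ground-truth script using word error rate and measures latency from a triggering event. The transcript, timestamps, and brevity phrasing are illustrative assumptions rather than program-provided data, and Python is used only as a convenient notation.

```python
# Minimal sketch of scoring a transcribed radio call against a ground-truth
# script: word error rate for accuracy, transmission delay for speed.
# The transcripts and timestamps are assumed inputs (e.g., produced upstream
# by a speech-to-text model and the simulation event log).

from dataclasses import dataclass


@dataclass
class Transmission:
    text: str          # transcribed words of the radio call
    start_time: float  # seconds into the scenario when the call began


def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over words, normalized by reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)


def assess_call(expected: str, trigger_time: float, call: Transmission) -> dict:
    """Score one call for accuracy (WER vs. ground truth) and speed (latency)."""
    return {
        "wer": word_error_rate(expected, call.text),
        "latency_s": call.start_time - trigger_time,
    }


if __name__ == "__main__":
    # Hypothetical example: expected brevity call after a threat appears at t=120 s.
    expected_call = "vapor one one group bullseye two seven zero twenty hostile"
    heard = Transmission(text="vapor one one group bullseye two seven zero twenty",
                         start_time=124.5)
    print(assess_call(expected_call, trigger_time=120.0, call=heard))
```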
These capabilities will improve the quality of training and readiness via end-to-end training enhancements. First, high-fidelity Air Defense scenarios that can be rapidly generated and are adaptive will yield greater training utility and provide cost avoidance associated with scenario development manpower and human-in-the-loop threat support manpower. Next, a communications analysis and debrief capability will benefit the Fleet by decreasing instructor workload, reducing human error and manpower time requirements, and automatically providing instructors with information on communication protocol adherence and timeliness, thereby improving SA, supporting decision making, and increasing debriefing capabilities.
This effort will specifically look at Air Defense training scenarios within LVC environments to increase the speed at which high-fidelity, adaptive scenarios can be generated and assessed to enhance operator performance. This capability will be developed with the intention of a transition path to the Next Generation Threat System (NGTS).
Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by 32 CFR § 2004.20 et seq., National Industrial Security Program Executive Agent and Operating Manual, unless acceptable mitigating procedures can and have been implemented and approved by the Defense Counterintelligence and Security Agency (DCSA), formerly the Defense Security Service (DSS). The selected contractor must be able to acquire and maintain a secret-level facility clearance and personnel security clearances. This will allow contractor personnel to perform on advanced phases of this project as set forth by DCSA and NAVAIR in order to gain access to classified information pertaining to the national defense of the United States and its allies; this will be an inherent requirement. The selected company will be required to safeguard classified material during the advanced phases of this contract IAW the National Industrial Security Program Operating Manual (NISPOM), which can be found at Title 32, Part 2004.20 of the Code of Federal Regulations. Reference: National Industrial Security Program Executive Agent and Operating Manual (NISP), 32 CFR § 2004.20 et seq. (1993). https://www.ecfr.gov/current/title-32/subtitle-B/chapter-XX/part-2004
PHASE I: Research and develop an integration plan for a proof-of-concept, standalone capability to rapidly generate high-threat-density scenarios with tactically representative red threats that adapt in real time. This will include investigating unclassified sample data to determine appropriate AI models for future development. Additionally, Phase I will focus on identifying the most appropriate AI model or models to support automatic analysis of blue communications in terms of accuracy and speed. An unclassified sample dataset will be provided to support this investigation into the speed and accuracy of information exchange. Both objectives will use generative or other forms of AI. Performance assessment should focus on communications but may also include tactical assessments. Noise filtering shall be investigated to support communications analysis, as the noise content in the operational environment for Air Defense is significant.
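As one possible starting point for that noise-filtering investigation, the sketch below applies a simple zero-phase band-pass filter over the voice band using SciPy. The input file name and band edges are assumptions, and operational audio would likely require more aggressive techniques (spectral subtraction, learned denoisers, and the like).

```python
# Minimal sketch of a noise-reduction front end for recorded radio audio,
# assuming 16-bit WAV input. A band-pass filter keeps roughly the 300-3400 Hz
# voice band and attenuates low-frequency engine rumble and high-frequency static.

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt


def bandpass_voice(audio: np.ndarray, fs: int,
                   low_hz: float = 300.0, high_hz: float = 3400.0) -> np.ndarray:
    """Apply a zero-phase Butterworth band-pass filter over the voice band."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, audio.astype(np.float64))


if __name__ == "__main__":
    fs, audio = wavfile.read("sample_comms.wav")   # hypothetical sample recording
    if audio.ndim > 1:                             # mix down to mono if needed
        audio = audio.mean(axis=1)
    cleaned = bandpass_voice(audio, fs)
    # Clip back into 16-bit PCM range before writing the filtered file.
    wavfile.write("sample_comms_filtered.wav", fs,
                  np.clip(cleaned, -32768, 32767).astype(np.int16))
```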
Demonstrate the feasibility of application into the larger, integrated training system. The plan shall detail integration into NGTS to allow for transition into an operational LVC environment. Additionally, the plan shall include a Subject Matter Expert (SME) evaluation of capabilities and methods for conducting an Analysis of Alternatives to identify the best-practice method moving forward for training delivery.
Provide prototype plans to be developed under Phase II.
PHASE II: Research, develop, design, and deliver proof-of-concept scenario generation and communication assessment capabilities for Air Defense training scenarios through execution of the integration plan developed in Phase I. During Phase II, the sample data provided will be more tactically and operationally relevant and classified at the SECRET level. Developers can expect the scenarios to be more tactically complex and to contain a larger amount of communications, and those communications will include significant background noise. Noise will include, but is not limited to, background noise (engines, alerts, etc.), static, and the like. Integration with NGTS will enhance the capability with scenarios and performance data already resident in NGTS. Design and develop the tool to include visualizations, usability documentation, and technology evaluation. Demonstration of the tool, along with documentation of the usability of the training software, is critical. Risk Management Framework guidelines should be considered and adhered to during development to support information assurance compliance.
Work in Phase II may become classified. Please see note in the Description paragraph.
PHASE III DUAL USE APPLICATIONS: Introduce additional data from NGTS as well as other live and virtual entities within the scenario. Scenario generation shall be enhanced to include external (live and/or virtual) entities. The AI implementation should account for any differences or effects external entities may have on the AI model. The voice communication assessment capabilities shall be flexible enough to be deployed in varying training configurations. Training locations may differ in their setup of radios and networked communications, which will require easily configurable settings and controls. Integration testing and demonstration of capabilities will be conducted in a distributed simulation via the Distributed Interactive Simulation (DIS) protocol at the SECRET level. Software shall be integrated with NGTS to facilitate transition into an operational LVC environment. Documentation and any supporting materials shall be delivered to the NGTS team for maintenance and future enhancements.
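For illustration, a minimal sketch of receiving DIS traffic and decoding the standard 12-byte PDU header (per IEEE 1278.1) is shown below. The UDP port and buffer size are assumptions; a production integration would use a full DIS library and the site-specific exercise configuration.

```python
# Minimal sketch: listen for Distributed Interactive Simulation (DIS) traffic on a
# local UDP socket and decode the 12-byte PDU header defined by IEEE 1278.1.

import socket
import struct

DIS_PORT = 3000             # assumed exercise port; varies by training site
HEADER_FORMAT = ">BBBBIHH"  # version, exercise ID, PDU type, family, timestamp, length, padding


def listen_for_pdus(port: int = DIS_PORT) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    while True:
        data, addr = sock.recvfrom(8192)
        if len(data) < struct.calcsize(HEADER_FORMAT):
            continue  # not a full PDU header
        version, exercise_id, pdu_type, family, timestamp, length, _ = struct.unpack(
            HEADER_FORMAT, data[:12])
        print(f"from {addr[0]}: exercise={exercise_id} pdu_type={pdu_type} "
              f"family={family} length={length}")


if __name__ == "__main__":
    listen_for_pdus()
```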
The AI voice assessment model can be leveraged in the private sector as a speech-to-text model in environments with high noise or when non-standard English speech is in use, such as the brevity communications made during a tactical aviation scenario. Most AI speech models are trained with common English phrases. The data and voice communications from the tactical aviation domain will provide more robust speech-to-text analysis for the private sector in areas such as air traffic control or brevity communications training.
REFERENCES:
KEYWORDS: Artificial Intelligence (AI); Scenario Generation; Communications Assessment; Voice Analysis; Live, Virtual, Constructive; Automated Debrief
** TOPIC NOTICE **
The Navy Topic above is an "unofficial" copy from the Navy Topics in the DoD 24.1 SBIR BAA. Please see the official DoD Topic website at www.defensesbirsttr.mil/SBIR-STTR/Opportunities/#announcements for any updates. The DoD issued its Navy 24.1 SBIR Topics pre-release on November 28, 2023; the BAA opened to receive proposals on January 3, 2024, and now closes February 21, 2024 (12:00 p.m. ET).

Direct Contact with Topic Authors: During the pre-release period (November 28, 2023 through January 2, 2024), proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic. Once DoD begins accepting proposals on January 3, 2024, no further direct contact between proposers and topic authors is allowed unless the Topic Author is responding to a question submitted during the pre-release period.

SITIS Q&A System: After the pre-release period, until January 24, 2024, at 12:00 p.m. ET, proposers may submit written questions through SITIS (SBIR/STTR Interactive Topic Information System) at www.dodsbirsttr.mil/topics-app/ by logging in and following instructions. In SITIS, the questioner and respondent remain anonymous, but all questions and answers are posted for general viewing.

Topics Search Engine: Visit the DoD Topic Search Tool at www.dodsbirsttr.mil/topics-app/ to find topics by keyword across all DoD Components participating in this BAA.
1/17/24
Q. How critical is it to handle both Scenario Generation and Communications Analysis? Is it okay if a company can do rapid scenario generation very well but lacks communication experience?
A. From a technical perspective, the two are weighted equally, meaning you do not have to be an expert in both but should have a plan that is technically feasible and addresses all requirements in the call.
1/5/24
Q. This is a post to provide questions and answers from the NAVAIR TPOCs for topic N241-011. Links to frequently asked questions, and questions and answers that may add clarification to this topic, are provided in the response.
A. See:
navysbir.com/n24_1/Navy_SBIR-N241-011_QA1.pdf
navysbir.com/n24_1/Navy_SBIR-N241-011_QA2.pdf
12/28/23
Q. Regarding communication analysis, does the dataset incorporate simulation states and events, or is it limited to speech recordings only?
A. Sample data sets will provide a mix of three forms of data: states/events/speech; states/events; and speech only.
12/28/23
Q. Could we have clarification on whether scenario generation encompasses modifying the "per-agent" behavior graph, or is it excluded from this process?
A. Scenario generation encompasses modifying the per-agent behavior graph.
12/28/23
Q. 1. What specifically constitutes "rapid generation of scenarios" and "adaptation in a tactically feasible manner"?
2. Are we anticipating the AI to adapt in real time during runtime, or is the adaptation focused on preparing for the next scenario?
A. 1. Rapid generation is TBD based on the findings of this research; however, a more accurate statement would be "near real-time." It is intended that this research provide a better understanding of acceptable latency.
2. The AI should adapt during runtime.
12/27/23
Q. I'm working with a company that has developed a platform that provides a new way to deploy and train with AR/VR content and integrate with additional learning channels. This platform has an easy-to-use AR/VR courseware creation tool that gives instructors and content managers the ability to rapidly deploy and update new AR/VR training content and assign AR/VR training content to students and classes, and it includes an analytics system to track AR/VR student training activity and performance. Is that kind of capability relevant to this topic?
A. While the current use case does not use AR/VR platforms, there may be interest in using this technology in the future. I would suggest the offeror start with a standard low-cost training device for the F/A-18 or F-35 with a plan for transitioning to the AR/VR platform.
12/20/23
Q. 1. In regards to the scenario generation for this SBIR, what specific criteria and parameters should the AI-driven scenario generator consider to ensure high fidelity and adaptability? Are there preferred formats or structures for the scenarios that the AI should adhere to? How often do you anticipate the need for new scenarios, and what is the expected lifespan/utility of each scenario?
2. For the training data, what data sources will be available, especially regarding pilot-in-the-loop red threat behavior?
3. In terms of the communications analysis portion of the SBIR, what types of communication are crucial for analysis, and are there specific nuances or criteria that the AI should consider? What are the various communication styles and modes expected for Phase I?
A. 1. Scenarios will include, but are not limited to: entity starting positions, behavior/AI models for individual entities, terrain data, etc. Currently, scenarios are generated manually by a subject matter expert. The term "rapid" implies the generation of many scenarios with varying conditions or parameters that can be produced faster than the time it would take a subject matter expert, and may include AI-generated variations of the variables mentioned above (e.g., starting positions, behaviors, etc.). An unclassified version of the Next Generation Threat System will be provided as GFI to awardees to support this requirement, along with an unclassified sample dataset that will include an NGTS scenario file. Scenario files are NGTS-specific formatted JSON files with a .nscen file extension. Sample DIS/HLA traffic logs will also be provided in ch10 format. Tools for reading the ch10 file format are provided with NGTS. The frequency and utility of a scenario developed using this technology are TBD and are part of the goal of understanding these requirements in the future.
2. An unclassified version of the Next Generation Threat System will be provided as GFI to awardees to support this requirement, along with an unclassified sample dataset that will include an NGTS scenario file. Scenario files are NGTS-specific formatted JSON files with a .nscen file extension. Sample DIS/HLA traffic logs will also be provided in ch10 format. Tools for reading the ch10 file format are provided with NGTS. Additionally, awardees will be expected to use NGTS to create relevant training data sets as necessary. Awardees will be provided access to NGTS SMEs and aircrew as deemed appropriate by TPOCs.
3. Aircrew are assessed based on comm brevity, comm accuracy, and timeline adherence. These data points will need to be associated with contextual data and events from the related data sources (DIS/HLA/ch10), so additional context or understanding of what is being said and what it relates to (i.e., context) is desired. Audio data is currently being captured from the DIS/HLA network traffic and written to a ch10 file format. Tools for reading the ch10 file format are provided with NGTS.
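By way of illustration only, the sketch below generates randomized variations of a scenario file of the kind described above. The .nscen field names ("entities", "position", "latitude", "longitude") are hypothetical placeholders, since the actual NGTS schema is available only with the GFI.

```python
# Illustrative sketch of generating randomized variations of an NGTS scenario
# file. The .nscen file is an NGTS-specific JSON format; all field names below
# are hypothetical placeholders standing in for the real schema.

import copy
import json
import random


def load_scenario(path: str) -> dict:
    with open(path, "r") as f:
        return json.load(f)


def jitter_start_positions(scenario: dict, max_offset_deg: float = 0.25) -> dict:
    """Return a copy of the scenario with perturbed entity starting positions."""
    variant = copy.deepcopy(scenario)
    for entity in variant.get("entities", []):  # hypothetical key
        pos = entity.get("position", {})        # hypothetical key
        pos["latitude"] = pos.get("latitude", 0.0) + random.uniform(-max_offset_deg, max_offset_deg)
        pos["longitude"] = pos.get("longitude", 0.0) + random.uniform(-max_offset_deg, max_offset_deg)
    return variant


if __name__ == "__main__":
    base = load_scenario("baseline.nscen")      # hypothetical file name
    for i in range(10):                         # ten quick variations
        with open(f"variant_{i:02d}.nscen", "w") as f:
            json.dump(jitter_start_positions(base), f, indent=2)
```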
12/03/23
Q. Aligning with greater CACW requirements in the modeling and simulation community, should integration with other agencies' red threat models be a priority for a Phase I outline? For example, should we focus on outlining a plan that can integrate AFSIM, ITASE, ODESSA, and other environment models into this suite for integration into NGTS?
A. The focus for this effort is integration within the Next Generation Threat System. Looking forward to other environments in out years is great, but again, the focus is on NGTS.