N232-103 TITLE: Machine Readable Contextual Understanding and Drilldown
OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Integrated Network Systems-of-Systems; Trusted AI and Autonomy
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.
OBJECTIVE: Machine reasoning logic and semantic interoperability for contextual understanding, auto-alert cuing, and drilldown of anomalous events and activities in multidomain littoral zones. Domain independent ontologies for seamless unambiguous knowledge representation with spatiotemporal tags and tracks associated with events, entities, relations, and transactions.
DESCRIPTION: Context is considered as any information that can be used to characterize a situation that is relevant to the interaction between entities in their environment, for example, detecting the preparation signs of hostile amphibious warfare or a sea-lane blockade. Lack of context significantly hinders effective decision-making, command, and control. Providing context dramatically facilitates accurate interpretation. Contextual understanding allows an increased level of interoperability for human-machine and machine-machine interactions. Effective collaboration requires proper information formats that can be exchanged between devices without a loss of contextual meaning. Decision-makers and analysts supporting naval missions on the Ops-Floor develop actionable intelligence from an extensive array of decentralized multi-intelligence (multi-INT) and Open Source Intelligence (OSINT) data sources varying in size, modalities, velocities, and types (i.e., structured and unstructured data). The challenge is to develop a trusted Artificial Intelligence (AI) perception method that will significantly reduce the Ops-Floor course of action decision timeline to less than an hour (currently it takes about a day) to support Pacific Command Counter Intelligence Surveillance and Reconnaissance and Targeting (PACOM C-ISRT) or Joint Interagency Task Force (JIATF)-South counter-narcotics operations.
Distributed systems today often use the Web Ontology Language (OWL) as a mechanism to convey the meaning and context of information sources. OWL allows for the description of classes and logical relationships in an ontology for use by machines. OWL is used to explain references and descriptions in a data feed encoded using the Resource Description Framework (RDF). RDF is extensively used in Business-to-Business e-commerce exchanges; it provides a mechanism to fix the precise meaning of particular parts of an XML exchange with respect to shared, conventional definitions.
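For illustration only, the sketch below shows how an OWL ontology and RDF instance data can carry context together, using the open-source rdflib Python library; the maritime class and property names (ex:Vessel, ex:AmphibiousLanding, etc.) are hypothetical and are not prescribed by this topic.

```python
# Minimal sketch of conveying context with OWL/RDF via rdflib.
# All domain names below are illustrative assumptions, not topic requirements.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, XSD

EX = Namespace("http://example.org/maritime#")

g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

# OWL class and property definitions: the ontology layer that gives data meaning.
g.add((EX.Vessel, RDF.type, OWL.Class))
g.add((EX.AmphibiousLanding, RDF.type, OWL.Class))
g.add((EX.observedNear, RDF.type, OWL.ObjectProperty))

# RDF instance data: a data-feed entry whose terms reference the ontology.
g.add((EX.track42, RDF.type, EX.Vessel))
g.add((EX.event7, RDF.type, EX.AmphibiousLanding))
g.add((EX.track42, EX.observedNear, EX.event7))
g.add((EX.track42, EX.observedAt,
       Literal("2023-05-17T06:00:00Z", datatype=XSD.dateTime)))

# Serialize as Turtle; any consumer sharing the ontology recovers the context.
print(g.serialize(format="turtle"))
```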
Based on this success, several prototypes have sought to extend the methodology for use in distributed analytic applications in the defense community. So far, success has been limited to applications that use a relatively static ontology. Rapid change in an ontology makes it difficult for constituent systems to adhere to a set of representations of context and meaning that changes quickly. For example, machine-readable ontologies have worked well in pharmaceutical fields, where the underlying DNA strands are relatively stable over time, or in air traffic control, where the flight rules do not change. However, when applied to specific military activities such as monitoring an enemy's course of action, the ontologies require a precise method to update and synchronize across relevant distributed systems. Today, each system manages its own ontology and requires significant software development to transform information at system boundaries, risking a considerable loss of information during transfer that leads to incorrect analysis.
Note 1: Work produced in Phase II may become classified. The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by DoD 5220.22-M, National Industrial Security Program Operating Manual, unless acceptable mitigating procedures have been implemented and approved by the Defense Counterintelligence Security Agency (DCSA). The selected contractor must be able to acquire and maintain an appropriate security-level facility and Personnel Security Clearances to perform on advanced phases of this project as set forth by DCSA and ONR to gain access to classified information about the national defense of the United States and its allies. This will be an inherent requirement. The selected company will be required to safeguard classified material IAW DoD 5220.22-M during the advanced phases of this contract.
Note 2: Phase I will be UNCLASSIFIED and classified data is not required. For test and evaluation, an awardee needs to define the ground truth for the scenarios and develop a storyboard for each to guide the test and evaluation of this SBIR technology in a realistic context. Supporting datasets must have acceptable real-world data quality, content, and complexity for the case studies. For example, an image/video dataset of at least 4000 collected images and frames for a case study is considered content rich.
Note 3: Awardees must provide appropriate dataset release authorization for use in their case studies, tests, and demonstrations. They must certify that there are no legal or privacy issues, limitations, or restrictions with using the proposed data for this SBIR project.
PHASE I: Machine contextual understanding or "perception" will consist of four key functional components: 1) contextual multi-INT/OSINT data acquisition and content recognition (i.e., video, multispectral imagery, audio, text); 2) contextual learning and representation ("modeling"); 3) contextual reasoning and classification logic; and 4) contextual human-machine collaboration and query. Develop an ontological framework consisting of a "Scene Ontology" and a "System Ontology" for cross-domain contextual representation that enables rich context expressions and strong validation. Develop geospatial models to represent the physical space and location of entities and sensors, with spatiotemporal ontologies expressing contextual information. Develop knowledge graphs to reason over multimodal data sources for latent contextual feature representation of entities and relations; the ontological reasoning logic must overcome data impurities and scene ambiguities manifested through spoofing, deception, clutter, and noisy environments. Develop question-answering methods to probe, query, and share machine spatiotemporal contextual insights. Develop three compelling maritime cross-domain scenarios of naval concern. Develop each scenario with at least ten complementary events that evolve over time. Demonstrate the extendibility of the ontologies.
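As a hedged sketch of component 4 (contextual human-machine query), the example below poses a spatiotemporal question against a small rdflib knowledge graph via SPARQL; all entity names, properties, coordinates, and thresholds are illustrative assumptions.

```python
# Hypothetical sketch of question-answering over a spatiotemporally tagged
# knowledge graph. Names and values are illustrative assumptions only.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/maritime#")
g = Graph()
g.bind("ex", EX)

# A few spatiotemporally tagged observations (entity, position, time).
for track, lat, lon, t in [
    ("track42", 21.3, 157.9, "2023-05-17T06:00:00Z"),
    ("track43", 21.4, 158.0, "2023-05-17T06:05:00Z"),
]:
    s = EX[track]
    g.add((s, RDF.type, EX.Vessel))
    g.add((s, EX.latitude, Literal(lat, datatype=XSD.double)))
    g.add((s, EX.longitude, Literal(lon, datatype=XSD.double)))
    g.add((s, EX.observedAt, Literal(t, datatype=XSD.dateTime)))

# "Which vessels were observed in the area of interest at or after 06:00Z?"
results = g.query("""
    PREFIX ex: <http://example.org/maritime#>
    PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
    SELECT ?v ?t WHERE {
        ?v a ex:Vessel ;
           ex:latitude ?lat ;
           ex:observedAt ?t .
        FILTER (?lat > 21.0 && ?t >= "2023-05-17T06:00:00Z"^^xsd:dateTime)
    }
""")
for row in results:
    print(row.v, row.t)
```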
Phase I baseline performance metrics for evaluating machine perception algorithms against the multimodal datasets (video, multispectral imagery, audio, text) are listed below (a minimal computational sketch of these metrics follows the list):
• Machine Performance Accuracy: Structured Data Translation and Distillation - Accuracy 90% over 95% captured content; Unstructured Data Translation and Distillation - Accuracy 85% over 90% captured content.
• Precision: Proportion of retrieved machine perception material that is relevant; Precision = TP/(TP+FP), where TP = True Positives and FP = False Positives. Maximizing Precision minimizes FP.
• Recall: Proportion of relevant perception material that is retrieved; Recall = TP/(TP+FN), where FN = False Negatives. Maximizing Recall minimizes FN.
• Fβ Measure = [(1 + β²) × Precision × Recall] / [β² × Precision + Recall]; varying β shifts the relative importance of Precision vs. Recall, e.g., F0.5 weights Precision more heavily, F1 balances Precision and Recall, and F2 weights Recall more heavily.
• Novelty: Precision and Recall as defined above, but calculated only over novel information retrieved.
• Accurate Perception Retrieval Rate = (TP+TN)/(TP+TN+FP+FN), where TN = True Negatives.
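The following minimal Python sketch computes these metrics exactly as defined above; the function names and the worked-example counts are illustrative choices, not part of the topic.

```python
# Sketch of the Phase I evaluation metrics as defined in the list above.
# Zero-division guards are omitted for brevity; counts here are illustrative.
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def f_beta(p: float, r: float, beta: float = 1.0) -> float:
    # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)
    return (1 + beta**2) * p * r / (beta**2 * p + r)

def accurate_retrieval_rate(tp: int, tn: int, fp: int, fn: int) -> float:
    return (tp + tn) / (tp + tn + fp + fn)

# Example: 90 true positives, 5 false positives, 10 false negatives, 895 TN.
p, r = precision(90, 5), recall(90, 10)
print(f"P={p:.3f} R={r:.3f} F1={f_beta(p, r):.3f} "
      f"F2={f_beta(p, r, 2.0):.3f} "
      f"Acc={accurate_retrieval_rate(90, 895, 5, 10):.3f}")
```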
Deliverables (in addition to standard Phase I contract deliverables): end-to-end initial prototype technology, T&E, demonstration, a plan for Phase II, and a final report.
PHASE II: Develop prototype software and a supporting hardware system incorporating the candidate technologies from Phase I. Incorporate the three scenarios developed in Phase I, with representative operational data sources, into the prototype design. Demonstrate synchronization of at least ten disparate data-feed streams in real time, with relationship information relevant to the mission scenario models. Apply datasets provided by the end users (i.e., Pacific Fleet [PACFLT] or JIATF-South) for Phase II development; doing so demonstrates a well-established end-user relationship for a potential transition. By the end of Phase II, validate and verify the overall technology performance against the end-user-defined tests, evaluations, and demonstration benchmarks. Test and demonstrate the prototype software against the benchmark datasets. Validate and verify the overall accuracy of the software tools based on the performance metrics detailed for Phase I, in addition to the following performance enhancement metrics. Phase II Machine Performance Accuracy: Structured Data Translation and Distillation - Accuracy 95% over 95% captured content; Unstructured Data Translation and Distillation - Accuracy 90% over 95% captured content.
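As one illustrative, non-prescriptive reading of the real-time synchronization requirement, the sketch below interleaves several simulated timestamped feeds into a single time-ordered stream using Python's standard heapq.merge; the feed names, rates, and payloads are assumptions.

```python
# Hypothetical sketch of synchronizing disparate timestamped data feeds into
# one ordered stream. Feed contents and field names are illustrative only.
import heapq
from datetime import datetime, timedelta

def feed(name: str, start: datetime, step_s: float, n: int):
    """Simulate one timestamped data feed as (timestamp, source, payload)."""
    for i in range(n):
        yield (start + timedelta(seconds=i * step_s), name, f"{name}-msg{i}")

t0 = datetime(2023, 5, 17, 6, 0, 0)
feeds = [feed(f"feed{k}", t0, 1.0 + 0.3 * k, 5) for k in range(10)]

# heapq.merge lazily interleaves the already time-sorted feeds by timestamp,
# yielding one ordered stream for downstream contextual reasoning.
for ts, source, payload in heapq.merge(*feeds, key=lambda e: e[0]):
    print(ts.isoformat(), source, payload)
```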
Demonstrate that Ops-Floor end-to-end processing and execution timelines are in step with operational requirements. Develop a plan for Phase III with a transition path to a program of record. Deliverables: prototype software, systems interface requirements for mobile and stationary devices, design documentation, source code, user manual, and a final report.
Note 4: It is highly likely that the work, prototyping, test, simulation, and validation will become classified in Phase II (see Note 1 in the Description for details). However, the proposal for Phase II will be UNCLASSIFIED.
Note 5: If the selected Phase II awardee(s) does not have the required facility certification for classified work, ONR or the related DON Program Office will work with the awardee(s) to facilitate certification of a related facility.
PHASE III DUAL USE APPLICATIONS: Advance these capabilities to TRL-7 and integrate the technology into the Maritime Tactical Command and Control POR or Intelligence, Surveillance, and Reconnaissance (ISR) processing platforms at Marine Corps Information Operations Center. Once validated conceptually and technically, demonstrate dual use applications of this technology in the financial/banking sectors and relevant data centers.
This technology has broad applications in government and private sectors to monitor and discover unlawful transactions, commerce, and national security threats. In government, it has numerous applications in military, intelligence communities, law enforcement, homeland security, and state and local governments to counter a variety of threats or natural crises. In the commercial sector, the technology has applications in the healthcare industry, financial sectors, and security services.
KEYWORDS: Machine-Contextual-Learning; Machine-Recognition; Contextual-Reasoning; Contextual-Understanding; Machine-Perception; Classification-Logic
** TOPIC NOTICE **
The Navy Topic above is an "unofficial" copy from the Navy Topics in the DoD 23.2 SBIR BAA. Please see the official DoD Topic website at www.defensesbirsttr.mil/SBIR-STTR/Opportunities/#announcements for any updates. The DoD issued its Navy 23.2 SBIR Topics pre-release on April 19, 2023, which opens to receive proposals on May 17, 2023, and closes June 14, 2023 (12:00 p.m. ET).

Direct Contact with Topic Authors: During the pre-release period (April 19, 2023 through May 16, 2023), proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic. Once DoD begins accepting proposals on May 17, 2023, no further direct contact between proposers and topic authors is allowed unless the Topic Author is responding to a question submitted during the pre-release period.

SITIS Q&A System: After the pre-release period, until May 31 (at 12:00 p.m. ET), proposers may submit written questions through SITIS (SBIR/STTR Interactive Topic Information System) at www.dodsbirsttr.mil/topics-app/ by logging in and following instructions. In SITIS, the questioner and respondent remain anonymous, but all questions and answers are posted for general viewing.

Topics Search Engine: Visit the DoD Topic Search Tool at www.dodsbirsttr.mil/topics-app/ to find topics by keyword across all DoD Components participating in this BAA.
5/25/23
Q. 1. Do you have an existing Program of Record or any future potential POR?
2. What existing data querying and storage technologies are used? Are these all proprietary to the Navy, or are any public solutions in use, e.g., SPARQL?
3. Are any additional modalities of interest beyond audio, video, multispectral images, and text?
4. Three compelling maritime cross-domain naval concerns - what is of most interest? What kind of events? What terrain?
5. Are recent Large Language Models in scope, or is there no limit on the proposed AI technology?
6. Can you provide any additional information on multi-INT Open Source / Public Domain data modalities of interest?
7. Separate ontologies lead to information loss at the interface; however, they're easier to implement and use in the existing system. Are separate ontologies (e.g., for each data stream) desired, or would two ontologies (Scene/System) suffice for all querying needs?
8. What TRL is desired for Phase I deliverables? Specifically: (1-4) the four key machine perception functionalities, (5) scene and system ontological frameworks, (6) geospatial models, (7) ontological reasoning logic, (8) Q&A methods, (9) compelling cross-domain scenarios, (10) demonstration of ontology extendibility.
A. 1. Yes, several.
2. For the proof-of-concept demonstration, commercial/public data query and storage solutions are acceptable.
3. Active acoustics.
4. Any act of denial of or interference with the freedom of navigation and commerce in international waters. In scripting the challenge problems/scenarios for Phase I, contractors are expected to utilize commercially available datasets relevant to maritime cross-domain activities (air, sea, space, land, cyber) that showcase adversarial trending events.
5. No restrictions on language models.
6. Intercepted cellular communications, public records, commercial transactions, social-media activities, etc. are considered OSINT (Open Source Intelligence).
7. The goal is to enable trusted AI-enabled cross-domain sense-making and understanding with minimal contextual loss of awareness (potentially attributable to erratic ontologies) about hostile intents and/or deceptive activities.
8. TRL-3 for Phase I and TRL-4 for Phase II.