AFRL updates AMUSED BAA
On June 3, the Air Force Research Laboratory posted an updated version of the Adaptive Multi-Source Exploitation of Documents (AMUSED) broad agency announcement (BAA). For best funding consideration in FY22, AFRL recommends that white papers be received by March 26, 2021.
The Air Force Research Laboratory, Information Directorate, is seeking innovative analytics, analytical tools, algorithm developments, projects, and experiments focused on achieving Adaptive Multi-Source Exploitation of Documents (AMUSED). The three research areas within this AMUSED BAA are (1) Global Threat Discovery and Identification (GTD-ID), (2) Emerging Threat Analytics (ETA), and (3) Text Analytics for Cyber Domain (TA4CD), each containing key technical Focus Areas for development. Submissions/white papers should clearly identify the AMUSED research area, or specify the individual Focus Area, being addressed.
This BAA is a follow-on to BAA AFRL-RIK-2015-0019 Multi-Source Information Extraction and Network Analysis (MUSIENA).
The Information Directorate, Information Fusion Branch, is soliciting white papers under this announcement for unique and innovative technologies to explore and develop Adaptive Multi-Source Exploitation of Documents (AMUSED) capabilities, including but not limited to analytics, analytical tools, algorithm developments, projects, and experiments that will give the Air Force the means to better conduct analytical operations in support of its Intelligence, Surveillance, and Reconnaissance mission, including Cyber. The announcement comprises three research areas: (1) Global Threat Discovery and Identification (GTD-ID); (2) Emerging Threat Analytics (ETA); and (3) Text Analytics for Cyber Domain (TA4CD), each containing research areas that, taken together, comprise the focus of AMUSED research and development.
Past research in text analysis has led to the automated capabilities now in use to extract relevant information from large volumes of textual data. This technology has reduced textual data overload, increased the accuracy of analysis, and decreased the cycle time and manpower needed to assess threats and vulnerabilities. The situation has not remained static, however, in either the anticipated number of data sources or the projected analytical needs. Further development is required not just to keep pace but to move beyond current performance levels, to overcome limitations in moving to new data types and domains, and to achieve new, more sophisticated capabilities.
Fundamentally, the analysis of textual content must produce higher levels of comprehension and understanding than presently exist. As textual information has increased in both quantity and complexity, the demands for greater analytical capabilities have also grown dramatically. While basic documents still comprise a large portion of textual information, valuable content can now be extracted from a range of other sources, including a variety of social media material (chat, email, blogs, etc.), many open-source materials, and the metadata descriptors that relate back to additional media forms (video, imagery, speech, etc.). The value of textual analysis going forward will be gauged by the ability to work effectively in and across these and other components of a complex data environment while also advancing the exploitation of traditional sources.
Current network discovery and analysis science has focused on static relationship- or event-based networks of interest, primarily within one or two particular data sources. These capabilities enable an analyst to effectively analyze network data within a single data source, but the analyst is then left to make mental correlations between observations and conclusions drawn from one data source and those drawn from others. Furthermore, current input methods do not account for semantic equivalences during data ingestion, making the analyst’s job even more difficult.
One of the greatest technical challenges facing all decision support systems is the heterogeneity of the data collected by millions of sensors and the different stovepipe architectures used to store that data. To perform useful analytics, a composite picture of the key entities, events, and locations needs to be pieced together from the original disparate data sources. Ingesting and integrating information from disparate data sources remains a difficult and unresolved problem.
In the Cyber Domain, multiple analyst groups with diverse Mission Areas need rapid, effective means to identify Essential Elements of Interest (EEIs) in support of both Cyber Operations and Defensive Cyber Analysis. EEIs are pieces of information that answer questions deemed critical to mission accomplishment (see the formal definition in Joint Publication 2-0, Joint Intelligence, dated 22 Oct 2013).
Full information is available here.