CERDEC posts RFI for target detection and recognition tech

On July 18, the U.S. Army posted a request for information for Aided Target Detection and Recognition Technologies for Manned and Unmanned Ground Vehicles in Complex Environments (Solicitation Number: W909MY-18-R-C014). Responses are due by 3:00 p.m. Eastern on August 17.

The primary objective of this RFI is to canvass a wide community of traditional and non-traditional providers of technology solutions and services to help identify the current state of the art in aided and automatic target detection, recognition, and tracking for Electro-Optical/Infrared (EOIR) sensors. This includes, but is not limited to, machine learning and deep learning approaches for real-time exploitation and enhanced situational awareness for ground-based manned and unmanned platforms operating in complex rural and urban environments. The primary emphasis of the RFI is to identify Aided Target Detection (AiTD) and Aided Target Recognition (AiTR) algorithms, image and video processing, machine vision, and sensor exploitation technologies for manned and unmanned ground vehicles that enable increasing levels of artificial intelligence. However, new and emerging computing hardware technologies aimed at SWAP-C-constrained, real-time implementation of state-of-the-art sensor exploitation algorithms are also of interest.

Request for Information (RFI)

Aided Target Detection and Recognition Technologies for Manned and Unmanned Ground Vehicles in Complex Environments

RFI Objectives:

o Help identify mature, reliable and robust Aided Target Detection (AiTD) and Aided Target Recognition (AiTR) algorithms, processing modules, and computing technologies that can operate in real time on EOIR sensors mounted on ground combat systems to improve situational awareness and reduce the workload of the Soldier.

o Help identify the current state of the art for AiTD and AiTR approaches and behaviors in increasingly difficult environments, to include complex urban and rural terrain.

o Help identify potential capabilities for near term integration into current ground combat platforms.

o Help identify candidate capabilities for maturation and possible later integration into current fleet and new development vehicles.

o Help identify well qualified providers of reliable and robust capabilities, products and services in the areas of algorithm design, development, implementation and integration for manned and unmanned ground combat systems.

BACKGROUND DESCRIPTION:

The U.S. Army Contracting Command - Aberdeen Proving Ground, Belvoir Division, on behalf of the CERDEC Night Vision and Electronic Sensors Directorate (NVESD), is requesting information on interested and capable sources for potential award(s) of a contract or contracts related to affordable and mature processing and exploitation algorithms, and artificial intelligence and machine learning technologies, suitable for application in manned and unmanned ground combat systems to improve Soldier capabilities and effectiveness. The desired product must be capable of full integration with EOIR sensors on manned and unmanned military ground vehicles (High Mobility Multipurpose Wheeled Vehicle (HMMWV), Bradley, Abrams, Stryker, MRAP, NGCV, etc.) while observing their specific space constraints and operational environments.

Advancements in EOIR sensor technology have enabled the integration of cameras with higher resolutions, improved sensitivity, and multi-spectral imaging onto ground vehicle platforms. These sensors play a critical role in movement, situational awareness, and target acquisition in combat environments day and night, and have become an integral part of the warfighter’s capabilities. Current imaging sensors, however, largely rely on the Soldier’s continued attention to the image/video display. The abundance of sensors and the complexity of tasks in complex environments have made this a daunting burden for the Soldier. Recent advances in image exploitation, artificial intelligence, and machine learning, coupled with a surge in demand for increased autonomy of ground platforms, have reinvigorated interest in automatic target detection, recognition, identification, and tracking technologies. AiTD and AiTR will be used on manned platforms to help reduce Soldier workload, improve situational awareness, and reduce response times. On unmanned ground platforms, AiTD and AiTR become a fundamental enabling technology for autonomous operation and mission execution.
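For illustration only, the sketch below shows what a minimal aided-target-detection cue loop can look like when a generic pretrained deep-learning detector is run over a video feed. The model choice (torchvision's Faster R-CNN), the confidence threshold, and the input file name are assumptions made for this example; they are not drawn from the RFI and do not represent any fielded AiTD/AiTR system.

```python
# Illustrative sketch only: a toy aided-target-detection cue loop using a
# generic pretrained detector. Model, threshold, and file name are assumed
# for demonstration and are not part of the RFI.
import cv2
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

CONFIDENCE_THRESHOLD = 0.6  # assumed cue threshold

def detect_frame(frame_bgr):
    """Run the detector on one video frame and return high-confidence boxes."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        output = model([tensor])[0]
    keep = output["scores"] >= CONFIDENCE_THRESHOLD
    return output["boxes"][keep].tolist(), output["labels"][keep].tolist()

cap = cv2.VideoCapture("eoir_feed.mp4")  # hypothetical sensor feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _labels = detect_frame(frame)
    for (x1, y1, x2, y2) in boxes:
        # Overlay detection cues for the operator
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
    cv2.imshow("AiTD cues (illustrative)", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```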

Traditional AiTD and AiTR algorithm development has focused on Moving Target Indication (MTI) and Static Target Indication (STI) of military targets in relatively unpopulated, low-clutter rural environments. While this is still an important function, manned and unmanned ground vehicle platforms will in the future be operated in increasingly complex environments, to include high-clutter rural environments with targets such as vehicles or ATGM teams in defilade, as well as urban areas where enemies may blend in with natural patterns of life. Future AiTD and AiTR approaches will be required to perform basic detection and recognition functions in these environments, but will also need to provide more advanced automated behaviors to discriminate and prioritize potential threats.
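As a rough illustration of the traditional MTI idea referenced above, the sketch below cues moving objects using simple background subtraction over a video stream. The subtractor parameters, minimum blob size, and input file name are assumptions for the example; operational MTI pipelines on EOIR video are considerably more sophisticated.

```python
# Illustrative sketch only: a toy Moving Target Indication (MTI) pass using
# background subtraction in OpenCV. Parameter values and the input file name
# are assumptions, not taken from the RFI.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
MIN_AREA = 200  # assumed minimum blob size (pixels) to cue as a mover

cap = cv2.VideoCapture("eoir_feed.mp4")  # hypothetical sensor feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                        # foreground/motion mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel) # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    movers = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= MIN_AREA]
    for (x, y, w, h) in movers:
        # Overlay a cue box around each detected mover
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("MTI cues (illustrative)", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```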

Significant technological challenges exist in successfully utilizing traditional computer vision and emerging machine learning technologies in operational combat environments. Some of these challenges include the robustness and reliability of algorithms, the SWAP-C of computational hardware, and limited data communication and computational bandwidth. However, arguably a bigger challenge to the successful delivery of AiTR/AiTD capabilities to the warfighter has been the mismatch between the operational requirements and expectations of the Soldier and the capabilities and readiness of the AiTR/AiTD technology. As such, this RFI is aimed at better understanding the state of the art and identifying capable technology providers in the broad area of image and video processing and exploitation in complex settings.

Full information is available here.

Source: FedBizOpps