IARPA announces upcoming CORE3D program proposers’ day

On February 29, the Intelligence Advanced Research Projects Activity (IARPA) announced an upcoming proposers’ day conference in support of the CORE3D program. Attendees must register no later than 5:00 PM Eastern on March 23, 2016; space is limited to the first 150 registrants.

IARPA will host a Proposers’ Day Conference for the CORE3D Program on March 30, 2016, in anticipation of the release of a new solicitation in support of the program. The conference will be held from 9:00 AM to 5:00 PM in the Washington, DC metropolitan area. Its purpose is to provide information on the CORE3D Program and the research problems the program aims to address, to answer questions from potential proposers, and to provide a forum for potential proposers to present their capabilities for teaming opportunities.

Timely access to geospatially accurate 3D object data is a persistent U.S. Government need, both for global situational awareness and for military, intelligence, and humanitarian mission planning. The aims of the CORE3D Program are (a) physical modeling: fully automated methods for timely 3D model creation that leverage spectral, textural, and dimensional information from satellite data to yield models that are dimensionally true and accurately render the textures, materials, and types of objects within the scene; and (b) functional modeling: fully automated methods for object recognition and scene understanding.
The manmade objects targeted for physical modeling in the CORE3D Program are invariant and relatively large; they may include buildings, roads, walls, bridges, towers, dams, and other static structures. A simplified 3D representation such as Constructive Solid Geometry (CSG), in which 3D shapes are built from Boolean operations on simple shape primitives such as cubes, cylinders, and spheres, will be used to fit and store the geometry of 3D models. A core library of 3D shape primitives will be provided to all performers. Custom representations will be allowed within the constraints of each phase of the program if the pre-defined primitives cannot represent object geometry with the fidelity required by that phase’s metrics.
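The announcement does not prescribe any implementation, but the CSG idea itself is simple enough to sketch. The toy Python below models shapes as point-membership predicates and composes them with Boolean union and difference; all names and dimensions are invented for illustration, and a real performer system would instead fit parameterized primitives to sensed data.

```python
# Minimal sketch of Constructive Solid Geometry: complex shapes are
# composed from simple primitives via Boolean set operations. Shapes are
# modeled as point-membership predicates; everything here is illustrative.
from typing import Callable

Shape = Callable[[float, float, float], bool]  # True if (x, y, z) is inside

def box(w: float, d: float, h: float) -> Shape:
    """Axis-aligned box centered at the origin."""
    return lambda x, y, z: abs(x) <= w / 2 and abs(y) <= d / 2 and abs(z) <= h / 2

def cylinder(r: float, h: float) -> Shape:
    """Vertical cylinder centered at the origin."""
    return lambda x, y, z: x * x + y * y <= r * r and abs(z) <= h / 2

def union(a: Shape, b: Shape) -> Shape:
    return lambda x, y, z: a(x, y, z) or b(x, y, z)

def difference(a: Shape, b: Shape) -> Shape:
    return lambda x, y, z: a(x, y, z) and not b(x, y, z)

# A hypothetical building: a main block plus a cylindrical tower,
# minus a carved-out interior courtyard.
building = difference(
    union(box(20, 10, 6), cylinder(3, 12)),
    box(6, 4, 6),
)

print(building(0, 4, 1))  # True: inside the main block, outside the courtyard
print(building(0, 0, 5))  # True: inside the tower, above the main block
print(building(0, 0, 1))  # False: inside the carved-out courtyard
```

A fitted CSG model of this kind stores only a handful of primitive parameters and operations, which is what makes it a compact way to fit and store the geometry of large static structures.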
The functional modeling component of the CORE3D Program focuses on wide-area manmade object recognition and scene understanding. Proposed methods shall demonstrate that they can automatically recognize, tag, and update pre-defined object categories from satellite imagery. As with the targets of the physical models, the object categories for functional modeling shall consist of static structures such as communication towers, airfields, power plants, water towers, lighthouses, schools, and hospitals. Performer teams shall develop and exploit novel learning frameworks that are customized and optimized for satellite imagery and multi-modal data fusion. Hybrid approaches that do not rely on a single computer vision or learning modality are encouraged.
To meet the overall goal of the CORE3D Program and its individual objectives, proposed methods shall address technical challenges including, but not limited to: (1) multi-modal fusion spanning satellite panchromatic imagery, multispectral imagery, point clouds, and maps; (2) multi-level fusion spanning data-level, feature-level, and decision-level fusion; (3) object-level segmentation and classification; (4) point cloud generation from multi-view satellite images; (5) representation of complex scene geometry; (6) accurate 3D model fitting and statistical inference; (7) novel deep learning (DL) frameworks optimized for satellite imagery scene recognition; and (8) hybrid image understanding modules that combine DL, traditional, and novel image understanding algorithms.
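To make challenge (2) concrete, decision-level fusion combines the independent outputs of per-modality classifiers rather than their raw data or features. The sketch below is a toy weighted-vote example; every class name, score, and weight is invented for illustration and does not come from the solicitation.

```python
# Toy decision-level fusion: each modality's classifier emits per-class
# probabilities, which are combined with per-modality reliability weights.
# All numbers are hypothetical.
import numpy as np

CLASSES = ["building", "road", "water_tower"]

# Hypothetical per-class probabilities from three independent classifiers,
# one per modality.
decisions = {
    "panchromatic":  np.array([0.6, 0.3, 0.1]),
    "multispectral": np.array([0.5, 0.1, 0.4]),
    "point_cloud":   np.array([0.2, 0.1, 0.7]),  # height cue favors the tower
}

# Assumed per-modality reliability; a real system would learn or validate
# these weights empirically.
weights = {"panchromatic": 0.3, "multispectral": 0.3, "point_cloud": 0.4}

fused = sum(weights[m] * p for m, p in decisions.items())
fused /= fused.sum()  # renormalize to a probability distribution

print(dict(zip(CLASSES, fused.round(3))))          # {'building': 0.41, 'road': 0.16, 'water_tower': 0.43}
print("fused label:", CLASSES[int(np.argmax(fused))])  # water_tower
```

Note that the fused decision ("water_tower") differs from what the panchromatic classifier alone would report ("building"): the point cloud's height evidence outweighs it, which is the motivation for multi-level, multi-modal fusion in the first place.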
Collaborative efforts and teaming among potential performers are strongly encouraged. It is anticipated that teams will be multidisciplinary, with expertise in disciplines such as computer vision, remote sensing, machine learning, geology, statistical inference, and photogrammetry.

Full information is available here.

Source: FedBizOpps