DARPA releases IP2 AIE

On May 5, the Defense Advanced Research Projects Agency (DARPA) issued an Artificial Intelligence Exploration (AIE) opportunity for In-Pixel Intelligent Processing (IP2). Proposals in response to this notice are due no later than 4:00 p.m. Eastern on June 3.

DARPA is issuing an AIE Opportunity inviting submissions of innovative basic or applied research concepts in the technical domain of low-power data extraction at the edge. This AIE Opportunity is issued under the Program Announcement for AIE, DARPA-PA-20-02. All awards will be made in the form of an Other Transaction (OT) for a prototype project. The total award value for the combined Phase 1 base and Phase 2 option is limited to $1,000,000. This total includes Government funding and performer cost share, if required or proposed.

Background

In the mid-1980s, models for visual attention (object tracking) began to adopt biological inspiration to improve accuracy and functionality. Significant progress has been made over the following decades in producing and refining these models. Since approximately 2015, the trend has shifted toward implementing these models in deep neural networks (NNs), such as ResNet and VGGNet, because they greatly outperform conventional heuristic methods in both accuracy and generalizability.

To achieve high accuracy in such NNs today, AI processing for video is largely performed in the data center by spatial algorithms that lack temporal representation between frames. AI processing of video presents a uniquely difficult problem because the high resolution, high dynamic range, and high frame rates desired generate significantly more real-time data than other edge sensing modalities. The number of parameters and the memory requirements of state-of-the-art (SOA) AI algorithms are typically proportional to the input dimensionality and scale exponentially with accuracy requirements.
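As a rough illustration of the data volumes at stake, consider the back-of-envelope calculation below. The sensor resolution, bit depth, and frame rate are hypothetical assumptions chosen to reflect the "high resolution, high dynamic range, high frame rate" regime described above; they are not IP2 requirements.

# Back-of-envelope raw data rate for a hypothetical vision sensor.
# All sensor parameters are illustrative assumptions, not IP2 specifications.
width, height = 4096, 3072      # pixels (large-format sensor)
bit_depth = 12                  # bits per pixel (high dynamic range)
frame_rate = 1000               # frames per second (high frame rate)

video_bps = width * height * bit_depth * frame_rate
print(f"Raw video stream: {video_bps / 1e9:.0f} Gbit/s")  # ~151 Gbit/s

# A typical audio edge modality for comparison (also illustrative):
audio_bps = 48_000 * 16         # 48 kHz, 16-bit mono microphone
print(f"Raw audio stream: {audio_bps / 1e6:.3f} Mbit/s")  # 0.768 Mbit/s

Even with conservative assumptions, the video stream is roughly five orders of magnitude larger than the audio stream, which is why dimensionality must be reduced before data leaves the sensor.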

To accommodate this large data stream, current SOA deep neural networks (DNNs) require hundreds of millions of parameters and tens of billions of operations to produce a single accurate AI inference. This approach is incompatible with embedded implementation at the sensor edge due to power and latency constraints; as a result, embedded solutions for vision sensing at the mobile edge abandon SOA accuracy in favor of only marginally accurate solutions that can operate within the size and power envelope. To move beyond this paradigm, the IP2 exploration must solve fundamental technical challenges.
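A simple sketch shows why such a network strains an edge power budget. The operation count follows the "tens of billions" figure above; the energy per operation and frame rate are assumptions for illustration only.

# Illustrative power estimate for running a SOA DNN at sensor frame rate.
# Energy per operation and frame rate are assumptions, not measurements.
ops_per_inference = 20e9    # "tens of billions of operations" per inference
joules_per_op = 1e-12       # ~1 pJ/op, an optimistic digital-logic estimate
inferences_per_sec = 1000   # matching a 1,000 fps sensor

power_watts = ops_per_inference * joules_per_op * inferences_per_sec
print(f"Compute power: {power_watts:.0f} W")  # 20 W, orders of magnitude
                                              # beyond a mW-class pixel array

Even under an optimistic energy-per-operation assumption, the compute power alone lands far outside what a sensor-edge platform can supply.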

Scope

The objective of the In-Pixel Intelligent Processing (IP2) exploration is to reclaim the accuracy and functionality of deep NNs in power-constrained sensing platforms. The IP2 exploration will develop innovative AI algorithms matched to in-pixel mesh processing layers to bring the front end of the NN into the sensor pixels and inject intelligence into the data stream at the sensor edge.

IP2 will enable efficient, embedded 3rd-wave AI for saliency and spatio-temporal prediction to perform object detection and tracking in large-format, high-frame-rate vision sensors. New front-end AI algorithms designed into the sensor pixels will learn saliency from the statistics of the data to create a reduced-dimensionality data stream, enabling 10x more efficient compact AI processing (such as RNNs) in the back end. In addition, a low-latency feedback loop will send task-oriented information from the back end to the front end to ensure high accuracy in the detection of salient information.
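One way to picture the intended dataflow is the toy loop below. This is a minimal sketch only: the patch size, the 64x reduction factor, the recurrent update, and the feedback signal are all hypothetical, since the announcement does not specify an architecture.

import numpy as np

def in_pixel_front_end(frame, attention):
    """Hypothetical in-pixel mesh layer: pools each 8x8 pixel neighborhood
    and gates it by back-end attention, yielding a 64x smaller stream."""
    h, w = frame.shape
    pooled = frame.reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))
    return np.maximum(pooled * attention, 0.0)  # crude saliency gating

def back_end_step(features, state, decay=0.9):
    """Stand-in for a compact recurrent back end: keeps temporal memory
    across frames and returns task-oriented feedback for the front end."""
    state = decay * state + (1 - decay) * features
    feedback = state / (state.max() + 1e-9)     # normalized attention map
    return state, feedback

rng = np.random.default_rng(0)
attention = np.ones((64, 64))                   # start with uniform attention
state = np.zeros((64, 64))
for _ in range(5):
    frame = rng.random((512, 512))              # hypothetical sensor frame
    features = in_pixel_front_end(frame, attention)
    state, attention = back_end_step(features, state)  # low-latency feedback

The point of the sketch is the topology, not the particular operations: dimensionality reduction happens before data leaves the pixel array, and the back end closes a feedback loop into the front end to steer what is treated as salient.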

The IP2 program seeks to demonstrate revolutionary advances in embedded 3rd-wave AI functionality at the edge by developing a new mesh NN hardware scheme that brings the front end of the NN directly into the sensor pixels. This NN will enable significant advances in both data stream complexity reduction and back-end 3rd-wave AI processing efficiency and functionality, without loss of accuracy.

Full information is available in the announcement posted on SAM.gov.

Source: SAM