DARPA posts Geometries of Learning (GoL) funding opportunity

On January 5, the Defense Advanced Research Projects Agency (DARPA) issued an Artificial Intelligence Exploration (AIE) Opportunity inviting submissions of innovative basic or applied research concepts in the technical domain of high-dimensional geometric analysis of deep neural networks (DNNs) applied to image data. Responses are due by 4:00 p.m. Eastern on February 3.

The Geometries of Learning (GoL) AIE Opportunity aims to advance the theory of Deep Learning (DL) by better understanding the geometry of natural images in image space, the geometry of functions that map from image space to feature and label space, and the geometry of how these mapping functions evolve during training.

This AIE Opportunity is issued under the Program Announcement for AIE, DARPA-PA-21-04. All awards will be made in the form of an Other Transaction (OT) for a prototype project. The total award value for the combined Phase 1 base (Feasibility Study) and Phase 2 option (Proof of Concept) is limited to $1,000,000. This total award value includes Government funding and performer cost-share, if required or if proposed.

The Department of Defense is committed to the rapid adoption of machine learning technology. Unfortunately, the practice of DL has outpaced the theory, creating barriers to adoption. The underlying hypothesis is that the set of images of an object forms a manifold in image space, which in turn can be used to restrict the mapping from image space to label space and improve the training process for DNNs. A better understanding of the geometries of learning is expected to yield practical insights into many difficult problems. Examples include:

  • Adversarial AI: Adversarial examples are images or scenes that have been modified by an adversary to fool deep networks. One hypothesis about why deep networks are vulnerable to adversarial attacks is that they are forced to map all of image space to label space, even though most of image space contains images of random noise. If we can understand the manifolds natural images form in image space, we can potentially restrict the domain of the function, making neural nets less susceptible to adversarial attacks (see the sketch after this list).
  • Explainable AI: It is important for human operators to understand the basis for the output of a DNN, beyond the label the network assigns to an image (see the DARPA Explainable Artificial Intelligence (XAI) program at https://www.darpa.mil/program/explainable-artificial-intelligence). If we understand the manifold created by the images of an object, then the position of an image on the manifold conveys additional information, such as the pose of the object or how the scene is illuminated. Instead of producing just a label (e.g., “cat”), deep networks might provide descriptions (e.g., “cat, lying down, viewed from the side”).
  • Trainable AI: A barrier to machine learning in many applications is the lack of sufficiently large training sets. Once again, if we understand the geometry of image manifolds, we can determine whether we have enough samples to model an object and, if not, which samples we need to add.
  • Trustworthy AI: Operators need to know when to trust machine learning systems and when not to. One reason to distrust a deep network is instability in the learning process: if inserting or removing a few natural samples from the training set significantly changes the decisions of the net, then those decisions may not be reliable.
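To make the adversarial-example bullet above concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one standard way such attacks are constructed. The GoL announcement does not prescribe this or any other technique, and the pretrained classifier model below is a hypothetical stand-in:

    # Minimal FGSM sketch: nudge an image along the sign of the loss
    # gradient so a classifier mislabels it, while keeping the change
    # visually imperceptible. `model` is a hypothetical pretrained image
    # classifier; nothing here is specified by the GoL announcement.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        """image: (1, C, H, W) float tensor in [0, 1]; label: (1,) long tensor."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that most increases the loss, then clamp
        # back to valid pixel values.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

In manifold terms, the perturbed image sits just off the manifold of natural images, in a region the network was never trained to handle sensibly; restricting the mapping’s domain to the manifold is one way such attacks could be blunted.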

GoL will exploit the classic observation that naturally occurring images lie on manifolds occupying only a tiny portion of image space, creating the opportunity for modern analysis approaches to be restricted to these manifolds. If successful, this will improve our theoretical understanding of issues related to adversarial AI, explainable AI, trainable AI, and trustworthy AI.
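That classic observation can be illustrated numerically. The sketch below uses PCA, a linear stand-in for a curved manifold, to estimate how many dimensions a set of images actually occupies; the 95% variance threshold and the synthetic smooth images are illustrative assumptions, not part of the announcement:

    # Sketch: count the PCA components needed to capture most of the
    # variance in a set of images. Image collections with shared structure
    # typically need far fewer components than the ambient pixel dimension;
    # smooth synthetic images stand in for real data here.
    import numpy as np
    from sklearn.decomposition import PCA

    def effective_dimension(images, variance=0.95):
        """images: (n_samples, height, width) array in [0, 1]."""
        X = images.reshape(len(images), -1)    # flatten to pixel vectors
        cumulative = np.cumsum(PCA().fit(X).explained_variance_ratio_)
        return int(np.searchsorted(cumulative, variance) + 1)

    rng = np.random.default_rng(0)
    low_res = rng.random((200, 8, 8))               # 64 degrees of freedom
    images = np.kron(low_res, np.ones((8, 8)))      # upsample to 64x64 pixels
    print(effective_dimension(images), "of", 64 * 64)  # roughly 60 of 4096

Pixel space here has 4,096 dimensions, yet a few dozen components suffice; that gap between ambient dimension and intrinsic dimension is what GoL proposes to exploit.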

Review DARPA’s GoL funding opportunity.

Source: SAM
