DARPA to release Enabling Confidence AIE opp
On May 5 the Defense Advanced Research Projects Agency (DARPA) issued a Notice of Future Artificial Intelligence Exploration Opportunity: Enabling Confidence.
The purpose of this Special Notice (SN) is to provide public notification of additional research areas of interest to DARPA, specifically under the Artificial Intelligence Exploration (AIE) program. The AIE program is one key element of DARPA’s broader AI investment strategy, which aims to ensure the U.S. maintains a technological advantage in this critical area. Past DARPA AI investments facilitated the advancement of “first wave” (rule-based) and “second wave” (statistical-learning-based) AI technologies. DARPA-funded R&D enabled some of the first successes in AI, such as expert systems and search, and more recently has advanced machine learning algorithms and hardware. DARPA is now interested in researching and developing “third wave” AI theory and applications that address the limitations of first- and second-wave technologies.
At this time, the DARPA Microsystems Technology Office (MTO) is interested in the following research area to be announced as a potential AIE topic under the Artificial Intelligence Exploration program:
Background: Accurate processing of covariance information related to environmental variations and sensor noise is paramount to the performance of statistics-based estimators (e.g., Kalman filters) and is the key enabler for optimally combining information originating from multiple heterogeneous sensors and subsystems.
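The fusion principle referenced here can be illustrated with a minimal sketch: two Gaussian estimates of the same state, each with its own covariance, are optimally combined by weighting each with its inverse covariance. This is the same update a Kalman filter performs; the sensor values and covariances below are illustrative, not from the notice.

```python
import numpy as np

# Two independent Gaussian estimates of the same 2-D state,
# e.g. position reports from two heterogeneous sensors.
x1 = np.array([1.0, 2.0]); P1 = np.diag([0.5, 2.0])   # sensor 1: accurate on axis 0
x2 = np.array([1.4, 1.8]); P2 = np.diag([2.0, 0.5])   # sensor 2: accurate on axis 1

# Minimum-variance fusion weights each estimate by its inverse
# covariance -- the principle behind the Kalman filter update.
I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
P_fused = np.linalg.inv(I1 + I2)
x_fused = P_fused @ (I1 @ x1 + I2 @ x2)

print(x_fused)           # pulled toward each sensor's strong axis: [1.08 1.84]
print(np.diag(P_fused))  # fused variances are smaller than either input's
```

Note that the result is only optimal if the input covariances are accurate, which is exactly why the notice treats faithful covariance information as the key enabler for combining subsystems.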
Objective: Enabling Confidence (EC) will develop scalable methods to generate accurate covariance information for the outputs of machine learning (ML) systems, enabling enhanced performance when this information is used to combine multiple subsystems. The EC AIE will encourage performers to consider a range of ML techniques – e.g., deep learning, Bayesian techniques, etc. – in addressing the following research questions:
1) Can input sensor and environmental covariance information be faithfully reflected in an ML system's output covariance matrix in a way that is computationally tractable?
2) Can confident ML subsystems be composed hierarchically to increase inference accuracy, and can they be combined with statistics-based estimation systems (e.g., Kalman filters) to reduce errors?
Challenges: Any approach to achieving the EC objective will need to overcome the following two technical challenges: First, simple covariance measures are often insufficient to accurately characterize output uncertainties after sensor and environmental variations are propagated through the ML architectures. Second, nonlinearities in existing ML architectures make direct calculations intractable, and approximate methods degrade performance.
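The second challenge can be seen even in a toy setting. The sketch below, with an assumed quadratic nonlinearity standing in for an ML layer, shows how first-order (linearized) covariance propagation, as in an extended Kalman filter, can badly misestimate output uncertainty, while exact Monte Carlo propagation is accurate but requires many forward passes:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy nonlinearity standing in for an ML layer: f(x) = x**2.
f = lambda x: x ** 2

# Input uncertainty: x ~ N(mu, sigma^2).
mu, sigma = 0.0, 1.0

# Linearized (first-order) propagation: var_out ~= f'(mu)^2 * sigma^2.
# At mu = 0 the derivative f'(x) = 2x vanishes, so linearization
# reports zero output variance.
var_linear = (2 * mu) ** 2 * sigma ** 2

# Monte Carlo propagation recovers the true variance (Var[x^2] = 2
# for a standard normal) at the cost of many evaluations of f.
samples = f(rng.normal(mu, sigma, 100_000))
var_mc = samples.var()

print(var_linear)  # 0.0 -- drastically underestimates the uncertainty
print(var_mc)      # ~2.0
```

Deep networks compose many such nonlinearities, which is why the notice observes that direct calculation is intractable and approximate methods degrade performance.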
IC News brings you business opportunities like this one each week. If you find value in our work, please consider supporting IC News with a subscription.