DARPA seeks TAMI proposals

On September 15, the Defense Advanced Research Projects Agency (DARPA) issued an Artificial Intelligence Exploration (AIE) Opportunity (DARPA-PA-20-02-03) for Time-Aware Machine Intelligence (TAMI). Proposals in response to this notice are due no later than 4:00 p.m. Eastern on October 15.

DARPA is issuing an AIE Opportunity inviting submissions of innovative basic or applied research concepts in the technical domain of time-aware neural network architectures, which introduce a meta-learning capability into data-driven machine learning to enable time-based machine cognition and intelligence. This AIE Opportunity is issued under the Program Announcement for AIE, DARPA-PA-20-02. All awards will be made in the form of an Other Transaction (OT) for a prototype project. The total award value for the combined Phase 1 base and Phase 2 option is limited to $1,000,000. This total includes Government funding and performer cost share, if required or if proposed.

The Time-Aware Machine Intelligence (TAMI) AIE Opportunity aims to develop new time-aware neural network architectures that introduce a meta-learning capability into machine learning. This meta-learning will enable a neural network to capture the time dependencies of its encoded knowledge.

As a neural network learns knowledge about the world and encodes it in its internal weights, some learned weights may encode knowledge whose activation should be conditioned on time. Examples of such time dependencies are the weights mapped to the appearance features of a person in a convolutional neural network (CNN) for object recognition, or the weights mapped to the dynamic features of a person’s gait in a recurrent neural network (RNN) for activity recognition; both are valid only for a finite interval of time. Current neural networks do not explicitly model the inherent time characteristics of their encoded knowledge. Consequently, state-of-the-art (SOA) machine learning lacks the expressive capability to reason with encoded knowledge using time. An inference network, for example, cannot discount the activations of weights for time-conditioned knowledge as those features become less relevant over time. This lack of a time dimension in a network’s knowledge encoding limits the “shelf life” of such systems, leading to outdated decisions and requiring frequent and costly retraining to maintain performance.
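
For intuition only, the following sketch (not part of the announcement) illustrates the kind of time-conditioned discounting described above: a hypothetical gate that attenuates a feature activation according to an assumed validity interval and the time elapsed since the knowledge was learned. The class name, parameters, and exponential-decay form are illustrative assumptions, not TAMI requirements.

import math

class TimeConditionedGate:
    """Hypothetical gate that discounts an activation as its encoded knowledge ages."""

    def __init__(self, validity_scale: float):
        # Characteristic time (e.g., in days) over which the encoded knowledge
        # is assumed to remain relevant (an illustrative choice, not a TAMI spec).
        self.validity_scale = validity_scale

    def discount(self, activation: float, elapsed_time: float) -> float:
        # Attenuate the activation as elapsed_time grows relative to the
        # assumed validity interval; exponential decay is one simple option.
        return activation * math.exp(-elapsed_time / self.validity_scale)

# Example: an appearance feature learned 90 days ago, with an assumed 30-day
# validity scale, contributes only a small fraction of its raw activation.
gate = TimeConditionedGate(validity_scale=30.0)
print(gate.discount(activation=0.8, elapsed_time=90.0))  # roughly 0.04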

TAMI’s vision is for an AI system to develop a detailed self-understanding of the time dimensions of its learned knowledge and eventually be able to “think in and about time” when applying that knowledge to its tasks.

TAMI draws inspiration from ongoing research on time-processing mechanisms in the human brain. Computational neuroscience has introduced a large number of computational models to explain the brain’s time-perception mechanisms. TAMI will go a step beyond such research to develop and prototype concrete computational models. TAMI will also leverage the latest research on meta-learning in neural networks; recent neural network models with augmented memory capacities are possible starting points for investigating the meta-learning of time dependencies.

TAMI seeks to develop a new class of neural network architectures that incorporate an explicit time dimension as a fundamental building block for network knowledge representation. TAMI will build new time-modeling components into such networks and investigate learning paradigms that simultaneously learn task knowledge and develop, as meta-knowledge, a self-reference to the details of the time dependencies of that knowledge encoding.
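
As a rough illustration of learning task knowledge and time-dependency meta-knowledge side by side, the sketch below defines a minimal “time-aware” linear layer in which each output unit carries a per-unit decay parameter alongside its task weights. In a real system both sets of parameters would be trained jointly; the architecture, parameter names, and decay form here are assumptions for exposition, not designs drawn from the announcement.

import numpy as np

class TimeAwareLinear:
    """Hypothetical linear layer whose units carry time-dependency meta-knowledge."""

    def __init__(self, in_dim: int, out_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(out_dim, in_dim))  # task knowledge
        self.b = np.zeros(out_dim)
        # Meta-knowledge: per-unit decay rates describing how quickly each unit's
        # encoded knowledge is assumed to lose relevance. In practice these would
        # be learned jointly with W and b rather than fixed as they are here.
        self.decay_rate = np.full(out_dim, 0.01)

    def forward(self, x: np.ndarray, knowledge_age: float) -> np.ndarray:
        # Standard task computation...
        z = self.W @ x + self.b
        # ...modulated per unit by how stale its encoded knowledge is assumed to be.
        relevance = np.exp(-self.decay_rate * knowledge_age)
        return relevance * z

layer = TimeAwareLinear(in_dim=4, out_dim=2)
x = np.ones(4)
print(layer.forward(x, knowledge_age=0.0))    # fresh knowledge: full contribution
print(layer.forward(x, knowledge_age=200.0))  # aged knowledge: discounted output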

Full information is available here.

Source: SAM