DARPA releases AIMEE call for proposals
On October 9, the Defense Advanced Research Projects Agency (DARPA) released the program announcement for the Artificial Intelligence Mitigations of Emergent Execution (AIMEE) program. Responses are due by 12:00 p.m. Eastern on November 8.
DARPA is issuing an Artificial Intelligence Exploration (AIE) Opportunity, inviting submissions of innovative basic research concepts in the technical domain of mitigating emergent execution.
Objective and Scope
Modern computing systems demonstrate a strong propensity for unintended, emergent computations and for the related unintended, emergent programming models (colloquially known as "weird machines") that enable or amplify cyber-attacks. Computing mechanisms built for a particular purpose, and with particular intended models of execution in mind, prove capable of executing unintended computing tasks outside their original specification and their designers' and programmers' mental models.
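To make the notion concrete, here is a minimal toy sketch (hypothetical, not from the announcement): a store routine intended only to write user bytes into a data region, whose missing bounds check lets crafted input drive unintended control flow.

```python
# Toy "weird machine": a routine meant only to store bytes, repurposed
# by crafted input into an unintended control-flow mechanism.

DATA_SIZE = 8

def run(inputs):
    # Layout: 8 user-data cells, then one internal "handler index" cell.
    memory = [0] * DATA_SIZE + [0]
    handlers = {
        0: lambda: "stored",     # the only behavior in the intended model
        1: lambda: "launched",   # never reachable under the intended model
    }
    for index, value in inputs:
        # BUG: index is never checked against DATA_SIZE, so the internal
        # cell at memory[8] is writable from user input.
        memory[index] = value
    return handlers[memory[DATA_SIZE]]()

assert run([(0, 42), (7, 99)]) == "stored"   # intended use
assert run([(8, 1)]) == "launched"           # emergent, unintended execution
```

The bug is a single design-level omission, yet it turns a data-storage mechanism into a programmable one, which is the pattern the announcement describes.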
Today, we start examining systems for signs of emergent behavior (with methods such as fuzz testing) only after they are fully built. Despite the empirical prevalence of emergent execution phenomena, theory describing emergent execution is scarce.
However, recent research strongly suggests that a system's exploitability and propensity for emergent execution arise, and can therefore also be mitigated, at the design stage, when the system's programming abstractions and intended behaviors at a particular layer are translated into the more granular states and logic of the next computing substrate down the stack. Because programming abstractions must be translated into ancillary layers of more granular states and logic to implement and optimize them, each layer creates latent opportunities for emergent execution and unintended programmability. These opportunities are inherent in a few key design decisions of the translation, and they manifest regardless of a wide variety of other implementation details and choices.
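The following hypothetical sketch (all names invented, not from the announcement) illustrates how one such translation decision creates a latent opportunity: a counter abstraction whose intended model touches only its own cell, lowered onto a shared byte buffer whose carry-spill rule lets a crafted call sequence write an adjacent field.

```python
class Counter:
    """Intended model: additions affect only the counter's own cell."""

    def __init__(self, buf, offset):
        self.buf = buf
        self.offset = offset

    def add(self, n):
        total = self.buf[self.offset] + n
        self.buf[self.offset] = total % 256
        # Design decision in the lowering: a multi-byte result spills its
        # carry into the next cell, because cells are not isolated.
        if total > 255:
            self.buf[self.offset + 1] = total // 256

buf = bytearray(2)      # cell 0: counter; cell 1: an unrelated flag field
counter = Counter(buf, 0)
counter.add(200)
assert buf == bytearray([200, 0])   # intended behavior: flag untouched

counter.add(60)         # crafted sequence: the carry flips the flag
assert buf == bytearray([4, 1])     # emergent write outside the model
```

The flaw is visible in the design of the translation itself, before any complete system exists, which is the stage at which AIMEE aims to intervene.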
The AIMEE program will address the problem of anticipating, at a system's design stage, the models of emergent execution inherent in its design, and thus of mitigating its propensity for exploitability before it leads to actual vulnerabilities in complete deployed systems.
AIMEE will explore whether a combination of recent advances in AI techniques, such as autoencoders, evolutionary programming, deep representation learning, and neural sketch learning, can be used to detect, describe, and model the primitives of emergent execution directly in design-level prototypes of performance optimizations and programming abstractions, rather than in complete-system implementations.
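As a hedged illustration of the autoencoder family the announcement names (the data, model, and threshold here are invented stand-ins, not AIMEE's method), the rank-1 linear special case of an autoencoder can flag execution traces that depart from a design's intended behavior via reconstruction error:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def learn_direction(traces):
    # One step of power iteration on sum(x x^T); exact here because the
    # intended traces all lie on a single line through the origin.
    v = [0.0] * len(traces[0])
    for x in traces:
        c = dot(x, [1.0] * len(x))
        for i, xi in enumerate(x):
            v[i] += c * xi
    norm = sum(c * c for c in v) ** 0.5
    return [c / norm for c in v]

def reconstruction_error(trace, direction):
    coeff = dot(trace, direction)              # encoder: project to 1-D
    decoded = [coeff * d for d in direction]   # decoder: expand back
    return sum((t - r) ** 2 for t, r in zip(trace, decoded))

# "Intended" traces from a hypothetical design prototype share one pattern.
normal_traces = [[k, 2 * k, 3 * k, 4 * k] for k in range(1, 6)]
direction = learn_direction(normal_traces)

assert reconstruction_error(normal_traces[0], direction) < 1e-9
# A trace outside the intended model reconstructs poorly and is flagged.
assert reconstruction_error([4, 3, 2, 1], direction) > 1.0
```

The deep, nonlinear versions cited in the announcement generalize this idea: behavior the model cannot compress and reconstruct is a candidate emergent-execution primitive.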
Proposals should present case studies of how AI methods could be applied to design-prototype-level representations of several common computing layers known to manifest emergent execution behaviors and programming models ("weird machines"), to enable effective anticipation of these behaviors and models. Strong proposals will discuss ways to generalize these case studies and methods for use with exemplar designs such as layered APIs, ABIs, or CPU microarchitectures.
Full information is available here.