DARPA issues machine learning funding opportunity
On July 1, the Defense Advanced Research Projects Agency (DARPA) released an Artificial Intelligence Exploration (AIE) Opportunity entitled Quantifying Ensemble Diversity for Robust Machine Learning (QED RML). Submissions are due by 12:00 p.m. Eastern on July 30.
The opportunity invites submissions of innovative basic research concepts in the technical domain of robust machine learning.
This AIE Opportunity is issued under the Program Announcement for AIE, DARPA-PA-18-02. All proposals in response to the Quantifying Ensemble Diversity for Robust Machine Learning (QED RML) opportunity, as described herein, will be submitted to DARPA-PA-18-02-09.
Background
Researchers have demonstrated effective attacks on machine learning (ML) algorithms. These attacks can cause high-confidence misclassifications of input data, even if the attacker lacks detailed knowledge of the ML classifier algorithm and/or training set [1]. Achieving effective defenses against such attacks is essential if machine learning is to be used for defense, security, or health and safety applications.
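As an illustration only (not part of the announcement), the sketch below shows a minimal gradient-sign style perturbation against a toy PyTorch classifier; the network, data, and epsilon value are placeholder assumptions, chosen simply to show how a small input perturbation can flip a prediction.

```python
# Illustrative sketch: a fast-gradient-sign style perturbation of a single
# input against a toy classifier. Model, data, and epsilon are placeholders,
# not taken from the DARPA announcement.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)  # a single benign input
y = torch.tensor([0])                       # its true label

# Compute the gradient of the loss with respect to the input
loss = loss_fn(model(x), y)
loss.backward()

# Step the input in the direction that increases the loss
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```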
Recent evidence suggests that diverse ensembles of ML classifiers are more robust to adversarial inputs [2]. However, practice has outpaced theory in this area. The objective of the QED RML effort is to develop the theoretical foundations for understanding the behavior of diversified ensembles of ML classifiers and quantifying their utility when under attack. This foundation is necessary for creating provable defenses against classes of attacks or regions of input space in ML classifiers. QED RML will explore what types of diversity metrics could enable formal guarantees of ensemble-based classifier performance against various classes of attack. Proposals should provide definition(s) of classifier diversity and the corresponding attack classes anticipated to be deterred.
There are many possible sources of diversity in a machine learning classifier, such as model type, network topology, feature selection, hyper-parameters, optimizers, and loss/fitness functions. Yet even with so many degrees of freedom, black-box attacks appear to transfer broadly across classifiers. Strong proposals will identify the sources of diversity most relevant to countering likely attacks.
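For readers unfamiliar with ensemble construction, the sketch below shows one common way such diversity is introduced in practice: combining members that differ in model type, hyper-parameters, and preprocessing. The dataset and specific model choices are placeholders for illustration, not requirements of the solicitation.

```python
# Illustrative sketch: a soft-voting ensemble whose members differ in model
# type and hyper-parameters. All choices here are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("logreg", make_pipeline(StandardScaler(), LogisticRegression())),
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(32,),
                                            max_iter=500, random_state=0))),
    ],
    voting="soft",  # average predicted class probabilities across members
)
ensemble.fit(X, y)
print("ensemble accuracy on training data:", ensemble.score(X, y))
```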
Full information is available in the announcement posted on FedBizOpps.
Source: FedBizOpps