IARPA posts new BAA for SAILS program
On December 11, the Intelligence Advanced Research Projects Activity (IARPA) posted a new broad agency announcement for the Secure, Assured, Intelligent Learning Systems (SAILS) program (Solicitation Number: IARPA-BAA-19-02). Feedback is due by January 31, 2019.
IARPA often selects its research efforts through the Broad Agency Announcement (BAA) process, which allows it to solicit a wide range of innovative ideas and concepts. The BAA appears first on the FedBizOpps website, http://www.fedbizopps.gov/, and then on the IARPA website, http://www.iarpa.gov/. The following information is for those wishing to respond to this program BAA.
This BAA (IARPA-BAA-19-02) seeks innovative solutions for the Secure, Assured, Intelligent Learning Systems (SAILS) Program, which is envisioned to begin in November 2019 and end by December 2021.
Across numerous sectors, institutions are adopting artificial intelligence and machine learning (AI/ML) technologies to streamline business processes and aid in decision making. These technologies are increasingly trained on proprietary and sensitive datasets that represent a competitive advantage for the entity that holds them. Recent work has demonstrated, however, that these systems are vulnerable to a variety of attack vectors, including adversarial examples, training-time attacks, and attacks against privacy. Each of these vectors can degrade the usefulness of AI/ML technologies, but given the sensitivity of the training data involved, attacks against privacy pose a particularly damaging threat.
In general, attacks against privacy comprise methods that aim to reveal some form of the information used to train AI/ML models. Of particular interest are model inversion attacks and membership inference attacks. Model inversion attacks aim to reconstruct some representation of the data used to train a model, such as a recognizable feature of an individual's face used to train a facial identification model. Membership inference attacks aim to determine whether a given individual's data was used to train the model, potentially de-anonymizing that individual.
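To make the membership inference idea concrete, the following is a minimal, illustrative Python sketch of a confidence-thresholding attack in the spirit of the published literature; it is not drawn from the BAA, and the predict_proba callable, the arrays, and the threshold value are all hypothetical placeholders. The intuition is that models tend to be more confident on examples they were trained on.

    import numpy as np

    # Illustrative confidence-thresholding membership inference attack.
    # All names and the threshold value below are hypothetical, not
    # part of the SAILS BAA.

    def membership_scores(predict_proba, examples, labels):
        """Score each example by the model's confidence in its true
        label; higher confidence suggests the example was a training
        set member."""
        probs = predict_proba(examples)               # shape (n, n_classes)
        return probs[np.arange(len(labels)), labels]  # true-label confidence

    def infer_membership(predict_proba, examples, labels, threshold=0.9):
        """Guess 'member' for examples whose true-label confidence
        exceeds a threshold the attacker tunes on data they control."""
        return membership_scores(predict_proba, examples, labels) >= threshold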
The SAILS program aims to develop methods for creating AI/ML models that are robust to attacks against privacy. The goal is to give model creators confidence that their trained models will not inadvertently reveal sensitive information. Toward this end, SAILS will cover a variety of problem domains, such as speech, text, and images, as well as both black-box (minimal knowledge of the targeted AI/ML model) and white-box (full knowledge of the targeted AI/ML model) access modes. Performers will be expected to develop techniques that defend against attacks while maintaining the high performance of the underlying AI/ML model, both in accuracy and in the time needed to train the model or perform a single inference. These techniques include, but are not limited to, new training procedures, new model architectures, and new pre-/post-processing procedures. Developed methods will be scored against state-of-the-art baselines within the chosen domain using published model vulnerabilities.
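The BAA does not prescribe specific defenses, but one well-known example of a privacy-preserving "new training procedure" is differentially private SGD (after Abadi et al., 2016). The sketch below is offered purely as an illustration of that technique, not as a SAILS requirement; the function name and every hyperparameter value are hypothetical.

    import numpy as np

    # Minimal sketch of one differentially private SGD step: clip each
    # example's gradient to bound its individual influence, average the
    # clipped gradients, then add calibrated Gaussian noise before the
    # parameter update. Hyperparameter values are illustrative only.

    def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                    noise_multiplier=1.1, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
                   for g in per_example_grads]
        mean_grad = np.mean(clipped, axis=0)
        noise = rng.normal(0.0,
                           noise_multiplier * clip_norm / len(per_example_grads),
                           size=mean_grad.shape)
        return params - lr * (mean_grad + noise)

Bounding and noising each example's contribution in this way is what limits how much a trained model can reveal about any single individual in the training set, which is exactly the property that membership inference and model inversion attacks exploit.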
Full information is available here.
Source: FedBizOpps