IARPA posts LLM RFI

On July 31, the Intelligence Advanced Research Projects Activity (IARPA) posted a request for information (RFI) on characterizing large language model (LLM) biases, threats, and vulnerabilities. Responses are due by 5:00 p.m. Eastern Time on August 21, 2023.

IARPA is seeking information on established characterizations of vulnerabilities and threats that could impact the safe use of LLMs by intelligence analysts. This RFI is issued for planning purposes only, and it does not constitute a formal solicitation for proposals or suggest the procurement of any material, data sets, etc.

LLMs have recently received much public attention due, among other things, to their human-like interactions with users. These capabilities promise to substantially transform and enhance work across sectors in the coming years. However, LLMs have been shown to exhibit erroneous and potentially harmful behavior, posing threats to end users. This RFI aims to elicit frameworks for categorizing and characterizing vulnerabilities and threats associated with LLM technologies, specifically in the context of their potential use in intelligence analysis.

For the purposes of this RFI, IARPA is interested in characterizations and methods for both “white box” models (some privileged access to parameters or code) and “black box” models (no privileged access to parameters or code).

Responses to this RFI are due no later than 5:00 p.m. Eastern Time on August 21, 2023.

Review the full IARPA LLM RFI.

Source: SAM

IC News brings you business opportunities like this one each week. If you find value in our work, please consider supporting IC News with a subscription.