How to Be a Responsible Government Chief Artificial Intelligence Officer

From IC Insider Clarifai

By Douglas Shapiro, Clarifai

Congratulations! It is June 2024 and you have just been appointed Chief AI Officer (CAIO) of your CFO Act agency. After convening with your agency's other senior leaders, you are firming up the shape of your newly formed AI Governance Board and brainstorming how your agency will govern and use AI.

As of the end of May 2024, 18 of the 24 CFO Act agencies had already designated CAIOs. If you are the archetypal CAIO, you are in the late-middle of your career, with about 25 years of total experience and several years of tenure at your agency. You understand the core mission of your agency in your bones. It’s a near certainty you have been promoted from within or given the additional title of CAIO. You have a deep technical IT background, but in all likelihood you do not have a background in AI, and you may not have any private sector experience in AI. But you’ve read the prescriptive follow-up memo to Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (OMB M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence) and are starting to gain a sense of your obligations from a regulatory perspective. Now what? How should you think and act to become a responsible government CAIO?

There are several components to the OMB memo implementing EO 14110. It is important to start by looking at the specific responsibilities articulated for CAIOs, rather than the nuances of the memo's sections on Advancing Responsible AI Innovation and Managing Risks from the Use of Artificial Intelligence. Despite what is contained in the innovation section, most agencies will not successfully train and run inference on their own AI models from scratch. There are 21 explicit responsibilities for CAIOs in the memo, split into the categories of Coordinating Agency Use of AI, Promoting AI Innovation, and Managing Risks from the Use of AI. Some responsibilities are bureaucratic necessities, but others will actually make your agency more AI-ready.

Here are the top responsibilities to consider:

Coordinating Use of AI (Responsibilities A – J)

These responsibilities concern the leadership role you will play as a CAIO within your agency, planning compliance with this memorandum, and coordinating across agencies. Of special note from the full list is section (F).

(F) Advising the Chief Human Capital Officer (CHCO) and where applicable, the Chief Learning Officer, on improving workforce capacity and securing and maintaining the skill sets necessary for using AI to further the agency’s mission and adequately manage its risks.

This responsibility is the most crucial to advancing your agency’s AI capabilities. The difficult truth is that hiring the best AI talent will be enormously challenging, because the opportunity cost of government roles relative to alternative employment opportunities is so massive. Recruiting and organizing technically adept people who are mission focused and willing to temporarily earn below their market value is no easy task.

Promoting AI Innovation (Responsibilities K – N)

These responsibilities are core to actually progressing the agency’s AI capabilities. Of note are (K), (L), and (M).

(K) Working with their agency to identify and prioritize appropriate uses of AI that will advance both their agency’s mission and equitable outcomes.

Many of the initial use cases that come to mind can be solved with technologies that pre-date the current generative AI boom, like optical character recognition (OCR) and robotic process automation (RPA). Enterprise search over your agency's knowledge, assisted by retrieval augmented generation (RAG) or fine-tuning, is an obvious, ubiquitous application to make your colleagues more efficient. The major challenge is data curation, data processing, and data pipelines, not AI. Imagineering where AI can make an impact for your agency is fundamental, and you should be doing it from Day 1.
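As a concrete illustration, here is a minimal sketch of the RAG pattern behind enterprise search. The keyword-overlap retriever and the toy documents are illustrative assumptions; a production system would use vector embeddings for retrieval and send the assembled prompt to an LLM.

```python
# Minimal sketch of retrieval augmented generation (RAG) over agency documents.
# The retrieval step is naive keyword overlap; real systems use embeddings.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query, keep the top_k."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble the augmented prompt an LLM would receive."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical agency documents, for illustration only.
docs = [
    "Form 1099 processing is handled by the finance directorate.",
    "The cafeteria opens at 7am.",
    "Records retention policy requires seven years of storage.",
]
prompt = build_prompt("Who handles 1099 processing?", docs)
```

The pattern matters more than the code: grounding the model's answer in retrieved agency documents is what makes enterprise search trustworthy, and it works only as well as the underlying data pipelines feeding the retriever.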

(L) Identifying and removing barriers to the responsible use of AI in the agency, including through the advancement of AI-enabling enterprise infrastructure, data access and governance, workforce development measures, policy, and other resources for AI innovation.

Most agencies, for the foreseeable future, will not own – and will not want to own – the entire enabling enterprise infrastructure for AI. Instead they will rely on Gov Cloud, pooled high performance computing, hardware from a GPU provider, and a workflow orchestration platform. Workforce development measures, like formal education along with hands-on sessions, will be sorely needed to build familiarity with the possibilities and limitations of this technology. If Congress is a proxy for understanding of the nomenclature or applications of AI, some effort will be required for civil servants to understand the capabilities of nascent tools. The largest barrier for many agencies will be legacy data issues from prior failed attempts at digital transformation. It will be hard to field effective enterprise document question answering (docQA) if data is siloed across many systems of record – databases, SharePoint sites, and the like – that do not speak to one another.

(M) Working with their agency’s CIO, CDO, and other relevant officials to ensure that custom-developed AI code and the data used to develop and test AI are appropriately inventoried, shared, and released in agency code and data repositories…

For generative frontier models, training datasets are massive; many of the largest general models are trained on huge amounts of data from the internet. But today, given the dearth of agencies providing comprehensive inventories of AI use cases, most agencies will probably avoid operationalizing anything that could be construed as safety-impacting or rights-impacting prior to the end of calendar year 2025. As for the regulatory mandates on use of high-performance computing, AI sharing, and cataloging inventories, they are premature, and most agencies will fail to respond comprehensively. Versioning models will be the modern form of knowledge management.
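As a sketch of what "versioning models as knowledge management" might look like in practice, the registry below records what each model version was trained on and how it was evaluated, so institutional memory survives as models change. Field names and version strings here are illustrative assumptions, not a standard.

```python
# Sketch of a model version registry: each entry captures training data
# provenance and evaluation notes alongside the version identifier.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelVersion:
    version: str
    base_model: str
    training_data: list[str]   # named dataset snapshots used for training
    eval_notes: str            # summary of how this version was evaluated
    registered: date = field(default_factory=date.today)

registry: dict[str, ModelVersion] = {}

def register(mv: ModelVersion) -> None:
    """Add a version to the registry, keyed by its version string."""
    registry[mv.version] = mv

register(ModelVersion("1.0", "open-weights-llm", ["policy_docs_v3"], "baseline"))
register(ModelVersion("1.1", "open-weights-llm", ["policy_docs_v4"], "better recall"))

latest = max(registry)  # "1.1" under simple string ordering
```

Even a lightweight inventory like this answers the questions responsibility (M) implies: which code and data produced which model, and where both live.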

Managing Risks from the Use of AI (Responsibilities O through U)

These responsibilities concern various forms of compliance; at the extreme, the cataloging of risks will discourage the innovation promoted by the prior set of responsibilities. Of special note are (P) and (R).

(P) Working with relevant senior agency officials to establish or update processes to measure, monitor, and evaluate the ongoing performance and effectiveness of the agency’s AI applications and whether the AI is advancing the agency’s mission and meeting performance objectives.

This responsibility is practically very difficult to achieve. There is no universal consensus on how to evaluate generative models. Evaluation approaches span LLM-as-a-judge, various benchmarks (both general and task-specific), and others. Performance objectives will ultimately be subjective as well. The alternative of not using AI may yield a constellation of different outcomes with respect to speed, cost, and accuracy, contingent on the use case.
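To make the evaluation problem concrete, here is a minimal sketch of a task-specific benchmark harness. The benchmark items and the stand-in `stub_model` are hypothetical; a real evaluation would also track cost and latency, and might add an LLM-as-a-judge score for open-ended answers.

```python
# Sketch of a task-specific evaluation harness: exact-match accuracy on a
# small benchmark. `model` is any callable from question to answer string.

def exact_match_accuracy(model, benchmark: list[tuple[str, str]]) -> float:
    """Fraction of benchmark questions the model answers exactly right."""
    correct = sum(
        1 for question, expected in benchmark
        if model(question).strip().lower() == expected.strip().lower()
    )
    return correct / len(benchmark)

# Hypothetical benchmark and a trivial stand-in "model" for illustration.
benchmark = [
    ("capital of France?", "Paris"),
    ("2 + 2?", "4"),
    ("largest line item in the agency budget?", "personnel"),
]
lookup = {"capital of France?": "Paris", "2 + 2?": "4"}
stub_model = lambda q: lookup.get(q, "unknown")

score = exact_match_accuracy(stub_model, benchmark)  # 2 of 3 correct
```

Exact match is only one metric, and a crude one; the point is that measuring "ongoing performance and effectiveness" requires committing to some concrete benchmark and scoring rule, however imperfect.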

(R) Conducting risk assessments, as necessary, of the agency’s AI applications to ensure compliance with this memorandum.

Most CFO Act agencies will default to following the NIST AI Risk Management Framework.

There are several unstated prerequisite responsibilities that are not explicit in the memo but are foundationally necessary to carry out the spirit of the EO, such as clean, curated data and an adaptive workforce. Now that you have a sense of your CAIO-mandated responsibilities, here is a roadmap you can start using today to better equip your agency and effectively manage AI’s use:


  • Start assessing your data across your organization. What data science and data engineering talent do you have? What are the volumes, location(s), and formatting of agency data and stakeholder (taxpayer) data? What do your data pipelines look like? Ultimately, the foundation of any successful AI project will be strong data.
  • Start your AI impact assessment immediately, and think through the quantitative and qualitative costs and benefits of potential use cases.
  • Determine which AI use cases you deem to be safety-impacting or rights-impacting.
  • Decide whether it is realistic to operationalize (safety-impacting or rights-impacting) AI at your agency prior to December 1, 2024.

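The data assessment above can begin with something as simple as a structured inventory. This sketch uses hypothetical source names and fields; the point is to capture volume, location, format, and pipeline status in one place so silos become visible.

```python
# Sketch of a data source inventory for an agency-wide data assessment.
# Source names and thresholds are illustrative, not a standard schema.

from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    location: str       # system of record, e.g. a database or file share
    format: str         # e.g. "PDF", "SQL", "CSV"
    approx_gb: float    # rough volume estimate
    has_pipeline: bool  # is the data already flowing somewhere usable?

inventory = [
    DataSource("case_files", "SharePoint", "PDF", 850.0, False),
    DataSource("benefits_db", "Oracle", "SQL", 120.0, True),
    DataSource("public_comments", "S3 bucket", "CSV", 40.0, False),
]

# Flag the silos: large sources with no pipeline are the first AI blockers.
silos = [s.name for s in inventory if not s.has_pipeline and s.approx_gb > 100]
```

A spreadsheet would do just as well; what matters is answering the volume, location, and formatting questions before committing to any AI use case.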

Later This Year

  • Rely heavily on NIST and OSTP for guidance on compliance and harmonization, rather than developing your own independent approach.
  • By September 24, 2024: report a plan for compliance with the M-24-10 memo (the follow-up to the EO).

Ongoing

  • Work to remain on the cutting edge of understanding new developments, especially static and dynamic information retrieval applications for enterprises. Don’t rely on other coordinating agencies to do that work for you.
  • You and your team should personally be experimenting with new tools and thinking critically about how they may apply to your domain. Future-proof your efforts by avoiding lock-in to any one model, and look for tools that can easily integrate any data type.


AI advances continue at an unprecedented pace. There is a strong possibility that a hyperscaler or other well-funded tech entity will release its next frontier foundation model before any major parts of the EO are required to be implemented. Dozens of new techniques to improve generative performance, many published fully transparently on arXiv, will appear before the end of the year. A small sampling from just the last few weeks includes: infinite context windows for LLMs, easier fine-tuning of models, more efficient multimodal information retrieval, caching optimizations, and adaptively choosing among LLMs (‘cascades’). Many definitions are still emerging and contested, including how to evaluate and compare the performance of various generative systems.

Despite all the technological change, CAIOs should continue to focus on what’s not likely to change: the core mission of your organization, overcoming barriers to hiring technical talent, and leading change management to get your colleagues more comfortable using AI where it can enhance productivity. Fulfilling these responsibilities is merely a step on the journey to becoming a responsible CAIO.


Contact us to learn more about how Clarifai has accelerated AI success for the U.S. government for the last eight years.

About Clarifai

Clarifai provides teams and organizations with a single workflow to quickly build, manage, and orchestrate AI workflows across your organization. Already trusted by JSOC, NGA, and DHS, Clarifai empowers organizations to leverage AI in their most important workflows. Clarifai provides agencies with a cutting-edge platform to build enterprise AI faster, leveraging today’s modern AI technologies like Large Language Models (LLMs), Retrieval Augmented Generation (RAG), data labeling, inference, and more. Founded in 2013, Clarifai is available in cloud, on-premise, or hybrid environments, with Top Secret facility clearance and cleared personnel. Learn more at

About IC Insiders

IC Insiders is a special sponsored feature that provides deep-dive analysis, interviews with IC leaders, perspective from industry experts, and more. Learn how your company can become an IC Insider.