Today, President Biden is issuing the first-ever National Security Memorandum (NSM) on Artificial Intelligence (AI). The NSM’s fundamental premise is that advances at the frontier of AI will have significant implications for national security and foreign policy in the near future. The NSM builds on key steps the President and Vice President have taken to drive the safe, secure, and trustworthy development of AI, including President Biden’s landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of AI.
The NSM directs the U.S. Government to implement concrete and impactful steps to (1) ensure that the United States leads the world’s development of safe, secure, and trustworthy AI; (2) harness cutting-edge AI technologies to advance the U.S. Government’s national security mission; and (3) advance international consensus and governance around AI.
The NSM is designed to galvanize federal government adoption of AI to advance the national security mission, including by ensuring that such adoption reflects democratic values and protects human rights, civil rights, civil liberties, and privacy. In addition, the NSM seeks to shape international norms around AI use to reflect those same democratic values, and directs actions to track and counter adversary development and use of AI for national security purposes.
In particular, the NSM directs critical actions to:
Ensure that the United States leads the world’s development of safe, secure, and trustworthy AI:
- Developing advanced AI systems requires large volumes of advanced chips. President Biden led the way when he signed the CHIPS Act, which made major investments in our capacity to manufacture leading-edge semiconductors. The NSM directs actions to improve the security and diversity of chip supply chains, and to ensure that, as the United States supports the development of the next generation of government supercomputers and other emerging technology, we do so with AI in mind.
- Our competitors want to upend U.S. AI leadership and have employed economic and technological espionage in efforts to steal U.S. technology. This NSM makes collection on our competitors’ operations against our AI sector a top-tier intelligence priority, and directs relevant U.S. Government entities to provide AI developers with the timely cybersecurity and counterintelligence information necessary to keep their inventions secure.
- In order for the United States to benefit maximally from AI, Americans must know when they can trust systems to perform safely and reliably. For this reason, the NSM formally designates the AI Safety Institute as U.S. industry's primary point of contact in the U.S. Government, one staffed by technical experts who understand this quickly evolving technology. It also lays out strengthened and streamlined mechanisms for the AI Safety Institute to partner with national security agencies, including the intelligence community, the Department of Defense, and the Department of Energy.
- The NSM doubles down on the National AI Research Resource, the pilot for which is already underway, to ensure that researchers at universities, from civil society, and in small businesses can conduct technically meaningful AI research. AI is moving too fast, and is too complex, for us to rely exclusively on a small cohort of large firms; we need to empower and learn from a full range of talented individuals and institutions who care about making AI safe, secure, and trustworthy.
- The NSM directs the National Economic Council to coordinate an economic assessment of the relative competitive advantage of the U.S. private-sector AI ecosystem.
Enable the U.S. Government to harness cutting-edge AI, while protecting human rights and democratic values, to achieve national security objectives:
- The NSM does not simply demand that we use AI systems effectively in service of the national security mission; it also unequivocally states that we must do so only in ways that align with democratic values. It provides the first-ever guidance on AI governance and risk management for national security missions, complementing previous guidance issued by the Office of Management and Budget for non-national security missions.
- The NSM directs the creation of a Framework to Advance AI Governance and Risk Management in National Security, which is being published today alongside the NSM. This Framework provides further detail and guidance for implementing the NSM, including requiring mechanisms for risk management, evaluations, accountability, and transparency. These mechanisms require agencies to monitor, assess, and mitigate AI risks related to invasions of privacy, bias and discrimination, the safety of individuals and groups, and other human rights abuses. The Framework can be updated regularly to keep pace with technical advances and ensure that future AI applications are responsible and rights-respecting.
- The NSM directs changes across the board to make sure we are using AI systems effectively while adhering to our values. Among other actions, it directs agencies to propose streamlined procurement practices and ways to ease collaboration with non-traditional vendors.
Advance international consensus and governance around AI:
- The NSM builds on substantial international progress on AI governance over the last twelve months, thanks to the leadership and diplomatic engagement of President Biden and Vice President Harris. Alongside G7 allies, we developed the first-ever International Code of Conduct on AI in 2023. At the Bletchley and Seoul AI Safety Summits, the United States joined more than two dozen nations in outlining clear principles. Fifty-six nations have signed on to our Political Declaration on the Military Use of AI and Autonomy, which establishes principles for military AI capabilities. And at the United Nations, the United States sponsored the first-ever UN General Assembly Resolution on AI, which passed unanimously and included the People's Republic of China as a co-sponsor.
- The NSM directs the U.S. Government to collaborate with allies and partners to establish a stable, responsible, and rights-respecting governance framework to ensure the technology is developed and used in ways that adhere to international law while protecting human rights and fundamental freedoms.
The release of today’s NSM is part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, and builds on previous actions that President Biden and Vice President Harris have taken.
Read the NSM here.
Read the Framework here.
###