President Joe Biden has unveiled a new strategy to harness artificial intelligence (AI) for national security as the global race to advance AI technology intensifies. In a landmark National Security Memorandum (NSM) released Thursday, Biden emphasized the need for the United States to lead the way in developing “safe, secure, and trustworthy” AI.
The memorandum directs U.S. agencies to strengthen semiconductor supply chains, integrate AI considerations into new government technology, and prioritize intelligence efforts to monitor foreign attempts to undermine U.S. leadership in AI. The Biden administration aims to outpace adversaries and mitigate the risks posed by foreign powers’ misuse of AI.
A White House official stated, “We must out-compete our adversaries and mitigate the threats posed by adversary use of AI.” The memo also stresses the need to use AI responsibly, protecting human rights and democratic values. It asserts that Americans should trust AI systems to operate safely and reliably, while agencies must actively monitor and address potential risks such as privacy violations, bias, discrimination, and other human rights concerns.
As part of the directive, the administration calls for collaboration with international allies to ensure AI development aligns with international law and safeguards human rights. The approach reflects broader efforts to mitigate the military and intelligence competition that AI advancements could trigger.
The memo follows previous actions by the Biden administration, including an executive order aimed at addressing AI risks to consumers, workers, and national security. However, concerns remain. In July, over a dozen civil society groups, including the Center for Democracy & Technology, sent an open letter urging the government to include stronger safeguards in the NSM. The groups expressed concerns over the lack of transparency regarding AI use by government agencies and the potential for AI to perpetuate racial, ethnic, or religious biases, infringe on privacy, and violate civil liberties.
Next month, the U.S. will host a global AI safety summit in San Francisco, where allies will gather to develop better regulations for the rapidly evolving sector and coordinate policies for its responsible use. Generative AI, which can create text, images, and videos from open-ended prompts, has sparked both excitement and concern over its potential for misuse and the existential risks it could pose if left unregulated.