The U.S. National Institute of Standards and Technology (NIST) is seeking public input on the implementation of a White House executive order that calls for safeguards in the development and deployment of artificial intelligence (AI). President Joe Biden’s executive order directs NIST to establish guidelines for AI developers, particularly developers of dual-use foundation models, on conducting red-teaming tests. Red teaming is considered an effective tool for identifying potential risks associated with AI, and NIST is seeking public input to gather feedback on the practical aspects of implementing these guidelines.
The executive order also invokes Cold War-era executive powers under the Defense Production Act, requiring companies developing AI models that pose significant risks to national security, economic security, or public health and safety to share their test results with the federal government. This move reflects a recognition of AI’s potential impact on many aspects of society and a commitment to ensuring responsible development and deployment. Secretary of Commerce Gina Raimondo emphasized the importance of harnessing the power of AI for societal benefit while mitigating the associated risks.
Additionally, the order directs NIST to identify consensus industry standards for a generative AI risk management framework and a secure software development framework covering generative AI and dual-use foundation models. The focus is on developing frameworks that promote the responsible use of AI and address potential risks, and the public feedback gathered by NIST will play a crucial role in shaping them. This initiative marks a significant step by the United States toward regulating and guiding the development of AI technologies, recognizing the need for responsible and secure practices in a rapidly advancing field.
While NIST itself does not directly regulate AI, its frameworks, standards, research, and resources carry significant weight in informing regulations and promoting responsible AI development. The agency’s recent efforts include the release of an AI risk management framework and a report on bias in AI algorithms, both contributing to the broader goal of ensuring trustworthy AI practices. The public input process aligns with wider efforts to engage stakeholders and experts in shaping policies that govern the ethical and responsible use of AI technologies.