The National Institute of Standards and Technology (NIST) is seeking participants in a new consortium supporting development of innovative methods for evaluating artificial intelligence (AI) systems to improve their safety and trustworthiness.
The consortium is intended to support the collaborative establishment of a new measurement science that will enable the identification of proven, scalable, and interoperable techniques and metrics to promote the development and responsible use of safe and trustworthy AI. It is a core element of the new NIST-led U.S. Artificial Intelligence Safety Institute (USAISI), announced on November 1 at the U.K.'s AI Safety Summit 2023.
“The U.S. AI Safety Institute Consortium will enable close collaboration among government agencies, companies, and impacted communities to help ensure that AI systems are safe and trustworthy,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “Together, we can develop ways to test and evaluate AI systems so that we can benefit from AI’s potential while also protecting safety and privacy.”
NIST is seeking responses from organizations with relevant expertise and capabilities willing to enter into a consortium cooperative research and development agreement (CRADA) to support and demonstrate pathways to enable safe and trustworthy AI. Expected contributions from members include combinations of expertise, products, data, and models.
Both the USAISI and the consortium are part of NIST’s response to the recently released Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. NIST states that it will rely heavily on engagement with industry and relevant stakeholders in carrying out the executive order, and that the new institute and consortium are central to those efforts.
“Participation in the consortium is open to all organizations interested in AI safety that can contribute through combinations of expertise, products, data, and models,” said Jacob Taylor, NIST’s senior advisor for critical and emerging technologies. “NIST is responsible for helping industry understand how to manage the risks inherent in AI products. To do so, NIST intends to work with stakeholders at the intersection of the technical and the applied. We want the U.S. AI Safety Institute to be highly interactive because the technology is emerging so quickly, and the consortium can help ensure that the community’s approach to safety evolves alongside.”
Interested organizations with relevant technical capabilities should submit a letter of interest by December 2, 2023. See NIST’s announcement and the Federal Register notice for more information.
Related Workshop: NIST plans to host a hybrid workshop on November 17, 2023, to engage in a conversation about AI safety. In-person attendance will take place at the Department of Commerce in Washington, DC. Additional details and registration information will be posted to NIST’s website in the coming weeks.