New artificial intelligence (AI) safety guidelines and software recently released by the U.S. Department of Commerce’s (DoC) National Institute of Standards and Technology (NIST) aim to help improve the safety, security, and trustworthiness of AI systems as they continue to advance.
Inside the New AI Guidance
NIST is offering a new software package aimed at helping AI developers and customers determine how well their AI software stands up to a variety of adversarial attacks. The open-source software, released in response to the Executive Order (section 4.1) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, can help the community, including government agencies and small- to medium-sized businesses, conduct evaluations to assess AI developers’ claims about their systems’ performance, the DoC reports.
Additionally, NIST has released three final guidance documents, originally distributed in April for public comment.
Further guidance is offered in the public draft, Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1), which outlines voluntary best practices for how foundation model developers can protect their systems from being misused to cause deliberate harm to individuals, public safety, and national security. The guidance, released by NIST’s U.S. AI Safety Institute, offers seven key approaches for mitigating the risk that models will be misused, along with recommendations for how to implement them and how to be transparent about their implementation. NIST is accepting public comments on the initial draft through September 4.
“For all its potentially transformational benefits, generative AI also brings risks that are significantly different from those we see with traditional software,” said Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and NIST director. “These guidance documents and testing platform will inform software creators about these unique risks and help them develop ways to mitigate those risks while supporting innovation.”
Access more information about the guidance via NIST and the DoC websites.
Related News:
NIST Issues Call for Participants in New AI Safety Consortium