
New NIST Guidance Focuses on Global Engagement for AI Standards, Evaluating and Mitigating Generative AI Risks

8/05/2024

New artificial intelligence (AI) safety guidelines and software recently released by the U.S. Department of Commerce's (DoC) National Institute of Standards and Technology (NIST) aim to improve the safety, security, and trustworthiness of AI systems as they continue to advance.

Inside the New AI Guidance

NIST is offering a new software package, named Dioptra, aimed at helping AI developers and customers determine how well their AI software stands up to a variety of adversarial attacks. The open-source software, a response to Section 4.1 of Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, can help the community, including government agencies and small- to medium-sized businesses, conduct evaluations to assess AI developers' claims about their systems' performance, the DoC reports.
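The announcement does not describe the software's internals, but a minimal sketch can convey the kind of adversarial evaluation such a testbed automates. The example below assumes a PyTorch image classifier with inputs scaled to [0, 1]; the function name, the epsilon value, and the data loader are illustrative placeholders, not part of NIST's actual tooling.

```python
import torch
import torch.nn.functional as F

def fgsm_accuracy(model, loader, epsilon=0.03):
    """Measure classifier accuracy on FGSM-perturbed inputs.

    A toy stand-in for the kind of adversarial evaluation a
    testbed like NIST's automates -- not NIST's actual tooling.
    """
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        # Track gradients with respect to the input, not the weights.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Fast gradient sign method: nudge each pixel in the
        # direction that increases the loss, then clamp to [0, 1].
        x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
        preds = model(x_adv).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
    return correct / total
```

A developer's robustness claim could then be checked by comparing adversarial accuracy at several epsilon values against accuracy on clean data.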

Additionally, NIST has released three final guidance documents (originally distributed in April for public comment):

  • A Plan for Global Engagement on AI Standards (NIST AI 100-5) is designed to drive the worldwide development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing, the DoC reports. The guidance was informed by priorities outlined in the NIST-developed U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools, and is tied to the U.S. Government National Standards Strategy for Critical and Emerging Technology (USG NSSCET).

  • AI RMF Generative AI Profile (NIST AI 600-1) can help organizations identify the unique risks posed by generative AI and proposes actions for generative AI risk management that best align with their goals and priorities. The guidance lists 12 risks, from "hallucinated" output to the production of mis- and disinformation, with more than 200 actions that developers can take to manage them. It is intended as a companion resource for users of NIST's AI Risk Management Framework.

  • Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication (SP) 800-218A) covers aspects of the training and use of AI systems and identifies potential risk factors and strategies to address them. Among other recommendations, it suggests analyzing training data for signs of poisoning, bias, homogeneity, and tampering; a brief sketch of such checks follows this list. The document is designed to be used alongside the Secure Software Development Framework (SP 800-218).
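SP 800-218A is prose guidance rather than code, but the data-analysis recommendation above can be made concrete. The sketch below shows two hedged examples of such checks: verifying dataset files against a trusted hash manifest (a tampering signal) and flagging a dominant class label (a crude homogeneity/bias signal). The manifest format, file layout, and 50% threshold are assumptions for illustration only, not drawn from the NIST document.

```python
import hashlib
import json
from collections import Counter
from pathlib import Path

def verify_manifest(data_dir: str, manifest_path: str) -> list[str]:
    """Return files whose SHA-256 no longer matches a trusted manifest.

    The manifest is assumed to map relative paths to hex digests,
    e.g. {"images/001.png": "ab12..."}. A substituted or altered
    file (one possible poisoning vector) shows up as a mismatch.
    """
    manifest = json.loads(Path(manifest_path).read_text())
    mismatched = []
    for rel_path, expected in manifest.items():
        digest = hashlib.sha256(
            (Path(data_dir) / rel_path).read_bytes()
        ).hexdigest()
        if digest != expected:
            mismatched.append(rel_path)
    return mismatched

def flag_label_skew(labels: list[str], max_share: float = 0.5) -> list[str]:
    """Flag class labels whose share of the dataset exceeds max_share.

    A crude homogeneity check: a class that dominates the data is a
    signal to investigate for bias before training on it.
    """
    counts = Counter(labels)
    total = len(labels)
    return [label for label, n in counts.items() if n / total > max_share]
```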

Further guidance is offered in the public draft, Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1), which outlines voluntary best practices for how foundation model developers can protect their systems from being misused to cause deliberate harm to individuals, public safety, and national security. The guidance, released by NIST's U.S. AI Safety Institute, offers seven key approaches for mitigating the risk that models will be misused, along with recommendations for how to implement them and how to be transparent about their implementation. NIST is accepting public comments on the initial draft until September 4.

“For all its potentially transformational benefits, generative AI also brings risks that are significantly different from those we see with traditional software,” said Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and NIST director. “These guidance documents and testing platform will inform software creators about these unique risks and help them develop ways to mitigate those risks while supporting innovation.”

Access more information about the guidance via the NIST and DoC websites.

 

Related News:

Understanding AI’s Capabilities: NIST Launches Program to Advance Sociotechnical Testing and Evaluation for AI

Commerce Department Releases Strategic Vision on AI Safety for the U.S. Artificial Intelligence Safety Institute

NIST Issues Call for Participants in New AI Safety Consortium

CONTACT

Jana Zabinski

Senior Director, Communications & Public Relations

Phone: 212.642.8901

Email: [email protected]

Beth Goodbaum

Journalist/Communications Specialist

Phone: 212.642.4956

Email: [email protected]