
Provide Your Comments: NIST Draft AI Risk Management Framework

3/22/2022

Send Feedback by April 29

As part of its ongoing effort to manage risks posed by artificial intelligence (AI), the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) is seeking comments by April 29 on an initial draft of its AI Risk Management Framework (AI RMF). Feedback will inform the Framework, which is intended to help better manage AI-related risks to individuals, organizations, and society. The voluntary Framework aims to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

NIST’s work on the Framework is consistent with its broader AI efforts, recommendations by the National Security Commission on Artificial Intelligence, and the Plan for Federal Engagement in AI Standards and Related Tools. Work on the AI RMF was launched as NIST’s response to the Executive Order on Maintaining American Leadership in AI. NIST is also creating a companion guide to the AI RMF with additional practical guidance.

According to NIST, the Framework aims to foster the development of innovative approaches to address characteristics of trustworthiness, including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of harmful uses. The Framework should consider and encompass principles such as “transparency, accountability, and fairness during pre-design, design and development, deployment, use, and test and evaluation of AI technologies and systems.”

The draft builds on a concept paper released in December and NIST’s earlier Request for Information, for which ANSI issued a call to action within its stakeholder community.

Attend NIST’s Workshop on the AI Risk Management Framework

To inform stakeholders about the AI RMF, NIST will hold its second workshop on March 29-31. The first two days will address all aspects of the AI RMF, while the final day will offer a deeper look at issues related to mitigating harmful bias in AI.

In March, NIST announced that it had revised its publication Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270) to reflect public comments the agency received on the draft version released last summer. The publication examines how, “beyond the machine learning processes and data used to train AI software, bias is related to broader societal factors – human and systemic institutional in nature – which influence how AI technology is developed and deployed,” as NIST reports.

See related news:

There’s More to AI Bias Than Biased Data, NIST Report Finds

Join NIST’s Spring Semiconductor Metrology R&D Workshops

CONTACT

Jana Zabinski

Senior Director, Communications & Public Relations

Phone: 212.642.8901

Email: [email protected]

Beth Goodbaum

Journalist/Communications Specialist

Phone: 212.642.4956

Email: [email protected]