U.S. Launches International AI Safety Network with Global Partners

11/25/2024

The U.S. Department of Commerce (DoC) and U.S. Department of State recently co-hosted the inaugural convening of the International Network of AI Safety Institutes, an effort to advance global coordination on safe artificial intelligence innovation. The initiative, launched at a two-day conference in San Francisco, will focus on three critical areas: managing synthetic content risks, testing foundation models, and conducting risk assessments for advanced AI systems.

The United States will serve as the network's inaugural chair. Other initial members include Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, and the United Kingdom.

The November 20-21 convening, held ahead of the AI Action Summit in Paris in February 2025, brought together technical experts from each member's AI safety institute (or equivalent government-backed scientific office) to align on priority work areas.

To support these efforts, the network has secured over $11 million in global research funding commitments, with substantial contributions from multiple countries and organizations. The United States, through USAID, is designating $3.8 million this fiscal year to "strengthen capacity building, research, and deployment of safe and responsible AI" in USAID partner countries overseas, including supporting research on synthetic content risk mitigation.

Also this month, the U.S. Artificial Intelligence Safety Institute at the DoC's National Institute of Standards and Technology (NIST) announced the formation of the Testing Risks of AI for National Security (TRAINS) Taskforce, which brings together partners from across the U.S. government to identify, measure, and manage the emerging national security and public safety implications of rapidly evolving AI technology. TRAINS will enable coordinated research and testing of advanced AI models across critical national security and public safety domains, such as radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, conventional military capabilities, and more, NIST reports.

Read more about the International Network of AI Safety Institutes via NIST’s news page.

Since 2017, international standardization efforts for AI have been led by ISO/IEC JTC 1, Subcommittee (SC) 42, Artificial intelligence. SC 42 is the first-of-its-kind international standards committee looking at the full AI IT ecosystem; ANSI serves as the secretariat of the SC, and the U.S. holds the chair. SC 42 is responsible for 33 published ISO standards, including ISO/IEC 42001:2023, Artificial intelligence – Management system; ISO/IEC 22989:2022, Artificial intelligence concepts and terminology; ISO/IEC 23894:2023, Artificial intelligence – Guidance on risk management; and ISO/IEC TR 24368:2022, Overview of ethical and societal concerns, among others, with over 30 more under development.


Related News:

ANSI Organizes Two Key Events in Coordination with USAID to Launch Critical & Emerging Technology Activity for Standards Alliance: Phase 2

Advancing AI Standards Collaboration: ISO, IEC, and ITU Announce 2025 International AI Standards Summit

Using AI Responsibly: U.S. Leads Efforts to Develop ISO/IEC 42001, Artificial Intelligence Management System Standard

CONTACT

Jana Zabinski

Senior Director, Communications & Public Relations

Phone: 212.642.8901
Email: [email protected]

Beth Goodbaum

Journalist/Communications Specialist

Phone: 212.642.4956
Email: [email protected]