The U.S. Department of Commerce (DoC) and U.S. Department of State recently co-hosted the inaugural convening of the International Network of AI Safety Institutes, an effort to advance global coordination on safe artificial intelligence innovation. The initiative, launched at a two-day conference in San Francisco, focuses on three critical areas: managing synthetic content risks, testing foundation models, and conducting risk assessments for advanced AI systems.
The United States will serve as the inaugural chair and member of the global network. Other initial members include Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, and the United Kingdom.
Ahead of the AI Action Summit, to be held in Paris in February 2025, the November 20-21 convening brought together technical experts from each member’s AI safety institute (or equivalent government-backed scientific office) to align on priority work areas.
To support these efforts, the network has secured over $11 million in global research funding commitments, with substantial contributions from multiple countries and organizations. The United States, through USAID, is designating $3.8 million this fiscal year to “strengthen capacity building, research, and deployment of safe and responsible AI” in USAID partner countries overseas, including supporting research on synthetic content risk mitigation.
Also this month, the U.S. Artificial Intelligence Safety Institute at the DoC’s National Institute of Standards and Technology (NIST) announced the formation of the Testing Risks of AI for National Security (TRAINS) Taskforce, which brings together partners from across the U.S. government to identify, measure, and manage the emerging national security and public safety implications of rapidly evolving AI technology. TRAINS will enable coordinated research and testing of advanced AI models across critical national security and public safety domains, including radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, and conventional military capabilities, NIST reports.
Read more about the International Network of AI Safety Institutes via NIST’s news page.
Since 2017, international standardization efforts for AI have been led by ISO/IEC JTC 1, Subcommittee (SC) 42, Artificial intelligence. SC 42 is the first-of-its-kind international standards committee looking at the full AI IT ecosystem; ANSI serves as the secretariat of the SC, and the U.S. holds the chair. SC 42 is responsible for 33 published ISO standards, with over 30 more under development. These include ISO/IEC 42001:2023, Artificial intelligence – Management system; ISO/IEC 22989:2022, Artificial intelligence concepts and terminology; ISO/IEC 23894:2023, Artificial intelligence – Guidance on risk management; and ISO/IEC TR 24368:2022, Overview of ethical and societal concerns, among others.
Related News: