2/29/2024
What should the AI standards landscape for the healthcare and financial sectors look like, and what stakeholders could be better represented in related standardization efforts? These are among the questions considered at an industry listening session hosted by the American National Standards Institute (ANSI) on February 27.
Facilitated by Mary Saunders, ANSI senior vice president of government relations and public policy, the session gathered input that will help inform the National Institute of Standards and Technology (NIST)'s implementation strategy for the President's Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued in October 2023. The EO directs NIST to develop evaluation, red-teaming, safety, and cybersecurity guidelines; facilitate the development of consensus-based standards; and provide testing environments for the evaluation of AI systems.
As the Promise of AI Grows, Rethinking Approaches to Standards Development
While conventional and generative AI have already been deployed across the financial sector, used in everything from letter writing to data analytics, challenges are also on the rise, including risks of financial fraud stemming from identity theft and synthetic identities, according to a participant.
Within healthcare, AI has revolutionized everything from health plans to the delivery of care. But the industry's rapid adoption of AI also poses a number of risks for both administrators and patients.
Experts representing both sectors voiced areas of concern, noting that while the "black box" nature of deep learning and neural networks creates new requirements, rules are often written with wide-ranging definitions, which can lead to conflicts and confusion. To that end, attendees expressed a need for more specific and narrowly scoped definitions and terminology in AI. Healthcare representatives in particular called for greater clarity on AI nomenclature and taxonomy, especially as different areas of the sector use terminology in different ways. A specific nomenclature and taxonomy for different technologies can support standards and regulations and help clarify the risks associated with each application of the technology. Safety and equity are also leading concerns.
With AI transforming the healthcare and financial sectors so rapidly, Saunders asked how the standards community can keep pace and address standards needs amid rising challenges. One participant noted that their organization has committed to completing a standard within 12 to 18 months to keep up with technology changes. Attendees discussed the benefits of standards that address incremental changes and updates, as compared to large-scale guidelines that seek to solve all challenges within one document.
Identifying Stakeholders and Outreach in Standards Development
The session also addressed what stakeholder types could be better represented in standards development. Participants from both sectors noted the importance of diversity across the standards development process. In the AI supply chain, there are stakeholders that develop AI solutions; individuals who implement, purchase, maintain, and oversee governance of AI; and ultimately, end users. The standards process should reflect the diversity of these stakeholders, from consumers to small and medium-sized enterprises (SMEs) to civil society, as these individuals can contribute their knowledge across different areas.
When it comes to engagement and improving the visibility of standards work, it is essential for standards developing organizations to communicate a standard's usability and value. Greater efforts to explain why standards make a difference to the stakeholders who use them may encourage participation throughout a standard's lifecycle.
Both sectors also discussed the importance of promoting standards and their value to SMEs and oft-underrepresented startups. Patient advocacy groups and feedback from the administrative side of healthcare could also diversify and strengthen AI standards development.
Participants Share Engagement Efforts and Resources to Strengthen AI Standardization
Experts noted collaboration with the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Joint Technical Committee (JTC) 1, Information technology, Subcommittee (SC) 42, Artificial intelligence, the first-of-its-kind international standards committee looking at the full AI IT ecosystem.
Wael William Diab, JTC 1 SC 42 chair, reflected on the group's efforts, including its international standards portfolio to enable responsible and certifiable AI systems. He noted that JTC 1 has worked internationally with UNESCO, the Council of Europe, and other entities to examine how standards can be used ethically. Diab said the group is looking to partner with others, noting that the committee runs free virtual workshops focused on emerging work, with sessions on healthcare, financial services, and other domains. The group has collaborated with ISO Technical Committee (TC) 215, Health informatics, which recently developed a Technical Report on Digital Therapeutics Health Software Systems.
As another AI standards resource, INCITS will host a webinar, "Unlocking the Power of AI Management Systems: A Workshop on ISO/IEC 42001," on March 7. The session will provide an overview of how to use the recently published ISO/IEC 42001, AI Management System standard, developed by ISO/IEC JTC 1 SC 42, to organize and document the efforts necessary to realize responsible AI and meet the growing body of regulatory requirements. INCITS administers the U.S. Technical Advisory Group (TAG) for JTC 1 SC 42.
ANSI thanks the industry participants for their valuable insights as the Institute continues to support the public and private sectors in standardization initiatives for critical and emerging technologies.