The National Institute of Standards and Technology is seeking information and feedback from interested stakeholders to help inform the development of a framework for managing risks associated with artificial intelligence.
NIST is working to develop a voluntary AI risk management framework meant to integrate trustworthiness considerations into the development, use and assessment of AI systems and services, according to a request for information notice posted Thursday in the Federal Register.
The agency wants the framework to advance the development of new approaches to address characteristics of trustworthiness, including accuracy, reliability, privacy, security and safety.
Stakeholders are asked to respond to several questions about the proposed AI framework, including the challenges AI actors face in improving risk management, other characteristics of AI trustworthiness that the framework should consider and the extent to which organizations integrate AI risks into their enterprise risk management.
Responses to the RFI are due Aug. 19.