The agency is engaging stakeholders in the process through multiple venues.
National Institute of Standards and Technology officials are gleaning insights from a range of players as they work to draft congressionally directed guidance promoting the responsible use of artificial intelligence technologies.
That document in the making, the Artificial Intelligence Risk Management Framework, or AI RMF, is aimed at building the public’s trust in the increasingly adopted technology, according to a recent request for information.
Responses to the RFI are due Aug. 19 and will inform the earliest stages of the framework’s drafting.
“We want to make certain that the AI RMF reflects the diverse experiences and expertise of those who design, develop, use, and evaluate AI,” Elham Tabassi, NIST’s Information Technology Laboratory chief of staff, told Nextgov in an email Monday.
Tabassi is a scientist who also serves as federal AI standards coordinator and as a member of the National AI Research Resource Task Force, which was formed under the Biden-Harris administration earlier this summer. She shed light on some of what will go into this new framework’s development.
AI capabilities are transforming how humans operate in meaningful ways, but also present new technical and societal challenges—and confronting those can get sticky. NIST officials note in the RFI that “there is no objective standard for ethical values, as they are grounded in the norms and legal expectations of specific societies or cultures.” Still, they note that it is generally agreed that AI must be made, assessed and used in a manner that fosters public confidence.
“Trust,” the RFI reads, “is established by ensuring that AI systems are cognizant of and are built to align with core values in society, and in ways which minimize harms to individuals, groups, communities, and societies at large.”
Tabassi pointed to some of NIST’s existing AI-aligned efforts that home in on “cultivating trust in the design, development, use and governance of AI.” They include developing data and establishing benchmarks to evaluate the technology, participating in the making of technical AI standards, and more. On top of those efforts, Congress directed the agency to engage the public and private sectors in the creation of a new voluntary guide to improve how people manage risks across the AI lifecycle. The RMF was called for in the National AI Initiative Act of 2020 and aligns with other government recommendations and policies.
“The framework is intended to provide a common language that can be used by AI designers, developers, users, and evaluators as well as across and up and down organizations,” Tabassi explained. “Getting agreement on key characteristics related to AI trustworthiness—while also providing flexibility for users to customize those terms—is critical to the ultimate success of the AI RMF.”
Officials lay out various aims and elements of the guide in the RFI. Those involved intend for it to “provide a prioritized, flexible, risk-based, outcome-focused, and cost-effective approach that is useful to the community of AI designers, developers, users, evaluators, and other decision-makers and is likely to be widely adopted,” they note. Further, the guidance will take the form of a “living document” that is updated as the technology and approaches to implementing it evolve.
Broadly, NIST requests feedback on its approach to crafting the RMF and the elements it plans to include. Officials ask respondents to weigh in on hurdles to improving their management of AI-related risks, how they define the characteristics and metrics of AI trustworthiness, standards and models the agency should consider in this process, and concepts for structuring the framework, among other topics.
“The first draft of the RMF and future iterations will be based on stakeholder input,” Tabassi said.
Although the guidance will be voluntary, she noted that such engagement could help lead to wider adoption once the guide is completed. Tabassi also confirmed that NIST is set to hold a two-day workshop, “likely in September,” to gain more input from those interested.
“We will announce the dates soon,” she said. “Based on those responses and the workshop discussions, NIST will develop a timeline for developing the framework, which likely will include multiple drafts to allow for robust public input. Version 1.0 could be published by the end of 2022.”