This post first appeared on Federal News Network. Read the original article.
The National Institute of Standards and Technology wants to make sure identity systems, which increasingly leverage artificial intelligence and machine learning, are trained on good data and continuously tested to prove their efficacy.
But a NIST official says a key challenge is harnessing robust datasets and testing methodologies across a quickly growing range of applications.
NIST is also in the midst of updating its digital identity guidelines this spring. Ryan Galluzzo, digital identity program lead in NIST’s Applied Cybersecurity Division, said his team is focused on testing and evaluation when it comes to how AI is used in identity solutions.
“We’re not going to be able to create innumerable amounts of requirements for all potential applications of AI and machine learning. There’s just too many,” Galluzzo said Jan. 25 at an event sponsored by the Better Identity Coalition, FIDO Alliance, and the Identity Theft Resource Center.
NIST’s standards are a key facet of the Biden administration’s approach to AI. And while a separate team at NIST is working on an “AI risk management framework,” digital identity — which includes AI-powered technologies like biometrics — is also a key area.
Galluzzo said testing algorithms in operational scenarios with a representative user population is a key best practice. Additionally, organizations should continuously monitor their solutions once deployed, and have processes in place to address inadvertent bias or discrimination.
“The challenge is going to be, from both a standardization perspective and a prioritization perspective: What do we test? How do we test it? What needs to be run in open tests? What are going to be run by vendors themselves as they kind of evaluate and test these programs? What are the most appropriate methodologies?” Galluzzo said.
For several years, NIST has tested facial recognition technologies under what is now known as the “Face Recognition Technology Evaluation” (FRTE), formerly the Face Recognition Vendor Test. The testing has helped surface and address performance differences across different demographic groups.
“We’ve got good standards around biometrics,” Galluzzo said. “In many of these other applications, we don’t necessarily have the same degree of standards and testing methodologies that we can apply to them.”
And Galluzzo said data challenges are paramount, especially since organizing good datasets for testing identity systems can pose privacy challenges.
“We’ve had lots of internal debates and conversations about things like risk analytics tools, and how can we potentially put together something like FRTE, or something similar, [like] challenges or hackathons,” Galluzzo said. “But the data issues are just so overwhelming, as far as being able to have that good testing data set. So folks who have ideas on that, we’re very much open to discussing it.”
For identity systems, Galluzzo said it ultimately comes down to “getting the right representative data to make sure you’re actually representing your population, and then making sure that you’re doing the various different kinds of testing you need over time to actually get to where you want to go.”
Meanwhile, Galluzzo said NIST’s digital identity team is also considering adopting a speedier process for updating the identity guidelines in the future.
Last fall, NIST’s security and privacy controls team unveiled a new “patch” process for its baseline publication, allowing the agency to update those standards in just weeks instead of the usual months- or years-long timeframes.
Galluzzo said the digital identity team is considering doing something similar once it gets through the Revision 4 updates to the guidelines.
“We’re looking at new ways to potentially update the guidance in flight in ways that can better keep pace with technology,” he said.