Artificial Intelligence: DHS Needs to Improve Risk Assessment Guidance for Critical Infrastructure Sectors


What GAO Found

Federal agencies with a lead role in protecting the nation’s critical infrastructure sectors are referred to as sector risk management agencies. These agencies, in coordination with the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA), were required to develop and submit initial risk assessments for each critical infrastructure sector to DHS by January 2024. Although the agencies submitted the assessments as required, none fully addressed the six activities that establish a foundation for effectively assessing and mitigating potential artificial intelligence (AI) risks. For example, while all assessments identified AI use cases, such as monitoring and enhancing digital and physical surveillance, most did not fully identify potential risks, including the likelihood that a risk would occur. None fully evaluated the level of risk, because none included a measurement reflecting both the magnitude of harm (level of impact) and the probability of an event occurring (likelihood of occurrence). Further, because the level of risk was not evaluated, no agency fully mapped mitigation strategies to risks.
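To make the risk-evaluation concept concrete, the sketch below is a minimal, hypothetical illustration, not GAO’s or DHS’s methodology, of how a risk level can combine the magnitude of harm (impact) with the probability of occurrence (likelihood) so that mitigation strategies can be mapped to evaluated risks. The scale labels, score thresholds, and example use case are assumptions made for illustration.

```python
# Illustrative sketch only: one common way to evaluate a risk level that
# reflects both impact and likelihood. Scales, thresholds, and the example
# AI use case are hypothetical, not drawn from the GAO report.

IMPACT_LEVELS = {"low": 1, "moderate": 2, "high": 3}
LIKELIHOOD_LEVELS = {"unlikely": 1, "possible": 2, "likely": 3}


def evaluate_risk(impact: str, likelihood: str) -> int:
    """Combine impact and likelihood into a single risk score (1-9)."""
    return IMPACT_LEVELS[impact] * LIKELIHOOD_LEVELS[likelihood]


def risk_category(score: int) -> str:
    """Bucket a numeric score into a qualitative risk level."""
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"


# Hypothetical AI use case from a sector risk assessment.
use_case = {
    "name": "AI-enhanced digital surveillance",
    "impact": "high",
    "likelihood": "possible",
}

score = evaluate_risk(use_case["impact"], use_case["likelihood"])
print(f'{use_case["name"]}: risk score {score} ({risk_category(score)})')
```

In a scheme like this, each identified risk receives an evaluated level, which is what allows mitigation strategies to be prioritized and mapped to specific risks, the step the assessments could not complete without a measurement of both impact and likelihood.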

Figure: Extent to Which the Sector Risk Management Agencies (SRMAs) Have Addressed Six Activities in Their Sector Risk Assessments of Artificial Intelligence (AI)

Lead agencies cited several reasons for their mixed progress, including having only 90 days to complete their initial assessments. A key contributing factor was that DHS’s initial guidance on preparing the risk assessments did not fully address all six activities.

DHS and CISA have made various improvements, including issuing new guidance and a revised risk assessment template in August 2024. The template addresses some, but not all, of the gaps that GAO found. Specifically, the new template does not fully address the activities for identifying potential risks, including the likelihood of a risk occurring. CISA officials stated that the agency plans to further update its guidance in November 2024 to address the remaining gaps. Doing so expeditiously would enable lead agencies to use the updated guidance for their required January 2025 AI risk assessments.

Why GAO Did This Study

AI has the potential to introduce improvements and rapidly change many areas. However, deploying AI may increase the vulnerability of the critical infrastructure systems that support the nation’s essential functions, such as supplying water, generating electricity, and producing food. In October 2023, the President issued Executive Order 14110 on the responsible development and use of AI. The order requires lead federal agencies to evaluate and, beginning in 2024, annually report to DHS on AI risks to critical infrastructure sectors.

GAO’s report examines the extent to which lead agencies have evaluated potential risks related to the use of AI in critical infrastructure sectors and developed mitigation strategies to address the identified risks. To do so, GAO analyzed federal policies and guidance to identify activities and key factors for developing AI risk assessments. GAO then assessed the lead agencies’ 16 sector and one subsector risk assessments against these activities and key factors. GAO also interviewed officials to obtain information about the risk assessment process and plans for future templates and guidance.
