Applying Design Principles for Responsible AI in Homeland Security

This post first appeared on IBM Business of Government. Read the original article.

Tuesday, September 10, 2024

Co-Authors: Kayla Schwoerer, Andrea Patrucco, and Ilia Murtazashvili

The IBM Center for the Business of Government has partnered with Homeland Security Today to share insights from the Future Shocks initiative and subsequent book, Transforming the Business of Government: Insights on Resiliency, Innovation, and Performance. The means and methods traditionally employed by government face a significant challenge posed by the advent of disruptive technologies like artificial intelligence, the changing nature of physical and cyber threats, and the impact of social media and miscommunication on society. This partnership will share insights on how the homeland community can build resilience in thinking and action, innovate while running, and stay ahead of the enemy. Through an ongoing column and paired webinars, we will explore how best practices, questions about the unknown, and insights from several initiatives can be applied.

Given the multiple areas that homeland security encompasses in the U.S. – from direct actions to secure the homeland to strategic efforts around critical infrastructure security and resilience – the market for AI technologies to support the homeland security enterprise is ever expanding. For example, AI is already being used for data analytics and management tasks in agencies, and increasingly in tools to support security and surveillance. As homeland security spending has increased in recent years,1 such technologies have caught the eye of private investors and venture capitalists, who are investing more heavily in tech companies and start-ups specializing in building and marketing AI tools for government defense and homeland security efforts.

In Chapter 8 of the IBM Center for The Business of Government book, Transforming the Business of Government: Insights on Resiliency, Innovation, and Performance, we explore how public procurement processes can leverage AI to improve customer experience (CX) and provide seven guiding design principles to ensure responsible AI use in the public sector. The insights are especially relevant to homeland security as they underscore the importance of ethical and effective AI deployment in enhancing public trust and operational efficiency. This column summarizes the chapter’s key elements, focusing on their implications and applications for homeland security.

With this shift in the market for AI-powered tools – one in which the market may be suddenly brimming with vendors eager to get a “piece of the pie” while agencies are tasked with assessing rapidly developing technologies that can effectively, but also ethically and responsibly, support homeland security efforts – it is perhaps more important than ever that the design principles for ethical use of AI in and through public procurement are top of mind. Below, we outline their relevance to the homeland security enterprise and more specific ways that the implementation tools can be applied to support the procurement of ethical and responsible AI technologies for homeland security efforts.

Ethical Guidelines and Principles. Procuring ethical and responsible AI technologies begins with selecting vendors that adhere to ethical standards and guidelines for responsible AI development, deployment, and use. In practice, this might mean building tools that comply with existing frameworks such as the Principles for Trustworthy AI in Government, an agency’s code of conduct for vendors, or a vendor’s own mission statement or ethical guidelines that inform how it builds its technology.

Privacy Protection at the Forefront. AI-driven data analytics platforms hold great promise for supporting many aspects of homeland security, from terrorism prevention to managing citizenship and immigration, and everything in between. However, the vast amounts of personal data that these platforms collect and store mean that privacy protection is critical. AI technologies procured must prioritize privacy through robust anonymization and encryption protocols. Procurement managers should look for vendors with valid privacy impact assessments and consider the validity of a vendor’s data anonymization techniques and encryption protocols in procurement decisions.
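As a minimal sketch of one anonymization technique a procurement reviewer might ask a vendor about, the snippet below shows keyed pseudonymization: direct identifiers are replaced with HMAC digests so records can still be linked across datasets without exposing raw values. The field names, record contents, and key-handling approach here are illustrative assumptions, not a prescribed standard; real deployments would also need key rotation, access controls, and a privacy impact assessment.

```python
import hmac
import hashlib

def pseudonymize(record: dict, pii_fields: set, secret_key: bytes) -> dict:
    """Replace direct identifiers with keyed hashes so records remain
    linkable across datasets without exposing the raw values."""
    out = {}
    for field, value in record.items():
        if field in pii_fields:
            digest = hmac.new(secret_key, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()
        else:
            out[field] = value
    return out

# Hypothetical case record (illustrative data only)
record = {"name": "Jane Doe", "case_id": 1042, "status": "pending"}
safe = pseudonymize(record, {"name"}, secret_key=b"rotate-me-regularly")
```

Because the hash is keyed, the same name maps to the same token across datasets held by one agency, but an outside party without the key cannot reverse or reproduce the mapping.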

Combating Bias for Fairness. Addressing and minimizing bias in AI systems is vital to ensure fairness. Subjecting AI tools to rigorous bias testing before adoption can help avoid discriminatory outcomes. As the Department of Homeland Security (DHS) ramps up its efforts to recruit more AI experts to the agency, it should consider how it is using AI-enabled recruitment systems in those efforts. For criminal justice applications, such as the identification of suspicious vehicles, AI models must be rigorously tested and transparent so that bias arising from flawed data or discriminatory algorithms does not compromise fairness or equity in law enforcement. Implementing bias detection and mitigation features helps promote equal treatment and non-discrimination, in turn helping DHS better align itself with its goals of advancing equity and maintaining public trust.
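One simple form of the pre-adoption bias testing described above is a demographic parity check: comparing the rate of positive outcomes a model produces across demographic groups. The sketch below is a minimal illustration with made-up data; real evaluations would use multiple fairness metrics (equalized odds, calibration) and statistically meaningful sample sizes.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Compare positive-outcome rates across demographic groups.
    Returns the largest difference in selection rate between groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy screening outcomes for two applicant groups (illustrative data only):
# group A is selected 3 of 4 times, group B only 1 of 4 times.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

A large gap does not by itself prove discrimination, but it flags the tool for closer review before an agency adopts it.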

Interpretability and Explainability. Greater transparency through interpretability and explainability in AI systems increases trustworthiness by clarifying AI-driven decisions. AI decision-making systems in the homeland security context are often highly domain-specific and rely on many different types of data integrated from different systems, which can create additional barriers to transparency. For example, supporting disaster survivors requires AI technologies that can quickly and efficiently assess the severity of damage after a disaster so that human analysts can accurately review the outputs of the models. These outputs must be both interpretable and explainable so that analysts can make sound decisions that can then be communicated to property owners, who can contest decisions they feel were made unfairly. Assessing a vendor’s transparency protocols and decision-making documentation to ensure model interpretability and explainability is therefore critical: it fosters trust while also ensuring fairness and accountability.

Promoting Social Impact and Inclusion. Homeland security efforts support and benefit a diverse population, many of whom are not U.S. citizens. AI technologies should reflect this diversity to enhance customer experience. AI-based language translation systems for public services, for instance, must support multiple languages, including those spoken by marginalized communities.

Responsible Vendor Governance. Partnering with vendors who uphold responsible governance practices is crucial. Vendors must adhere to data privacy and security protocols to ensure that AI solutions are designed ethically and align with national security standards. 

The application of these design principles aligns with the DHS Artificial Intelligence 2024 roadmap,2 which aims to use AI not only to enhance the DHS homeland security mission but also to foster a transformative, customer-centric approach to government operations. Responsible use of AI can significantly improve the relationship between the government and its constituents, promoting trust, transparency, and accountability. As agencies across the government involved in homeland security efforts continue to innovate and integrate AI technologies, these principles can guide their efforts to protect the American people while upholding privacy, civil rights, and civil liberties.

Click here to read the full chapter.

 

Sources

1 See https://watson.brown.edu/costsofwar/costs/economic/budget/dhs

2 See https://www.dhs.gov/sites/default/files/2024-03/24_0315_ocio_roadmap_artificialintelligence-ciov3-signed-508.pdf  

This blog post was first published on the GTSC website.
