This post first appeared on IBM Business of Government.
A threefold strategy to help government leaders and public managers determine how best to approach using AI
Artificial Intelligence (AI) has moved into the mainstream of business and government. Business leaders are rushing to take advantage of the benefits AI can bring to a wide array of industries to help increase productivity. Government leaders are also moving forward, but with appropriate caution. When considering the use and application of AI-related technologies, government leaders weigh different factors than their private sector counterparts. Whether it is deploying self-driving electric trolleys in a city or retrofitting city streetlights with sensors to make them “smarter,” these leaders must address issues of accountability, transparency, ethics, equity, common good, effectiveness, efficiency, managerial capacity, and political legitimacy.
In the recent IBM Center report, Risk Management in the AI Era: Navigating the Opportunities and Challenges of AI Tools in the Public Sector, Professors Justin Bullock and Matthew Young outline a threefold strategy to assist government leaders and public managers on how best to approach using AI.
Understand Federal Guidance on Managing the Use of AI. This report begins by highlighting the ever-evolving guidance offered by the U.S. federal government to manage the opportunities and challenges of AI. This overview is by no means exhaustive, but rather a snapshot in time bringing attention to a number of significant actions the U.S. federal government has taken on how best to approach the management of risk associated with the development and use of AI within government and throughout society.
This guidance, shaped across two administrations, focuses on how best to leverage the opportunities present in AI tools, such as increases in task efficiency, decreases in task costs, improvements in decision-making quality, and overall improvements in the delivery of government services. It also reminds government leaders to make sure issues like safety, security, privacy, public trust, job loss, malicious hacking attacks, and biased data are considered and weighed prior to pursuing the use of AI tools in the delivery of government missions.
Here’s a synopsis of top federal actions outlining how to approach the use and application of AI in government:
- The federal government created a research report summarizing what is known by the leading experts on AI.
- This report fueled a National AI R&D strategic plan that set into motion an initial seven strategies for the federal government to begin to systematically approach this risk management challenge.
- Following the presidential transition, the Trump administration—also recognizing the importance of having a holistic risk management approach to the AI era—updated this strategic plan.
- To encourage more action on the strategic plan, the Trump administration issued an Executive Order directing agencies to begin implementing the strategies and recommendations put forward in the plan, while encouraging the development of partnerships across the federal government and among the numerous relevant stakeholders.
- Finally, the legislative branch has taken up this issue, attempting both to codify parts of the strategic plan and to authorize a Center of Excellence within the GSA to serve as the organizational home for implementing and maintaining it.
AI Tools and Risk Management: Evaluative Criteria for Using AI Tools in the Public Sector. Innovation and risk are inextricable from one another. Implementing AI tools in the public sector carries the additional hazard of novelty. As a relatively new technology with a simultaneously expanding range of applicability, there are fewer empirical examples to draw upon for evidence of success. Young and Bullock’s report analyzes the nature of AI tools and the important characteristics of the tasks these tools could be used for, creating an evaluative framework for using AI tools to accomplish the work of government. There are two principal components to the framework. The first synthesizes the technology behind AI tools and task characteristics to identify likely best-fit scenarios between technology and task. The second draws from seminal work in public administration theory to provide a set of evaluative criteria for using AI tools. These criteria are meant to be used to assess both opportunities and hazards before adoption of an AI tool, and to design program evaluation processes after the tool’s implementation.
Seizing the opportunities and mitigating the hazards associated with adopting AI tools requires paying careful attention to the match between technology and task, understanding the organizational status quo with respect to task execution and performance, and having a clear idea of what constitutes success with respect to outputs and outcomes.
Young and Bullock offer a five-point evaluative framework adapted from Salamon’s New Tools of Governance for public managers to use when considering whether or not to adopt AI tools. The five criteria are:
- Effectiveness – Organizations should begin by carefully considering what the appropriate threshold for task quality ought to be. Once those thresholds are determined, they can consider small-scale or, ideally, controlled experiments comparing the effectiveness of an AI tool’s task augmentation against the base case of current processes.
- Efficiency – According to Bullock and Young, a robust risk management approach to adopting AI tools should perhaps downweigh the value of efficiency relative to the other criteria used in their framework. This is particularly important when the use case is new to the organization; in these instances, decision makers should be less preoccupied with whether they could and more focused on whether they should.
- Equity – Equity-related concerns with AI tools are perhaps the most challenging with respect to risk management. Bullock and Young suggest that when assessing equity factors for adopting AI tools, risk management strategies should use the observed performance of the organization and its human agents as the base case, rather than an ideal state of perfect equality.
- Manageability – Manageability is a measure of how simple or difficult a tool is to implement and operate. According to Bullock and Young, the most vexing manageability-related hazard that AI tools introduce is that even though their decision-making processes are modeled after the mammalian brain, there is no reliable way to understand why these systems make a particular choice or determination. So, while we could ask a human agent to describe the thought process or logic model that led them to a particular determination, AI tools are often “black boxes.”
- Political legitimacy – It is impossible to overstate the importance of assessing whether an AI tool is likely to be viewed as legitimate by the public and by those with veto authority within the organization, and whether the adoption decision has sufficient support to make implementation feasible. Each of the preceding criteria moderates perceptions of legitimacy and feasibility; all else equal, increased effectiveness, efficiency, equity, and manageability ought to improve both, and vice versa.
Evaluating potential adoption cases against these criteria will reduce uncertainty with respect to both the technology and the status quo task environment, allowing for more informed decision making and improved risk management.
Following the introduction of this guiding framework for government leaders and public managers, the authors offer two cases of AI tool adoption by local governments. They examine the risk management strategy of the City of Bryan, Texas, which became one of the first municipalities to offer self-driving shuttle rides to residents as part of a partnership with industry and university researchers. The other case study examines how the City of Syracuse has both invested in the infrastructure needed to support autonomous vehicles and other AI tools, and proactively developed a comprehensive risk management strategy to guide future adoption and implementation decisions.
Finally, Professors Young and Bullock offer recommendations to help government leaders and public managers identify both the relevant opportunities and the hazards presented by AI tools as part of their overall risk management strategy. These recommendations seek to aid decision making that maximizes the opportunities from AI tools while minimizing the challenges and hazards. While this list is certainly not exhaustive, it provides those doing the work of government with a starting point to quickly think through some of the most important considerations for adding AI tools to their broader toolkit for delivering government services as effectively, efficiently, and equitably as possible.
Risk Management in the AI Era: Navigating the Opportunities and Challenges of AI Tools in the Public Sector serves as an excellent companion piece to recent IBM Center reports that examine both AI and aspects of risk management that can help government agencies. The Center recently collaborated with the Partnership for Public Service on several reports, which include Using Artificial Intelligence to Transform Government; More Than Meets AI; and More Than Meets AI: Part II. These reports focus on the evolving use of AI in government, how this technology might affect federal employees, and how best to identify the risks associated with pursuing AI technologies. Along with these AI-specific reports, the Center has a library of risk management research, such as Managing Risk in Government: An Introduction to Enterprise Risk Management by Karen Hardy; Managing Risk, Improving Results: Lessons for Improving Government Management from GAO’s High Risk List by Donald Kettl; Improving Government Decision Making through Enterprise Risk Management by Thomas Stanton and Douglas Webster; Risk Management and Reducing Improper Payments: A Case Study of the U.S. Department of Labor by Dr. Robert Greer and Justin B. Bullock; and Managing Cybersecurity Risk in Government by Anupam Kumar, James Haddow, and Rajni Goel.
In light of the government and national response to COVID-19, the Center recently launched a blog series on the importance of risk-based decision making in government. This series will explore such topics as lessons learned from the evolution of risk management in government; enterprise risk management (ERM) and how it can help improve decision making; managing the risks associated with artificial intelligence; and managing specific financial, IT, cyber, and program risks.
The IBM Center has a wealth of resources that can help government executives mitigate uncertainty by managing the realities of risk, and we welcome the addition of Risk Management in the AI Era: Navigating the Opportunities and Challenges of AI Tools in the Public Sector by Justin Bullock and Matthew Young to our rich library of research. We hope you find their report, and all the reports referenced above, helpful in delivering the business of government.