This post first appeared on IBM Business of Government.
Significant attention has been paid recently to how best to approach potential regulation of artificial intelligence (AI).
Blog Co-Author: Virginia Huth currently serves as the SES Assistant Commissioner of the Office of Regulatory and Oversight Systems in the Office of Technology Transformation Services at the U.S. General Services Administration, which is overseeing the modernization of the eRulemaking system. Virginia is writing in her personal capacity; her opinions are her own and do not represent the views of the GSA.
But what about the converse of this proposition: how can AI help governments issue and analyze regulations more efficiently?
A major challenge in the rulemaking process involves managing massive volumes of comments, which agencies must review for substance to inform the basis of the rulemaking. For example, the National Environmental Policy Act rulemaking of 2020 received over 1.1 million comments. AI can improve agency accountability in addressing all substantive comments.
Another major challenge is the complexity of some rulemakings, which can run over 1,000 pages, rely on scientific studies and data to inform the analysis, and take years to complete. Yet rulemakings still typically appear as PDFs, with no way to search the document for key text. Text analytics that tags data for meaning would be an important step toward machine-readable text. Machine-readable text would not only allow rulemakers to identify key parts of a new rulemaking needing coordination, but would also help with retrospective review of prior rules and with identifying opportunities to reduce duplication across multiple rulemakings. It can also help the public find the parts of a rule of particular interest to them, helping them provide meaningful comments.
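To make the idea concrete, here is a minimal sketch, in Python, of what tagging rule text for meaning could look like. The section-heading pattern and the small TOPICS vocabulary are hypothetical stand-ins for a curated agency taxonomy or a trained model; the point is only that tagged, machine-readable records can be searched and cross-referenced in ways a flat PDF cannot.

```python
import json
import re

# Hypothetical topic vocabulary; a real system would rely on a curated
# agency taxonomy or a trained topic/entity model, not this stub.
TOPICS = {
    "environment": ["emissions", "wetlands", "habitat"],
    "economics": ["cost", "benefit", "small business"],
    "procedure": ["comment period", "effective date", "docket"],
}

def tag_sections(rule_text):
    """Split extracted rule text into sections and tag each one,
    turning a flat document into machine-readable records."""
    # Naive split on "Sec. N" headings; real rule text would need
    # a more careful parser.
    sections = re.split(r"\n(?=Sec\.\s*\d+)", rule_text)
    records = []
    for number, section in enumerate(sections, start=1):
        lowered = section.lower()
        tags = sorted(
            topic for topic, terms in TOPICS.items()
            if any(term in lowered for term in terms)
        )
        records.append({"section": number, "tags": tags, "text": section.strip()})
    return records

sample = (
    "Sec. 1 Purpose\nThis rule addresses emissions near wetlands.\n"
    "Sec. 2 Costs\nEstimated cost of compliance for small business."
)
print(json.dumps(tag_sections(sample), indent=2))
```

Records like these could then be indexed so that a rulemaker, or a member of the public, can jump straight to the sections tagged with a topic of interest.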
Some argue that the risks of AI are too high for the regulatory process and that the current process is sufficient. Yet the current process can be strengthened, especially where AI can support key regulatory tenets. The key lies in how AI can further the foundational principles of transparency, public engagement, and accountability.
- Transparency is the cornerstone of the rulemaking process. While proposed rules are posted in the Federal Register and then incorporated into the Code of Federal Regulations (CFR), the average citizen does not check the Federal Register or the CFR frequently. Technology can make it easier for the public to find and search regulations of interest to them, through improvements to solutions such as www.Regulations.gov.
- Public engagement is a vital part of the regulatory process. Beyond written comments, agencies may conduct public meetings to hear directly from the public, or hold private meetings with members of the public that must be a matter of public record. But how many citizens can travel to Washington, DC for meetings? While many have concerns about more comments being written by AI, is there anything inherently wrong with getting help that facilitates engagement and helps someone write a better comment?
- Accountability is critical to the regulatory process: agencies must ground their rules in scientific data and other supporting evidence wherever appropriate. Without sound evidence, rulemakings may be challenged, and some have been overturned in the Federal courts and by Congress. AI can help agencies make informed decisions across multiple parts of the process, reducing the risk of being overturned:
- AI can help agencies sift through immense volumes of data and evidence.
- AI can help in public comment review, assessing large volumes of comments to surface common themes quickly and flagging comments that may come from “fake” sources, mitigating potential impacts of disinformation (see the sketch after this list).
- AI can support retrospective review of existing rules, enabling agencies to identify rules needing updates.
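As one illustration of the comment-review bullet above, the sketch below groups a handful of comments into themes with off-the-shelf TF-IDF vectorization and k-means clustering, and counts exact duplicates as a crude signal of mass-generated submissions. The sample comments and the choice of three clusters are assumptions for demonstration, not a recommended production pipeline.

```python
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented sample comments standing in for a real docket export.
comments = [
    "The proposed emissions limit will hurt small businesses.",
    "Please extend the comment period for this docket.",
    "Emissions limits are long overdue for public health.",
    "Extend the comment period; 30 days is not enough.",
    "Small businesses cannot absorb these compliance costs.",
    "Protecting public health justifies stricter limits.",
]

# Vectorize the comments and group them into candidate themes.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(comments)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Show the top terms that characterize each theme.
terms = vectorizer.get_feature_names_out()
sizes = Counter(kmeans.labels_)
for cluster in range(3):
    top = [terms[i] for i in kmeans.cluster_centers_[cluster].argsort()[::-1][:3]]
    print(f"theme {cluster}: {sizes[cluster]} comments, key terms: {top}")

# Exact-duplicate counts are one crude signal of mass-generated comments.
duplicates = {text: n for text, n in Counter(comments).items() if n > 1}
print("possible mass-generated duplicates:", duplicates)
```

In practice an agency would tune the vectorization and cluster count, and detecting coordinated or fraudulent submissions would require far more than exact-match duplicate counting; the sketch only shows the shape of the workflow.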
Regulatory Development
Agency analysis and decision making for a rulemaking generally begins with a determination of the need for a regulation, followed by conducting research and gathering information to support the analysis (known as “establishing the record”), and then drafting the text of the proposed rule.
Historically, agency staff have engaged in regulatory analysis in a linear fashion, sifting through many documents of varying complexity to develop alternatives for review. AI has enabled public and private sector organizations to collect vast amounts of information and array common themes in an organized fashion, orders of magnitude faster than conventional analysis.
In the same way that cost/benefit analysis and risk estimation are critical tools in the rulemaking process, agencies can consider the costs, benefits, and risks of using AI to support that process. A prior report issued through the National Academy of Public Administration (NAPA) contends that AI can reduce human mistakes and correct bias in law enforcement decision-making, and those lessons could apply to other regulated sectors.
Public Comment Review
A key issue that AI can both introduce and protect against involves “fake comments” or “fake commenters.” The rulemaking process is not a vote. While the quantity of comments may affect decision-makers on certain issues, the Administrative Procedure Act requires that data, evidence, and a sound rationale drive the final decision. Yet perceptions matter; the belief that fraudulently submitted comments could sway the decision-making process is a dangerous threat to the integrity of the rulemaking process.
New AI tools can help agencies more effectively summarize mass volumes of public comments, improving public confidence by reinforcing that the substance of comments, not their volume, is what carries weight. A human should always review the analysis, and technologies exist today that can trace summary information back to the source document (the public comment) for validation. Greater public awareness of these capabilities can improve public trust; in contrast, lack of technology support in large data environments can lead to incomplete analysis.
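The traceability point can be illustrated with a short, purely hypothetical sketch: each theme in a summary keeps the IDs of the comments behind it, so a reviewer can always walk back from summary to source. The comment IDs and the keyword-based theme rule below are invented for the example.

```python
from collections import defaultdict

# Each record pairs a hypothetical docket comment ID with its text,
# as might come from an export of public submissions.
comments = [
    ("EPA-0001", "Extend the comment period."),
    ("EPA-0002", "The cost estimate ignores small businesses."),
    ("EPA-0003", "Please extend the comment period."),
]

# Placeholder theme assignment; a real pipeline would use the output
# of a clustering or language model, not a keyword test.
def theme_of(text):
    return "comment period" if "comment period" in text.lower() else "costs"

themes = defaultdict(list)
for comment_id, text in comments:
    themes[theme_of(text)].append(comment_id)

# Every summarized theme carries the IDs of its source comments,
# so a human reviewer can validate the summary against the originals.
for theme, ids in themes.items():
    print(f"{theme}: {len(ids)} comments, sources: {ids}")
```

Whatever the underlying analytics, preserving that summary-to-source link is what lets a human reviewer validate the machine's work.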
It is worth noting that the scope of the “fake” commenter issue has generally been limited to extremely high-profile and controversial regulations, such as the Net Neutrality rulemaking at the FCC in 2017. This is not meant to diminish the importance of the challenge of “fake” commenters, but rather to suggest that any solution should consider probability as well as impact when estimating overall risk.
Retrospective Review
A recent article in the University of Pennsylvania Law School's Regulatory Review, “Artificial Intelligence for Retrospective Regulatory Review,” by Catherine Sharkey and Cade Mallet of the New York University School of Law, provides an excellent discussion of this issue. The report and case studies are, we think, both encouraging and instructive for governmental creators and users of AI. One lesson learned is that the resources and technical expertise required to carry an AI project to the finish line are rare among federal agencies. Where internal capacity exists, agencies should consider launching pilot projects on algorithmic retrospective review and sharing their tools openly with other federal agencies. The authors conclude that easing AI into prospective rulemaking by learning from and replicating its contributions to retrospective review is a prudent first step.
Read the full, expanded version of this paper.