The Biden administration is seeking feedback on how federal agencies could benefit from generative artificial intelligence tools, like ChatGPT, to meet their missions.
The White House Office of Science and Technology Policy (OSTP), in a request for information released Tuesday, is asking the public to provide this input, as part of an upcoming National AI Strategy. OSTP will accept comments through July 7.
As part of that upcoming strategy, OSTP is asking for feedback on how federal agencies can leverage AI to improve service delivery, as well as examples of the “highest priority and most cost-effective ways to do so.”
The RFI asks respondents to weigh in on the “unique opportunities and risks” of agencies using generative AI tools.
“By developing a National AI Strategy, the federal government will provide a whole-of-society approach to AI. The strategy will pay particular attention to recent and projected advances in AI, to make sure that the United States is responsive to the latest opportunities and challenges posed by AI, as well as the global changes that will arrive in the coming years,” the RFI states.
A senior administration official told reporters earlier this month that the Office of Management and Budget will release draft guidance this summer on the use of AI systems within the federal government.
The OMB guidance will establish specific policies for federal agencies to follow when it comes to the development, procurement and use of AI systems — all while upholding the rights of the American public.
To understand how agencies can effectively use AI tools, the White House is asking about the national security risks and benefits of AI, and how AI tools can help identify cyber vulnerabilities in critical infrastructure.
The RFI also seeks public input on how agencies can use shared pools of resources, expertise and lessons learned to better leverage AI in government. It also asks how the federal government should work with the private sector, as well as with state, local and tribal governments, to support the rollout of safe and effective AI tools.
Agencies have generally been slow to adopt generative AI tools, or have policies in place preventing federal employees from using them as part of official business.
Politico reported earlier this month that the Environmental Protection Agency is prohibiting employees from using generative AI tools — including ChatGPT and other OpenAI tools — for “official use.”
EPA’s Office of Mission Support, according to the email obtained by Politico, described its policy as an “interim decision” and said the agency continues to analyze AI tools and will follow up with a final decision on them.
“EPA is currently assessing potential legal, privacy and cybersecurity concerns as releasing information to AI tools could lead to potential data breaches, identity theft, financial fraud or inadvertent release of privileged information,” the EPA memo states.
The Defense Department, however, is testing out a generative-text AI tool to help the agency write contracts and speed up the federal acquisition process.
The Biden administration, in its latest AI policy update, is also refining its federal research and development priorities around AI.
The administration released an updated National AI R&D Strategic Plan on Tuesday that reflects its priorities for future AI research.
The new National AI R&D Strategic Plan is the first update of its kind since 2019. The plan states strategic federal investments in AI R&D are essential to understanding “AI-related risks and opportunities in support of the public good.”
“In order to seize the opportunities that AI presents, the nation must first work to manage its risks,” the plan states. “The federal government plays a critical role in this effort, including through smart investments in research and development that promote responsible innovation and advance solutions to the challenges that other sectors will not address on their own.”
The National AI R&D Strategic Plan outlines nine strategies for federal R&D, including a focus on long-term investments in “fundamental and responsible AI.”
The plan also prioritizes human-AI collaborations that “complement and augment human capabilities,” while “mitigating the risk of human misuse of AI-enabled applications that lead to harmful outcomes.”
The National AI R&D Strategic Plan also outlines the need to understand the ethical and legal implications of AI, as well as how to design AI systems that are trustworthy, reliable, dependable and safe.
“This includes research to advance the ability to test, validate, and verify the functionality and accuracy of AI systems, and secure AI systems from cybersecurity and data vulnerabilities,” the plan states.
The Biden administration is also looking at ways to provide shared public datasets and environments for AI training and testing. The administration has already briefed lawmakers on one way to broaden AI resources.
The task force behind the National AI Research Resource (NAIRR), a proposed AI data-and-research hub meant to put federal AI resources in the hands of more U.S. researchers, told Congress in a final report in January that it could reach initial operating capability within 21 months if the project receives enough funding.
The task force is asking Congress for $2.6 billion in appropriations over six years to get the NAIRR up and running.
The Biden administration is also focused on ways to measure and evaluate AI systems through standards and benchmarks.
The National Institute of Standards and Technology in January released its long-awaited AI Risk Management Framework (RMF), a new, voluntary set of rules of the road on what responsible use of AI tools looks like for many U.S. industries.
The framework gives public and private-sector organizations several criteria on how to maximize the reliability and trustworthiness of AI algorithms they are about to develop or deploy.
The administration is also looking to build on existing efforts to grow the AI R&D workforce, as well as expand opportunities to work with industry and the international community to cooperate on AI breakthroughs.