The Biden administration is calling on federal agencies to step up their use of artificial intelligence tools, while also laying the policy groundwork to mitigate risks from this emerging technology.
President Joe Biden, in a sweeping executive order last October, called for a “governmentwide AI talent surge” across the federal workforce to build up its capacity to lead on this emerging technology.
More than 90 days after the executive order, the White House says agencies are on track to meet their assigned goals.
Austin Bonner, deputy U.S. chief technology officer for policy at the White House Office of Science and Technology Policy, said the AI executive order “builds on a lot of work this administration has been doing from day one.”
“The EO really is the most significant action any government has taken on artificial intelligence,” Bonner said during Federal News Network’s AI & Data Exchange. “The president told us to pull every lever.”
Working with agencies, tech leaders on ‘trustworthy AI’
The administration released its Blueprint for an AI Bill of Rights in October 2022, outlining what more than a dozen agencies would do to ensure AI tools deployed in and out of government align with privacy rights and civil liberties.
The White House has also secured voluntary commitments from 15 American technology companies to ensure “safe, secure and trustworthy AI” before releasing it to the public.
Meanwhile, the Office of Management and Budget is finalizing new guidance for how federal agencies should accelerate the use of AI tools and set up guardrails for this emerging technology. (Read Sen. Mark Warner’s comments about necessary guardrails shared at the AI & Data Exchange.)
Draft OMB guidance, released last November, directed agencies to name chief AI officers, accelerate adoption of AI tools and manage associated risks.
“This is really how we’re leading by example. The federal enterprise is vast. It reaches all kinds of industries and employs lots of people. It’s really important that we use artificial intelligence well, both to harness the opportunities and make sure that we’re managing risks,” Bonner said.
OMB accepted comments on the draft policy through Dec. 5, 2023, and received nearly 200 responses as it finalizes the guidance.
Bonner said the upcoming final guidance focuses both on developing governance and making sure agencies have the right leadership in place to accelerate their use of AI tools.
“We think about governance, who are the right federal leaders who are high enough in the operation to integrate this kind of risk management with the risk management that they’re doing all the time,” she said. “They have a broad range of activities, and that means a broad range of risks need to be folded in as well, and needs to be led by people with the right kind of experience and purview.”
New OMB guidance aims to embrace innovation
Bonner said the OMB guidance also calls on agencies to “remove barriers to innovation.”
“That can mean things like making sure that we have the right compute resources in order to meet these missions, improving our cybersecurity, taking other steps that sort of get barriers out of the way so that AI can work well,” she said.
The guidance also tasks agencies with adopting risk mitigation practices, particularly where Americans’ rights and safety are implicated. “We recognize that there are kinds of uses of AI where particular care for rights and safety is needed,” Bonner said.
Buying a home, for example, sometimes involves AI tools that screen applications.
“That kind of process we need to make sure has adequate data protections in place to protect against bias, against discrimination. We’re on the lookout for those kinds of problems. And I think we’re asking the federal government and its operations to be attentive to those kinds of things as well,” Bonner said.
Under the AI executive order, the National Institute of Standards and Technology recently created an AI Safety Institute. It’s focused on research and setting guidelines to address the risks of AI while also taking full advantage of its opportunities.
“This is a new organization, one built to really address this moment, and we’re excited to kick that off,” Bonner said.
Attracting new tech talent to help agencies drive AI
The Office of Personnel Management, under the EO, launched a large-scale, governmentwide hiring action to bring more data scientists into government service. OPM’s pooled hiring notice lets job seekers apply once and be considered for several GS-14 data scientist positions across multiple agencies.
“President Biden recognized right away that he has given us an enormous job. We need people to do it — with a wide variety of AI and AI-enabling skills,” Bonner said.
“I think people will see that this is a great place to serve. We need all kinds of skills. We need more data scientists. We need folks with all kinds of computer skills, all kinds of engineering skills. We also need people to do research and development to really study the impact these tools are having on our society, on the way we work,” she added. “And we need folks who are going to help build the monitoring and evaluation systems that help make sure we’re hitting all of our marks when it comes to AI trust and safety.”
OPM late last year also approved direct hire authority for AI-related positions across the government.
“We are trying to remove barriers to make it so people who are out doing this great work in technology in their communities, throughout industry, can come and bring their skills into the federal government and see this as an opportunity to serve,” Bonner said.
The administration also stood up a Council of Chief AI Officers across the federal government. “This is a really important place for federal leaders to come together, share best practices and coordinate their work. Not every federal agency needs to reinvent the wheel,” she said.
Reimagining how agencies do their work, help the public
The Biden administration expects AI will transform the way agencies meet their missions.
Bonner shared a few examples. The National Weather Service expects it can use AI to get better at predicting extreme weather, while the Transportation Department is counting on AI to predict failures of critical air safety equipment and allow for preventive maintenance.
“AI has the potential to assist the government in all of those missions — as long as we’re managing its risks well and making sure that it’s safe and trustworthy,” Bonner said. “Any federal agency leader that you brought in could tell you an idea that they have for a way that artificial intelligence can help improve their mission. And we’re really excited to help get those going.”
Bonner said OSTP will play a central role in overseeing the AI executive order’s implementation.
“We are charged with making sure the president gets the best science and technology advice, coordinating the work of the federal government across many, many agencies on science and technology,” she said. “And because we have such a broad science and technology mission, every place where artificial intelligence connects with some part of science and technology, we have great folks working on things — like improving treatment for patients with cancer and thinking about the ways that artificial intelligence can be helpful there.
“When I need to ask a question about biological risks, I have actual biologists down the hall that I can speak to. We’ve got a broad purview, and I think that’s appropriate given how broad the executive order is.”