Artificial Intelligence and Public Service: Key New Challenges


Wednesday, July 26, 2023

Digital innovation and AI have been growing since before World War II.

(This post first appeared on the National Academy of Public Administration website. Please also see our first post, “A Call to Action: The Future of Artificial Intelligence and Public Service.”)

Recently, they have gained much more attention. For example, since last fall, ChatGPT has thrilled many with its potential as a powerful, low-cost personal assistant. At the same time, it has terrified many—often the same people—with its potential as a powerful, low-cost source of disinformation.

We find that, after more than 50 years of computing, we are suddenly working with explosively new capabilities and impacts on public services and society.

But what capabilities are truly “new”? How should we use them to create value? How should we prepare for the future?

I.   NEW CAPABILITIES – EXPONENTIAL CHANGE HAS GROWN EXPLOSIVE

Until about 2010, we focused on traditional data processing. We developed computer-oriented organizations and specialists; programming languages, databases, and networking; standardization for economies of scale; and – most important – exponential growth in computer productivity.

From 1970 to 2010, as computer productivity doubled every two years (Moore’s Law), it rose roughly a million-fold. That was amazing. It was used, however, primarily for routine record-keeping and communications. By combining these technologies with technology specialists, we improved accounting and many other procedures, especially in large organizations.

Eventually, by 2020, realities had changed dramatically. Computers could work with complex data patterns, not just simple numbers or text. They could understand words that were spoken. They could speak words that were written. They could translate reasonably well from one language to another. They became the world’s best players of chess and Go. They could identify people from their pictures. They made impressive progress with autonomous driving.

By 2020, computers could handle many tasks that had previously required people.

And where will we be by 2030? Where will AI – the cutting edge of digital analysis and innovation – take us?

Roughly speaking, computers by 2030 should be a billion times more cost-effective than they were in 1970 (60 years = 30 doublings ≈ 1.1 billion). Recent analyses predict that AI’s analytic and robotic capabilities could push as many as 47% of workers out of their current jobs.
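To make the doubling arithmetic concrete, here is a minimal Python sketch. It simply compounds the two-year doubling period assumed above; the figures are illustrative, since Moore’s Law is an observed trend rather than a guarantee.

```python
# Back-of-the-envelope check of the doubling arithmetic in the text.
# Assumes capability doubles every two years (the Moore's Law assumption above).

def cost_effectiveness_multiplier(start_year: int, end_year: int,
                                  doubling_period_years: float = 2.0) -> float:
    """Cumulative multiplier if capability doubles once per doubling period."""
    doublings = (end_year - start_year) / doubling_period_years
    return 2.0 ** doublings

# 1970 -> 2010: 40 years = 20 doublings, roughly a million-fold
print(f"1970-2010: {cost_effectiveness_multiplier(1970, 2010):,.0f}x")

# 1970 -> 2030: 60 years = 30 doublings, roughly 1.1 billion-fold
print(f"1970-2030: {cost_effectiveness_multiplier(1970, 2030):,.0f}x")
```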

Delegating that much work to computers will foster transformational and often disruptive change, not just incremental evolution.

II.   NEW WAYS TO CREATE VALUE – FOUR STRATEGIC PRIORITIES…

Given such powerful capabilities, all stages of digital innovation must change:

  1. Tech capabilities
  2. Value chains
  3. Change management
  4. Portfolio balance

Let’s look at strategic priorities for each stage.

  1. Technology invention and development, given much bigger risks and rewards. The first step will be creating tools that people are willing to use. Early on, this work was led by governments, which handled large problems like defense and tax collection. They could also raise funds and capture results that smaller investors couldn’t.

However, once the internet, personal computers, and smartphones became common, the private sector could harvest enormous returns from data-enabled transparency and personalized service. Since then, private firms have dominated IT development. The first firms to get new technologies and applications to market have often been successful.

But pressure to be first can be dangerous. Overly rushed networks leave vulnerabilities in systems security, reliability, and privacy. Controls must then be developed for bad actors as well as natural disasters. While returns may be high, we must avoid getting crushed by the risks. A smarter risk/reward balance is key, but difficult to achieve, especially for private firms.

To responsibly develop capabilities while controlling for bigger risks and rewards, we need government-regulated development and better public/private collaboration. We must learn what works through well-analyzed, persistent, and resilient experiments that include non-business institutions and stakeholders. We need ongoing research on the public impacts of AI. We need to avoid market failures caused by public and/or private monopolies.

  2. Value chain design, given needs for “cross-boundary” communities and data. Networks add value primarily by supporting specialization and scale. While we need future networks to continue such support, we must shift targets from internal command and control to negotiated coordination among external actors.

The costs and risks will be formidable. But critical needs to coordinate with larger groups – entire institutions, industries, jurisdictions, and sometimes the globe – make “cross-boundary” networks and standards essential. Climate change, emigration and immigration, pandemics, etc., require reliable interactions among diverse, widespread stakeholders.

Better value chains will also need newly available data. Think of brainwave sensors to control artificial limbs. Think of data collected and managed from space. Think of the “Internet of Things” collecting and analyzing better data for more of what we need to know.

  3. Change management, given massive reallocation of jobs and relationships. Early computing worked through cooperation with technology specialists such as systems analysts, programmers, and CIOs. But future stakeholders, especially when fearing displacement, may be far less receptive to such cooperation. The “do or die” challenge – often ignored so far – will be helping displaced and other resistant workers, leaders, and stakeholders find new roles. This will depend on economic development, education, psychology, and politics, not just technology.

Future innovations must be paired with better job creation. The displaced will need incentives and support for bottom-up initiatives and adjustments (not just following top-down orders). Sharing experience and emotions will be important. Back-and-forth dialog will be critical (and could be encouraged through AI-based interactivity). Information and networking should be facilitated through platforms connecting publishers, bloggers, broadcasters, governments, schools and universities, community development groups, religious institutions, professional associations, foundations, etc. Future change management must coordinate and support a notably greater range of people, problems, and physical territory.

  4. Portfolio balance and governance, given AI-enabled impacts on productivity, equity, and public trust. Successful government service and regulation depend on:
    1. Productivity that is high.
    2. Results that are distributed equitably.
    3. Authority used competently in the public interest.

In its early decades, however, digital innovation focused almost exclusively on simply getting the applications to work. That was enough to call them “productive.”

Meanwhile, equity and trust were often ignored as “downstream” issues. Over the years, this became disastrous. From 1970 to 2020, productivity grew, but only at 1.3%/year (not the 2.25%/year we had enjoyed from 1920 to 1970). Meanwhile, equity became much worse, with the ratio of CEO pay to that of the average worker exploding from 25:1 to 390:1. Public trust also declined. In the early 1970s, roughly 60% said that those in authority made good decisions “most of the time.” By 2010, only 21% agreed. For society overall, governance has been a disappointment (if not a disaster).
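As a rough illustration of what that slowdown implies, the sketch below compounds the two growth rates quoted above over the 50 years from 1970 to 2020; it is a back-of-the-envelope comparison, not a precise economic estimate.

```python
# Illustrative comparison of the two productivity growth rates cited above,
# compounded over 1970-2020 (50 years).

years = 50
actual = 1.013 ** years         # 1.3% per year (1970-2020)
earlier_pace = 1.0225 ** years  # 2.25% per year (the 1920-1970 pace)

print(f"At 1.3%/year, productivity grows ~{actual:.2f}x over {years} years")
print(f"At 2.25%/year, it would have grown ~{earlier_pace:.2f}x")
print(f"The earlier pace would have yielded ~{(earlier_pace / actual - 1) * 100:.0f}% more by 2020")
```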

In response, we must take full advantage of what AI can do for productivity, equity, and trust. We need to plan for and manage innovations at the society-wide level, not just project by project. We need widespread, low-cost, AI-enabled services that learn to quickly provide better quality. We need bottom-up strategies for AI-powered agriculture, medical care, transportation, public education, public health, income support, public safety, etc.

III.   PREPARING FOR THE AI-ENABLED FUTURE

In getting to 2030 and beyond, we must prepare for incredible new challenges.

From a distance, this may look much like the past. We will still need technology invention and development, value chain design, change management, and balanced governance.

From up close, however, it’s clear we will need to coordinate larger and more resistant groups through yet-to-be-developed standards and value chains relying on far more powerful information and machines. In addition to world class AI, we need world class leadership.

Unfortunately, many leaders and the public are not yet aware of what’s coming. Many will think we should delegate “technology problems” to the specialists, much as we have before.

What we need instead will be straightforward but difficult: make smart choices about new and contested priorities, get the right people working on them, and – most importantly – generate widespread and sustained support for results.

We have done this before. What’s coming should be what democracy, under challenge, has shown it can do very well…

