Preparing for an AI Future: Cybersecurity Considerations for Public Service


Monday, November 13, 2023

Guest Blogger: Bobbie Stempfley, Vice President, Cyber Security, Dell Technologies (formerly Deputy Assistant Secretary for Cybersecurity and Communications, Department of Homeland Security)

Cybersecurity has evolved from a conversation among technologists in server rooms into a substantive dialogue between industry leaders and policymakers on the international stage. There is near-universal recognition of the importance of managing the risks of doing business in the digital age of the 21st century, and three decades of failure to adapt to modern technical and security capabilities to overcome.

As we face this history, technological innovation continues, and the promise of artificial intelligence (AI) has reemerged as new capabilities capture the imagination. As the White House observed in late 2023, “AI is one of the most powerful technologies of our time, with broad applications. . . in order to seize the opportunities AI presents, we must first manage its risks.” An active debate is underway about how AI, which has been described as an innovation as important as electricity and the wheel, will affect the delivery of citizen services, public safety, economic development, and national security. In cybersecurity today we face workforce shortages, millions of connected devices around the globe that each require regular updates to address vulnerabilities, and threats like ransomware. Enterprises struggle to rationalize a growing set of security technologies designed to provide detection and protection, while adversaries learn and adapt with agility. Should we be optimistic that advancements in AI will change these dynamics?

We must start with how we build and integrate this technology; the concept of ‘built-in’ security in new and updated systems has never been more relevant. Highly secure, privacy-enhancing solutions must be the norm, not the exception. Given the pace of advancement in this space, that outcome is not assured. It will take leadership, research, investment, and the development of solutions to unanswered questions, including:

  • the complex policies and technologies involved in the identification, use, and protection of data;
  • solutions that help organizations protect against adversarial attacks; and
  • better-performing tools to support the development of AI solutions.

The perennial question of ‘how much security is enough’ must evolve into more robust risk management discussions that guide security and government leaders. These discussions should focus on how to make trade-offs between safety and security on the one hand and other essential imperatives, such as interpretability and fairness, on the other. Effective guidelines and best practices for making these decisions must be built in partnership between the public and private sectors, and then adopted as part of the responsible use of AI.

There is reason to be optimistic. The recent rapid advancements in Generative AI and its broad applications present tremendous opportunities to change the cybersecurity ecosystem, perhaps for the better. The opportunity areas are rich and include:

  • Secure software development: The digital world is defined by software; it is embedded in critical systems, drives critical infrastructure, processes our financial transactions, and manages the operations of our governments. Software development will be transformed by advancements in AI. Improved coding capabilities can raise the quality of software, lowering the number and severity of vulnerabilities in products throughout the ecosystem.
  • Reducing risky user behavior: Recent research has shown that most users would bypass security policies and controls if it helped their team meet a business objective. Continuing enhancements in technical controls that prevent users from bypassing security help, as does work to make it easier for non-experts to act securely. With the addition of predictive and generative technologies, risky behavior becomes more readily identifiable, and just-in-time training can be delivered to increase user awareness and reinforce appropriate behaviors. Technologies that ‘nudge’ users toward more secure approaches will help offset current gaps in user behavior and catch errors before they become security risks.
  • More efficient security operations: Enhancing the protection and defense efforts of security teams and security operations can deliver great value. The early successes in embedding AI in security tools are just the beginning. As generative AI and other advancements mature, they will enable teams to manage the torrents of data used to detect threats more effectively and efficiently, identify the most effective responses, and develop meaningful proactive and automated responses. These transformations can help offset the resource shortfalls felt throughout the private and public sectors, enabling limited resources to be applied more effectively.
  • Enhancing threat modeling: Industry has long sought the ability to make accurate and meaningful assessments of the security of large, complex systems-of-systems, with mixed success. The difficulty of understanding the dynamics of these systems and the cost of making formal-methods-based security arguments have been barriers to wide-scale use of these methodologies. Advancements in AI will enhance our threat modeling efforts, letting us identify where weaknesses may exist more rapidly and predictably and improving the resilience of these solutions at lower cost.

This optimistic view must be balanced by a realistic assessment that these technologies will also shift the dynamics of cybersecurity in less positive ways. The same solutions that enable better threat detection and more efficient security operations can and will be leveraged by adversaries to improve target selection and increase the efficiency of exploitation. Generative AI’s ability to create more personalized content is already being used to craft more effective phishing emails and deepfakes. The volume of threats is likely to increase, and security teams will have to manage that increase. These impacts are the obvious ones, but they are an incomplete picture. Other realities must be faced:

  • Expanding attack surface: New channels for adversaries are being created, and protections against them must be integrated into security programs: data poisoning (the manipulation of training data) and malicious prompts designed to drive unexpected and harmful behavior from models; the creation of high volumes of synthetic data to mask malicious activity; the exfiltration of the models themselves; and more.
  • Changing security programs: For the last three decades, cybersecurity activities have been defined in terms of the triad of confidentiality, integrity, and availability. These concerns remain valid, but mature security programs must now also address additional security, resilience, and safety considerations. Ensuring that AI systems function as intended, and that the accountability and transparency descriptions those systems provide are secure, will be an essential part of accountability regimes. Further, systems must protect intellectual property, trade secrets, and other sensitive information both in their behavior and in their transparency efforts.
  • Cybercrime impacts: Crime is an ecosystem, one often influenced by actions outside the relationship between threat actors and defenders. It is unclear how AI advances in other parts of this ecosystem (e.g., changes in the processes of financial institutions, enhancements in investigative techniques) will change the incentive structures for adversaries and thereby the dynamics of the criminal ecosystem. This will require continued agility from security professionals, investigators, and policymakers.

Despite the general understanding of the importance of cybersecurity, not all organizations are in a position to implement what is necessary to address the threats facing today’s enterprises. Whether the gap is in resources, talent, or influence, these organizations can present risks to their customers and partners. Given this reality, they may fall further behind as the speed and volume of attacks increase and their access to security solutions remains constrained. How we leverage this moment of innovation to reduce this divide will be key to our efforts to advantage the protectors and defenders.

As with any advancement, malicious actors with fewer concerns about ethics, privacy, and responsible use will seek to turn it to their advantage. Each advancement on the defender’s side will be met with innovations by the adversary. This cat-and-mouse game has historically favored the threat actors; now is our opportunity to change that. The core question is how to leverage the innovations driven by AI so that they provide a greater advantage to the protectors and defenders than to the threat actors, and, in doing so, to ensure that policies and capabilities benefit everyone, reducing the divide between well-resourced organizations and the cyber-underserved.


This blog post first appeared on the National Academy of Public Administration’s website.

