
Implications for Corporate Oversight of Cybersecurity

AI as a Cybersecurity Risk and Force Multiplier
03/11/2025
AI and New Risks
US Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly likely voiced the concerns of many CEOs and board members when describing the impact of generative AI (GenAI) on cybersecurity. Easterly said, “A powerful tool will create a powerful weapon . . . It’ll exacerbate the threat of cyberattacks . . . [by making] people who are less sophisticated actually better at doing some of the things they want to do.”
Commonly cited cyber-risk factors related to AI, and particularly GenAI, are wide-ranging.
Even more sobering, as leading cyber experts have pointed out, is that some unintended downstream consequences, or second-order effects, of AI use cases are as yet unknown.
Applying AI to Cybersecurity
However, advances in AI and GenAI also have the potential to improve companies’ cybersecurity posture in several ways and to tip the scales in favor of cybersecurity teams over attackers. If AI is a cyber-risk multiplier, it can equally serve as a “force multiplier”: it can help organizations anticipate threats and respond to cyberattacks faster than attackers can move. As the threat landscape continues to grow and evolve, AI is poised to become a prominent tool for addressing many cybersecurity risks, and boards must understand both the benefits and the risks it will bring to their organizations.
A promising area of opportunity is the ability to apply AI-driven network and asset mapping and visualization platforms to “provide a real-time understanding of an expanding enterprise attack surface.” Using AI, machine learning (ML), and large language model (LLM) tools to automate parts of key cybersecurity functions, such as threat detection and incident response, can enable quicker and more efficient mitigation.
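As a simplified illustration of what ML-assisted threat detection can look like in practice, the sketch below applies an off-the-shelf anomaly detector to per-host network telemetry. The feature names, traffic volumes, and contamination setting are all hypothetical; this is a sketch of the technique, not a reference implementation.

```python
# Illustrative sketch: unsupervised anomaly detection over per-host network
# telemetry. Feature columns (hypothetical): bytes out, bytes in, connection
# count, and distinct destination ports, aggregated per host per hour.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated baseline: most hosts behave similarly.
normal = rng.normal(loc=[500, 450, 30, 8], scale=[50, 40, 5, 2], size=(1000, 4))
# A handful of hosts with exfiltration-like behavior: heavy outbound traffic,
# many connections, many distinct destination ports.
suspicious = rng.normal(loc=[5000, 100, 200, 60], scale=[300, 20, 20, 5], size=(5, 4))
telemetry = np.vstack([normal, suspicious])

# Fit an unsupervised detector; no labeled attack data is required.
detector = IsolationForest(contamination=0.005, random_state=0)
detector.fit(telemetry)

scores = detector.decision_function(telemetry)  # lower = more anomalous
flagged = np.argsort(scores)[:5]                # five most anomalous hosts
print("Host rows flagged for analyst review:", sorted(flagged.tolist()))
```

The design point worth noting is that the detector is unsupervised: it flags hosts that deviate from the baseline without needing labeled examples of past attacks, which is what lets this class of tool surface novel behavior for analyst review.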
LLMs provide the most value to organizations when used for threat detection and remediation. Trained on continuously refreshed data, such as data drawn from the Internet and data generated by internal security assessments, these models can recognize and detect new cyberattack patterns before human cybersecurity teams can. Beyond detection, LLMs are also valuable in threat and vulnerability remediation: they can analyze alerts and system log data, evaluate cyberattack information, and suggest remediation steps.
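As a hedged sketch of how an LLM might be slotted into alert triage, the example below formats an alert into a prompt and hands it to a model. The query_llm function, the alert fields, and the prompt wording are all hypothetical placeholders standing in for whatever LLM endpoint and alert schema an organization actually uses.

```python
# Illustrative sketch: routing a security alert through an LLM for triage.
# query_llm is a hypothetical placeholder for whatever LLM API or internal
# model endpoint the organization uses; the alert fields are invented.

ALERT = {
    "rule": "Multiple failed logins followed by success",
    "host": "finance-db-01",
    "user": "svc_backup",
    "source_ip": "203.0.113.45",
    "failed_attempts": 27,
}

PROMPT_TEMPLATE = """You are a security analyst assistant.
Given the alert below, explain in plain language what likely happened,
then list the top three remediation steps, most urgent first.

Alert: {alert}
"""

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder: send the prompt to the organization's
    LLM endpoint and return the text response."""
    raise NotImplementedError("Wire this to your LLM provider's API.")

def triage(alert: dict) -> str:
    # A fixed template keeps the model grounded in the actual alert fields.
    prompt = PROMPT_TEMPLATE.format(alert=alert)
    return query_llm(prompt)

# Once query_llm is implemented:
# print(triage(ALERT))
```

Keeping the prompt template explicit, and logging each prompt and response, matters in this pattern: it grounds the model in the actual alert data and leaves an audit trail for the human analyst who reviews the suggestion.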
AI's ability to learn from data and make predictions or decisions makes it a powerful tool in the field of cybersecurity. Generative AI can also improve the human-to-machine interface, demystifying complex cybersecurity terms and architectures and greatly reducing the friction that some may feel when working with the cybersecurity team.
Cybersecurity use cases for AI include, among others, insider threat detection, identity and access management (IAM), account protection for Software as a Service (SaaS) accounts, and threat hunting.
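As one concrete example of account-protection logic, the sketch below implements a simple “impossible travel” check of the kind IAM platforms use to flag potentially compromised accounts. The login records, coordinates, and speed threshold are hypothetical; real systems combine many such signals with learned risk models.

```python
# Illustrative sketch: a simple "impossible travel" check for account
# protection. Flags login pairs whose implied travel speed between
# locations exceeds a plausible threshold. All data here is hypothetical.
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

MAX_SPEED_KMH = 900  # roughly commercial-flight speed

def impossible_travel(prev, curr):
    """Return True if the login pair implies implausible travel."""
    dist = haversine_km(prev["lat"], prev["lon"], curr["lat"], curr["lon"])
    hours = (curr["time"] - prev["time"]).total_seconds() / 3600
    return hours > 0 and dist / hours > MAX_SPEED_KMH

prev_login = {"lat": 40.71, "lon": -74.01, "time": datetime(2025, 3, 11, 9, 0)}  # New York
curr_login = {"lat": 51.51, "lon": -0.13, "time": datetime(2025, 3, 11, 10, 0)}  # London, 1h later
print(impossible_travel(prev_login, curr_login))  # True: ~5,570 km in one hour
```

A rules-based check like this is often the baseline that ML-based account protection builds on; the model's job is to weigh this signal against others, such as device, time of day, and behavioral history, rather than to replace it.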
The critical advantage AI offers, though, is its ability to benefit the currently strained cyber workforce, both by enhancing their work and by potentially improving job satisfaction. AI-powered security and compliance automation platforms are already delivering on this promise, as these tools can “streamline workflows, enabling teams to respond to incidents faster and with greater precision.” This, in turn, frees cybersecurity professionals to focus on more valuable strategic initiatives and higher-level threat analysis. Given the potential for improved performance and value creation, boards should evaluate the organization’s cybersecurity workforce and leadership to assess their readiness for AI and to determine how AI may affect the company’s current and future cybersecurity workforce needs.
AI can improve cybersecurity effectiveness, but it is not a panacea, and it introduces new risks that boards and management teams must monitor. Boards should first acknowledge that cybercriminals also have access to AI tools. AI can be helpful in detecting threats; however, “cyber criminals evolve their attack strategies to evade it.” Further, AI-based detection tools are prone to high false positive rates, making it difficult to isolate genuinely novel threats.
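A back-of-the-envelope calculation shows why false positive rates deserve board-level attention. The event volumes and detector rates below are purely hypothetical, but the arithmetic, an instance of the base-rate problem, holds for any detector screening a large stream of mostly benign events.

```python
# Illustrative arithmetic: why even a "99% accurate" detector can bury
# analysts in false positives. All volumes and rates here are hypothetical.
daily_events = 10_000_000        # security events processed per day
base_rate = 0.0001               # fraction of events that are truly malicious
true_positive_rate = 0.99        # detector catches 99% of real attacks
false_positive_rate = 0.01       # detector mislabels 1% of benign events

malicious = daily_events * base_rate
benign = daily_events - malicious

true_alerts = malicious * true_positive_rate   # ~990 real detections
false_alerts = benign * false_positive_rate    # ~99,990 false alarms
precision = true_alerts / (true_alerts + false_alerts)

print(f"Real detections per day:      {true_alerts:,.0f}")
print(f"False alarms per day:         {false_alerts:,.0f}")
print(f"Chance a given alert is real: {precision:.1%}")
```

Even with a detector that catches 99 percent of real attacks and mislabels only 1 percent of benign events, roughly 99 of every 100 alerts in this scenario are false alarms, which is precisely the alert-fatigue problem that strains security operations teams.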
Imperative for Boards
AI’s ability to be both a force multiplier and a risk multiplier, for companies’ business models generally and within the cybersecurity landscape specifically, amplifies the importance of Principle One of the 2023 Director's Handbook on Cyber-Risk Oversight: boards should treat cybersecurity as a matter of strategy and enterprise risk rather than simply as a technology issue. In addition, AI’s multiplier effect on cyber risks heightens the need for collective action to improve systemic resilience, as outlined in Principle Six of the Handbook.
