AI as a Cybersecurity Risk and Force Multiplier

By Patrick Hynes (EY), Robyn Bew (EY), JR Williamson (Leidos), and Murray Kenyon (US Bank)

03/11/2025

AI and New Risks

US Cybersecurity and Infrastructure Security Agency (CISA) Chief Jen Easterly likely voiced the concerns of many CEOs and board members in describing the impact on cybersecurity of generative AI (GenAI). Easterly said, “A powerful tool will create a powerful weapon . . . It’ll exacerbate the threat of cyberattacks . . . [by making] people who are less sophisticated actually better at doing some of the things they want to do.”

 

Commonly cited cyber-risk factors related to AI, and particularly GenAI, include the following:

  • More advanced and effective social engineering campaigns that leverage AI to create increasingly realistic imitations of documents, videos, images, and voices
  • Faster identification of high-value targets and vulnerable systems by bad actors
  • Reduced cost for cyberattack tools, lowering the barriers to entry for less-sophisticated cybercrime actors
  • Developing novel attack techniques based on AI modeling and simulation that subvert a system’s inherent weaknesses rather than known vulnerabilities
  • Data poisoning that corrupts underlying AI model data in order to manipulate outputs
  • Prompt injection attacks, where specifically engineered prompts trick GenAI systems allowing bad actors to bypass security, privacy, or other system guardrails

 

Even more sobering, as leading cyber experts have pointed out, is the fact that some unintended downstream consequences or second-order effects of artificial intelligence use-cases are as yet unknown.

Applying AI to Cybersecurity

However, advances in AI and GenAI also have the potential to improve companies’ cybersecurity posture in several ways, and potentially tip the scale in favor of cybersecurity teams against attackers. While AI is considered a cybersecurity risk multiplier, AI can also be considered a “force multiplier.” AI can allow organizations to anticipate threats in advance and respond to cyberattacks faster than the attackers can move. As the threat landscape continues to grow and evolve, AI is poised to become a prominent tool used to address many cybersecurity risks, and boards must understand the benefits and risks it will bring to their organizations.

A promising area of opportunity is the ability to apply AI-driven network, asset mapping, and visualization platforms to “provide a real-time understanding of an expanding enterprise attack surface.” Using AI, ML, and LLM tools to automate parts of key cybersecurity functions like threat detection and incident response can enable quicker and more efficient mitigation.

LLMs provide the most value to organizations when used for threat detection and remediation. These LLMs can be trained on data that is constantly being updated, such as continuously updated data from the Internet and data generated by internal security assessments. This data allows LLMs to understand and detect new cyberattacks before the human cybersecurity teams can. In addition to threat detection, LLMs are also valuable in threat and vulnerability remediation. These models can analyze alerts and system log data, evaluate cyberattack information, and produce the best steps for remediation.

AI's ability to learn from data and make predictions or decisions makes it a powerful tool in the field of cybersecurity. Generative AI can also improve the human-to-machine interface, demystifying complex cybersecurity terms and architectures and greatly reducing the friction that some may feel working with the cybersecurity team.

 

Cybersecurity use cases for AI include these:

  • Threat Detection and Response: Cybersecurity teams can use AI security tools to analyze threat indicators from millions of endpoints in exponentially less time than without them. This rapid detection and response capability is crucial in minimizing the impact of a security breach.
  • Advanced Analytics: AI enables advanced analytics that help close the gap between an attacker's speed and a defender’s ability to detect malicious activity; for example, by being able to execute two to three times more threat hunts per analyst.
  • Incident Investigation and Response: AI can help determine risk and impact and automate decisions during a cyber incident. This can significantly speed up the response time and minimize potential damage.
  • Enriching Threat Indicators: AI can enrich threat indicators and metadata on terabytes of streaming data, improving the security posture with high-performance analytics. This helps to lower the signal-to-noise ratio to improve the efficacy of the alerts an analyst needs to investigate.
  • Cost Reduction: Automation of cybersecurity processes with AI can help reduce costs. Although the tools themselves are not cheap, as the volume of security data rises at such large rates, we typically need more analysts to interpret and operate on that data. AI can help augment the capacity of existing analysts, so that they can address a greater volume of data with higher quality decision-making and speed, without the need to increase your staff in a commensurate manner. Since labor is frequently the largest single cost category of a cybersecurity program, AI can enable a cybersecurity program to expand its capacity and maturity without driving up labor costs.

 

In addition to these, AI can also be used for insider threat detection, identity and access management (IAM), account protection for Software as a Service accounts, and threat hunting.

The critical advantage AI offers, though, is its ability to benefit the currently strained cyber workforce by both enhancing their work and potentially leading to improved job satisfaction.AI-powered security and compliance automation platforms are already delivering this as these tools can “streamline workflows, enabling teams to respond to incidents faster and with greater precision.” This, in turn, allows the cybersecurity professionals to focus on more valuable strategic initiatives and higher-level threat analysis. With the potential for improved performance and value creation, boards should evaluate the organization’s cybersecurity workforce and leadership to assess their readiness for AI and determine how AI may impact the company’s current and future cybersecurity workforce needs.

AI can improve cybersecurity effectiveness, but it is not a panacea, and it introduces new risks boards and management teams must monitor. Board members’ first acknowledgment should be that cybercriminals also access to AI tools. AI can be helpful in detecting threats; however, “cyber criminals evolve their attack strategies to evade it.” Further, these tools are prone to high false positive rates, making it difficult to identify novel threats.

Imperative for Boards

AI’s ability to be both a force and risk multiplier—for companies’ business models generally, and within the cybersecurity landscape specifically—amplifies the importance of the 2023 Director's Handbook on Cyber-Risk Oversight's Principle One regarding the need for boards to consider cybersecurity as a matter of strategy and enterprise risk, rather than simply as a technology issue. In addition, AI’s multiplier effect on cyber risks heightens the need for collective action to improve systemic resilience, as outlined in Principle Six of the Handbook.

 

NEXT

 

 

Content

AI Friend and Foe

Elevate your board's AI oversight capabilities to balance cybersecurity benefits with emerging risks, while ensuring responsible governance for strategic advantage.

Defining AI and Its Impact on Cybersecurity

Understand essential AI types and their cybersecurity implications, from traditional systems to LLMs, while addressing key risks including skills gaps, model drift, and lack of model transparency.

AI as a Cybersecurity Risk and Force Multiplier

Navigate the dual impact of AI in cybersecurity as both a risk multiplier enabling sophisticated attacks and a force multiplier enhancing threat detection, analytics, and workforce capabilities.

How AI Will Impact Cybersecurity Regulatory and Disclosure Matters

Discover how organizations must navigate AI's regulatory challenges and fulfill their disclosure obligations to ensure responsible and transparent AI use and oversight.

How AI Impacts Board Readiness for Oversight of Cybersecurity and AI Risks

Equip your board with essential AI governance knowledge to address cybersecurity vulnerabilities and implement risk assessment frameworks for responsible implementation and improve board readiness for AI governance.

Boardroom Tool: Questions for Directors to Ask About AI

Leverage this question framework to guide board discussions on AI to ensure proper board governance and oversight of this critical technology.