AI in Cybersecurity

Defining AI and Its Impact on Cybersecurity

By Omar Khawaja (Databricks) and Murray Kenyon (US Bank)

03/11/2025

Companies are adopting AI tools for a variety of applications, including cybersecurity use cases. As cybersecurity teams deploy AI, it is critical that they understand the underlying AI models and techniques that power these capabilities.

Outside of data science teams, AI is new to most of an organization, and understanding of AI risks and their mitigations is still maturing. While many of the risks associated with AI may, on the surface, seem unrelated to cybersecurity (e.g., fairness, explainability, regulatory compliance, trustworthiness), many canonical controls that cybersecurity teams have managed for decades (e.g., authentication, access control, logging, monitoring) can be deployed to mitigate these non-cybersecurity risks of AI.
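
To make this concrete, here is a minimal sketch of how those canonical controls might wrap an AI capability behind a gateway. The structure is illustrative only: names such as ai_gateway() and call_model(), and the role-to-use-case mapping, are hypothetical placeholders rather than a real product or API.

```python
# Sketch: canonical security controls (authentication, access control,
# audit logging) applied to an AI capability. All names are hypothetical.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
audit_log = logging.getLogger("ai_gateway.audit")

# Role-based access control: which roles may invoke which AI use cases.
ROLE_PERMISSIONS = {
    "soc_analyst": {"summarize_alert", "enrich_indicator"},
    "auditor": {"summarize_alert"},
}

@dataclass
class User:
    name: str
    role: str
    authenticated: bool  # in practice, verified via SSO/MFA, not a flag

def call_model(prompt: str) -> str:
    """Placeholder for the underlying model call (hypothetical)."""
    return f"[model response to: {prompt[:40]}...]"

def ai_gateway(user: User, use_case: str, prompt: str) -> str:
    # Authentication check
    if not user.authenticated:
        audit_log.warning("DENY unauthenticated user=%s", user.name)
        raise PermissionError("authentication required")
    # Access-control check against the role's permitted use cases
    if use_case not in ROLE_PERMISSIONS.get(user.role, set()):
        audit_log.warning("DENY user=%s role=%s use_case=%s", user.name, user.role, use_case)
        raise PermissionError("role not permitted for this use case")
    # Audit logging: record who asked what, supporting later review
    audit_log.info("ALLOW user=%s role=%s use_case=%s prompt_len=%d",
                   user.name, user.role, use_case, len(prompt))
    return call_model(prompt)

# Example: an authenticated SOC analyst summarizing an alert
print(ai_gateway(User("jdoe", "soc_analyst", True), "summarize_alert",
                 "Summarize alert 4711: repeated failed logins from 203.0.113.9"))
```

The same pattern extends naturally to rate limiting, prompt filtering, and output review, all of which reuse controls security teams already operate.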

However, AI amplifies both positive and adverse outcomes; unless the adverse outcomes are effectively overseen and managed, the net benefit of AI can turn negative.


Defining Forms of AI

Recent advancements have focused the spotlight on generative AI and large language models (LLMs), making them viable tools across a range of business functions. These advancements include:

  • Advancements in Training Techniques: Over the past few years, significant advancements in the techniques used to train these models have produced big leaps in performance. Notably, one of the largest jumps has come from integrating human feedback directly into the training process, a technique known as reinforcement learning from human feedback (RLHF).
  • Increased Accessibility: The release of ChatGPT opened the door for anyone with Internet access to interact with one of the most advanced LLMs through a simple web interface. This brought the impressive advancements of LLMs into the spotlight; previously, such powerful models were available only to well-resourced researchers and those with deep technical expertise.
  • Growing Computational Power: The availability of more powerful computing resources, such as graphics processing units (GPUs), along with better data processing techniques, allowed researchers to train much larger models, improving the performance of these language models.
  • Improved Training Data: LLM performance has improved dramatically alongside improvements in collecting and analyzing large amounts of data.
  • Improving the Use of Prompts: The models themselves can also help teach humans how to optimize their use of the system. Just as users who understand the syntax of complex search engines can significantly improve their results with mainstream tools like Google and Bing, humans can learn to interact effectively with GenAI tools to increase their efficacy, reliability, and usefulness (see the sketch after this list).
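
As a concrete illustration of the prompting point above, here is a minimal, model-agnostic sketch of structured prompting. The build_prompt() helper and its fields are illustrative assumptions, not a standard; the idea is simply that an explicit role, context, task, and output format tend to yield more reliable results than a vague request.

```python
# Sketch: structured prompting. build_prompt() is a hypothetical helper,
# not part of any real SDK; it only composes a prompt string.

def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Compose a structured prompt from explicit components."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Respond in this format: {output_format}"
    )

# A vague prompt leaves the model to guess intent and format:
vague = "Tell me about this phishing email."

# A structured prompt states role, context, task, and expected output:
structured = build_prompt(
    role="a security analyst assisting a SOC team",
    context="An employee reported a suspicious email with a ZIP attachment.",
    task="List the top three indicators to check and the next response step.",
    output_format="a numbered list followed by one recommendation sentence",
)
print(structured)
```

Teams can capture prompts like this in reusable templates, so that phrasings proven to be reliable spread beyond the individual who discovered them.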

Why LLMs Are Creating New Risks and Opportunities for Information Security

While AI offers the opportunity to enhance cybersecurity, it is critical to note that threat actors are also using AI, and that using AI in cybersecurity without proper oversight can increase risk to an organization. Security risks involved with the use of AI include the following:

  • Lack of AI Proficiency: The need for AI-proficient cybersecurity professionals will grow as AI technologies, like LLMs, become more prevalent. In the meantime, a skills gap leaves many cybersecurity teams without the expertise needed to manage the risks associated with LLMs or to fully harness the potential of AI.
  • Unmanaged Model Drift: LLMs are trained on vast amounts of data, often from diverse and uncontrolled sources. The complexity of this training data makes it difficult to fully understand and control what the model has learned, which threatens the reliability of the model and, therefore, its usefulness to the cybersecurity team. Potential negative outcomes include data leakage or the generation of inappropriate content. (A minimal drift-monitoring sketch follows this list.)
  • Lack of Transparency: LLMs, like many AI models, are often seen as "black boxes" because their internal workings are not easily interpretable by humans. This lack of transparency can make it difficult to predict or explain the model's output, leading to potential risks in decision-making processes. Ultimately, these tools must gain explainability, dependability, and tamper resistance to become trusted resources supporting the cybersecurity mission.
  • Autonomous Content Generation: LLMs can generate new content autonomously. While this can be useful and can speed decision-making, it also means they can produce harmful or misleading information without human intervention and oversight.
  • Evolving Frameworks: The rapid advancement of LLMs and other AI technologies has outpaced the development and adoption of regulatory and industry frameworks. This can lead to misuse of the technology and difficulties enforcing accountability.
  • Increased Risk Tolerance: The potential benefits of AI technologies like LLMs are driving a strong appetite for their implementation among businesses. However, this eagerness can lead to an implicit increase in risk tolerance, as businesses may rush to adopt these technologies without fully understanding or mitigating the associated risks. This is particularly problematic when tech teams, due to various constraints, are unable to meet the pace expectations to deliver safe LLM solutions. As a result, businesses may end up deploying AI solutions that have not been adequately vetted for security or ethics, thereby increasing their vulnerability to data breaches, misuse, and other harms that create liability for the organization.
  • Data Use Implications: As generative AI models become increasingly sophisticated, they rely on vast amounts of data for training. This raises concerns about the ethical implications of data usage and the potential for misuse. Additionally, traditional information release practices have not fully considered the implications of data being used to train AI models. This can disadvantage more conservative companies, which may hesitate to release data, while less cautious organizations may inadvertently share sensitive information.
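
As referenced in the model-drift item above, below is a minimal sketch of one common way to watch for drift in an AI-assisted security workflow: comparing the distribution of a model's recent outputs (e.g., alert-severity scores) against a baseline using the population stability index (PSI). The scores, bin count, and 0.25 threshold here are illustrative assumptions, not vendor guidance.

```python
# Sketch: detecting model drift by comparing output distributions with the
# population stability index (PSI). All data and thresholds are illustrative.
import math

def psi(baseline: list[float], current: list[float], bins: int = 5) -> float:
    """Population stability index between two score samples."""
    lo = min(baseline + current)
    hi = max(baseline + current)
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def proportions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Small floor avoids division by zero for empty bins
        return [max(c / len(sample), 1e-4) for c in counts]

    p, q = proportions(baseline), proportions(current)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

# Illustrative scores: baseline from validation, current from production
baseline_scores = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6]
current_scores  = [0.4, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]

value = psi(baseline_scores, current_scores)
# A common rule of thumb: PSI > 0.25 suggests significant drift
print(f"PSI = {value:.3f} -> {'investigate drift' if value > 0.25 else 'stable'}")
```

In practice, a team would compute a statistic like this on a schedule and alert when it crosses an agreed threshold, turning otherwise unmanaged drift into a monitored quantity.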
