How AI Impacts Board Readiness for Oversight of Cybersecurity and AI Risks

By Brigadier General Gregory Touhill (USAF, Ret., CISSP, CISM, NACD-DC®), Murray Kenyon (US Bank), Nicola Sanna (Safe Security and The FAIR Institute)

03/11/2025

Ensuring that the board of directors is ready and able to provide the strategic direction necessary to integrate AI capabilities into the organization is a significant contemporary challenge. Artificial intelligence systems are transformative technologies that are disrupting entire industries and reshaping societal interactions. Their capabilities offer tremendous opportunities to organizations, yet, like other automated systems, they also present noteworthy new risks, as they are susceptible to significant cyber vulnerabilities.

Boards must ensure they have access to the right knowledge, data, and talent to understand and carefully weigh the balance between opportunity and risk, so they can make timely, well-informed decisions about how best to incorporate AI capabilities (e.g., those used for analysis, assistance, augmentation, or autonomy) into their organization. Companies can and should leverage existing risk assessment frameworks to evaluate AI risk in economic terms and to identify the most effective risk-mitigation controls.

Boards need to pay close attention to the cyber risks associated with AI systems. Nation-state and cyber-criminal groups have AI systems in their sights and are actively using and targeting them. The volume and severity of these threats continue to grow, and they target vulnerabilities that include those introduced by the poor software coding and security practices of well-intentioned AI system developers eager to rush their products to market. Acquiring and using an AI system that is poorly designed and includes material defects will likely expose your organization to unacceptable risks. Before acquiring AI systems and capabilities, boards should ensure their organization exercises due care and diligence in verifying that suppliers follow best practices in AI engineering, including incorporating DevSecOps software engineering principles into the development of these software-intensive systems. For example, the Software Engineering Institute at Carnegie Mellon continues to publish best practices in AI engineering, software engineering, and cybersecurity that guide developers in building secure, reliable AI systems. Further, risk quantification can help boards distinguish true risk signals from noise. Organizations should consider using available comprehensive models to quantify AI risks in terms that account for potential severity and secondary losses.
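To make that concrete, the sketch below shows one common form of such quantification: a FAIR-style Monte Carlo simulation that expresses a single AI loss scenario in dollar terms. Every parameter value here is an illustrative assumption, not a benchmark; a real analysis would calibrate event frequency and loss ranges from internal incident data and threat intelligence.

```python
import random

# Minimal sketch of FAIR-style Monte Carlo risk quantification for one
# AI-related loss scenario. All numeric parameters are illustrative
# assumptions only.

TRIALS = 10_000

def simulate_annual_loss() -> float:
    """Simulate one year of losses for a single AI risk scenario."""
    # Loss event frequency: twelve monthly Bernoulli trials, ~0.6 events/yr.
    events = sum(1 for _ in range(12) if random.random() < 0.05)
    total = 0.0
    for _ in range(events):
        # Primary loss: direct response/replacement costs (low, high, mode).
        primary = random.triangular(50_000, 2_000_000, 400_000)
        # Secondary loss: fines, litigation, reputational fallout; assumed
        # to materialize in only ~30% of events.
        if random.random() < 0.3:
            secondary = random.triangular(0, 5_000_000, 250_000)
        else:
            secondary = 0.0
        total += primary + secondary
    return total

losses = sorted(simulate_annual_loss() for _ in range(TRIALS))
print(f"Expected annual loss:  ${sum(losses) / TRIALS:,.0f}")
print(f"95th percentile loss: ${losses[int(0.95 * TRIALS)]:,.0f}")
```

Output in this form, an expected annual loss plus a tail percentile, lets directors compare a scenario's loss exposure against the organization's risk appetite and against the cost of candidate controls.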

In addition to cyber threats directed against vulnerabilities in AI systems, risks are also emerging around the data used to train, maintain, and enrich AI systems. Data poisoning attacks, in which a malicious actor deliberately tampers with the data sources an AI system relies on in order to degrade the system's efficacy and the trust placed in it, are a legitimate threat to the integrity of AI systems. So is training on data that is not “ethically sourced” (e.g., data containing personally identifiable information, intellectual property, or classified government information used without the data owner’s permission or curation). Using AI systems whose data provenance and security protections are suspect may expose an organization to significant liabilities. Boards should ensure their organizations verify that suppliers have appropriate rights to the data their systems use and implement best practices in data security. Suppliers should also disclose which AI models they subscribe to and use to augment or enhance the product offerings they sell to your company. Additionally, boards should consult with their general counsel to identify any liabilities arising from third-party failures to maintain proper data security and provenance controls.
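Technical controls can reinforce those contractual assurances. The sketch below (file names and manifest format are hypothetical) verifies received training data against a supplier-provided SHA-256 manifest; this detects tampering in transit or at rest, though not data that was poisoned before the manifest was produced.

```python
import hashlib
import json
from pathlib import Path

# Minimal sketch of verifying training-data integrity against a
# supplier-provided manifest of SHA-256 digests. The manifest layout and
# paths are illustrative assumptions; real deployments would pair this
# with signature verification and documented data-rights attestations.

def sha256(path: Path) -> str:
    """Compute a file's SHA-256 digest, streaming to bound memory use."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest_path: Path, data_dir: Path) -> bool:
    """Return True only if every dataset file matches its recorded digest."""
    manifest = json.loads(manifest_path.read_text())
    ok = True
    for name, expected in manifest["files"].items():
        if sha256(data_dir / name) != expected:
            print(f"Digest mismatch (possible tampering): {name}")
            ok = False
    return ok

if __name__ == "__main__":
    # Hypothetical paths, for illustration only.
    if verify_manifest(Path("dataset_manifest.json"), Path("training_data")):
        print("All dataset files match the supplier's manifest.")
```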

Boards are advised to secure an experienced and trusted independent third-party AI technical advisor. They should also invest in AI-related training opportunities from trusted sources such as NACD and Carnegie Mellon.

For companies that develop AI products, a purpose-built technology or product committee can help focus oversight on the necessary details of AI governance; however, boards should also make AI an agenda item for the full board as part of their overall strategic process.

Imperative for Boards

With AI disrupting so many business and societal models, boards need to act now, with velocity and precision, to ensure their organizations remain competitive and secure.

 
