Boardroom Tool: Questions for Directors to Ask About AI

By Larry Clinton (Internet Security Alliance) and Murray Kenyon (US Bank)

03/11/2025

High-performing boards comprise a diverse set of directors who ask direct and insightful questions as they seek knowledge to make informed decisions. Here are sample questions boards ought to ask about AI and cybersecurity:

 

GENERAL QUESTIONS

  • How are our competitors using AI?
  • How are we using AI?
  • Do we feel obligated to adopt AI?
  • When we do adopt AI, what happens to our risk?
  • How fast should we be moving, and how fast do we need to be moving?
  • How can we use AI to disrupt our business and our industry?
  • What are the risks of investing in AI versus maintaining the status quo?
  • What’s our plan to acquire AI capabilities?
  • Who can help us?
  • How much will AI cost, and what is the expected return on investment?
  • Who will lead our AI effort, and what makes them qualified to do so?
  • How do we measure success?
  • Do we need a Chief AI Officer?
  • What is our risk exposure if malicious cyber actors use AI-enabled technology to attack our infrastructure? How do we know?
  • How can we use AI capabilities to reduce our cyber-risk exposure?
  • Does our use of AI represent the shareholders’ interests?
  • Does the board have a clear understanding of what our organization considers ethical use of AI to be?
  • Has the organization clearly defined and communicated what ethical AI use means for us?
  • Do we have internal processes in place to adequately communicate the ethical use of our AI systems?
  • Do we have channels in place to communicate adequately and appropriately with entities outside our organization about the ethical use of our AI use cases?

 

QUESTIONS REGARDING AI RISKS

  • What are the risks for our expected uses of AI? Can they be quantified?
  • How will the use of AI disrupt the company’s business and industry?
  • What are the governance implications of the use of AI and related policies and controls?
  • Have we segregated training data, so we know the provenance of the data used to train our models?
  • Have we established an AI governance board or committee?
  • What is our process for reviewing and approving AI governance policies that include human review by management?
  • What is our Chief Data Officer’s (CDO) or data governance leader’s strategy for handling data-sharing requests at the scale at which the business is implementing AI?
  • What is our third-party risk associated with AI?
  • Who are our riskiest vendors, and how is our organization managing that risk? (Most vendors pass as much AI risk as possible on to the licensee, especially because this market is largely unregulated at this point.)

 

QUESTIONS REGARDING REGULATION OF AI

  • Have we explored the operational and regulatory challenges related to the proposed use of AI?
  • Where does the proposed AI use case rank on the EU Artificial Intelligence Act scale of risk (unacceptable risk, high risk, limited risk, or minimal risk) for both the provider and user?
  • Are we developing AI in accordance with anticipated legislative and regulatory expectations?
  • Have we assigned responsibility for tracking AI regulatory matters to a chief legal officer or general counsel as regulations develop?
  • Are our policies, processes, procedures, and practices related to the mapping, measuring, and managing of AI risk in place, transparent, and implemented effectively? How do we know?
  • Do our accountability structures ensure appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks?
  • Are policies and procedures in place to address AI risks from third-party software and other supply-chain issues?
  • Are we training our models on protected data that may be subject to opt-out or removal requests?
  • Have we reviewed our insurance policies for AI-related risks and use cases?

 

QUESTIONS REGARDING THE BOARD’S ABILITY TO OVERSEE AI

  • Does the current board possess adequate expertise to provide proper and effective oversight of our use of AI?
  • Does the board need to institute its own AI board education program to enable it to properly carry out its fiduciary responsibility?
  • Should the board hold periodic virtual sessions to educate board members about AI as it pertains specifically to our business?
  • Do we need to restructure the board to effectively manage our extended cyber risk due to our current and anticipated use of AI?
  • Do we need a new committee to focus on AI?
  • Should all the board committees be discussing AI?
  • Should our AI/cyber risk be considered as a separate matter for board discussion and action, or should it be integrated as a part of our overall operations? Or both?

 

QUESTIONS REGARDING OVERSIGHT AND MANAGEMENT OF AI

  • Does our corporate structure ensure management is balancing the potential benefits of AI with potential risk?
  • Is the board considering AI risks simultaneously with economic benefits from AI use cases?
  • Does our budgeting process ensure adequate funding for continuous monitoring, testing, and auditing of AI risk?
  • Is there appropriate and sufficient employee training, including budget, to assure that relevant portions of the organization’s workforce are able to implement the AI-based use case?
  • Have we engaged “red teams” to assess generative AI use cases, thus assuring that all necessary aspects of the organization have had proper input into the development and deployment of safe and resilient AI solutions?
  • Have we considered the company’s outsourcing plan with respect to AI and the risks outsourcing may entail?
  • How do we know that our AI supplier is using best practices?
  • Has the management team conducted adequate due diligence to determine the degree of risk associated with a specific AI use case based on the pre-deployment testing process?
  • Are our testing, monitoring, auditing, and mitigation efforts reflected in the logging and metadata emanating from the AI itself, or do they depend on a human in the loop?
  • Has the management team adequately and empirically determined that the proposed AI use case risk can be mitigated or transferred in line with the organization’s risk appetite?
  • Are processes in place to maintain an acceptable risk profile over time, accounting for the potential for the AI to “drift”?

 
