HBR research finds that only 12% of boards have a named AI governance role, while regulatory frameworks, including the EU AI Act and emerging national requirements, are moving rapidly toward explicit board accountability for AI risk. Most boards receive AI updates as part of a technology briefing. Few have a director with the mandate to ask the governance questions that the regulatory environment now requires. This gap is becoming a liability as the regulatory landscape moves from guidance to enforcement.
Board accountability for AI requires more than an annual briefing. It requires a director with the mandate and the capability to ask the questions that constitute governance oversight: Does the organisation have a framework for categorising AI risk? What is the escalation path when an AI deployment causes harm? How is AI value measured, and against what baseline? Are the organisation's data handling rules specific enough to govern which data enters which AI systems? These are not technology questions. They are governance questions of the same type as financial audit and legal compliance, and they require a named owner at board level with both the mandate and the preparation to ask them.
The diagnostic question is direct: if an AI deployment in this organisation caused significant customer harm tomorrow, who at board level would be accountable for the governance failure? If there is no clear answer, the accountability structure does not yet exist. Building it does not require a new director appointment. It requires an explicit mandate added to an existing director's responsibilities, plus the briefing infrastructure (governance reports, risk registers and escalation records) to support it. The regulatory direction is clear. Organisations that build this structure before it is required will have a governance posture that is easier to demonstrate than one assembled in response to a compliance deadline.