What Do Companies Need to Know About AI? A Checklist for Board Members

September 5, 2023 - By Tom Whittaker

The Institute of Directors (“IoD”) – a professional organization for company directors, senior business leaders, and entrepreneurs – released a “reflective checklist” this spring with the aim of providing boards with a high-level understanding of where their organizations stand when it comes to the ethical use of artificial intelligence (“AI”). Board-level understanding of the opportunities and risks of AI is essential: companies – and their boards – are subject to specific legal duties, which means they need to understand which AI systems are in use, how they are used, and what risks they pose.

Despite such duties, an IoD members’ survey revealed that 80 percent of boards did not have a process in place to audit their AI, and that 86 percent of businesses are already using some form of AI without their boards being aware of it. Against that background, the IoD’s checklist outlines twelve principles intended to help guide the use of AI throughout an organization. We delve into the principles – and key explanations – below; they will require tailoring to individual organizations, to any specific AI systems in use, and to the legal and regulatory frameworks that govern an organization and its AI systems.

It is worth noting that the IoD’s report was published shortly before the United Kingdom’s white paper on regulating AI, so it remains to be seen whether the IoD will decide that its reflective checklist requires updating.

(1) Monitor the evolving regulatory environment – Organizations should be aware of existing and prospective legislation affecting AI. Examples include: the UK government white paper on AI regulation referred to above; the European Union’s proposal for the regulation of artificial intelligence, the AI Act; and the EU AI Liability Directive, which introduces rules specific to damage caused by AI systems. Organizations should also consider how other regulations, such as data protection or sector-specific regulation, apply to their development and use of AI systems.

(2) Continually audit and measure what AI systems are in use and what they are doing – The ethical principles must be auditable and measurable; they should be embodied in the ISO 9001:2015 quality system (or an equivalent suitable system, for example ISO/IEC 42001 when ratified) to ensure a consistent approach to the evaluation and use of AI by the organization. Companies should consider whether their AI systems should be on their risk register, and whether established board committees (e.g., audit, risk) have the relevant training and resources (a minimal sketch of a register entry appears after this list).

(3) Undertake impact assessments which consider the business and the wider stakeholder community – Impact assessments must be undertaken which consider the possible negative effects and outcomes for employees who interact with the AI or whose jobs may be affected. Similar assessments must be undertaken for stakeholder groups such as customers, suppliers, partners, and shareholders.

(4) Establish board accountability – The board is accountable both legally and ethically for the positive use of AI within the organization, including third-party products that may embed AI technologies. Board members should be aware of this accountability, and the board should hold the final veto on the implementation and use of AI in the organization.

(5) Set high-level goals for the business aligned with its values – High-level goals for the use of AI in the organization must be created in line with its vision, mission, and values. Examples of such goals are: augmenting human tasks; enabling better, more consistent, and faster human decisions; and preventing bias. Are these goals clear, written, and measurable?

(6) Empower a diverse, cross-functional ethics committee that has the power to veto – An ethics committee should be established at the organization with the purpose of overseeing AI proposals and implementations. The committee should recommend to the board whether an AI implementation is likely to have a beneficial effect, and should understand its potential negative impacts. Depending on its assessment of those impacts, it should have the power to veto any proposed use of AI.

(7) Document and secure data sources – When defining the purpose of a specific AI implementation, the sources of data must be identified and documented. A clear method of detecting and reporting bias should be developed; if bias is discovered, action should be taken to identify its source and remove it from the AI. Key performance indicators (KPIs) must be implemented to keep bias out of the organization (an illustrative bias KPI is sketched after this list).

(8) Train people to get the best out of AI and to interpret the results – Employees should be trained in the use of AI in order to prevent bias and potentially harmful outcomes, and should be aware of the systems used to monitor and report bias.

(9) Comply with privacy requirements – The AI must be designed and audited to ensure compliance with data privacy legislation such as the GDPR. This entails training AI technical teams so that they may adequately challenge AI developers to ensure AI transparency and compliance with the ethics framework. Technical teams should liaise with the ethics committee regarding their findings.

(10) Comply with secure-by-design requirements – The AI must be secure by design and must withstand the scrutiny of external testing and certification processes such as Cyber Essentials Plus. Penetration testing may be used to ensure that the data sets used in the AI cannot be breached.

(11) Test, and remove from use if bias and other impacts are discovered – The decision to utilize AI rests with the board; so too does accountability for ongoing safe and consistent AI performance. As a result, the board must ensure that AI is tested prior to implementation to confirm compliance with the ethics framework. If the AI is externally sourced, this includes considering whether ethical requirements are ingrained in the procurement process.

(12) Review regularly – Decisions made by AI should be consistently monitored and evaluated against the purpose of the AI and the ethical framework in place (a minimal monitoring sketch appears below). If the AI deviates from that purpose or those ethics in any way, the deviations should: be documented; be reported to the ethics committee; and result in corrective actions being implemented within a reasonable period of time.
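
The checklist stops at the level of principles. To make principle (2) concrete, the following is a minimal sketch – in Python, with entirely hypothetical field names and entries – of what a single AI risk-register record and a simple board-escalation check might look like. It is an illustration under assumptions, not a format the IoD prescribes.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI risk register."""
    name: str            # e.g., "CV screening model"
    owner: str           # accountable business owner
    purpose: str         # documented business purpose
    data_sources: list   # where training/input data come from
    risk_rating: str     # e.g., "low" / "medium" / "high"
    last_audited: date   # date of the most recent review
    board_approved: bool = False  # has the board signed off on its use?

# Hypothetical register with a single entry
register = [
    AISystemRecord(
        name="CV screening model",
        owner="Head of HR",
        purpose="Shortlist job applications",
        data_sources=["historic hiring decisions", "applicant CVs"],
        risk_rating="high",
        last_audited=date(2023, 6, 1),
    )
]

# Escalate anything not yet board-approved or overdue for audit
for record in register:
    overdue = (date.today() - record.last_audited).days > 365
    if not record.board_approved or overdue:
        print(f"Escalate to board: {record.name}")
```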
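
Principle (7) calls for bias KPIs but does not prescribe a metric. As an illustration only, the sketch below computes the demographic parity gap – the spread in favourable-outcome rates across groups – as one possible KPI; the sample data, group labels, and threshold are assumptions for the example.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Favourable-outcome rate per group.

    decisions: iterable of (group_label, favourable) pairs, where
    favourable is True when the AI decision went the subject's way.
    """
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest spread in favourable-outcome rates across groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical decision log: 80% favourable for group A, 60% for group B
sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 60 + [("group_b", False)] * 40)

GAP_THRESHOLD = 0.10  # assumed tolerance, to be set by the ethics committee
gap = demographic_parity_gap(sample)
if gap > GAP_THRESHOLD:
    print(f"Bias KPI breached: parity gap {gap:.2f} exceeds {GAP_THRESHOLD}")
```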
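
For the regular review in principle (12), one lightweight approach – again a sketch under assumed metric names, values, and tolerances, not anything the IoD specifies – is to compare live decision rates against a baseline agreed at implementation and generate deviation reports for the ethics committee.

```python
def check_for_deviations(baseline, live, tolerance=0.05):
    """Compare live metrics against the baseline agreed at implementation.

    Returns deviation reports suitable for the ethics committee;
    metric names, values, and tolerance are illustrative assumptions.
    """
    reports = []
    for metric, expected in baseline.items():
        observed = live.get(metric)
        if observed is None or abs(observed - expected) > tolerance:
            reports.append({
                "metric": metric,
                "expected": expected,
                "observed": observed,
                "action": "document, report to ethics committee, correct",
            })
    return reports

# Baseline agreed at sign-off vs. rates observed this quarter (hypothetical)
baseline = {"approval_rate": 0.70, "parity_gap": 0.05}
live = {"approval_rate": 0.58, "parity_gap": 0.12}

for report in check_for_deviations(baseline, live):
    print(report)
```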


Tom Whittaker is a senior associate and solicitor advocate in the Burges Salmon dispute resolution team. He regularly advises clients on commercially significant and complex civil disputes for a wide range of corporate and government clients across different sectors.
