3.3 Sources for audit criteria from adjacent fields
Not all aspects of AI are covered by dedicated regulation. In these cases, auditors should look to established standards and requirements from related fields. These can provide a strong foundation for assessing the safe and responsible use of AI.
3.3.1 Information security and IT governance
AI systems often process sensitive data or support critical operations. As a result, information security standards are highly relevant. Auditors can check compliance with the relevant national or international information security standards. For example, the German Federal Office for Information Security (BSI) has developed the “IT-Grundschutz” approach. It consists of the following main publications:40
- “IT-Grundschutz” Compendium, which outlines potential threats and security requirements.
- BSI-Standards, which recommend methods, processes and measures for information security.
Similarly, frameworks such as ISACA’s COBIT can guide IT governance and help ensure robust controls are in place.41
3.3.2 Use of external cloud services
Many AI systems, especially those using LLMs, rely on external cloud services. While these services offer flexibility and scalability, they can introduce new risks.
The BSI publication “Secure Use of Cloud Services”42 outlines how users can securely utilise cloud services. For technical assessments of cloud services, auditors can refer to frameworks such as the “Cloud Computing Compliance Controls Catalogue (C5)”43 and the “AI Cloud Services Criteria Catalogue (AIC4)”44 from the BSI, or the Cloud Controls Matrix (CCM) from the Cloud Security Alliance (CSA).45
3.3.3 Data protection
AI systems often use large datasets, which may include personal data. Data protection laws, such as the EU’s General Data Protection Regulation (GDPR), apply to both the development and use of AI. Several national data protection authorities have published guidelines on data protection in AI applications, for example the ICO in the UK46 and the Norwegian Data Protection Authority (Datatilsynet)47; Brazil’s General Data Protection Law (LGPD)48 and guidance from the German Data Protection Conference (DSK)49 are also relevant. Appendix 2 summarises some key challenges relating to compliance with GDPR for AI systems.
3.3.4 Transparency and non-discrimination
Public sector organisations must be transparent in their decision-making. Citizens have the right to understand how decisions that affect them are made. This includes decisions supported or made by AI systems. For example, Article 41(2)(c) of the Charter of Fundamental Rights of the European Union establishes the obligation of administrations to give reasons for their decisions. At a national level, the right to transparency and explainability can often be found in Freedom of Information acts.
Like data protection laws, anti-discrimination laws protect fundamental rights and therefore also apply to AI applications. However, these laws are often formulated to protect individuals against discrimination in individual cases or specific situations. They may need to be interpreted more broadly to extend to algorithmic discrimination that places groups of people at a statistical disadvantage.
3.3.5 Ethical AI
Many ethical requirements for AI are rooted in fundamental rights, such as privacy, fairness, and human dignity. International guidelines can help auditors develop criteria for ethical AI. These guidelines are often based on human rights treaties and national constitutions. For example, the EU High-Level Expert Group on Artificial Intelligence partially based its ethical guidelines for trustworthy AI on the EU Treaties, the EU Charter of Fundamental Rights and international human rights law.50 This approach can serve as a blueprint for developing audit criteria for ethical AI grounded in basic human rights that can be found in many constitutions and international commitments.51
3.3.6 Intellectual property and environmental impact
The use of AI, especially LLMs, raises questions about intellectual property. Training data may include copyrighted material. In the EU, the AI Act and the Copyright Directive52 outline copyright law in relation to AI. The UK requires organisations to comply with national copyright law, and is considering updating policy to provide clarity on how it applies to AI.53
AI systems can also have a significant environmental impact, particularly those that require large amounts of computing power. Some regulations require organisations to report on the environmental footprint of their AI systems. For example, the EU AI Act encourages the use of AI in a way that minimises environmental harm54 and states that, for some AI systems, information on environmental impact in terms of energy consumption should be provided. In the UK, some public sector organisations are required to disclose certain sustainability information, and it is anticipated that this guidance will be extended to include the impact of AI. Assessing the environmental impact of AI is an emerging area for audit attention. While methodologies for measuring this impact are still developing, some tools have been outlined by the OECD.55
For more information, see IT-Grundschutz.↩︎
For more information, see ISACA. (2018). COBIT 2019 Framework: Governance and management objective.↩︎
For more information, see Secure Use of Cloud Services.↩︎
For more information, see Cloud Computing Compliance Controls Catalogue (C5).↩︎
For more information, see AI Cloud Services Criteria Catalogue (AIC4).↩︎
For more information, see Cloud Controls Matrix.↩︎
For more information, see the Information Commissioner’s Office guidance: UK GDPR guidance and resources and Guidance on AI and data protection.↩︎
For more information, see Artificial intelligence and privacy.↩︎
For more information, see LEI Nº 13.709.↩︎
For more information, see Guidance on Recommended Technical and Organisational Measures for the Development and Operation of AI Systems and Guidance on Artificial Intelligence and Data Protection (only in German).↩︎
For more information, see Ethics guidelines for trustworthy AI.↩︎
For example, the National Audit Office of Norway derived audit criteria for privacy and human autonomy from the European Convention on Human Rights in combination with the Norwegian Constitution in the audit report on The use of artificial intelligence in the central government (in Norwegian only; a summary report without audit criteria is available in English).↩︎
For more information, see Directive 2019/790.↩︎
For more information, see UK government guidance: Copyright and Artificial Intelligence.↩︎
EU AI Act, Article 95.↩︎
For more information, see Measuring the environmental impacts of artificial intelligence compute and applications (OECD).↩︎