1 Executive summary

In 2017, the Supreme Audit Institutions (SAIs) of Brazil, Finland, Germany, the Netherlands, Norway, and the UK signed a Memorandum of Understanding to collaborate on data analytics. Recognising that digital transformation is changing how governments operate, the SAIs agreed to share knowledge and develop new audit approaches. In 2019, they committed to jointly produce guidance on auditing artificial intelligence (AI) in the public sector. The first version of this paper was published in 2020 and first updated in 2023.1 In response to significant changes in AI usage and its related risks, the growth of generative AI, new regulations, and the wider availability of commercial AI systems, the paper was substantially revised for the current 2025 version.

AI is now widely used in government to improve services and reduce costs. AI systems can make predictions (predictive AI), recommendations, or decisions, or generate content such as text, images, or audio (generative AI). Most AI systems rely on machine learning, where models learn patterns from data to achieve specific goals.
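To make the notion of "learning patterns from data" concrete, the following minimal sketch (an illustration by the editors, not an example from the paper or its case studies) fits a simple predictive model: ordinary least-squares linear regression in pure Python, which learns a slope and intercept from observed data points and then predicts an unseen value.

```python
# Minimal illustration of a predictive model "learning a pattern from
# data": fit a line y = a*x + b to observed points by ordinary least
# squares, then use it to predict an unseen value.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares estimates of slope and intercept.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Training data generated from the pattern y = 2x + 1.
a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)          # learned slope and intercept: 2.0 1.0
print(a * 10 + b)    # prediction for x = 10: 21.0
```

Real public-sector AI systems are of course far more complex, but the audit questions are the same in kind: what data was the model trained on, what goal does it optimise, and how reliable are its predictions on new cases.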

New technologies tend to be accompanied by new risks. While dedicated legislation is still emerging, there is a need for control mechanisms and audits. The AI community is increasingly focused on ethical principles and the social impact of AI. Since the first publication of this paper, certification for specialised AI auditors has become available.2

While data protection authorities have developed dedicated guidelines and can take on a supervisory role for personal data protection, many of the risks linked to AI applications extend beyond personal data. For example, opaque AI systems can automate and reinforce unfair treatment, undermining trust in public institutions. SAIs must be able to audit AI applications through both compliance and performance audits. Several MoU member SAIs have conducted case studies or pilot audits3 to develop a generic methodology for auditing AI applications or to examine the use of AI in public services.

This paper sets out the main risks of using AI in public services and proposes methods for auditing AI systems. The guidance draws on the experience of the authoring SAIs (Brazil, Finland, Germany, the Netherlands, Norway, and the UK), including audits of machine learning and other software projects. This is not a prescriptive set of criteria and should be used as a guide for developing or adapting auditing methodologies.

The paper is structured into two main parts. Chapter 3 sets out possible audit criteria, covering national regulations, international standards, and widely used guidelines. Chapter 4 presents an audit catalogue4, structured around key areas: project management and governance, data, system development, evaluation before deployment, deployment and change management, and AI systems in production. Each area includes an overview, identifies AI-specific risks, and suggests controls. We also include a helper tool that auditors can use to prepare their audits.

Key challenges identified by the SAIs include:

  • Developers may focus on technical performance, overlooking compliance, transparency, and fairness.
  • Poor communication between product owners and developers can lead to ineffective or costly systems.
  • Many organisations lack the skills to develop AI in-house and rely on external providers, increasing compliance risks.
  • There is uncertainty about the use of personal data in AI, with unclear accountability and limited organisational structures.

Auditors need specific training in the following areas of expertise to perform meaningful assessments of AI applications and to give appropriate recommendations:

  • Baseline audits: to review documentation effectively, auditors need a good understanding of the high-level principles of AI systems and machine learning algorithms, as well as up-to-date knowledge of the rapid technical developments in this field.
  • In-depth audits: for audits that include substantial performance tests, auditors need practical skills in common coding languages and model implementation, as well as the ability to use appropriate software tools.
  • IT infrastructure: because AI systems often rely on cloud-based solutions to meet high computing demands, auditors should have a basic understanding of cloud services and how they support AI operations.

This paper reaches the following conclusions and recommendations for SAIs:

  • SAIs should be equipped to audit AI systems to fulfil their statutory responsibilities and assess whether the audited AI systems deliver efficient, effective, and compliant public services.
  • AI audits require special auditor knowledge and skills. SAIs should invest in developing their auditors’ capabilities: performance, compliance, and IT auditors need a good general understanding of different types of AI systems, their use cases, ethical principles, and AI-specific risks. For technical audit tests beyond the governance audit level, auditors with AI developer skills are needed.
  • The audit catalogue and helper tool in this paper have been successfully applied in several case studies and can be adapted for future audits.

The authors hope this guidance and the accompanying tools will support the international audit community in delivering effective audits of AI.


  1. This first update was based on the experiences the MoU members gained by applying the developed methodology in their audits.↩︎

  2. See for example ISACA’s AAIA certification.↩︎

  3. See for example Understanding algorithms and An Audit of 9 Algorithms used by the Dutch Government by the Netherlands Court of Audit, The use of artificial intelligence in the central government by the National Audit Office Norway (full version in Norwegian only), Methods of data analysis and artificial intelligence in the German federal administration by the German Bundesrechnungshof (German only) or Use of artificial intelligence in government by the National Audit Office UK.↩︎

  4. By audit catalogue we mean a set of guidelines covering both the suggested content of risk-based audit topics and the methodology for performing the respective audit tests.↩︎