Auditability checklist

This part of Appendix One summarises the prerequisites an auditee organisation should be able to provide for audits as described in Chapter 3, the ML audit catalogue.

A list of contact persons covering the knowledge areas and roles described in Table 4.1 (see the glossary for definitions) is helpful, and the respective responsibilities should be defined within the auditee organisation.

Table 4.1: Roles and responsibilities to be defined by the auditee organisation

| Responsibility | Example role title |
| --- | --- |
| Domain knowledge, relevant performance metrics, practical implications | Requisitioning unit: project owner / product owner |
| User of the AI system | Processing official, case worker |
| User support | Process hotline, helpdesk |
| IT support, developer support | Chief information officer |
| Project management | Project leader |
| Raw data quality and understanding | Data engineer |
| ML model details | Developer |
| Internal audit | Controller |
| IT security | IT security officer |
| Data protection | Commissioner for data protection and privacy |
| Budget | Budget holder, budgetary commissioner |

Note that several roles may be filled by the same team or person. If external consultants are used, the auditee organisation has to ensure a proper handover, including sufficient knowledge (and role) transfer.

The audit areas described in the audit catalogue require documentation that covers the aspects summarised in Table 4.2.

Table 4.2: Required documentation

| Audit area | Audit question / aspect |
| --- | --- |
| Governance, project management | Roles and responsibilities (defined and communicated) |
| | Context evaluation: relevant laws and regulations (including required level of transparency), risk assessment (including side effects) and mitigation strategies |
| | Objectives and measure(s) for success |
| | Quality assurance plan |
| | Maintenance, development and succession strategy |
| | Communication with stakeholders (‘customers’ such as a ministry, users, data subjects) |
| | Independent control unit |
| | Human-AI interaction policy |
| | Autonomy and accountability |
| Data | Data acquisition method |
| | Group representation and potential bias (raw data) |
| | Data quality (raw data) |
| | Database structure |
| | List of variables used |
| | Personal data and data protection |
| Model development | Hardware and software specifications |
| | Data transformations, choice of ‘features’ |
| | Choice of performance metric(s) |
| | Optimisation process |
| | Expectation for unseen data |
| | Choice of ML algorithm type (including black box versus white box) |
| | Code quality assurance |
| | Maintenance plan |
| | Model quality assurance |
| | Cost-benefit analysis |
| Model in production | Data update and monitoring (illustrated below) |
| | Model re-training |
| | Automation, system architecture, interface to other systems |
| | Long-term quality assurance |
| | Performance control in production |
| Evaluation | Evaluation method |
| | Comparison of different approaches |
| | Transparency and explainability approaches |
| | Bias and fairness tests (illustrated below) |
| | Security risks and mitigation strategy |
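
As an illustration of the ‘Data update and monitoring’ aspect above, the following sketch computes a population stability index (PSI) to flag distribution shift between development-time and production data. The variable names, toy data and the 0.2 alert threshold are illustrative assumptions, not requirements of the audit catalogue.

```python
# Minimal sketch of a data-drift check using the population stability index (PSI).
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a reference and a current sample."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))  # deciles by default
    clipped = np.clip(current, edges[0], edges[-1])              # keep values in range
    ref_frac = np.histogram(reference, bins=edges)[0] / reference.size
    cur_frac = np.histogram(clipped, bins=edges)[0] / current.size
    ref_frac = np.clip(ref_frac, 1e-6, None)                     # avoid log(0)
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    development = rng.normal(0.0, 1.0, 10_000)  # variable at model development time
    production = rng.normal(0.5, 1.0, 10_000)   # same variable in production (shifted)
    value = psi(development, production)
    # A PSI above roughly 0.2 is a common (informal) trigger for investigation.
    print(f"PSI = {value:.3f} -> {'investigate' if value > 0.2 else 'stable'}")
```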
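
Similarly, as an illustration of the ‘Bias and fairness tests’ aspect, the sketch below shows a simple group-wise check an auditor might request: comparing accuracy and the rate of positive decisions across demographic groups. The column names and toy data are hypothetical; the catalogue does not prescribe a particular metric or tool.

```python
# Minimal sketch of a group-wise fairness check; column names are hypothetical.
import pandas as pd
from sklearn.metrics import accuracy_score

def fairness_report(df: pd.DataFrame, group_col: str = "group",
                    true_col: str = "y_true", pred_col: str = "y_pred") -> pd.DataFrame:
    """Per-group sample size, accuracy and positive-decision rate."""
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            "group": group,
            "n": len(sub),
            "accuracy": accuracy_score(sub[true_col], sub[pred_col]),
            "positive_rate": sub[pred_col].mean(),
        })
    return pd.DataFrame(rows)

if __name__ == "__main__":
    # Toy binary decisions for two demographic groups (hypothetical data).
    decisions = pd.DataFrame({
        "group":  ["A"] * 4 + ["B"] * 4,
        "y_true": [1, 0, 1, 0, 1, 0, 0, 0],
        "y_pred": [1, 0, 1, 0, 1, 1, 1, 0],
    })
    report = fairness_report(decisions)
    # Demographic-parity gap: spread of positive-decision rates across groups.
    gap = report["positive_rate"].max() - report["positive_rate"].min()
    print(report.to_string(index=False))
    print(f"Demographic-parity gap: {gap:.2f}")
```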

To enable technical tests, the code and raw data necessary to reproduce the model, as well as the model itself, should be accessible. Note that where providing copies is not appropriate or feasible, accessibility may be fulfilled by staff being available to run or rerun code, transform data and score the model, as requested by the auditors.
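
For instance, a minimal scoring script of the kind staff might run at the auditors' request could look like the sketch below. The file names, the joblib-serialised scikit-learn model and the assumption that all CSV columns are model features are illustrative, not prescribed.

```python
# Hypothetical scoring script an auditee might run at the auditors' request.
import joblib          # assumes the model artefact is stored with joblib
import pandas as pd

def score(model_path: str, data_path: str, out_path: str) -> None:
    """Load the production model, score the supplied records, save the results."""
    model = joblib.load(model_path)      # trained classifier artefact
    records = pd.read_csv(data_path)     # records supplied by the auditors;
                                         # assumed to contain exactly the model features
    records["score"] = model.predict_proba(records)[:, 1]  # positive-class probability
    records.to_csv(out_path, index=False)

if __name__ == "__main__":
    # Hypothetical file names; the real paths depend on the auditee's setup.
    score("model.joblib", "audit_sample.csv", "audit_scores.csv")
```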