4.5 Deployment and change management
Deploying an AI system is a critical step that moves the technology from development into live use. This stage requires careful planning to ensure the system is integrated smoothly into business processes and delivers its intended benefits. Effective change management helps organisations adapt to new ways of working and reduces the risk of disruption. Successful deployment should also enable process redesign and support the realisation of expected benefits.
This section covers:
- Business readiness and change management.
- Release criteria and gates.
Deployment and change management are closely linked to other stages of the AI lifecycle. The business case, project objectives, and stakeholder engagement required for deployment are set out in Section 4.1. The technical design, user testing, and acceptance criteria underpinning deployment are documented in Sections 4.3 and 4.4. Ongoing monitoring, retraining, and compliance surveillance after deployment are addressed in Section 4.6.
4.5.1 Business readiness and change management
Deploying AI systems often requires significant changes to work processes and organisational structures. To realise the full benefits of AI, organisations may need to redesign existing processes and ensure that new workflows are aligned with the capabilities of the AI system. This could include mapping current processes and identifying opportunities for automation or improvement. Human and AI roles should be clearly defined, and redundant steps or bottlenecks should be addressed to maximise impact.
The purpose and function of the system should be clearly communicated and understood by all stakeholders, including end-users, IT staff, and management. Training is essential, especially on the uncertainty of probabilistic systems, possible “hallucinations” of LLMs, and appropriate quality assurance measures. Effective communication strategies should be employed to build ownership and trust among stakeholders. Mechanisms such as public consultations and collaborative workshops can be used to engage stakeholders.
AI policy documents, such as an internal AI strategy or guidelines for safe and ethical use, can help prepare staff throughout the organisation to adapt to AI-based solutions. These documents should clarify the respective roles and responsibilities and ensure that end-users are aware of the AI system’s limitations. Guidance should clarify the interaction between AI systems and human case workers, including authority and accountability in case of disagreement.
Defined processes for the transition from development to production, as well as succession planning and handover arrangements for when key people leave the project, are critical. Organisations should consider establishing a forum with relevant technical expertise, independent of the development team, to critically assess the use of model outputs. This includes both internal control units and mechanisms for handling complaints from users or data subjects.
To ensure benefit realisation, organisations should set clear targets (such as efficiency gains, cost savings, improved service quality, or reduced error rates) and develop indicators and monitoring arrangements to track whether expected benefits are being achieved after deployment. Responsibility for benefit realisation should be assigned, and lessons learned should be captured and used to inform future projects.
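For illustration, the hedged sketch below shows one way such indicators could be tracked against targets in code; the indicator names, baseline, target, and measured values are hypothetical assumptions, not recommended metrics.

```python
# Minimal sketch of benefit-realisation tracking. The indicator names,
# targets, and measured values below are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class BenefitIndicator:
    name: str
    baseline: float          # value before deployment
    target: float            # agreed benefit-realisation target
    measured: float          # latest post-deployment measurement
    higher_is_better: bool = True

    def target_met(self) -> bool:
        if self.higher_is_better:
            return self.measured >= self.target
        return self.measured <= self.target

# Hypothetical indicators for a case-handling AI system.
indicators = [
    BenefitIndicator("cases handled per day", baseline=40, target=55, measured=52),
    BenefitIndicator("error rate (%)", baseline=6.0, target=4.0, measured=3.5,
                     higher_is_better=False),
]

for ind in indicators:
    status = "met" if ind.target_met() else "NOT met"
    print(f"{ind.name}: baseline {ind.baseline}, target {ind.target}, "
          f"measured {ind.measured} -> target {status}")
```

A regular report of this kind, reviewed by whoever is assigned responsibility for benefit realisation, makes it visible early when an expected benefit is not materialising.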
4.5.1.1 Risks to consider
- Technical knowledge about a particular model is concentrated in a few staff members or external consultants, leading to miscommunication and inefficient implementation.
- Users of a new AI system are unwilling to change established work processes or are overconfident or sceptical about the AI system’s capabilities.
- Rigid or inflexible work processes delay or impede adoption of the AI system.
- Lack of transparency and explainability for end-users.
- Dysfunctional transition from development to production.
- End-users may not be sufficiently aware of the AI system’s limitations, leading to non-transparent or unfair decisions.
- Insufficient training or guidance for end-users, leading to misuse or misunderstanding.
- Lack of defined processes for transition, succession planning, and handover.
- Inadequate communication and stakeholder engagement, leading to resistance or lack of trust.
- Failure to redesign processes or set benefit realisation targets, resulting in missed opportunities for improvement.
4.5.1.2 Expected controls
- Provide training to ensure users have the skills and understanding needed to use AI systems effectively, including critical evaluation skills.
- Develop clear policies or guidance covering human-AI interaction, clarifying authority and accountability.
- Define processes for transition from development to production, succession planning, and handover.
- Establish forums for independent internal control and external complaints.
- Use communication strategies to engage stakeholders and build trust.
- Map and redesign business processes to integrate AI effectively.
- Set benefit realisation targets and monitor progress against them.
- Assign responsibility for benefit realisation and continuous improvement.
4.5.2 Release criteria and gates
Release into production should be contingent on passing all pre-deployment evaluations, including technical, ethical, and legal acceptance criteria (see Sections 4.3 and 4.4). The results of these evaluations, including user acceptance testing and evidence of meeting business KPIs, must be documented and signed off by relevant stakeholders.
Release criteria should include compliance sign-off, documentation packs (such as model cards, system cards, and audit cards), and publication of relevant information for transparency and accountability (see Section 4.4). Change management processes must be in place to govern future updates, migrations, and rollbacks. This includes version control, compatibility testing, and fallback mechanisms to ensure continuity and reliability.
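As a minimal sketch of how such a gate could be automated, the example below blocks release unless every required evaluation has a documented pass and a named sign-off; the gate names and the evidence structure are illustrative assumptions rather than a prescribed format.

```python
# Minimal sketch of an automated release gate. The gate names and the
# evidence structure are illustrative assumptions, not a standard format.
REQUIRED_GATES = [
    "technical_acceptance",   # e.g. accuracy and robustness tests (Section 4.3)
    "ethical_legal_review",   # e.g. fairness and compliance sign-off (Section 4.4)
    "user_acceptance_test",
    "documentation_pack",     # model card, system card, audit documentation
]

def release_approved(evidence: dict) -> bool:
    """Approve release only if every gate passed and was signed off."""
    for gate in REQUIRED_GATES:
        record = evidence.get(gate)
        if not record or not record.get("passed") or not record.get("signed_off_by"):
            print(f"Release blocked: gate '{gate}' incomplete.")
            return False
    return True

# Hypothetical evidence collected during pre-deployment evaluation.
evidence = {
    "technical_acceptance": {"passed": True, "signed_off_by": "QA lead"},
    "ethical_legal_review": {"passed": True, "signed_off_by": "DPO"},
    "user_acceptance_test": {"passed": True, "signed_off_by": "Business owner"},
    "documentation_pack":   {"passed": True, "signed_off_by": "Project manager"},
}
print("Release approved:", release_approved(evidence))
```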
Where AI systems are purchased or rely on external models, service level agreements should ensure continuous support, and a degree of control over external infrastructure or services sufficient for incident management and failure analysis (see Section 4.6).
4.5.2.1 Risks to consider
- An immature or underperforming system that does not meet the release criteria is put into production.
- Behaviour of previous production versions cannot be reproduced after release of a new version.
- Insufficient support for, or control over, purchased or externally developed systems.
4.5.2.2 Expected controls
- Ensure release criteria and gates, including compliance sign-off and documentation packs, are met.
- Establish service level agreements for purchased or externally developed systems, ensuring continuous support and control.
- Maintain version control, compatibility testing, and fallback mechanisms for change management (see Section 4.6), as illustrated in the sketch after this list.
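The following is a minimal sketch of such a fallback mechanism, assuming a simple in-house model registry; the ModelRegistry class and its methods are hypothetical constructs for illustration, not a specific tool's API.

```python
# Minimal sketch of versioned deployment with rollback. The registry
# interface is a hypothetical construct, not a specific product's API.
class ModelRegistry:
    def __init__(self):
        self.versions: list[str] = []   # ordered history of released versions
        self.production: str | None = None

    def release(self, version: str) -> None:
        """Promote a new version; keep history so rollback stays possible."""
        if self.production is not None:
            self.versions.append(self.production)
        self.production = version

    def rollback(self) -> str:
        """Restore the most recent previous version after a failed release."""
        if not self.versions:
            raise RuntimeError("No previous version to roll back to.")
        self.production = self.versions.pop()
        return self.production

registry = ModelRegistry()
registry.release("model-v1.2")   # current stable version
registry.release("model-v1.3")   # new release fails compatibility tests
registry.rollback()              # fall back to v1.2
print("Production version:", registry.production)
```

Keeping the version history explicit is what makes the second risk above (irreproducible behaviour of previous production versions) auditable: any earlier version can be restored and re-run against the same inputs.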