Understanding why healthcare transformation initiatives frequently fail to achieve their intended impact
Dr. Rhys Jefferies

Mar 25
Healthcare transformation rarely fails because the ambition was wrong. Most programmes begin with a legitimate objective: improve access, redesign pathways, standardise care, digitise information, reduce unwarranted variation, or move services towards more effective and sustainable models of delivery. The difficulty lies in turning strategic intent into routine practice in complex, high-pressure systems. NHS England’s Change Model is explicit that effective and sustainable change depends on multiple interacting elements, including shared purpose, leadership, spread and adoption, measurement, improvement tools, and project and performance management.[1]

The gap between strategic intent and operational reality
A major source of failure is the gap between strategic sponsorship and operational reality. Large programmes can appear well supported at senior level while still being poorly aligned with local workflows, staffing capacity, or day-to-day service pressures. This was seen clearly in national evaluations of NHS electronic record implementation.
One BMJ study followed 12 early-adopter NHS acute hospitals and specialist settings over two and a half years and found that implementation and adoption were far more difficult and uneven than initially expected.[3]
The lesson was not that digital transformation lacked value, but that centrally backed change could not succeed through technical deployment alone. Local ownership, workflow fit, adaptation, and implementation capability were critical.
Why strong design does not guarantee successful implementation
One of the most common reasons transformation underdelivers is that organisations treat it primarily as a design exercise. Considerable effort is often invested in future-state models, programme structures, business cases, and executive approvals. These are necessary, but they do not in themselves create operational change. The implementation literature has repeatedly shown that the decisive question is not simply whether an intervention is evidence-based, but whether it can be adopted, embedded, spread, and sustained in a specific context. Greenhalgh and colleagues developed the NASSS framework precisely to explain why promising innovations in health and care are so often not adopted, are later abandoned, or fail to scale despite early promise.[2]
Readiness, fit, and timing as determinants of uptake
Readiness is another frequent blind spot. Organisations often move from ambition to mobilisation without establishing whether teams are genuinely ready to work differently. Readiness is not the same as formal approval or verbal support. It includes shared commitment, confidence, capability, time, and the practical resources required to implement change. Weiner’s theory of organisational readiness defines it as a shared psychological state in which members feel both committed to implementing change and confident in their collective ability to do so.[4] That distinction matters because programmes often overestimate readiness by listening mainly to sponsors rather than testing whether services are actually prepared to absorb and sustain the work of change. National digital transformation data illustrates this problem well.
The National Audit Office reported that between 2016 and 2017 the proportion of trusts rating their digital readiness as high rose from 65% to 83%. Yet the same report noted that 16% of trusts still rated their digital capability as low, and only 54% reported that digital records were available at the point of care for clinical decision-making.[5]
In other words, headline progress in readiness did not automatically translate into consistent practical usability where care was actually being delivered. This is exactly the kind of gap that causes transformation programmes to appear more advanced on paper than they feel in practice.
A further issue is contextual fit. New models of working often fail not because staff disagree with them in principle, but because they do not align well enough with the realities of delivery. Implementation research consistently shows that contextual factors influence whether change becomes usable and sustainable. Workflow, local leadership, competing priorities, professional roles, organisational history, and the wider implementation climate all shape adoption.[6] When that fit is weak, organisations often see partial uptake, workarounds, symbolic compliance, or fragmented delivery across teams. The intervention may be sound in theory, but its real-world impact remains limited.
Timing also matters: the point at which an intervention is introduced can significantly influence how it is received and whether it is adopted.
In one of our own theatres improvement programmes, we introduced a software solution for aligning shared resources as an enabler within a live operational improvement context rather than as a standalone technology deployment. Because the implementation was timely, linked to a recognised service objective, and well aligned to local operational needs, executive sponsorship was secured early and uptake reached 100% within the first few months of mobilisation.[7]
This is a practice example rather than a published evaluation, but it reinforces a wider point from the literature: adoption is shaped not only by the quality of the intervention, but by its timing, context, and fit.[2][6]
What published evidence tells us about delivery conditions
Published case studies also show that stronger outcomes are usually associated with a better balance between strategic direction and local capability.
The independent evaluation of the Global Digital Exemplar programme found that it supported 51 provider organisations and involved £302 million of central investment.[8] Evaluators concluded that the programme largely achieved its aims, but they did not attribute progress to funding or technology alone. Success was associated with organisational capability, implementation support, governance, and structured learning between sites.
This is important because it shows that even well-funded programmes depend on the quality of delivery conditions, not simply the attractiveness of the solution being introduced.
Sustainability is a separate challenge again. Many programmes achieve enough momentum to launch, but not enough reinforcement to last.
A long-term study of the Productive Ward: Releasing Time to Care programme explored its impact up to 10 years after implementation across six hospitals. The study drew on 88 interviews, 10 ward manager questionnaires, and structured observations on 12 wards.[9]
The researchers found that while some material and practice legacies remained visible, the intervention itself had not been sustained as a continuous quality-improvement approach. That distinction matters. Transformation often leaves tools, habits, or traces behind, but that is not the same as durable, system-wide operational change.
From transformation ambition to practical delivery
The practical implication is that transformation should be approached as a delivery and adoption challenge from the outset. That means starting with clear executive sponsorship and defined scope, but then moving quickly into detailed diagnostic work, gap analysis, readiness assessment, and scenario testing. It means understanding stakeholder preferences and operational constraints before finalising the intervention. It means creating an implementation strategy supported by leadership, governance, PMO discipline, communications, and change management rather than assuming that a good idea will spread through advocacy alone. NHS England’s guidance on large-scale change is clear that spread and adoption, measurement, leadership, and project and performance management need to be treated as integral elements of change rather than downstream tasks.[1][10]
It is also important to distinguish between intervention performance and implementation performance. Organisations are often quick to measure target outcomes such as productivity, waiting times, utilisation, quality, or financial improvement. These matter, but they do not explain whether the change is actually being adopted. Implementation performance metrics should sit alongside outcome metrics: uptake, compliance with the new model, milestone delivery, leadership grip, issue resolution, and sustainability signals. Without that distinction, programmes can struggle to tell whether disappointing results reflect a weak intervention, weak implementation, or both. The wider implementation literature strongly supports this more disciplined view of change as something that must be embedded, not merely launched.[2]
Conclusion
Ultimately, the most important question in transformation is rarely, “Is this a good idea?” More often, it is, “What has to be true for this to work here?” Healthcare transformation initiatives frequently fail to achieve their intended impact because organisations move too quickly from ambition to intervention without paying enough attention to context, readiness, fit, ownership, and sustainability. They are more likely to succeed when change is designed around the realities of service delivery and supported through a structured approach to implementation, adoption, and review. Strategic intent is only the starting point. Impact depends on what can be delivered, adopted, and sustained in the real world.[1][2]
References
[1] NHS England. The Change Model Guide. 2018.
[2] Greenhalgh T, Wherton J, Papoutsi C, et al. Beyond Adoption: A New Framework for Theorizing and Evaluating Nonadoption, Abandonment, Scale-up, Spread, and Sustainability of Health and Care Technologies (NASSS). 2017.
[3] Sheikh A, Cornford T, Barber N, et al. Implementation and adoption of nationwide electronic health records in secondary care in England: final qualitative results from prospective national evaluation in “early adopter” hospitals. BMJ. 2011.
[4] Weiner BJ. A theory of organizational readiness for change. Implementation Science. 2009.
[5] National Audit Office. Digital transformation in the NHS. 2020.
[6] Greenhalgh T, Wherton J, Papoutsi C, et al. (NASSS framework, 2017); Robert G, Sarre S, Maben J, et al. (Productive Ward sustainability study, 2020). Evidence on implementation complexity, contextual fit, and sustainability in healthcare improvement.
[7] Internal practice example from a theatres improvement programme using a shared-resource alignment software intervention; not a published external evaluation.
[8] University of Edinburgh, NHS Arden & GEM CSU, and UCL. Full Report of the Independent Evaluation of the Global Digital Exemplar Programme. 2021.
[9] Robert G, Sarre S, Maben J, Griffiths P, Chable R. Exploring the sustainability of quality improvement interventions in healthcare organisations: a multiple methods study of the 10-year impact of the “Productive Ward: Releasing Time to Care” programme in English acute hospitals. BMJ Quality & Safety. 2020.
[10] NHS England. Leading Large Scale Change: A practical guide. 2018.