
Why new ways of working only become embedded when they fit operational reality

  • Writer: Dr. Rhys Jefferies
  • 6 days ago
  • 5 min read

Updated: 3 days ago

A change can be evidence-based, strategically sound, and widely supported in principle – and still fail in practice. One of the most common reasons is poor operational fit. New ways of working often look coherent on paper but collide with the realities of service delivery: time pressure, fragmented workflows, competing priorities, local resource constraints, professional boundaries, and the effort required to do work differently in a live system. In practice, many change efforts do not fail because people disagree with the objective. They fail because the intervention does not fit the environment in which people are expected to use it.


That is why operational fit should be treated as a precondition for embedding change, not a detail to address later.[1][4][5]



Why good interventions still fail in practice


A useful way to think about this is through the idea of fit between people, tasks, and the intervention itself. The FITT framework was developed on the basis that adoption depends on the fit between individuals, tasks, and technology rather than on the qualities of any one of these alone.[1] In healthcare, that is a practical insight.


A change may be clinically sensible, but if it increases task complexity, disrupts timing, demands skills people do not yet have, or adds friction to already pressured workflows, uptake will be weak or inconsistent.

Later work extending FITT in healthcare reinforces the same point: what looks like resistance often turns out to be a fit problem.[2]


Operational fit as a condition for successful embedding


Broader implementation research points in the same direction. The updated CFIR describes operational fit through constructs such as compatibility, available resources, and implementation climate.[3] The NASSS framework goes further by showing how non-adoption and abandonment often arise when complexity builds across the intervention, the adopters, the organisation, and the wider context.[4] In practice, poor fit is rarely a single issue. It is usually the cumulative effect of multiple small mismatches.


What poor operational fit looks like in real settings


Poor operational fit tends to show up in familiar ways. A new process is technically adopted but old workarounds remain. A tool is made available but used inconsistently across teams or shifts. Staff comply when monitored but revert under pressure. The programme may still look active at governance level, but its operational footprint is fragile. We saw this in an early MVP of our software for coordinating resource and financial risk, which achieved 100% adoption across surgical specialties within a trust. Yet a separate legacy process for changing resource allocation continued in parallel and weakened the intended purpose of the software. Rather than evidence that the software could not work in practice, this was a valuable MVP lesson: high adoption alone is not enough if surrounding processes still shape operational and financial decisions in ways that undermine the intervention. That insight directly informed later development of the product and the implementation approach around it.


Context matters: when the environment shapes uptake


Operational fit also has a strong contextual dimension. It is not only about workflow in a narrow sense. It is about whether the design of the intervention matches the uncertainty, pace, cognitive load, and decision environment in which it is being used. I saw this clearly in work I published with colleagues in 2022 on the implementation of a national COVID-19 hospital guideline across NHS Wales. At that stage of the pandemic, the evidence base was sparse, evolving rapidly, and open to varied interpretation. In that context, a static guideline would have been a poor fit with operational reality. The response was to create a national web-based resource with fixed core content and dynamic updates as the evidence changed, which at that time it did frequently, in a format designed for quick clinical synthesis. The result was more than 4,500 registrants in the first wave alone, covering nearly 100% of respiratory, intensive care, and emergency unit consultants in Wales, around 170,000 page views, over 31,000 video plays, and repeated use averaging 6 visits per registrant.[6] For me, the important lesson was not simply that the guidance was accessed.


It was that uptake improved because the form of the intervention fitted the uncertainty and operational tempo of the context in which clinicians were working.

That example also shows why operational fit should not be confused with making change easier in a superficial sense. Sometimes fit requires simplification. Sometimes it requires adaptation. Sometimes it requires redesigning the delivery mechanism rather than the core intervention. The question is not “How do we lower the standard?” but “How do we preserve intent while making the intervention workable in the reality of the setting?” Reviews of implementation context increasingly support this view: successful change depends not only on the intervention itself but on the social and organisational processes surrounding it. A model that works in one setting may need modification, additional support, or different sequencing elsewhere.[2][5]


Testing fit before expecting adoption


A common mistake is to assume that if people are engaged and supportive, operational fit will take care of itself. It rarely does. Engagement helps surface fit problems, but it does not remove them. Leaders still need to ask practical questions. What additional tasks does this create? What routines does it interrupt? What spatial, temporal, or staffing conditions does it assume? What new coordination does it require across teams? What happens to the intervention when service pressure rises? Which legacy systems might undermine its intended effects? These are the questions that determine whether a change becomes embedded and impactful in the real world. Extensions of the FITT framework have found that environmental factors such as ward rhythms, space limitations, and the practical conduct of work can undermine use even where the basic user-task-technology fit looks reasonable.[1][2]


For leaders, the implication is straightforward.


Operational fit should be tested early, not inferred later from uneven adoption.

That means involving the people who will actually do the work, observing live processes rather than relying only on idealised maps, and stress-testing whether the intervention remains workable under pressure. It means being willing to adapt format, sequence, roles, or interfaces without losing the underlying purpose of the change. It also means tracking implementation signals that suggest fit problems: workarounds, drop-off in use, variation between settings, delays at handoff points, and reversion to previous practice.[3][4][5]


Conclusion


Ultimately, new ways of working only become embedded when they fit operational reality. Change does not fail only because people resist it. It often fails because it asks people to work in ways that do not align with the tasks, rhythms, constraints, and uncertainty of real service delivery. The practical test of a good intervention is not whether it looks coherent in design, but whether it remains workable in use. In healthcare, that is often the difference between an intervention that is merely introduced and one that can withstand the pressures of real delivery.[1][6]


References


[1] Ammenwerth E, Iller C, Mahler C. IT-adoption and the interaction of task, technology and individuals: a fit framework and a case study. BMC Medical Informatics and Decision Making. 2006.

 

[2] Kujala S, Hörhammer I, Kaipio J, Heponiemi T. Applying and Extending the FITT Framework to Identify the Challenges and Opportunities of Successful eHealth Services for Patient Self-Management: Qualitative Interview Study. JMIR Human Factors. 2020.

 

[3] Damschroder LJ, Reardon CM, Widerquist MAO, Lowery J. The updated Consolidated Framework for Implementation Research based on user feedback. Implementation Science. 2022.

 

[4] Greenhalgh T, Wherton J, Papoutsi C, et al. Beyond Adoption: A New Framework for Theorizing and Evaluating Nonadoption, Abandonment, Scale-up, Spread, and Sustainability of Health and Care Technologies (NASSS). Journal of Medical Internet Research. 2017.

 

[5] Schroeder D, et al. Context counts: a review of implementation context in healthcare. 2022.

 

[6] Jefferies R, Ponsford MJ, Davies C, Williams SJ, Barry S. Strategies to promote guideline adoption: lessons learned from the implementation of a national COVID-19 hospital guideline across NHS Wales. Future Healthcare Journal. 2022.
