Methodology: Data drawn from CN practitioner engagements, client programme reviews and publicly available programme outcomes data from government and regulated industries. Programmes were included only where month 12 (or equivalent end-state) outcomes data were available. The sample is weighted toward large-scale transformation (£5m+ programme investment). Full methodology available on request.
"The data is not ambiguous. Programmes that under-invest in change management do not achieve their projected benefits. The difference is not marginal — it is the difference between delivery and failure. And in 73% of cases, the change management budget was reduced after programme approval."
Top-quartile programmes had change practitioners involved in programme design from the first week. Bottom-quartile programmes brought change management in after the operating model was designed — on average, in week 14 of delivery.
Top-quartile programmes explicitly mapped informal influence structures and designed engagement specifically for informal leaders. Bottom-quartile programmes engaged formal stakeholders by seniority; 78% of bottom-quartile programmes did not identify informal leaders at all.
Top-quartile programmes ran structured manager enablement programmes — including scenario practice for difficult conversations. 91% of bottom-quartile programmes ran manager briefings only.
91% of top-quartile programmes established a measurable baseline before any programme activity could affect the outcomes being measured. 67% of bottom-quartile programmes had no baseline at all, making benefit verification impossible.
Top-quartile programmes maintained active embedding resource for at least 12 weeks post go-live. 84% of bottom-quartile programmes stood down change resource at or before go-live.
Top-quartile programmes planned and resourced a formal month 12 benefits review before delivery began. Only 12% of bottom-quartile programmes conducted any form of month 12 verification — and of those, 71% conducted it informally without reference to a baseline.
"Cutting the change management budget is not a cost saving. It is a deferred cost — typically at a multiple of three or more. The programmes that cut change management investment in months three to six are the programmes that come back for remediation in year two. The remediation costs more than the original change programme would have."
This finding has a specific implication for programme governance. Business cases that are approved with adequate change management investment and then reduced in delivery are not saving money — they are taking on a liability. Programme boards and finance leads who approve mid-programme reductions to the people workstream should understand what the historical evidence says about the likely outcome.
Note: delivery rate defined as % of programmes delivering ≥75% of projected benefits at month 12. Not % of benefit value delivered — % of programmes meeting threshold.
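The distinction drawn in the note above can be made concrete with a short sketch. The figures below are hypothetical, for illustration only, and are not drawn from the study sample; they show how a portfolio's delivery rate (share of programmes clearing the 75% threshold) can differ from the share of total benefit value delivered.

```python
# Hypothetical programme data: projected vs realised benefit at month 12, in £m.
programmes = [
    {"projected": 10.0, "realised": 9.0},   # 90% of projection: meets threshold
    {"projected": 20.0, "realised": 12.0},  # 60% of projection: misses threshold
    {"projected": 5.0,  "realised": 4.0},   # 80% of projection: meets threshold
]

THRESHOLD = 0.75  # a programme "delivers" if it realises >= 75% of projected benefits

# Delivery rate: the percentage of PROGRAMMES meeting the threshold.
meeting = sum(1 for p in programmes if p["realised"] / p["projected"] >= THRESHOLD)
delivery_rate = meeting / len(programmes)

# Contrast: the percentage of total benefit VALUE delivered across the portfolio.
value_delivered = sum(p["realised"] for p in programmes) / sum(p["projected"] for p in programmes)

print(f"Delivery rate: {delivery_rate:.0%}")              # 67%
print(f"Benefit value delivered: {value_delivered:.0%}")  # 71%
```

In this illustration, two of three programmes clear the threshold (a 67% delivery rate) even though the portfolio realises 71% of its projected benefit value; the two measures answer different questions and should not be conflated.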
Boards that approve business cases with inadequate change management investment are approving programmes that are statistically unlikely to deliver their projected benefits. The data on this is not ambiguous.
73% of programmes in this sample had their change management budget reduced after approval. In most cases this decision was made at programme director level, without explicit board recognition that it was a programme risk decision rather than a cost-saving decision.
Only 29% of programmes in the sample included any form of month 12 verification. The 71% that did not cannot demonstrate whether they delivered. They cannot learn from what went wrong. And they cannot hold their delivery partners accountable for outcomes rather than activities.
If this data reflects what you're seeing in your organisation, we should talk.