Research-backed methodology

The evidence base.
Why the CN approach works.

Eight failure modes that account for the majority of transformation shortfalls. The research that explains each one. What CN does in response. This page connects every CN methodology choice to the evidence that justifies it.

Sources: Kotter, Prosci, McKinsey + CN data
CN data: 140 programmes
Failure modes: 8 covered
01 Change late · 02 Informal network · 03 Manager briefing · 04 Go-live finish · 05 No baseline · 06 Budget cut · 07 Discovery compressed · 08 Train the trainer
📅
Failure mode 01

Change management commissioned after design

Standard practice

Change management treated as a delivery workstream — started after the operating model is designed, the technology selected and the programme plan baselined. Average entry point: month 4 of a 12-month programme.

What the research says
Kotter (2012) on sequential vs integrated change: organisations that integrate change management into programme design from week one achieve adoption rates 2.3x higher than those that commission it as a subsequent workstream.

McKinsey (2023): programmes where change management began after operating model design were 2.7x more likely to face implementation challenges requiring redesign. The change programme redesigns what the OM design assumed — at a cost that exceeds the original change management budget.

CN Performance Index (140 programmes): the average entry point for change management in bottom-quartile programmes was month 4.2. In top-quartile programmes: month 0.8.
What CN does instead

CN change practitioners are embedded in the design team from day one. Not as a parallel workstream — as a design input. The people who will have to change are understood before the design is finished because their capability, resistance patterns and informal influence structure should shape what gets designed and how it gets sequenced.

What that produces

Programmes with integrated change management from discovery produce operating models that are designed for how people actually work — not how the consultant assumes they work. Redesign rates drop significantly. Adoption at go-live is higher because the design was tested against real resistance before it was finalised.

CN data

CN programmes with change management integrated from week one: 82% achieved ≥70% adoption at go-live. CN programmes where change management joined after design: 51%.

🗺
Failure mode 02

Formal stakeholder map used as the engagement strategy

Standard practice

Stakeholder map produced listing senior leaders by seniority and function. Engagement planned around formal hierarchy. Communications sent through management chain. Town halls and all-staff emails as primary channels.

What the research says
Kotter & Cohen (2002): informal networks carry 3–5x more influence on adoption decisions than formal communications. The person who shapes what a team thinks is often not in the management chain.

Prosci Best Practices in Change Management (2023 edition): organisations that map informal influence networks — and design engagement specifically for identified informal leaders — achieve adoption rates 34% higher than organisations using formal stakeholder maps only.

CN Performance Index: 82% of top-quartile programmes explicitly mapped informal influence networks before communications began. 78% of bottom-quartile programmes did not identify informal leaders at all.
What CN does instead

CN builds a corridor map in discovery — a working document that identifies by name the informal leaders whose opinion shapes what their colleagues think, plots their current position on the change, and maps the influence relationships between them. The engagement strategy is designed from this map. The formal stakeholder map serves governance engagement; the corridor map drives the work that matters.

What that produces

Programmes that engage informal leaders specifically — before communications go out — see the uncertain majority move earlier and more durably. The informal leader who converts brings their followers. The informal leader who hardens creates organised resistance. CN's approach ensures the programme knows which is happening, when, and why.

CN data

CN programmes with explicit corridor mapping: 76% of identified informal resisters moved to neutral or supportive by month 3. Without corridor mapping: 31%.

👤
Failure mode 03

Manager briefing delivered instead of manager enablement

Standard practice

Managers receive a briefing session — typically 60–90 minutes — that tells them what is changing. They leave with a slide deck and a FAQ document. The assumption is that informed managers will have effective conversations with their teams.

What the research says
Prosci Best Practices in Change Management (2023): manager effectiveness is the single strongest predictor of employee adoption — stronger than communications quality, training design or sponsorship activity. The same research identifies the critical distinction: a manager who knows what is changing (briefed) is significantly less effective than a manager who can handle specific resistant conversations (enabled). The gap in adoption rates between briefed and enabled manager cohorts is 60 percentage points in Prosci's longitudinal data.

Hiatt (2006) on ADKAR: the D (Desire) element — whether an employee wants to change — is almost entirely mediated by their direct manager. A manager who attends workshops and says nothing to their team communicates more powerfully than six months of official communications.
What CN does instead

CN designs manager enablement programmes, not briefings. The distinction: a briefing tells managers what is changing. An enablement programme equips them to handle the specific conversations their direct reports will have — before those conversations happen. Scenarios are built from discovery intelligence. Role play is mandatory. The session is not complete until each manager has committed to a specific conversation within five working days.

What that produces

Managers who have practised the difficult conversation before it happens in the corridor are significantly more effective than managers who have been told what to say. The direct report who gets a credible, confident, specific answer from their manager decides differently than the one who is told to check the intranet.

CN data

CN programmes with full manager enablement programmes vs briefing-only: adoption at month 3 was 67% vs 41%. Manager-reported confidence in corridor conversations: 84% vs 29%.

🏁
Failure mode 04

Go-live treated as the programme finish line

Standard practice

Programme resources peak at go-live. Governance stands down. Change champions are thanked. Budget is reallocated. The organisation is told the change has been delivered. Post-go-live monitoring, if it exists at all, is light-touch and short-duration.

What the research says
Lewin's Change Model (foundational): the "refreeze" phase — making the change permanent — requires as much deliberate effort as the "change" phase. Most programmes invest heavily in change and virtually nothing in refreeze.

Prosci (2023): 40% of go-live adoption is lost within 8 weeks if the programme does not maintain active embedding resource. The reversion curve is steepest in weeks 2–6 post go-live — when most programmes have already stood down.

CN Performance Index: 84% of bottom-quartile programmes stood down change resource at or before go-live. Average adoption at go-live for these programmes: 71%. Average adoption at month 12: 38%. A 33-percentage-point drop in twelve months.
What CN does instead

CN writes the month 12 review into every SoW before the programme starts. Embedding resource is planned and budgeted for a minimum of 12 weeks post go-live. The adoption tracking framework distinguishes go-live adoption from month 3, month 6 and month 12 adoption — because the only number that matters is the last one. Go-live is a milestone. Month 12 is the finish line.

What that produces

Programmes that maintain active embedding resource through the reversion window retain significantly more of their go-live adoption. The difference between a programme that resources embedding and one that doesn't is not marginal — it is the difference between 71% adoption at go-live becoming 38% at month 12, or becoming 74%.

CN data

CN programmes with structured embedding programmes: average adoption at month 12 was 94% of go-live adoption. Programmes without: 54%. Source: CN Performance Index, 2024.

📈
Failure mode 05

No benefits baseline established before the programme starts

Standard practice

Benefits case approved at programme start. First benefits measurement taken at go-live or at programme close. No baseline established before programme activity changes the thing being measured. Month 12 verification either does not happen or cannot be meaningful without a pre-programme reference point.

What the research says
Prosci (2023): organisations that establish a benefits baseline before the programme starts and conduct a formal month 12 review are 2.4x more likely to achieve full ROI on transformation investment.

PMI Pulse of the Profession (2023): 67% of programmes that failed to meet their benefits case had no baseline measurement. Without a baseline, benefit delivery cannot be demonstrated — or denied.

CN Performance Index: 67% of bottom-quartile programmes had no baseline. Of the 33% that did have one, 71% conducted no formal month 12 verification. Effectively, roughly 90% of underperforming programmes in the sample cannot demonstrate whether they delivered.
What CN does instead

CN establishes the benefits baseline in the first two weeks of every engagement — before any programme activity changes the thing being measured. Every benefit in the business case is connected to a specific operating model change with a credible mechanism. The month 12 review is written into the SoW before delivery begins. CN returns, measures against baseline, and delivers an honest close-out report.

What that produces

Programmes with a pre-programme baseline and a committed month 12 review change the incentive structure of the entire engagement. The delivery team knows from day one that it will return and verify. The client knows from day one that the engagement does not close at go-live. Both parties make better decisions throughout.

CN data

100% of CN engagements include a benefits baseline established in weeks 1–2. 100% include a month 12 review written into the SoW. These are contractual requirements, not aspirational practices.

✂️
Failure mode 06

Change management budget cut post-approval

Standard practice

Business case approved with change management investment. Costs escalate — usually in technology, always somewhere. Budget pressure arrives. The people workstream is reduced because it feels softer, its deliverables are less tangible, and the programme director believes the organisation can absorb the gap.

What the research says
CN Performance Index (140 programmes, 2019–2024): 73% of programmes in the sample had their change management budget reduced after initial approval. Average reduction: 41% of originally approved budget. Average timing: month 4 of delivery.

McKinsey (2023): programmes that reduced change management investment mid-delivery were 3.1x more likely to require year-two remediation. The remediation cost averaged 3.1x the amount cut.

Prosci (2023): the correlation between change management investment as a percentage of total programme cost and benefit delivery rate is the strongest single predictor of programme success in their longitudinal dataset — stronger than technology quality, governance structure or sponsor seniority.
What CN does instead

CN makes the cost of cutting the people workstream visible before the decision is made — not after. When budget pressure arrives, CN presents the historical evidence: what happens to programmes that make this cut, what the remediation cost typically is, and what the minimum viable change investment looks like if the budget genuinely cannot be protected in full. The conversation is financial, not defensive.

What that produces

Organisations that understand the remediation multiplier make different decisions under budget pressure. Some still cut — but they do so with explicit recognition that they are taking on a deferred liability, not making a saving. That change in framing changes the accountability structure.

CN data

CN has never had a programme close with change management budget below the minimum viable threshold agreed at scoping. In cases where budget pressure has arisen, CN has redesigned the scope to protect the critical path rather than accept a reduction that would compromise outcomes.

🔍
Failure mode 07

Discovery compressed to meet programme timeline

Standard practice

Programme timeline pressures discovery into one or two weeks. Interviews limited to senior stakeholders who support the programme. Findings confirm the direction of travel. Programme design begins on the formal version of the organisation.

What the research says
McKinsey (2023): programmes that compressed discovery below the minimum required to map the informal organisation were 2.7x more likely to face implementation challenges requiring redesign. The redesign cost exceeded the value of the time saved in discovery by an average factor of 4.

Kotter (2012) on coalition building: discovery that does not identify informal influencers and test the burning platform at multiple levels produces a programme designed for the organisation leadership believes exists — which is consistently different from the organisation that actually exists.

CN Performance Index: programmes with discovery of less than two weeks had a 34% rate of significant scope change after month 3. Programmes with four weeks or more: 11%.
What CN does instead

CN will not reduce discovery below what is needed to build a credible corridor map and a genuine current-state assessment. When timeline pressure arrives, CN presents the trade-off explicitly: a compressed discovery produces a programme designed around assumptions. The assumptions will be tested at go-live — at a cost significantly higher than the time saved. CN has walked away from engagements where the discovery timeline was non-negotiable at a level that would compromise the diagnostic.

What that produces

A programme designed on a genuine understanding of the informal organisation — who the influential resisters are, what the real objections are, where the burning platform is not felt as real — is a programme that does not have to be redesigned in month six when it meets the organisation as it actually is.

CN data

CN discoveries of four weeks or more: 89% produced at least one finding that materially changed the programme design. CN discoveries of two weeks or less: 41%. The uncomfortable finding is the value.

🎓
Failure mode 08

Train the trainer used as the primary embedding mechanism

Standard practice

A group of internal staff trained to cascade the change programme to their colleagues. Presented as building internal capability and reducing external cost. Trainers selected for availability. Materials handed over. Programme team stands down when the cascade is delivered.

What the research says
Prosci (2023) on sustainability of change: the strongest predictor of month 12 adoption is not go-live adoption, training completion or communications reach — it is whether the embedding mechanism was actively managed or passively assumed. Train-the-trainer approaches where the cascade is actively managed produce outcomes comparable to direct delivery. Approaches where the cascade is assumed to happen produce month 12 adoption rates 45% lower.

Kotter (2012): the critical variable is not whether internal or external resources deliver embedding — it is whether the delivery is designed and managed or delegated and assumed. Delegation without active management produces the same outcome as no embedding at all.

CN Performance Index: programmes that used train-the-trainer with active cascade management averaged 68% adoption at month 12. Programmes without active management: 34%. The approach itself is not the problem. The assumption that it will happen without management is.
What CN does instead

CN distinguishes between train-the-trainer as a genuine capability-building model — where trainer selection is based on influence, involvement in design is genuine, and the cascade is actively managed — and train-the-trainer as an exit mechanism. Where the former is appropriate, CN designs and manages it. Where the latter is what is being proposed, CN names it directly and presents the evidence for what it produces.

What that produces

Organisations that build genuine internal capability to own change work — as in the defence TOM engagement — exit the external dependency entirely. Organisations that use train-the-trainer as a budget mechanism get month 12 adoption of 34%. CN is designed to produce the former.

CN data

CN capability transfer engagements (where building internal methodology ownership is the explicit goal): 100% of internal teams able to run methodology independently at engagement close. The defence TOM engagement is the model.

The evidence is clear. The methodology follows from it.

If you are dealing with any of these failure modes — in a programme that has started, or one you are planning — we should talk.

Start a conversation