Context Checkpoint

AI EVOLUTION | ADAPT | BEHAVIOUR KIT

Purpose: Spot when conditions shift and no longer match current behaviours
Intervention type: Behaviour-context audit
Audience: Behaviour Owners, product leads, ops managers, AI change partners
Time: 1-hour session. Run quarterly or after system/process shifts

Expected outcomes

  • Users: Can articulate why some tasks have become harder

  • Teams: Adjust or replace behaviours that are holding work back

  • Business: Avoid legacy behaviours that reduce performance or trust

  • Organisation: Builds adaptive muscle, so behaviour evolves with context, not against it

What to bring to the session

Steps

1 | Frame the Behaviour to Review

Step 1: Clarify what you're here to check

Use this sentence to anchor the behaviour:

“We expect [who] to [do what] using AI during [which task], in order to [achieve what outcome].”

Example: “We expect team leads to use AI to summarise weekly client feedback, so patterns can be shared at Monday stand-up.”

Then ask:

  • When was this behaviour introduced?

  • What has changed since? (task, tool, team, or targets?)


Use the Behaviour Tracker if drift has already been flagged.

2 | Run a Context Fit Scan

Step 2: Use the Context Checkpoint Canvas to test for misfit

Discuss and explore which parts of the context have changed, and how that affects the behaviour.
Look across six zones where misfit often creeps in. Use the Context Layers Cheatsheet (in resources below) to make this easier.

  1. Work structure: Has the trigger, timing, or flow of the task changed?

  2. Tool & interface: Has the AI evolved? Are the prompts, outputs, or steps now different?

  3. Human role: Is the same person still doing this? Do they still believe in or benefit from the behaviour?

  4. Cross-team dependencies: Have inputs or hand-offs shifted across teams?

  5. Organisational signals: Has the policy, priority, or recognition of this behaviour changed?

  6. External landscape: Has anything outside (customers, regulations, ecosystem) made this behaviour less fit?


What to watch for:
Not all change is obvious. Look for:

  • New tools that bypass the step

  • Teams silently reverting

  • Outputs no longer used

  • Beliefs drifting (“Why are we even doing this?”)

Mark each layer:
✅ Fits
🟡 Misaligned
❌ Blocking

3 | Decide What To Do

Step 3: Decide what to do using the 3R model

If the behaviour still fits:

  • No change needed. Note when you’ll review it next

If you’re refitting:

  • Name the exact change (what + who + by when)

  • Log the update in the Behaviour Tracker

If you’re retiring:

  • Clarify what replaces it

  • Agree how you’ll test and track the new version
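The scan-and-decide logic of Steps 2 and 3 can be sketched in code. This is an illustrative sketch only: the mapping rules (blocking → retire, misaligned → refit, all fits → retain) and the function name are assumptions for illustration, not part of the kit.

```python
# Illustrative sketch: turning the Step 2 layer marks into a Step 3
# recommendation. The decision rules below are assumptions, not part
# of the method itself.

LAYERS = [
    "Work structure",
    "Tool & interface",
    "Human role",
    "Cross-team dependencies",
    "Organisational signals",
    "External landscape",
]

def recommend(marks: dict[str, str]) -> str:
    """marks maps each layer to 'fits', 'misaligned', or 'blocking'.
    Unmarked layers are treated as 'fits'."""
    statuses = [marks.get(layer, "fits") for layer in LAYERS]
    if "blocking" in statuses:
        return "retire"  # a layer blocks the behaviour: replace it
    if "misaligned" in statuses:
        return "refit"   # drift detected: adjust the behaviour
    return "retain"      # context still fits: keep, set a review date

marks = {"Tool & interface": "misaligned", "Human role": "fits"}
print(recommend(marks))  # refit
```

In practice the marks come from the group discussion, not a script; the sketch just makes explicit that one ❌ outweighs any number of 🟡 marks.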

4 | Flag Knock-On Effects

Step 4: Check if this change affects other teams, systems, or metrics

Ask:

  • “If we change or retire this behaviour, who else needs to know?”

  • “Does this shift impact a shared tool, KPI, or dependency?”


If yes:

  • Log it in your action tracker

  • Inform the Behaviour Owner Network or flag it in your shared change space (e.g. Slack channel, Miro board, Notion doc)

5 | Lock In Accountability

Step 5: Lock in accountability

Decide:

  • Who owns testing or communicating the change?

  • When will we revisit this behaviour next?

Resources

Context Layers Cheatsheet

A reference list of the six context layers used in the Step 2 scan.

Other methods within the evolve block