Context Checkpoint
AI EVOLUTION | ADAPT BEHAVIOUR KIT
Lauren Kelly
Purpose: Spot when conditions shift and no longer match current behaviours
Intervention type: Behaviour-context audit
Audience: Behaviour Owners, product leads, ops managers, AI change partners
Time: 1-hour session. Run quarterly or after system/process shifts
Expected outcomes
Users: Can articulate why some tasks have become harder
Teams: Adjust or replace behaviours that are holding work back
Business: Avoid legacy behaviours that reduce performance or trust
Organisation: Builds adaptive muscle; behaviour evolves with context, not against it
What to bring to the session
Context Checkpoint Canvas (Build below)
Latest workflow or task description
Behaviour Tracker (drift notes, feedback)
HAP Behaviour Kit Pattern Cards (optional, for surfacing friction)
Steps
1 | Frame the Behaviour to Review
Step 1: Clarify what you're here to check
Use this sentence to anchor the behaviour:
“We expect [who] to [do what] using AI during [which task], in order to [achieve what outcome].”
Example: “We expect team leads to use AI to summarise weekly client feedback, so patterns can be shared at Monday stand-up.”
Then ask:
When was this behaviour introduced?
What has changed since? (In the task, tool, team, or targets?)
Use the Behaviour Tracker if drift has already been flagged.
2 | Run a Context Fit Scan
Step 2: Use the Context Checkpoint Canvas to test for misfit
Discuss and explore which parts of the context have changed, and how that affects the behaviour.
Look across six zones where misfit often creeps in. Use the Context Layers Cheatsheet (in resources below) to make this easier.
Work structure: Has the trigger, timing, or flow of the task changed?
Tool & interface: Has the AI evolved? Are the prompts, outputs, or steps now different?
Human role: Is the same person still doing this? Do they still believe in or benefit from the behaviour?
Cross-team dependencies: Have inputs or hand-offs shifted across teams?
Organisational signals: Has the policy, priority, or recognition of this behaviour changed?
External landscape: Has anything outside (customers, regulations, ecosystem) made this behaviour less fit?
What to watch for:
Not all change is obvious. Look for:
New tools that bypass the step
Teams silently reverting
Outputs no longer used
Beliefs drifting (“Why are we even doing this?”)
Mark each layer:
✅ Fits
🟡 Misaligned
❌ Blocking
3 | Decide What to Do
Step 3: Decide what to do using the 3R model
If you’re refitting:
Name the exact change (what + who + by when)
Log the update in the Behaviour Tracker
If you’re retiring:
Clarify what replaces it
Agree how you’ll test and track the new version
4 | Flag Knock-On Effects
Step 4: Check if this change affects other teams, systems, or metrics
Ask:
“If we change or retire this behaviour, who else needs to know?”
“Does this shift impact a shared tool, KPI, or dependency?”
If yes:
Log it in your action tracker
Inform the Behaviour Owner Network or flag it in your shared change space (e.g. Slack channel, Miro board, Notion doc)
Step 5: Lock in accountability
Decide:
Who owns testing or communicating the change?
When will we revisit this behaviour next?
Resources
Context Layers Cheatsheet
A quick reference to the six context zones used in the Context Fit Scan: work structure, tool & interface, human role, cross-team dependencies, organisational signals, and the external landscape.
Other methods within the Evolve block
Human-AI Performance
By Lauren Kelly
Contact: lauren@alterkind.com
© 2025 Alterkind Ltd. All rights reserved.
Human-AI Performance™ is a proprietary methodology developed by Alterkind Ltd using our Behaviour Thinking® framework. All content, tools, systems, and resources presented on this site are the exclusive intellectual property of Alterkind Ltd.
You’re welcome to use, share, and adapt these materials for personal learning and non-commercial team use.
For any commercial use, redistribution, or integration into client work, services, or paid products, please contact lauren@alterkind.com to discuss licensing terms.
Icons by Creative Mahira, The Noun Project.
Thanks to Nicholas Edell, Valentina Tan, and the many VPs implementing AI for their feedback during development.
LICENSE
Human AI Performance by Alterkind is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on work at alterkind.com
For commercial licensing contact: lauren@alterkind.com