AI Feedback Loop

Purpose: Turn real AI use into behavioural progress. Build a feedback loop that makes improvement fast, visible, and repeatable.
Intervention type: Behavioural systems design
Lead: Team lead, AI champion, or product owner
Time: 90 min setup, 15–30 min recurring loop

Expected outcomes

  • User: Knows what “good” looks like and how to improve

  • Team: Runs a regular loop to sharpen AI use and surface blockers

  • Business: Skills improve faster, feedback gets acted on

  • Org: Frictions turn into insights leaders can track and respond to

What to bring to the session
  • Feedback Loop Canvas (you will build this in the steps below)

  • Feedback Safety Checklist (see Resources below)

  • Signals Library (see Resources below)

Steps

1 | Frame it

Step 1: Explain what this is and why it matters
Start your session with a single framing idea so that everyone knows this is about improving specific behaviours, not tracking people:

“This feedback loop will help us learn faster from how we’re already using AI. It turns trial-and-error into progress we can see.”

Make the behaviour system visible:

  • “Do something” → “See effect” → “Adjust” → “Try again”

Step 2: Outline the Feedback Loop Canvas

Take a whiteboard or a large sheet of paper and draw a circle divided into six parts. (A lightweight digital version is sketched after the list below.)

In each part, write one of the following:

  • Behaviour: What AI action are we reinforcing?

  • Cue Moment: When does it happen in real work?

  • Result Signal: What tells us it worked (or didn’t)?

  • Feedback Format: How will users get feedback?

  • Adjustment Step: What do they do next?

  • Loop Rhythm: How often does the loop run?
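
If the team prefers a digital canvas over paper, the six segments map cleanly onto a small record. A minimal sketch in Python, filled in with the meeting-summary use case from the next step; every field value here is an illustrative assumption, not a prescribed answer:

    from dataclasses import dataclass

    @dataclass
    class FeedbackLoopCanvas:
        """One canvas per AI behaviour; mirrors the six segments of the circle."""
        behaviour: str        # What AI action are we reinforcing?
        cue_moment: str       # When does it happen in real work?
        result_signal: str    # What tells us it worked (or didn't)?
        feedback_format: str  # How will users get feedback?
        adjustment_step: str  # What do they do next?
        loop_rhythm: str      # How often does the loop run?

    # Illustrative example values (assumptions, not prescriptions)
    canvas = FeedbackLoopCanvas(
        behaviour="Use Copilot to draft the meeting summary",
        cue_moment="After every call ends",
        result_signal="Summary is used without rework",
        feedback_format="Peer comments (Keep / Adjust / Stop)",
        adjustment_step="Tweak the prompt and retry",
        loop_rhythm="Every Friday for 15 mins",
    )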

2 | Pick a task

Step 3: Choose a live AI use case

  • Start with something that happens at least weekly

  • Examples:

    • Summarising meetings with Copilot

    • Drafting customer replies

    • Prioritising tasks with AI suggestions

3 | Fill the canvas

Step 4: Define the behaviour

“What do we want people to do with the AI?”

Examples:

  • "Use AI to generate a first draft of the report summary"

  • "Ask AI to rewrite internal emails for clarity"

Tip: Use action verbs (like use, prompt, review, share) and make the behaviour observable.

Step 5: Define the cue moment

“When should this behaviour happen in the real workflow?”

Examples:

  • “After every call ends”

  • “Before hitting send”

  • “During sprint planning”

Step 6: Identify result signals

“How will we know it worked, or didn’t?”

Pick 1–2 fast, visible signals, such as:

  • AI output is used without rework

  • Time saved (compared to manual)

  • Peer gives a 👍 reaction

  • Email gets higher open rates

  • Prompt reuse by others

Tip: Use the Signals Library (see Resources below) for ideas.

Step 7: Choose the feedback format

“How will people get the feedback?”

Options:

  • Peer comments (Keep / Adjust / Stop)

  • 1–5 usefulness score

  • Visual prompt tips

  • Slack emoji + comments

  • Quick coaching at end of day

Step 8: Plan the adjustment step

“What should people do next, based on the feedback?”

Examples:

  • Retry the prompt immediately

  • Tweak it and post again

  • Try a new template

  • Ask for peer coaching

Step 9: Set the rhythm

“How often does this loop run?”

Examples:

  • Every Friday for 15 mins

  • At the end of daily stand-up

  • After every sprint review

4 | Run and reflect

Step 10: Try the behaviour, run the loop

  • Try the task

  • Get feedback (peer, prompt, data)

  • Adjust

  • Repeat

  • Log quick learnings in a visible place (Slack, tracker, whiteboard)

Track:

  • Attempts made

  • Signals hit

  • Feedback given

  • Adjustments tried
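
If the team wants the log somewhere queryable rather than on a whiteboard, an append-only CSV is enough. A minimal sketch, assuming a shared file named loop_log.csv; the filename and column names are illustrative and simply mirror the four items above:

    import csv
    from datetime import date
    from pathlib import Path

    LOG_FILE = Path("loop_log.csv")  # assumed location; point this at whatever your team shares
    FIELDS = ["date", "attempts_made", "signals_hit", "feedback_given", "adjustments_tried"]

    def log_loop_run(attempts: int, signals: int, feedback: int, adjustments: int) -> None:
        """Append one row per loop run; writes the header row on first use."""
        new_file = not LOG_FILE.exists()
        with LOG_FILE.open("a", newline="") as f:
            writer = csv.writer(f)
            if new_file:
                writer.writerow(FIELDS)
            writer.writerow([date.today().isoformat(), attempts, signals, feedback, adjustments])

    # Example: Friday's 15-minute loop produced 6 attempts, 4 signals hit,
    # 5 pieces of feedback given, and 3 adjustments tried.
    log_loop_run(attempts=6, signals=4, feedback=5, adjustments=3)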

Before giving or receiving feedback, use the Feedback Safety Checklist (see Resources below).

Step 11: Reflect every 4 weeks
Use 10–15 mins to check:

  • What are we better at now?

  • What’s still unclear?

  • Are people still using the loop?

  • What do we need to change?


Escalate top blockers to your AI rollout lead monthly.

Resources

Signals Library

A cheat sheet of common, practical feedback signals teams can use to track AI behaviour success.

Feedback Safety Checklist

Use this checklist at the start of any feedback loop, peer review, or AI improvement session.

Before giving feedback
  • Focus on the task, not the person
    Say: “This part of the output is unclear,” not “You did this wrong.”

  • Be specific
    Point to exact words, choices, or moments. Avoid vague comments like “It’s a bit off.”

  • Use Keep / Adjust / Stop format

    • Keep: “This worked well, do more of it.”

    • Adjust: “This could land better, try changing X.”

    • Stop: “This part caused confusion—let’s drop it.”

  • Ask first, offer second
    “Want a thought on that?” → Helps others feel in control.

When receiving feedback
  • Listen for patterns, not perfection
    Don’t over-focus on one comment. Look for repeat signals.

  • Clarify before reacting
    Ask: “Can you show me what you meant?” before explaining yourself.

  • Thank the person
    Feedback = support. Acknowledge it openly.

  • Try the change, then judge
    Experiment first. Reflection comes after action.
