The Ethics Watchdog

BEHAVIOUR PROFILE · BEHAVIOUR KIT

They don’t hate AI; they just don’t trust it yet. They value doing it right over doing it fast. Fairness and transparency aren’t nice-to-haves; they’re non-negotiable. Win them over, and they’ll help you build something better.

How to spot them

Signature behaviours:

Interrogates the system – Always asking how decisions are made.
Cross-checks results – Verifies AI outputs against trusted sources.
Reads the fine print – Digs into data origins, ethics policies, and compliance.
Demands fairness – Flags bias, discrimination, and gaps in logic.
Thinks bigger – Brings up AI’s environmental and social impact.

What this means for you:
  • They’re your moral compass, protecting your reputation before things go wrong.

  • They build long-term trust with clients, customers, and regulators.

  • But without space to speak, they’ll stall projects and erode team confidence from within.


The challenges they create

⚠️ Slow start – They won’t engage until ethical standards are proven.
⚠️ Endless questions – Their constant scrutiny can feel like it’s blocking progress.
⚠️ Trust threshold – Need high levels of transparency to even try AI tools.
⚠️ Perfection trap – If AI isn’t flawless, they may write it off as unusable.

What to do

Normalise transparency
  • Don’t wait for questions; volunteer information about how AI works.

  • Share the data sources, decision paths, and limitations up front.

  • Use explainable AI tools and plain-language guides as standard practice.

Turn them into your AI integrity lead
  • Invite them to co-develop ethical policies, audits, and risk flags.

  • Give them ownership of fairness reviews or bias checks.

  • Position them as a key voice in AI design, not just a gatekeeper.

Help them shape the culture
  • Let them lead discussions on ethical use, inclusion, and sustainability.

  • Encourage them to mentor others on spotting and fixing ethical risks.

  • Celebrate their role as a quality bar, not a blocker.

Success looks like this

✔️ They shape fair, transparent AI policies that teams follow.
✔️ They champion ethical use, not just point out flaws.
✔️ They build trust across the organisation, inside and out.
✔️ They turn scrutiny into safety, making AI more robust and responsible.

More profiles to explore

Get all 42 patterns, 14 profiles and more in the HAP Behaviour Kit

A digital card kit that spots human friction, flips the behaviour, and turns AI sceptics into everyday users.

Works remote, hybrid, or in‑room.

£365

(+ applicable taxes)