Kia ora and Namaskaram 🇳🇿 🇮🇳
Here's what the research reveals about AI biases—and the strategies that actually work to mitigate them.
Unbiased AI is useless AI.
Missed my workshop: Unbiased AI Research in 2025? No worries.
I've just published a recording summarising the latest research on AI biases.
Here's what I discussed:
📚 Cognitive Bias Codex Experiment. I asked Claude to review all 188 biases and rate the likelihood each shows up when researching with AI. Automation bias, confirmation bias, and social desirability bias topped the list. (Full prompt in video description.)
⚠️ LLMs have become a dangerous authority. Why do biases exist? The Bernie Madoff story shows us: as humans, we tend to believe other humans. Now there's research showing we trust LLM opinions more than those of our peers, even more than experts.
🎯 ChatGPT can be tricked. Researchers got the LLM to comply 72% of the time using persuasion strategies like scarcity, social proof, and authority. The same tactics that work on humans work just as effectively on AI.
💡 The role of research is shifting. We shouldn't be trying to eliminate biases. We need to recognise when a bias is showing up, then actively mitigate it.
⚡ Metacognition is the breakthrough. Recent research shows LLMs can toggle between fast pattern matching (System 1) and slow deliberate thinking (System 2)—moving to slower thinking when stakes are high or confidence is low.
Want to go further? Learn how to make AI your strategic partner for clearer, deeper, and wiser thinking.

