Kia ora, Namaskaram 🙏🏾

In human crowdsourcing, cognitive biases significantly impact the quality of ideas. Two techniques have been developed to mitigate these biases in human crowdsourcing:

1. Social Projection: Asks crowdworkers to predict the label that they believe the majority of other workers would choose.

2. Awareness Reminder: Makes crowdworkers aware of their inherent biases before they answer.

But here's the question: if LLMs exhibit the same cognitive biases as humans, will the same mitigation strategies that work in human crowdsourcing also work for AI?

📚 The Evidence

Researchers tested both methods on GPT-3.5 and GPT-4 across six cognitive biases.

The Six Biases

👉🏾 Order Bias - Preferring options based on their position in a list rather than their actual quality

👉🏾 Compassion Fade - Showing reduced empathy when evaluating larger groups compared to individuals

👉🏾 Egocentric Bias - Rating its own outputs more highly than others' in evaluations

👉🏾 Bandwagon Effect - Following perceived majority opinions rather than evaluating independently

👉🏾 Attentional Bias - Being distracted by irrelevant information when making judgments

👉🏾 Verbosity Bias - Favoring longer responses even when shorter ones may be more accurate

The Bias Fixes Tested

Fix 1. SoPro (Social Projection): Asks the AI to answer according to how the majority of people would respond, encouraging it to reflect broader social consensus.

Fix 2. AwaRe (Awareness Reminder): Informs the AI about a specific bias and instructs it to be careful of this bias while answering.
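The two fixes can be sketched as prompt prefixes. This is a minimal Python illustration in my own wording, paraphrased from the descriptions above; it is not the paper's exact prompt text.

```python
# Illustrative prompt wrappers for the two tested fixes.
# The wording is a paraphrase, not the researchers' exact prompts.

SOPRO_PREFIX = (
    "Please answer the following question according to how you think "
    "the majority of people would respond.\n\n"
)

AWARE_PREFIX = (
    "You may be prone to {bias}. Please be careful of this bias "
    "while answering the following question.\n\n"
)

def sopro(question: str) -> str:
    """Wrap a question with the Social Projection (SoPro) instruction."""
    return SOPRO_PREFIX + question

def aware(question: str, bias: str) -> str:
    """Wrap a question with an Awareness Reminder (AwaRe) for a named bias."""
    return AWARE_PREFIX.format(bias=bias) + question
```

Either wrapped string can then be sent as the user message to the model being tested.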

What Worked

AwaRe (Awareness Reminder) consistently reduced bias in LLM responses. The method has two key advantages: it requires no lengthy explanations or repeated problem-solving, and it is versatile enough to apply to any cognitive bias regardless of question format.


What Didn't Work

SoPro (Social Projection) produced inconsistent or minimal bias reduction, despite its effectiveness with humans. What reduces bias in human crowdworkers doesn't necessarily work for AI.

💻 Vishal's Evidence-Based Prompt:

Please answer the following question while being aware of [bias name].

Six biases from the research you can enter into your prompt:

  1. Order Bias - Position-based preferences

  2. Compassion Fade - Reduced empathy for larger groups

  3. Egocentric Bias - Self-preference in evaluations

  4. Bandwagon Effect - Following perceived majorities

  5. Attentional Bias - Distraction by irrelevant information

  6. Verbosity Bias - Preferring longer over better
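Putting the template and the bias list together, here is a minimal Python sketch of a helper that builds the AwaRe-style prompt for any of the six biases. The function name and the short bias descriptions are illustrative (taken from the list above), not part of the original research.

```python
# Minimal sketch: build an Awareness Reminder (AwaRe) prompt.
# Bias descriptions are paraphrased from the list above.

BIASES = {
    "Order Bias": "preferring options based on their position in a list",
    "Compassion Fade": "showing reduced empathy for larger groups",
    "Egocentric Bias": "rating your own outputs more highly in evaluations",
    "Bandwagon Effect": "following perceived majority opinions",
    "Attentional Bias": "being distracted by irrelevant information",
    "Verbosity Bias": "favoring longer responses over better ones",
}

def aware_prompt(question: str, bias: str) -> str:
    """Prepend the Awareness Reminder for the named bias to a question."""
    if bias not in BIASES:
        raise ValueError(f"Unknown bias: {bias}")
    return (
        f"Please answer the following question while being aware of "
        f"{bias} ({BIASES[bias]}).\n\n{question}"
    )
```

The returned string can be sent as the user message to whichever chat model you are evaluating.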

Designed with 💚 Vishal George

Founder & Chief Behavioural Scientist

A few ways to keep learning:

🧠 Dangerous AI Biases - Read the 9 risky AI biases to watch out for
🃏 Thinking Fast & Wise with AI - Get 27 Prompt Cards to think clearly, deeply and wisely with AI.
📚 Five AI on Substack - Subscribe to a paid plan to access my library of prompts and 5 x AI Playbooks published every year.
