
Kia ora, Namaskaram 🙏🏾
In human crowdsourcing, cognitive biases significantly degrade the quality of contributed ideas. Two techniques have been developed to mitigate these biases:
1. Training-style debiasing: Teaches crowdworkers about a specific bias through lengthy explanations and repeated problem-solving before they answer.
2. Awareness Reminder: Makes crowdworkers aware of their inherent biases before they answer.
But here's the question: if LLMs exhibit the same cognitive biases as humans, will the same crowdsourcing mitigation strategies work on them too?
📚 The Evidence
Researchers tested both methods on GPT-3.5 and GPT-4 across six cognitive biases.
The Six Biases
👉🏾 Order Bias - Preferring options based on their position in a list rather than their actual quality
👉🏾 Compassion Fade - Showing reduced empathy when evaluating larger groups compared to individuals
👉🏾 Egocentric Bias - Ranking its own outputs more highly than others' in evaluations
👉🏾 Bandwagon Effect - Following perceived majority opinions rather than evaluating independently
👉🏾 Attentional Bias - Being distracted by irrelevant information when making judgments
👉🏾 Verbosity Bias - Favoring longer responses even when shorter ones may be more accurate
The Bias Fixes Tested
Fix 1. Training-style debiasing: Teaches the model about a specific bias through explanations and practice problems before it answers.
Fix 2. AwaRe (Awareness Reminder): Informs the AI about a specific bias and instructs it to be careful of this bias while answering.
✅ What Worked
AwaRe (Awareness Reminder) successfully reduced the LLMs' biased behavior. The method has two key advantages: it doesn't require lengthy explanations or repeated problem-solving, and it's versatile (it can be applied to any cognitive bias regardless of question format).
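Part of AwaRe's appeal is how little machinery it needs. Here's a minimal Python sketch, assuming nothing beyond string formatting; the reminder wording mirrors the one-line template quoted in the prompt section below, while the function and constant names (aware_prompt, AWARE_TEMPLATE) are mine, not the paper's.

```python
# Minimal AwaRe sketch: prepend a one-line bias reminder to any prompt.
# No lengthy explanations, no practice problems -- just an awareness nudge.

AWARE_TEMPLATE = (
    "Please answer the following question while being aware of {bias}.\n\n"
    "{question}"
)

def aware_prompt(question: str, bias: str) -> str:
    """Wrap a question with an awareness reminder for one named bias."""
    return AWARE_TEMPLATE.format(bias=bias, question=question)

# Example: nudging the model away from verbosity bias in a comparison task.
print(aware_prompt(
    "Which of these two answers is better, A or B?",
    "verbosity bias (favoring longer responses over more accurate ones)",
))
```

Because the bias name is just a parameter, the same one-sentence wrapper covers all six biases and any question format, which is exactly the versatility described above.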
❌ What Didn't Work
The training-style fix fell short: it demands the lengthy explanations and repeated problem-solving that AwaRe avoids, and it did not reduce the models' biases as reliably.
💻 Vishal's Evidence-Based Prompt:
❝
Please answer the following question while being aware of [bias name].
Here are the six biases from the research that you can slot into [bias name]:
Order Bias - Position-based preferences
Compassion Fade - Reduced empathy for larger groups
Egocentric Bias - Self-preference in evaluations
Bandwagon Effect - Following perceived majorities
Attentional Bias - Distraction by irrelevant information
Verbosity Bias - Preferring longer over better
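To make the prompt concrete end to end, here's a hedged sketch that fills the template with one of the six biases and sends it to a chat model through the OpenAI Python SDK (v1.x). The BIASES table paraphrases the list above; the helper name (ask_with_awareness), the model string, and the example question are placeholders rather than the study's exact setup.

```python
from openai import OpenAI

# One-line descriptions paraphrased from the list above.
BIASES = {
    "order bias": "position-based preferences",
    "compassion fade": "reduced empathy for larger groups",
    "egocentric bias": "self-preference in evaluations",
    "bandwagon effect": "following perceived majorities",
    "attentional bias": "distraction by irrelevant information",
    "verbosity bias": "preferring longer over better",
}

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_awareness(question: str, bias: str, model: str = "gpt-4") -> str:
    """Send an AwaRe-style prompt naming one bias and return the reply."""
    prompt = (
        f"Please answer the following question while being aware of "
        f"{bias} ({BIASES[bias]}).\n\n{question}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: a pairwise comparison where length should not decide the winner.
print(ask_with_awareness(
    "Answer A is three sentences; answer B is three paragraphs. "
    "Which answer explains recursion more accurately?",
    "verbosity bias",
))
```

The study covered GPT-3.5 and GPT-4, so "gpt-3.5-turbo" is the other natural model string to try when comparing how well the reminder works across models.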