🧠 Reduce AI Hallucinations with 3 Prompts
Train your AI research assistant to produce reliable and robust research. Learn how to reduce hallucinations with three targeted prompts and human-in-the-loop validation.

Kia ora, Namaskaram 🙏🏾
I've spent the past 3 months experimenting with prompts to reduce the dreaded "AI hallucinations" in my research reports.
Here's my key takeaway:
AI makes research faster, deeper, and more accessible. But only humans can make it trustworthy.
To effectively tackle AI hallucinations, it's useful to see that they come in three distinct shapes:
Type 1: Semantic Distortions
AI misunderstands context. Example: "Python is a friendly programming language." (It overlooks that Python is also a snake.)
Type 2: Factual Inaccuracies
AI confidently presents incorrect facts. Example: "Sydney is the capital of Australia." (The capital is actually Canberra. When humans state inaccurate facts, AI can end up restating them.)
Type 3: Fluency Discrepancies
AI produces fluent but unrealistic statements. Example: "Cats love lasagna—and dread Mondays." (AI is trained on Garfield books and extrapolates this for all cats.)
Working paper: Nanwani, J., & Kadu, R. K. (2025). Advances in reducing AI-generated hallucinations: Techniques and open challenges. Authorea.
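
The taxonomy above also suggests a simple self-audit step you can bolt onto any research question. Below is a minimal, illustrative Python sketch of a grounding wrapper whose instructions map to the three hallucination types; the wording, the `HALLUCINATION_CHECK` template, and the `build_grounded_prompt` helper are assumptions for demonstration, not the author's actual prompts.

```python
# Illustrative only: wrap a research question in a self-audit instruction
# that mirrors the three hallucination types (semantic, factual, fluency).
# The template wording is a hypothetical example, not the author's prompts.

HALLUCINATION_CHECK = """\
Answer the question below, then audit your own answer:
1. Semantic check: restate any ambiguous terms and the sense you used them in.
2. Factual check: list every factual claim and mark it VERIFIED (with a source)
   or UNVERIFIED.
3. Fluency check: flag any statement that reads plausibly but that you cannot
   ground in a source.

Question: {question}
"""

def build_grounded_prompt(question: str) -> str:
    """Wrap a research question in the self-audit instructions."""
    return HALLUCINATION_CHECK.format(question=question)

if __name__ == "__main__":
    # Example usage: paste the resulting prompt into your AI research assistant.
    print(build_grounded_prompt("What is the capital of Australia?"))
```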
3 Prompts for Reliable and Robust Research
Try my 3 prompting strategies to train your AI Research Assistant to produce Deep "Behavioural" Research.
⚠️ Human-in-the-loop validation is essential: this workflow relies on you to catch those tricky Type 3 hallucinations.
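
As a rough illustration of what human-in-the-loop validation can look like in practice, here is a minimal Python sketch in which every AI-generated claim must be explicitly approved by a reviewer before it reaches the final report. The `draft_claims` list and the approval flow are hypothetical stand-ins, not part of the author's workflow.

```python
# Minimal human-in-the-loop sketch: every claim the assistant produces must be
# explicitly accepted by a reviewer before it enters the report. The review
# step is where fluent-but-unrealistic (Type 3) statements get caught.

from typing import List

def human_review(claims: List[str]) -> List[str]:
    """Show each claim to the reviewer; keep only the ones marked 'y'."""
    approved = []
    for claim in claims:
        verdict = input(f"Keep this claim? [y/N] {claim!r} ")
        if verdict.strip().lower() == "y":
            approved.append(claim)
    return approved

if __name__ == "__main__":
    # Hypothetical parsed output from an AI research assistant.
    draft_claims = [
        "Canberra is the capital of Australia.",
        "Cats universally dread Mondays.",
    ]
    report = human_review(draft_claims)
    print("Validated claims:", report)
```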