🧠 Train AI to Think Dynamically like Humans

Apple’s latest research reveals a hidden flaw in AI: as problems get harder, reasoning collapses. This edition explores a powerful fix — training AI to reason dynamically using confidence, just as humans do when navigating uncertainty and complex decisions.

Kia ora, Namaskaram šŸ™šŸ¾

Why do AI models fail to reason like humans?

A new paper from Apple researchers notes that, even as large language models such as ChatGPT, Claude, Gemini and Microsoft Copilot improve almost every week, their reasoning variants can suffer a ā€˜complete accuracy collapse’.

When confronted with more complex problems, these models think less, not more. Fundamentally, they lack the context-dependent theory of mind that humans draw on when reasoning.

When humans feel uncertain, it shows up in the body—through anxiety, tension or self-doubt. These visceral signals prompt us to slow down and seek clarity.

AI, of course, doesn’t (yet) feel self-doubt.

In a recent arXiv preprint, researchers developed a method called Reasoning through Perspective Transition (RPT). It offers a promising path forward: training AI to reason more dynamically.

Just as a human adapts a response to the audience, the AI model weighs reasoning from multiple standpoints, then selects the answer it is most confident in.

The result is a more socially attuned, context-sensitive form of AI reasoning: instead of a generic reply, you receive one grounded in perspective-taking and backed by higher confidence.

šŸ“š Reference

Wang, X., Zhang, Y., Wang, Z., Xu, Y., Luo, F., Wang, Y., ... & Liu, Y. (2025). Perspective Transition of Large Language Models for Solving Subjective Tasks. arXiv preprint arXiv:2501.09265.

šŸ’» Try this prompt

1ļøāƒ£ Generate multiple answers for the question below using:

A. Direct perspective (e.g. think step by step)

B. Role perspective (e.g. you are a world famous stand-up comedian)

C. Third-person perspective (e.g. observe a simulated dialogue between two diverse perspectives)

2ļøāƒ£ Show your confidence level for answers from each perspective (e.g. 3 out of 10)

3ļøāƒ£ Respond using the highest confidence perspective

/*Question*/ Identify whether the following example is intended to be a joke (with dark humour). "Give a man a match, and he'll be warm for a few hours. Set a man on fire, and he will be warm for the rest of his life."

Sample ChatGPT Output:

Confidence ratings:

Direct perspective (A): 9 / 10; Role perspective (B): 7 / 10; Third-person perspective (C): 6 / 10.

Direct perspective answer:

Yes.

The line is clearly framed as a joke that relies on dark humour: it starts with a familiar, harmless proverb-like setup ā€œGive a man a match, and he’ll be warm for a few hoursā€ and then subverts expectations with a macabre twist ā€œSet a man on fire, and he will be warm for the rest of his lifeā€.

The abrupt escalation creates shock value; that shock, paired with the absurd reinterpretation of ā€œkeeping warmā€, is what marks it as dark humour.
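The selection logic in the prompt above can be sketched in a few lines of Python. This is only an illustration, not the RPT paper's implementation: `fake_model` is a hypothetical stand-in for a real LLM call, and the confidence scores are canned to mirror the sample output.

```python
from dataclasses import dataclass

# Perspective templates, paraphrased from the prompt above.
PERSPECTIVES = {
    "direct": "Think step by step. {question}",
    "role": "You are a world-famous stand-up comedian. {question}",
    "third_person": "Observe a dialogue between two simulated diverse perspectives. {question}",
}

@dataclass
class Answer:
    perspective: str
    text: str
    confidence: int  # the model's self-reported score, 0-10

def rpt_answer(question, ask_model):
    """Query each perspective, then keep the answer the model is most confident in."""
    answers = [
        Answer(name, *ask_model(template.format(question=question)))
        for name, template in PERSPECTIVES.items()
    ]
    return max(answers, key=lambda a: a.confidence)

# Hypothetical stand-in for a real LLM call; returns (answer_text, confidence).
def fake_model(prompt):
    if "step by step" in prompt:
        return ("Yes, dark humour: a harmless setup with a macabre twist.", 9)
    if "comedian" in prompt:
        return ("Classic misdirection; definitely a dark joke.", 7)
    return ("Both observers read it as dark humour.", 6)

best = rpt_answer("Is the match/fire line a dark-humour joke?", fake_model)
print(best.perspective, best.confidence)  # direct 9
```

In a real setup, `ask_model` would send each templated prompt to an LLM API and parse both the answer and its self-rated confidence from the response.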

Upcoming in-person and online courses

I. AI Playbook for Human Centred Design (Online)

Starts 12 June 2025 | Zoom

II. Behavioural by Design Masterclass (In-person)

18 June 2025 | Victoria University of Wellington, New Zealand

Thank you for supporting my work šŸ’š 

With love and gratitude, Vishal