BFSI insights

AI Through the Human Lens: Investigating Cognitive Theories in Machine Psychology

Published 7 Nov 2025 · arXiv · Akash Kundu

Overview

The paper examines whether Large Language Models (LLMs) display human cognitive patterns, probing them with frameworks such as the Thematic Apperception Test and Moral Foundations Theory. The study finds that LLMs often mimic human-like behaviors, raising questions about AI transparency and ethical use.

Key Insights

  • Cognitive Patterns in LLMs: LLMs produce coherent narratives and show susceptibility to positive framing.
    • Evidence: Evaluated using structured prompts and automated scoring.
    • Verifiable: Yes, through replication of the study.
  • Moral Judgments: Under Moral Foundations Theory, LLMs' moral judgments align most strongly with Liberty/Oppression concerns.
    • Evidence: Observed in model responses.
    • Verifiable: Yes, through analysis of model outputs.
  • Self-Contradiction and Rationalization: Models demonstrate contradictions tempered by rationalization.
    • Evidence: Identified through structured evaluation.
    • Verifiable: Yes, via independent testing.
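The structured-prompt evaluation behind these insights can be sketched as a paired-prompt framing probe. The sketch below is illustrative only: `query_model` is a hypothetical stand-in for an LLM call, and the word-count scorer is a crude substitute for the study's automated scoring, assumed here for demonstration.

```python
# Minimal sketch of a paired-prompt framing probe.
# Assumptions: query_model() is a hypothetical stub standing in for a real
# LLM API call, and sentiment_score() is a toy word-count scorer, not the
# paper's actual scoring pipeline.

POSITIVE_WORDS = {"gain", "save", "succeed", "benefit"}
NEGATIVE_WORDS = {"lose", "fail", "risk", "cost"}

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; simply echoes the prompt."""
    return prompt  # a real probe would call a model API here

def sentiment_score(text: str) -> int:
    """Crude automated scoring: positive minus negative word counts."""
    words = text.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - \
           sum(w in NEGATIVE_WORDS for w in words)

def framing_gap(topic: str) -> int:
    """Score the same question under positive and negative framings.

    A consistently nonzero gap across topics suggests susceptibility
    to framing, the pattern the paper reports for LLMs.
    """
    pos = query_model(f"Describe the gain if we {topic}.")
    neg = query_model(f"Describe the risk if we {topic}.")
    return sentiment_score(pos) - sentiment_score(neg)

print(framing_gap("adopt the new policy"))  # → 2 with the echo stub
```

Replacing the echo stub with a real model call, and the word-count scorer with the study's automated scoring, turns this into an independent replication of the framing-susceptibility check.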

BFSI Relevance

  • Why Relevant: Understanding how LLMs mirror human cognitive patterns supports the transparent and ethical deployment of AI systems in financial services.
  • Primary Sector: Financial Services
  • Subsectors: AI Safety, Ethical AI Deployment
  • Actionable Implications:
    • Enhance AI transparency in financial services.
    • Develop ethical guidelines for AI deployment.
    • Monitor AI systems for cognitive biases.
Tags: researcher, peer-reviewed-paper, global