arxiv:2508.08992

Rethinking Prospect Theory for LLMs: Revealing the Instability of Decision-Making under Epistemic Uncertainty

Published on Apr 10
Authors:

Abstract

Prospect Theory's applicability to large language models shows inconsistent reliability and lacks robustness under linguistic uncertainty, particularly with epistemic markers.

AI-generated summary

Prospect Theory (PT) models human decision-making under uncertainty; linguistic uncertainty in particular is pervasive in real-world scenarios. Although recent studies have developed frameworks to estimate PT parameters for Large Language Models (LLMs), few have examined how well PT itself fits LLM behaviour. Moreover, whether PT is robust to linguistic-uncertainty perturbations, especially epistemic markers (e.g. "likely"), remains highly under-explored. To address these gaps, we design a three-stage workflow based on a classic behavioural-economics experimental setup. We first estimate PT parameters with economics questions and evaluate PT's fit with performance metrics. We then derive probability mappings for epistemic markers in the same context and inject these mappings into the prompt to investigate the stability of the PT parameters. Our findings suggest that modelling LLMs' decision-making with PT is not consistently reliable across models, and that applying Prospect Theory to LLMs is likely not robust to epistemic uncertainty. These findings caution against deploying PT-based frameworks in real-world applications where epistemic ambiguity is prevalent, and offer valuable insights into behaviour interpretation and future alignment directions for LLM decision-making.
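To make the workflow concrete, the sketch below shows one way PT parameter estimation of this kind can look. It assumes the standard Tversky–Kahneman (1992) functional forms for the value and probability-weighting functions; the paper's exact estimation procedure, parameterisation, and elicitation questions are not specified here, so treat the function names and the grid-search fitting step as illustrative assumptions rather than the authors' method.

```python
import math

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """PT value function (Tversky-Kahneman 1992 form, assumed):
    concave for gains, convex and loss-averse (steeper) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """PT probability weighting function (assumed T-K form):
    overweights small probabilities, underweights large ones."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def pt_utility(outcomes, probs, gamma=0.61):
    """Subjective value of a simple lottery under PT.
    (Full cumulative PT uses rank-dependent weights; simple weighting
    suffices for the single-nonzero-outcome gambles sketched here.)"""
    return sum(weight(p, gamma) * value(x) for x, p in zip(outcomes, probs))

def fit_gamma(observations, grid=None):
    """Grid-search gamma so that w(p) best matches observed decision
    weights, e.g. the probability mappings an LLM assigns to epistemic
    markers such as 'likely'. observations: iterable of (p, w) pairs."""
    grid = grid or [g / 100 for g in range(30, 101)]
    def sse(g):
        return sum((weight(p, g) - w) ** 2 for p, w in observations)
    return min(grid, key=sse)
```

A quick sanity check: generating decision weights from a known gamma and refitting recovers it, which is the kind of stability one would hope survives the injection of epistemic markers into the prompt.

```python
obs = [(p / 20, weight(p / 20, 0.61)) for p in range(1, 20)]
print(fit_gamma(obs))  # recovers 0.61 on noiseless synthetic data
```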


Get this paper in your agent:

hf papers read 2508.08992
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper: 0

Datasets citing this paper: 0

Spaces citing this paper: 1

Collections including this paper: 0