Posted on 2025-12-01, 13:53. Authored by Shanshan Zhang, Andrew Howes, Jussi PP Jokinen.
Humans demonstrate a remarkable ability to infer the mental states of others by observing their actions, a phenomenon known as mentalizing. Computational models of mentalizing assume that observed behavior is driven by strategies chosen to maximize utility, given goals, abilities, environment, and capacity limitations. While a number of studies have supported this rational view of mentalizing, little work has addressed how the intrinsic uncertainty in the required inferences is accounted for and used. Our paper builds on the existing literature by using Bayesian inference to theorize how prior assumptions and observed behavior are combined to generate a probabilistic mental state inference that involves uncertainty, and how that inference is adapted to a new environment to estimate the probability of future behavior. The first experiment jointly infers preference and cost; the second experiment examines the human ability to make probabilistic estimates of future behavior based on observed behavior. The third experiment extends this analysis by investigating how uncertainty can be mitigated by integrating multiple observations. A flexibility test is included in the second and third experiments to validate the computational rationality assumption. The work contributes to computational accounts of mentalizing under uncertainty.
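The inference-and-prediction scheme described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the softmax choice model, the single preference weight `w`, and all reward values are assumptions introduced here for illustration.

```python
import numpy as np

# Hypothetical sketch of Bayesian inverse planning: an observer watches an
# agent choose between options, assumes choices noisily maximize utility
# (softmax), and updates a prior over an unknown preference weight w.

w_grid = np.linspace(0.0, 1.0, 101)         # candidate preference weights
prior = np.ones_like(w_grid) / w_grid.size  # uniform prior over w

def choice_prob(w, rewards, beta=5.0):
    """Softmax probability of each option given preference weight w.
    rewards: (n_options, 2) array of two reward features; utility is a
    w-weighted mix of the features (an illustrative assumption)."""
    u = w * rewards[:, 0] + (1.0 - w) * rewards[:, 1]
    e = np.exp(beta * (u - u.max()))
    return e / e.sum()

# Observed environment: two options with assumed feature rewards.
env = np.array([[1.0, 0.0],
                [0.0, 1.0]])
observed_choices = [0, 0, 1]  # options the agent was seen picking

# Posterior over w from the observations (grid approximation).
likelihood = np.ones_like(w_grid)
for c in observed_choices:
    likelihood *= np.array([choice_prob(w, env)[c] for w in w_grid])
posterior = prior * likelihood
posterior /= posterior.sum()

# Predict behavior in a new environment by marginalizing over the posterior,
# so the uncertainty in the inference carries into the prediction.
new_env = np.array([[0.8, 0.2],
                    [0.1, 0.9]])
pred = sum(p * choice_prob(w, new_env) for w, p in zip(w_grid, posterior))
print(pred)  # probability of each option in the new environment
```

Integrating more observations sharpens the posterior over `w`, which in turn narrows the predictive distribution over future choices, mirroring the uncertainty-reduction effect examined in the third experiment.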
This is the final version. Available on open access from Springer via the DOI in this record.
Data Availability: Data are deposited on GitLab at https://version.helsinki.fi/shanz/rationalmentalizing.git