Interesting research: “Humans expect rationality and cooperation from LLM opponents in strategic games.” Abstract: As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLM opponents in strategic settings. We present the results of the first controlled, monetarily-incentivised laboratory experiment looking at differences in human behaviour in a multi-player p-beauty contest against other humans and LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than against other humans, which is largely explained by the increased prevalence of ‘zero’ Nash-equilibrium choices. This shift is mainly driven by subjects with high strategic reasoning ability. Subjects who play the zero Nash-equilibrium choice motivate their strategy by appealing to LLMs’ perceived reasoning ability and, unexpectedly, their propensity towards cooperation. Our findings provide foundational insights into multi-player human-LLM interaction in simultaneous choice games, uncover heterogeneities in both subjects’ behaviour and beliefs about LLMs’ play when playing against them, and suggest important implications for mechanism design in mixed human-LLM systems...
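For readers unfamiliar with the game: in a p-beauty contest, each player picks a number in [0, 100], and whoever is closest to p times the average of all picks wins. A minimal sketch of one round, assuming the classic p = 2/3 (the post does not state the paper's value) and a hypothetical helper `beauty_contest_winner`:

```python
# Minimal sketch of one p-beauty contest round -- not the paper's code.
# Assumptions: p = 2/3 (the classic choice) and guesses in [0, 100].
# The winner is whoever is closest to p times the mean of all guesses.

def beauty_contest_winner(guesses: list[float], p: float = 2 / 3) -> int:
    """Return the index of the guess closest to p * mean(guesses)."""
    target = p * sum(guesses) / len(guesses)
    return min(range(len(guesses)), key=lambda i: abs(guesses[i] - target))

guesses = [50, 33, 22, 0]              # four hypothetical players
print(beauty_contest_winner(guesses))  # target = (2/3) * 26.25 = 17.5 -> index 2 (guess 22)
```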
Human-AI Trust: New Study Reveals Strategic Play Against LLMs
- Humans adopt Nash-equilibrium strategies more often when playing against LLMs than against other humans (see the sketch after this list for why zero is the equilibrium).
- The shift is driven by beliefs about LLMs' rationality and, unexpectedly, their propensity to cooperate.
- Subjects with high strategic reasoning ability account for most of the change in human-LLM play.
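Why is zero the Nash equilibrium? With p < 1, any guess above p times the largest possible average loses to one more step of reasoning, so iterated reasoning shrinks guesses by a factor of p at each level and the only stable choice is 0. A level-k sketch under the same assumptions as above (p = 2/3, a naive anchor of 50; `level_k_guess` is a hypothetical illustration, not the paper's model):

```python
# Level-k reasoning in the p-beauty contest: each level best-responds to
# the level below it, multiplying the anchor guess by p once per level.
# Assumptions: p = 2/3 and a level-0 anchor of 50 (the midpoint guess).

def level_k_guess(k: int, p: float = 2 / 3, anchor: float = 50.0) -> float:
    """Guess of a level-k reasoner: p^k times the naive anchor."""
    return (p ** k) * anchor

for k in [0, 1, 2, 5, 10, 20]:
    print(k, round(level_k_guess(k), 3))
# 0 -> 50.0, 1 -> 33.333, 2 -> 22.222, ..., 20 -> ~0.015:
# guesses converge to the zero Nash equilibrium as reasoning depth grows.
```

Choosing the zero equilibrium only pays off if you believe your opponents reason all the way down, which is exactly the belief subjects reported holding about LLMs.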
Intelligence briefing: Why this matters: Understanding human expectations of AI's strategic behavior is crucial for designing secure, effective human-AI systems in defense, intelligence, and cybersecurity operations, where mismatched expectations can lead to miscalculation or exploitation.
Original reporting: https://www.schneier.com/blog/archives/2026/04/human-trust-of-ai-agents.html