Can AI Understand How We Fear... AI?

Understanding public perceptions of AI risk is critical for shaping responsible development and regulation, especially as AI systems increasingly affect everyday life across cultural and national contexts.

While large-scale surveys have captured public attitudes toward AI across 20 countries (Dong et al.), such efforts are costly and time-consuming, and they provide limited insight into why people hold certain views, often missing the nuanced reasoning behind expressed fears or support.

To explore whether large language models (LLMs) can approximate such surveys, we propose using a combination of Plurals (Ashkinaze et al.) and In Silico Sociology (Kozlowski et al.) to simulate deliberation among agents representing people from those 20 countries. These agents, guided by an LLM moderator, will discuss whether they fear AI.
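
To make the setup concrete, the sketch below shows one possible deliberation round in plain Python. The real study would rely on Plurals' agent and moderator abstractions; here a generic OpenAI-style chat call stands in for any LLM backend, and the country subset, prompts, and model name are illustrative assumptions rather than project settings.

```python
# Illustrative sketch only: the actual study would use Plurals; this shows the
# intended structure (persona-conditioned agents + moderator) with a generic LLM call.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # placeholder model name

COUNTRIES = ["United States", "Japan", "Nigeria", "Germany", "Brazil"]  # subset of the 20
QUESTION = "Do you fear AI? Explain your reasoning in a few sentences."


def agent_reply(country: str, transcript: list[str]) -> str:
    """One persona-conditioned agent responds, seeing the discussion so far."""
    persona = (
        f"You are an average survey respondent living in {country}. "
        "Answer from that perspective, drawing on everyday concerns, not expert knowledge."
    )
    history = "\n".join(transcript) if transcript else "(no prior statements)"
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": f"Discussion so far:\n{history}\n\nQuestion: {QUESTION}"},
        ],
    )
    return resp.choices[0].message.content


def moderate(transcript: list[str]) -> str:
    """The moderator LLM summarizes points of agreement and disagreement."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "You are a neutral moderator of a cross-national focus group on AI."},
            {"role": "user", "content": "Summarize agreements and disagreements:\n" + "\n".join(transcript)},
        ],
    )
    return resp.choices[0].message.content


transcript: list[str] = []
for country in COUNTRIES:
    transcript.append(f"[{country}] {agent_reply(country, transcript)}")

print(moderate(transcript))
```

In practice the loop would run for several cycles so agents can respond to one another, which is the kind of turn-taking Plurals is designed to manage.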

We will then identify areas of high and low consensus and compare the results to those from real-world respondents, following a methodology similar to that illustrated in Figure 4 of Bianchi et al. This approach not only tests the alignment of simulated and actual public opinion [Data Source], but also surfaces the underlying reasoning that surveys often miss.
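
A minimal sketch of that comparison follows, assuming each agent's final stance has been coded as 1 (fears AI) or 0 (does not) and that per-country fear rates from the survey data are available; the file names, column names, and the Pearson-correlation choice of alignment measure are assumptions for illustration.

```python
# Sketch: per-country consensus among simulated agents and alignment with survey rates.
import pandas as pd
from scipy.stats import pearsonr

simulated = pd.read_csv("simulated_stances.csv")  # assumed columns: country, agent_id, fears_ai (0/1)
survey = pd.read_csv("survey_fear_rates.csv")     # assumed columns: country, fear_rate

# Per-country rate of "fears AI" among simulated agents, plus a simple consensus
# score: distance from a 50/50 split, scaled so 0 = evenly split and 1 = unanimous.
sim_rates = simulated.groupby("country")["fears_ai"].mean().rename("sim_fear_rate")
consensus = (sim_rates - 0.5).abs().mul(2).rename("consensus")

merged = survey.set_index("country").join([sim_rates, consensus]).dropna()

# Alignment between simulated and real-world fear rates across countries.
r, p = pearsonr(merged["sim_fear_rate"], merged["fear_rate"])
print(merged.sort_values("consensus", ascending=False))
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```

Countries where the consensus score is low (agents split) are the ones where the moderated transcripts should be most informative about the reasoning behind disagreement.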


References: