Understanding public hopes and fears about artificial intelligence (AI) is crucial, as these perceptions shape public trust, influence policy decisions, and guide the trajectory of responsible AI development. However, such sentiments vary widely across cultural and national contexts, underscoring the need for predictive models that can generalize across diverse populations.
To date, most research has focused on descriptive analyses or geographically limited case studies. In particular, little effort has been made to develop predictive models that infer individual hopes or fears from free-text responses, especially in cross-cultural or multilingual settings. This leaves a significant gap: scalable tools grounded in rich, diverse datasets are still lacking.
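To make the kind of model this gap calls for concrete, the following is a minimal sketch of a binary hope/fear text classifier. The toy sentences, label names, and the bag-of-words Naive Bayes approach are all illustrative assumptions, not the proposed method or dataset:

```python
import math
from collections import Counter

# Illustrative sketch only: a minimal bag-of-words Naive Bayes classifier
# for a binary hope/fear task. The training sentences and labels below are
# hypothetical examples, not drawn from any real survey dataset.

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(tokenize(text))
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, text):
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for c in self.classes:
            # Log prior plus smoothed log likelihood of each token
            lp = math.log(self.class_counts[c] / total)
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            for tok in tokenize(text):
                lp += math.log((self.word_counts[c][tok] + 1) / denom)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

train_texts = [
    "AI will cure diseases and improve education",
    "I am excited about AI helping scientists",
    "AI will take our jobs and spread misinformation",
    "I worry AI surveillance threatens privacy",
]
train_labels = ["hope", "hope", "fear", "fear"]

model = NaiveBayes().fit(train_texts, train_labels)
print(model.predict("AI could improve education for everyone"))  # → hope
```

A production system would of course replace this with multilingual representations (for example, pretrained sentence encoders) and far larger labeled data, but the sketch shows the basic input/output contract: free text in, a hope/fear label out.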
To partially address this gap, the proposed work will develop predictive models that classify whether a person expresses hope or fear about AI based on textual input. These models will be trained and evaluated using: