Organizational moral misalignment is an increasingly pressing problem. Ethical blind spots can lead to reputational disasters, internal turmoil, and flawed decision-making at the highest levels.
Past efforts to address these issues have often fallen short. Traditional top-down ethics reviews or limited stakeholder involvement frequently fail to capture the diversity of internal perspectives before critical decisions are made.
To tackle this, we propose combining Plurals, “In Silico Sociology”, and 1000 Agents to simulate deliberations on real-world corporate dilemmas [1] among agents representing diverse organizational roles, such as executives, engineers, and ethicists (Madaio et al.).
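To make the setup concrete, the sketch below shows one deliberation round among role-based agents. It assumes the OpenAI Python SDK purely for illustration; the roles, prompts, dilemma, and model name are placeholders rather than the Plurals library's own API.

```python
# Minimal sketch of a role-based deliberation round (illustrative, not the Plurals API).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ROLES = {
    "executive": "You are a senior executive focused on business viability.",
    "engineer": "You are an engineer focused on technical feasibility and safety.",
    "ethicist": "You are an ethicist focused on fairness, harm, and accountability.",
}

DILEMMA = "Should we ship a feature that boosts engagement but may amplify misinformation?"

def agent_turn(role_prompt: str, transcript: list[str]) -> str:
    """One agent reads the dilemma plus the discussion so far and replies in role."""
    history = "\n".join(transcript) if transcript else "(no prior statements)"
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": role_prompt},
            {"role": "user", "content": (
                f"Dilemma: {DILEMMA}\n\nDiscussion so far:\n{history}\n\n"
                "Give your position in 2-3 sentences."
            )},
        ],
    )
    return response.choices[0].message.content

transcript: list[str] = []
for role, prompt in ROLES.items():
    transcript.append(f"{role}: {agent_turn(prompt, transcript)}")

# A moderator agent summarizes points of agreement and disagreement.
summary = agent_turn(
    "You are a neutral moderator. Summarize where the participants agree and disagree.",
    transcript,
)
print(summary)
```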
These AI agents, each representing a different organizational viewpoint, engage in structured discussions moderated by a large language model (LLM). From these conversations we identify high- and low-consensus moral questions, spotlighting areas of disagreement for deeper organizational reflection.
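One way to operationalize "high- vs. low-consensus" is to have each agent rate every moral question and flag questions whose ratings diverge. The sketch below assumes agents return numeric agreement ratings; the questions, scores, 1-7 scale, and threshold are illustrative.

```python
# Flag high- vs. low-consensus questions from per-agent ratings (illustrative data).
from statistics import mean, pstdev

# ratings[question] = one 1-7 agreement rating per simulated role
ratings = {
    "Should we delay launch for an external audit?": [6, 6, 7, 5, 6],
    "Is opt-out data collection acceptable here?": [2, 6, 3, 7, 1],
}

CONSENSUS_THRESHOLD = 1.0  # max standard deviation still counted as consensus

for question, scores in ratings.items():
    spread = pstdev(scores)
    label = "high consensus" if spread <= CONSENSUS_THRESHOLD else "low consensus"
    print(f"{label:14s} | mean={mean(scores):.1f} sd={spread:.2f} | {question}")
```

Low-consensus questions are the ones surfaced to the organization for deeper reflection.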
To further enhance these reflections, we suggest embedding our four idea-generating tools directly into LLMs, enabling more dynamic, creative, and inclusive ethical exploration across teams. Alternatively, we could run a user study testing whether people using our tool make more informed moral decisions than those using a baseline tool. For an example evaluation setup, see Section 5.1 and Appendix A.4 of the Impact Assessment Card paper.
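If we run such a study, the core comparison could be a simple two-sample test on an "informedness" score between the tool and baseline conditions. The sketch below assumes such a score has been collected per participant; all numbers are placeholders.

```python
# Minimal analysis sketch for the proposed user study (placeholder data).
from scipy import stats

tool_scores = [7.1, 6.8, 7.5, 6.9, 7.3, 6.5]      # participants using our tool
baseline_scores = [6.2, 6.0, 6.7, 5.9, 6.4, 6.1]  # participants using the baseline

# Welch's t-test: does the tool condition score higher on average?
t_stat, p_value = stats.ttest_ind(tool_scores, baseline_scores, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.3f}")
```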