Norms in language-based Human-AI Interaction
Disciplines
Computer Sciences (25%); Media and Communication Sciences (25%); Philosophy, Ethics, Religion (25%); Linguistics and Literature (25%)
Keywords
Misinformation Crisis, Fake News, Norms of Assertion, AI Ethics, Human-AI Interaction, Responsible Communication
As misinformation, fake news, and conspiracy theories circulate ever more freely, trust in media, science, and government is eroding. This challenge will intensify further as we increasingly communicate with, and with the aid of, large language models (LLMs). Our three-year research project seeks to devise principles for responsible LLM communication: philosophically and empirically informed rules that specify what LLMs should and should not say. To do so, we aim to understand what people expect in conversations with AI, how they respond when these expectations aren't met, and whether their expectations and reactions differ across languages and cultures. Subsequently, we'll propose guidelines for designing AI systems that communicate responsibly and transparently. We will also test these novel guidelines with industry partners who provide language-based AI applications. The project will be conducted by Prof. Markus Kneer (project leader), University of Graz; PD Dr. Markus Christen (PI), University of Zurich; Prof. Mihaela Constantinescu (PI), University of Bucharest; and Prof. Izabela Skoczen (PI), Jagiellonian University. Polaris News, led by award-winning journalist Hannes Grassegger, is a key collaborator.
- Universität Graz - 100%
- Aleksander Smywinski-Pohl, AGH University of Science and Technology - Poland
- Zbigniew Skolicki, other research or development institution - Poland
- Markus Christen, University of Zurich - Switzerland
- Hannes Grassegger, Universität Basel - Switzerland