Can AI Build Mental Immunity? Countering Indoctrination
Disciplines
Computer Sciences (85%); Political Science (5%); Psychology (10%)
Keywords
Generative AI, Chatbots, Psychological Inoculation, Value Sensitive Design, Information Warfare
Artificial intelligence is transforming how people experience the world, both online and offline. We not only interact with AI chatbots and other algorithmic systems such as recommendation tools, but also increasingly find ourselves in immersive digital environments that some researchers call synthetic realities: spaces where the line between real and virtual blurs, such as online games, virtual communities, or platforms filled with AI-generated content. While these environments offer new opportunities, they also bring serious risks. Extremist groups, authoritarian governments, and powerful private actors are already using these technologies to influence public opinion, spreading hate, false information, and emotionally charged messages to divide communities and weaken trust in democratic institutions. Current countermeasures, such as removing content or banning users, are often too slow and can have unintended consequences, including restricting freedom of speech or deepening social division.
This project explores a different approach. Instead of reacting after harm has occurred, it investigates how AI systems can help people build resistance to manipulation from the outset. Drawing on psychological inoculation theory, a concept akin to mental vaccination, it develops a chatbot that engages users in conversation. By exposing people to weakened forms of harmful arguments, the chatbot helps them recognize and resist persuasive messages before they take hold. The project also challenges traditional thinking in artificial intelligence and human-computer interaction. While most AI systems are designed to automate tasks or support decision making, this research explores how AI can ethically influence human thinking in support of democratic values. It asks how persuasion can be used responsibly, especially in sensitive areas such as preventing extremism and digital manipulation.
The chatbot will be developed through a process that puts human values at the center. Experts in psychology, education, digital safety, and civil society will help identify key values such as empathy, openness, and respect for user autonomy. These values will be reflected in the chatbot's behavior and communication style. The system will be trained on real examples of counter-speech and evaluated in controlled settings where tone, message structure, and transparency are systematically varied. The broader goal is to build AI systems that do not simply remove harmful content but actively support critical thinking and resilience. This research offers new tools to protect users, especially young people, from harmful influence in digital spaces. It also shows how AI can be used not to manipulate, but to empower people to think clearly, reflect more deeply, and engage safely and responsibly in digital life.
- Technische Universität Wien - 100%