Generative AI in agent-based Models
Disciplines
Computer Sciences (100%)
Keywords
Agent-Based Modeling,
AI,
Generative AI,
Social Sciences,
Large Language Models,
Interdisciplinary
In many areas of scientific research, computer models are used to simulate human behavior, enabling studies of mobility, social media, and sustainability. For these models to work well, human behavior must be described with simple rules. However, human behavior is often too complex for such rules, so the models can make mistakes.

Thanks to recent progress in artificial intelligence (AI), for example ChatGPT and other so-called large language models, we can now simulate human behavior in a completely different way. Instead of rigid rules, we can ask the AI how people would behave or decide in a given situation. Because of the large amount of data with which these models have been trained, they can describe not only standard behavior but also very specific behavior of people with diverse characteristics. This can make the models more realistic and inclusive.

However, this fusion of AI and behavioral models also poses risks and challenges. Poorly trained AI can produce very clichéd behaviors that do not reflect the real world, and some AI models depict the world as it should be rather than as it really is. A further general problem is that the training data often carry a bias, as certain population groups are disproportionately, and in some cases even exclusively, represented. In addition, many AI models tend to generate entirely false statements (AI hallucinations) when they cannot find a correct answer. These problems are particularly serious because evaluating AI models is not always easy: for many questions, especially in the area of human behavior, the "correct" answer is not known, so the AI's answers cannot be easily verified. The aim of this research project is to investigate the obstacles and risks described above and to mitigate them as far as possible.
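The idea of replacing rigid rules with a model query can be illustrated with a minimal sketch. Everything here is hypothetical: the agent profiles, the transport-choice scenario, and the `query_llm` function (stubbed with a fixed heuristic so the sketch runs; in a real model it would call an actual large language model).

```python
def query_llm(prompt: str) -> str:
    """Hypothetical LLM call, stubbed with a fixed heuristic so the
    sketch is runnable. A real model would query a language model API."""
    # Stub assumption: agents who mention children prefer the car in rain.
    if "children" in prompt and "raining" in prompt:
        return "car"
    return "bicycle"

class Agent:
    def __init__(self, name: str, profile: str):
        self.name = name
        self.profile = profile  # free-text description of the person

    def choose_transport(self, weather: str) -> str:
        # Instead of a hard-coded behavioral rule, describe the situation
        # in natural language and let the model decide.
        prompt = (f"You are {self.profile}. It is {weather}. "
                  "Do you commute by 'car' or 'bicycle'?")
        return query_llm(prompt)

if __name__ == "__main__":
    agents = [
        Agent("A", "a parent with two children to drop off"),
        Agent("B", "a student living near campus"),
    ]
    for agent in agents:
        print(agent.name, agent.choose_transport("raining"))
```

The point of the pattern is that the agent's characteristics enter the simulation as a natural-language profile rather than as parameters of a fixed rule, which is what allows non-standard behavior to be represented at all.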
In addition, a scheme will be developed to assess which large language models are well suited, and which are less well suited, to predicting human behavior. This method can subsequently be used to develop new, better AI models. The central concerns of the project are inclusion (the models should correctly represent the behavior of all people, not just standard behavior), fairness (no group of people should be disadvantaged), and transparency (the decisions and statements of the model should be as comprehensible as possible).
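One ingredient of such an assessment scheme could be scoring a model's predicted behaviors against observed data both overall and per population group, since a high overall accuracy can hide poor performance for minority groups. The sketch below is an illustrative assumption, not the project's actual method; the group labels and the choice of accuracy gap as a fairness measure are placeholders.

```python
from collections import defaultdict

def assess_model(predictions, observations, groups):
    """Compare predicted behaviors with observed ones, overall and per
    demographic group. Reports a simple fairness gap: the accuracy
    difference between the best- and worst-served group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, obs, grp in zip(predictions, observations, groups):
        total[grp] += 1
        correct[grp] += int(pred == obs)
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    gap = max(per_group.values()) - min(per_group.values())
    return {"overall": overall, "per_group": per_group, "fairness_gap": gap}

if __name__ == "__main__":
    # Toy data: two groups, two predictions each.
    report = assess_model(
        predictions=["car", "bicycle", "car", "bicycle"],
        observations=["car", "bicycle", "bicycle", "bicycle"],
        groups=["urban", "urban", "rural", "rural"],
    )
    print(report)
```

A scheme along these lines makes the project's fairness criterion operational: a model that is accurate only for over-represented groups would show a large gap even if its overall score looks good.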
- Universität Graz - 100%