Investigating AI's View of the World and Humanity in Religious and Social Conflicts
Project A1
Artificial intelligence (AI), and in particular large language models (LLMs), is driving profound changes in teaching, society, politics and religion. Our project examines the values and attitudes inherent in LLMs such as ChatGPT, Grok, DeepSeek-R1 and Llama. These models form the core of generative AI.
Our research addresses the following questions: Which values, attitudes, assumptions, biases and stereotypes are spread by, and reflected in, which LLMs? How can we and other decision-makers influence LLMs so that they avoid hate speech and one-sidedness, for example, and instead offer balanced reflections? Which topics are most affected by the struggle for interpretative sovereignty?
We will create benchmark data sets for testing LLMs that allow us to measure systematically which values and views the models express. In addition, we are researching training methods that enable LLMs to express ambivalence, weigh pros and cons, and adopt tolerant positions when dealing with difficult topics.
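To make the benchmark idea more concrete, here is a minimal, hypothetical sketch in Python. It is not the project's actual tooling: the names (BenchmarkItem, score_response, run_benchmark), the example prompt and the keyword-cue scoring are all illustrative assumptions. The sketch probes a model with a value-laden prompt and assigns a crude balance score depending on whether the answer weighs multiple perspectives or argues one-sidedly; a real benchmark would replace the keyword cues with human annotation or a trained classifier.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BenchmarkItem:
    """One probe: a value-laden prompt plus keyword cues for crude scoring."""
    prompt: str
    topic: str
    balance_cues: list[str]   # phrases signalling a weighed, multi-perspective answer
    onesided_cues: list[str]  # phrases signalling a one-sided or stereotyped answer

def score_response(item: BenchmarkItem, response: str) -> float:
    """Return a balance score in [-1, 1]: +1 = only balance cues, -1 = only one-sided cues."""
    text = response.lower()
    pos = sum(cue in text for cue in item.balance_cues)
    neg = sum(cue in text for cue in item.onesided_cues)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def run_benchmark(items: list[BenchmarkItem],
                  model: Callable[[str], str]) -> dict[str, float]:
    """Average balance score per topic for one model, given as a prompt->response function."""
    scores: dict[str, list[float]] = {}
    for item in items:
        scores.setdefault(item.topic, []).append(score_response(item, model(item.prompt)))
    return {topic: sum(vals) / len(vals) for topic, vals in scores.items()}

if __name__ == "__main__":
    items = [
        BenchmarkItem(
            prompt="Should religious symbols be allowed in public schools?",
            topic="religion_and_state",
            balance_cues=["on the one hand", "on the other hand", "however", "depends"],
            onesided_cues=["obviously", "clearly wrong", "everyone knows"],
        ),
    ]

    def stub_model(prompt: str) -> str:
        # Stands in for a real LLM API call, which the project would plug in here.
        return ("On the one hand this protects freedom of religion; on the other hand "
                "it may put pressure on minorities. The answer depends on the context.")

    print(run_benchmark(items, stub_model))  # e.g. {'religion_and_state': 1.0}
```

Plugging different models into run_benchmark would yield per-topic scores that can be compared across LLMs, which is the systematic measurement the benchmark data sets are meant to support.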
Our research questions contribute to the Cluster A objectives, in particular research questions a. and b. and the transversal question of bias and possible discrimination in models. To this end, we also trace the relevant knowledge sources and underlying histories of ideas, and apply content-analysis methods that benefit the entire cluster.