
What are the advantages of considering ethical issues in the development and application of artificial intelligence from the outset? Dr. Sabine Wiesmüller addressed this question in her dissertation entitled "The Relational Governance of Artificial Intelligence - Forms and Interactions". We wanted to find out more.
Ms. Wiesmüller, first of all, congratulations. The Max Weber Prize for Business Ethics is quite an honor for a doctoral thesis. How do you feel now?
Yes, it's actually a very nice feeling. I'm very pleased to receive such recognition at the end of a demanding period. I wrote my doctoral thesis alongside my job, so the evenings and weekends were sometimes very long. That's why it's nice to see afterwards that it was really worth it, and that my thesis can make a real contribution to society and the economy.
It's always difficult to sum up a dissertation in a few words. I'll give it a try, more as a lead-in to the next question. You argue that the artificial intelligence now on our doorstep should not simply be released into the economy unregulated, right? What are the arguments against doing so?
Companies are currently the drivers when it comes to bringing artificial intelligence into contact with society. There are research institutions that develop solutions, but more often it is companies that turn these solutions into products, and only then does the strong influence on society arise. What happens next is not always easy to predict. AI is meant to find patterns in complicated structures, simplify them and then make recommendations for action. However, if certain characteristics are overrepresented in the data the AI works with, they may end up being favored later on, which is not at all what the user wants.
Take Amazon's recruiting process, for example: for a time, the tool effectively screened out applications from women, because the company's historical data showed that mainly male employees had been hired. Based on this dominant pattern in the historical data, the AI solution concluded that it made more sense to prefer male applicants. For a while this went unnoticed; only deliberate analyses brought the effect to light.
This is why, with artificial intelligence, it is important to examine data sets for bias from the outset, to check how stable the performance of an AI solution remains, and to be clear about what the technology can and cannot do. If you understand the characteristics of AI technologies, you can use them to your advantage and counter the challenges with targeted measures. Approaches of this kind were therefore one of the main points of my work.
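To make the kind of bias check described here a little more tangible, the following is a minimal, hypothetical Python sketch, not taken from the dissertation or the interview. It compares hiring rates between groups in a small made-up recruiting data set, loosely in the spirit of the Amazon example; the column names, the sample data and the 0.8 warning threshold (the common "four-fifths rule") are illustrative assumptions.

    # Minimal, illustrative bias check on a hypothetical recruiting data set.
    # Column names, sample data and the 0.8 threshold are assumptions for this example.
    import pandas as pd

    def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
        """Share of positive outcomes (e.g. hires) per group."""
        return df.groupby(group_col)[outcome_col].mean()

    def disparate_impact_ratio(rates: pd.Series) -> float:
        """Ratio of the lowest to the highest group selection rate."""
        return rates.min() / rates.max()

    # Hypothetical historical hiring data, analogous to the Amazon example.
    data = pd.DataFrame({
        "gender": ["m", "m", "m", "f", "f", "m", "f", "m"],
        "hired":  [1,   1,   0,   0,   1,   1,   0,   1],
    })

    rates = selection_rates(data, "gender", "hired")
    ratio = disparate_impact_ratio(rates)
    print(rates)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # a commonly used warning threshold ("four-fifths rule")
        print("Warning: the historical data shows a strong group imbalance.")

In practice, such a check would run on the real historical data before a model is trained and would be repeated regularly, in the spirit of the recurring audits mentioned later in the interview.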
That's the constant criticism of large language models, isn't it? That our literary history is in fact misogynistic and that many of these models reproduce misogynistic prejudices, which first have to be trained out of them. Our modern idea of ethics does not correspond to the ethical ideas stored in the historical record.
What you are talking about is one of the big challenges with large language models: how do they represent minorities and disadvantaged groups? There is also the notion of digital inequality. These underrepresented groups do not have the same access to digital media or the necessary knowledge or skills as privileged groups. This in turn reinforces existing structures and leads to continuous underrepresentation.
Another major challenge is to understand in the first place what patterns and trends are contained in the countless data sets available to us via the internet for training these models. It is impossible to analyze all of this and make it fully transparent. There are also social consequences, such as the so-called manual "labeling" of data sets. In concrete terms, this means that someone has to view, evaluate and categorize ("label") the data collected from the internet before it can be used for training. This work is often outsourced to developing countries and has serious psychological consequences for the workers, because they are the ones who sort out what should not appear in the AI's training data, such as violence or pornography.
The use of artificial intelligence therefore presents us with challenges, and when you take all this into account, it becomes even more difficult to predict what results an AI will produce. It is therefore all the more important to develop guidelines for the successful and responsible development and use of AI through targeted governance measures.
Companies tend not to like this kind of regulation, because they quickly feel restricted and worry that they won't be able to earn enough money. You address this in your doctoral thesis and present relational economics as an answer, an approach that also takes the human factor into account. What is relational economics all about?
Relational economics was developed by Professor Dr. Josef Wieland, one of the leading business ethicists in Germany. Its two theoretical strands come from economics on the one hand and sociology on the other. At its core, it is about responsible business and thus the compatibility of business and ethics. My doctoral thesis examines what this means for companies with regard to AI and how they may even be able to generate a competitive advantage from it. I believe this is indeed possible: if entrepreneurs invest sustainably in their company and their employees, for example by not simply employing people abroad under poor conditions, this will pay off in the long term.
This also applies to artificial intelligence. The EU AI Act, passed at the beginning of 2024, shows that there is an awareness that the successful use of AI also requires guidelines. The next step is for the EU member states to implement it at national level, so we are currently waiting for the final national implementation of the regulation. It is the first regulation of AI worldwide: the EU has decided that the European AI market should not remain unregulated. This is initially positive for users, but we need to take a close look at how the regulation will affect European innovation.
Regulation will have a major impact on the development of new ideas, especially for start-ups, and may initially make it more difficult. How do we deal with this? How do we apply the regulation? And how can we proactively build in mechanisms, such as recurring audits, to prevent distortions like the one in the Amazon example? If we manage to find a pragmatic, practical and innovation-friendly approach, Europe could become a pioneer for responsible and safe AI solutions.




