On the Way to a Non-Discriminatory AI

An Interview with the Beyond AI Collective

Tim Vallée and Michael Puntschuh are members of the Beyond AI Collective. In this interview, they present their work, explain the Collective's goals, and discuss what is needed to prevent AI from becoming a discrimination machine.

DAILOGUES: What does the Beyond AI Collective do?

Tim Vallée: We are a non-profit organization that aims to reduce the discrimination risks of AI systems, but also of other algorithmic decision-making systems. To this end, we have developed a socio-technical approach that analyzes systems for discrimination along the AI life cycle – that is, from the initial idea and planning to the development and deployment of a system – in order to enable low-discrimination AI systems. This includes analyzing the training data of AI models and their integration into applications. Through our involvement, we also want to help shape policies on AI.

Michael Puntschuh: I would like to add why we chose the path of an NGO. A large part of the founding team was already working on AI from a policy perspective; for example, we wrote guidelines on the ethical use of AI or held workshops on the subject. We are still doing some of this now, but we realized that we were missing a good combination of different perspectives and work with concrete AI systems. With the Collective, which includes people who are familiar with regulation and discrimination as well as technically oriented people who build AI systems, we succeeded in creating an interdisciplinary link.

DAILOGUES: The work of the Beyond AI Collective relates to policies and regulations. On your website, you state that your work is value-based. What does that mean for you?

Michael Puntschuh: For us, a values-based approach means that we approach AI systems with a clear standard: we want to prevent discrimination. Ultimately, it is about enabling a fairer use of AI systems, or at least preventing unfair use. For example, if a company approaches us and wants to work with us, the aim must be to prevent discrimination together – and not just to prevent a PR disaster. To achieve this, the collaboration should be as transparent and trusting as possible. These are the values on which our work is based.

DAILOGUES: To summarize, one could say that you implicitly work with values, and that these values include ideals such as fairness, transparency, and trusting interaction, but also working on an equal footing. Do you think this framework could be applied to AI in other regions of the world?

Tim Vallée: It is important to look at specific processes. How can topics such as low-discrimination AI be meaningfully addressed in organizations in the long term through structures, processes, skills and resources? And values come into play when designing these structures and filling them with content. For us, anti-discrimination work is based on human rights. At their core, these have a universal claim and therefore also apply in other regions of the world. However, it is not a question of absolutizing the Western perspective and, for example, developing an AI governance structure with a solely German perspective and then rolling it out worldwide. This is because legal situations and ethical parameters are different in different regions of the world.

Michael Puntschuh: I think that a discussion about values is important. But it can be a bit of a distraction, too. We want to be guided by the law, and we can rely on clear standards in Germany and Europe. There are also international agreements and conventions that aim to reduce discrimination against women and racism. It makes sense to focus on such cross-regional commonalities, even if there are cultural differences.

DAILOGUES: The EU AI Act is now on everyone's lips. Is there a group that particularly benefits from the EU AI Act, for example, companies or people from marginalized groups or perhaps government institutions?

Tim Vallée: If you look at it from the perspective of political discourse, it is noticeable that all interest groups complained a little after it was passed. Therefore, it seems that no group benefits significantly more than others. The main purpose of the Act is to enable innovation and make risk minimization practicable by providing a uniform set of rules. As an approach to product regulation, it regulates access to the European single market. In this respect, I believe that companies are among the biggest beneficiaries of the EU AI Act. However, a distinction should be made between corporations and small and medium-sized enterprises. Small companies often lack the resources to implement the requirements quickly and with legal certainty, and they also make less use of the internal market. Even if the issue of discrimination is not the focus of the AI Act, high-risk systems must meet requirements, particularly in terms of data quality and risk management, which are the building blocks of a less discriminatory AI design. Hopefully, norms and standards will then make it possible for organizations to implement this across the board.

Michael Puntschuh: The requirements of the EU AI Act will also help those people who implement and are responsible for AI systems in companies or organizations.

DAILOGUES: Responsibility is an important keyword and at the same time not an easy topic, because with AI we have complex technological systems in which many people are involved: for example, engineers, users, or companies that offer the AI systems. This means that responsibility is widely distributed. Do you have any suggestions on how to improve the attribution of responsibility when working with AI?

Tim Vallée: An important point, also in line with the EU AI Act, is the documentation relating to the AI applications: What training data was used? For example, is it representative? What were other important design decisions? Taking responsibility also means having the courage to make decisions and being able to stand up for them.

DAILOGUES: Have you ever personally encountered AI in a discriminatory way?

Michael Puntschuh: As a white man living in Europe, you are less likely to be directly affected by discrimination, including by AI. Nevertheless, I have observed AI systems that have produced discriminatory content: for example, if you ask image generators to generate a group of “researchers”, you very often only get white men. In early 2024, the Austrian employment agency made headlines with an assistance system for young job seekers, which I tried out myself. The system offered boys and men different career suggestions than girls and women, despite biographical information that differed only in terms of gender.

DAILOGUES: Will it be possible to use AI in the future to make AI more non-discriminatory?

Michael Puntschuh: There are AI tools that are helpful for this, such as hate speech detectors. However, it is important to realize that such tools can only ever be one instrument among many. We also need people who can use the tools properly. And unfortunately, discrimination is often difficult to define in technical terms, so we need to listen to those who suffer from discrimination and include them in the development of AI.

Tim Vallée: Where I think it can get very exciting is in the area of explainability. There are interesting explanatory methods such as LIME (Local Interpretable Model-Agnostic Explanations) and Shapley values – that is, approaches that can help us understand how an AI system arrives at its result.
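To illustrate the idea behind Shapley values mentioned above: each feature's contribution to a prediction is its average marginal effect over all orders in which features could be "switched on". The sketch below is purely illustrative and not part of the Collective's tooling; the toy scoring function, feature names, and baseline are invented for the example. For a small number of features, the values can be computed exactly by enumerating all permutations:

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x, relative to a baseline.
    Features not yet 'switched on' keep their baseline values."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]       # switch feature i on
            val = f(current)
            phi[i] += val - prev    # marginal contribution of feature i
            prev = val
    return [p / len(perms) for p in phi]

# Hypothetical toy scoring model (not a real system): the explicit
# 'gender' term stands in for an encoded bias we want to surface.
def score(v):
    income, age, gender = v
    return 2.0 * income + 0.5 * age + 1.5 * gender

contributions = shapley_values(score, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
print(contributions)  # -> [2.0, 0.5, 1.5]: gender visibly drives the score
```

For a linear model like this toy example, each Shapley value simply equals the coefficient times the feature's deviation from the baseline; the method becomes genuinely informative for non-linear models, where libraries such as SHAP approximate these values efficiently rather than enumerating all permutations.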

DAILOGUES: If I have the impression that I am a victim of discriminatory AI, who can I turn to?

Michael Puntschuh: For people living in Germany, there are three good points of contact for such cases: the consumer advice centers, the Federal Anti-Discrimination Agency and the respective branches at the state level.

Tim Vallée: Many companies have works councils and anti-discrimination officers from whom you should seek help. There are also special contact points for certain topics, such as the ADA project, which stands for “Antidiscrimination in the world of work” and is based in Bremen. There are several projects of this kind, which you can hopefully find using a search engine, for example.

We thank Tim Vallée and Michael Puntschuh for the DAILOGUE.

About the Authors

Tim Vallée

Beyond AI Collective

Michael Puntschuh

Beyond AI Collective