Artificial intelligence is often depicted as if no humans were involved: humanoid robots walking on their own, code glowing bluish-white on giant empty screens. ChatGPT, too, simulates a conversation with a counterpart, even though no office colleague is actually present.
The image is misleading: artificial intelligence doesn’t magically emerge from faraway clouds. AI is made by people. It is neither new nor groundbreaking to observe that the worldview of the software engineers who write the code, and of the data scientists who sort datasets and feed them into models, plays a key role in determining the worldview an AI ultimately reflects. So it’s high time to understand, from a feminist perspective, where the technological levers are that can be adjusted to counteract discriminatory biases.
What We Mean by Artificial Intelligence
To understand AI, it helps to take a brief look at the key technical terms.
Oliver Bendel is a professor of information systems at the University of Applied Sciences and Arts Northwestern Switzerland. He specializes in information ethics and machine ethics, and understands both the technical processes and the ethical issues behind AI. «Artificial intelligence is about replicating human thinking, problem-solving, and decision-making behavior using computers.»
Since the 1980s, the approaches now known as «machine learning» and «deep learning» have emerged. This is the area most of the general public associates with artificial intelligence (AI): these developments have enabled computer-based systems to learn independently. Models like «Generative Pretrained Transformers» (GPTs), such as ChatGPT, can generate new, previously non-existent data based on existing data. Oliver Bendel explains the underlying technology as follows: «Machine learning is based on neural networks: between input and output, there are various layers. That’s where learning takes place. When there are many layers and large amounts of data, we call it deep learning.»
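To make Bendel’s description of «layers between input and output» concrete, here is a minimal sketch of such a network. It uses the PyTorch library purely as an illustration; the article does not prescribe any particular framework, and the layer sizes are invented for the example.

```python
# A tiny feed-forward neural network: the hidden layers between input and
# output are where, in Bendel's words, "learning takes place". Stacking many
# such layers and training them on large amounts of data is what is called
# deep learning.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256),   # input layer -> first hidden layer
    nn.ReLU(),
    nn.Linear(256, 64),    # second hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),     # output layer, e.g. one score per possible class
)

x = torch.randn(1, 784)    # one example input (e.g. a flattened 28x28 image)
print(model(x).shape)      # -> torch.Size([1, 10])
```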
Teaching AI that Cats Are Not Rabbits
To train such a model, engineers and computer scientists might show it pictures of cats and teach it how to distinguish cats from dogs, rabbits, or hamsters. This is done through techniques like «prompt engineering» or «reinforcement learning»: if something is incorrectly represented – say, a cat is shown with rabbit ears – the model receives feedback and adjusts its layers (its neural networks) accordingly. This process is similar to the game of telephone: only if each person hears the word correctly and passes it on will the final result be accurate. In the case of ChatGPT, however, the GPT model was not trained on images, but on billions of text files.
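The feedback loop described above can be sketched in a few lines of code. This is only an illustration of the basic principle: the model guesses, the guess is compared with the correct label («cat», not «rabbit»), and the layers are adjusted. Real systems such as ChatGPT use far more elaborate pipelines, including reinforcement learning from human feedback; the class names and data here are invented stand-ins.

```python
# Hedged sketch of learning from corrective feedback: guess, compare with the
# correct answer, adjust the layers, repeat.
import torch
import torch.nn as nn

classes = ["cat", "dog", "rabbit", "hamster"]          # illustrative labels
model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, len(classes)))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

image = torch.randn(1, 784)                            # stand-in for a cat picture
target = torch.tensor([classes.index("cat")])          # the correct answer

for step in range(100):
    prediction = model(image)                          # the model's current guess
    loss = loss_fn(prediction, target)                 # feedback: how wrong was it?
    optimizer.zero_grad()
    loss.backward()                                    # propagate the feedback...
    optimizer.step()                                   # ...and adjust the layers
```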
AI and Feminism
And this brings us directly to the critical questions: Who selects the data? Where does it come from? Who processes it? And most importantly: how? Oliver Bendel explains: «Since the creation of the World Wide Web, humans have uploaded billions of documents to the internet. These include academic texts, journalistic pieces, but also comments – and with them, emotions, opinions, and cruelty. Love as well as hate. These systems absorb all of that.»
This has direct consequences for the model's output: if this data is taken in uncritically, without any filtering or evaluation, then a system learns that it’s acceptable to value or devalue people based on characteristics like gender, skin color, or origin – just to name a few discriminatory factors. A well-known example is the Barbie experiment by toy manufacturer Mattel. To promote the Barbie movie, they used the image generator «Midjourney» to create AI-generated images of what Barbie might look like in every country in the world. The South Sudanese Barbie was shown holding a weapon, and the skin tones of Latin American Barbies were noticeably lighter than typical for those regions. These symbolic examples illustrate how «white» is embedded as a beauty standard, and how the internet knows more about armed conflict in South Sudan than about its people.
The Problem with the Data
Beyond how AI is trained, there are feminist challenges to consider in data handling. The «Feminist AI Researchers Network» (fAIr) emphasizes in a published paper how essential it is, from a feminist standpoint, to carefully select, sort, weight, and label the data: these processes determine an AI’s output, even more than the algorithms themselves. That makes it equally important who is responsible for evaluating the data before it is fed into a GPT model. While the material may be reviewed based on specific criteria, Bendel points out: «There are still different people sitting there, with their own tastes and preferences. Likely more men than women, though I can’t confirm that for certain.» These individuals inevitably label data through the lens of their gender identity and personal perspective: «It would be ideal», says Bendel, «if the people labeling data reflected a true cross-section of society, such that AI is trained by children, teenagers, adults, the young, the old, men, women and intersex people.» Unfortunately, this process often takes place in low-wage countries, where it’s already difficult to find people for this type of labor.
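A small, purely illustrative sketch shows why selecting, weighting, and labeling matter so much: the same comment can be labeled differently by different annotators, and whether such examples are kept, removed, or down-weighted changes what the model learns before any algorithm is involved. The texts, labels, and weights below are invented for the example.

```python
# Hedged illustration: two annotators label the same comment differently, and
# the weight attached to each example shapes the training signal.
labelled_data = [
    {"text": "Women belong in the kitchen.", "label": "hateful",  "weight": 1.0},
    {"text": "Women belong in the kitchen.", "label": "harmless", "weight": 1.0},  # a second annotator's view
    {"text": "Everyone deserves equal pay.", "label": "harmless", "weight": 1.0},
]

# Filtering out examples judged discriminatory is one (simplistic) intervention;
# who makes that judgment determines what the resulting dataset looks like.
filtered = [example for example in labelled_data if example["label"] != "hateful"]
print(len(filtered), "examples remain after filtering")
```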
Further Questions and Limitations
Beyond training data, many other important questions remain unaddressed, for example legal questions surrounding data usage: When is it acceptable to paraphrase or adapt data or images, and when does it infringe on ownership rights? But it goes far beyond data itself. In her book «Atlas of AI», AI researcher Kate Crawford criticizes the lack of reflection on how fiercely planetary resources – like rare metals – are contested in order to build AI technologies in the first place. Moreover, the energy consumption of high-performance computers is anything but environmentally friendly.
South American AI researcher Juliana Guerra argues in her paper that many AI tools are developed in Western, industrialized regions such as North America, but then applied in vastly different cultural and social contexts. This can have devastating consequences. Paz Peña and Joana Varon, from the collective «Not my AI», describe a problematic situation in which government organizations in Brazil or Argentina deploy a U.S.-developed AI system in South American favelas. The goal is to identify so-called risk factors for teenage pregnancies and try to prevent them. But in the end, this stigmatizes the real lives of young women.
Both Crawford and Guerra therefore argue that AI must be seen not just as a technological tool, but as a political one. Governments use AI technologies to exercise power by building lithium mines, exploiting workers in low-wage countries, controlling people’s bodies, or waging war. Oliver Bendel also emphasizes that he rejects anything that leads to surveillance of people: «AI should not be about making our lives harder. It should be there to make life easier for us.» However, for Bendel, the limits of AI are clear: «Machines only simulate. They are great at imitating intelligence, morality, or consciousness. But they do not actually possess those qualities.»
So, What Does Feminist AI Look Like?
According to Oliver Bendel, creating transparent and more equitable AI requires multiple approaches. One method is to teach AI which outcomes are desirable and which are not. This can be done through «Reinforcement Learning from Human Feedback» and «Prompt Engineering», i.e. instructing the model on what constitutes a good output. It’s also possible to feed the chatbot a large number of documents: guidelines, policies, labels, or explanations of human rights. In this way, machines are given a framework for what is permissible and what is not. This approach may be understood as a form of constitutional AI, often implemented through fine-tuning. For Bendel, this is not enough: «We need skilled data scientists who know what constitutes a good dataset, what it can be used for, how it can be abused, and how it can be designed to be free from bias and prejudice. At the same time, we need ethicists—people who can identify problems, describe them, and ideally help resolve them.»
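In its simplest form, the «prompt engineering» route can look like the sketch below: guidelines (here, a stand-in human-rights principle) are placed in the system prompt so the model is told up front what counts as permissible output. The sketch assumes the OpenAI Python client and an API key in the environment; the article does not prescribe any particular tool, and the guideline text is invented for illustration.

```python
# Minimal sketch: giving the model a "framework for what is permissible"
# via instructions in the system prompt.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

guidelines = (
    "Treat all people as equal in dignity and rights. "
    "Never value or devalue a person based on gender, skin colour, or origin."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": guidelines},               # the framework
        {"role": "user", "content": "Describe a typical engineer."},
    ],
)
print(response.choices[0].message.content)
```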
He also advocates for open-source models, in which companies make the data, methods, and code used to train and develop AI systems transparent. Of course, this comes with the risk that anyone could alter the code. Still, Bendel supports this approach, because it allows people who want to explore and use AI to define their own rules. At the same time, misuse, such as the creation of deepfakes like fake nude images of Taylor Swift, must be addressed through legal consequences.