A Deusto thesis warns of the risk of automating injustice if we do not apply a feminist and human rights framework to technology

‘Hic et nunc’ calls for immediate action, arguing that we are at a critical moment to establish the rules for the AI that will govern our future.

New thesis by Borja Sanz

30 March 2026

Bilbao

In a world where algorithms determine who receives a loan or the risk level assigned to a woman experiencing gender-based violence, the neutrality of technology is questionable, if it exists at all. Borja Sanz Urquijo, author of the thesis ‘Hic et nunc (Aquí y ahora), Análisis interdisciplinar del potencial y las limitaciones de la IA para la defensa de los derechos humanos desde un marco feminista’ (Hic et nunc (Here and Now): An Interdisciplinary Analysis of the Potential and Limitations of AI for the Protection of Human Rights from a Feminist Framework), defended at the University of Deusto, warns that if we do not apply a feminist and human rights framework to technology, we run the risk of automating injustice at unprecedented speed.

This bias begins with the underrepresentation of women in the AI sector: they make up barely 22% of global talent and hold less than 15% of leadership positions. If artificial intelligence is fed with big data drawn from medical records designed around men, biased judicial decisions, or social media where misogynistic content thrives, algorithms tend to reproduce, and even amplify, structural inequalities. This is the so-called “bias by omission”, in which the absence of data on women produces models that, for example, diagnose heart attacks less accurately in women or rate their creditworthiness more harshly.

Borja Sanz, a lecturer and researcher at the Faculty of Engineering of the University of Deusto, for whom Hic et nunc is a second thesis, takes a position in his research that balances technophobia and techno-optimism, recognising the dual nature of technology: digital tools also offer opportunities for protection and empowerment through secure reporting channels, support networks, and early warning systems.

The end of extractive AI

Until now, the relationship between the administration and citizens has been, as his thesis explains, extractive. The system works like a giant vacuum cleaner: it collects intimate and sensitive personal data, processes it in a “black box”, an algorithm whose inner workings are often a mystery even to its creators, and produces a cold, impersonal decision.

"We argue that these systems can evolve," says researcher Sanz. The proposal to avoid bias and revictimisation is to move from a model in which the victim is merely a “data source” to a system that empowers them. How? By involving them in the design of the tool and ensuring that the technology not only assesses risk but also provides human and tangible solutions.

Intersectionality: beyond the code

One of the most innovative points of the thesis is the need to apply intersectionality in programming. Current algorithms are often "blind" to the layers of identity that characterise people. A woman experiencing violence does not face the same challenges if she is a migrant, lives in a rural area without public transport, or has a disability.

If these variables are not taken into account during design and implementation, the system develops “blind spots”. The thesis proposes that AI should be able to understand these complex realities so that no one is left behind, transforming the coldness of data into a tool for personalised protection.

Hic et nunc: a critical moment to establish the rules for AI

The conclusion of the thesis is a call for immediate action. The Hic et nunc of the title urges society to act, as it is now that the rules for AI that will govern our lives in the coming decades are being written.

Its final proposal is to replace the “ethics of efficiency” with an “ethics of care”. This means that the success of AI should be measured not solely by its statistical accuracy, but by its ability to protect human dignity, ensure transparency, and allow anyone to understand, and question, why a machine has made a decision affecting their life.

In short, the thesis Hic et nunc reminds us that artificial intelligence is, above all, a human tool. And as such, it must serve equality and justice, not the prejudices of the past.