We spoke with this young and renowned doctor of Artificial Intelligence (AI) about the risks and opportunities of algorithmic systems
Counting down the hours until the end of class is common among all children. For Nerea Luis Mingueza (Madrid, 1991), however, fun outside the classroom meant a world of learning out of the ordinary. “My hobby became connecting to the internet to learn things,” she explains. As a child, she was fascinated by the computer her uncle had at home, which allowed her to feed her “obsession” with manga culture. Her teachers noticed her interest and steered her toward a degree in computer engineering.
At just 30 years old, Luis has already become a national benchmark in the technology field. A doctor in Computer Science specialized in Artificial Intelligence (AI), this smiling young woman with blue hair – like Sailor Mercury, her favorite character from the ‘Sailor Moon’ series – has been decorated by the Royal House with the Order of Civil Merit, has received the Google Anita Borg award, and has been selected among the Top 100 women leaders in Spain. She currently works as a consultant at the firm Sngular, and in 2013 she co-founded T3chFest, an annual technology event held at her university, Carlos III de Madrid.
Luis spoke with EL PERIÓDICO DE CATALUNYA in connection with her talk at the ‘Humanism in the digital age’ congress, organized by the Digital Future Society.
Only 12% of technical jobs in AI are held by women. Is it a difficult environment in which to thrive?
You have to be very clear that, as a woman, you are a minority. Both in high school and during my degree I noticed that there were more boys, but I knew exactly where I wanted to go, so none of that noise affected me. It was only later, through the experience of other colleagues in the profession, that I became aware of that added difficulty we face.
You firmly argue that all the important technical work done in your field is lost without a clear and accessible communication strategy.
“I am interested in making things accessible. People use their cell phones every day, but they have no idea how they work”
Yes. The scientific and technological community is very active in sharing its work, but it is as if we speak only among ourselves. After co-founding T3chFest (which brought the field’s work to non-experts), we realized that people liked these topics because they could understand them without a high technical level. I am interested in making things accessible, because people have no idea how technology works even though they use their mobile phones every day. If these things are not explained to them, it is hard for them to see the opportunities of AI.
What positive uses can these systems offer?
They are very powerful tools for automating repetitive tasks and measuring things we could not see before. In medicine, they can help us choose the best treatment based on numbers rather than a doctor’s bias, and streamline certain processes so researchers can focus on what needs the most attention. Human–machine collaboration is the key to AI: one brings ethics, the other automation.
“Human-machine collaboration is the key to AI: one brings ethics, the other automation”
In addition, their failures allow us to re-evaluate how we have behaved all these years. When it is said that algorithms have biases that are racist or against women, it is because they inherit them from our society. Knowing this allows us to change it.
If we use data from the past to predict the future, we run the risk of perpetuating racist, misogynistic and class discrimination. How can we avoid it?
We have only seen the tip of the iceberg. These scandals will keep happening, and denouncing how algorithms can amplify these forms of discrimination will help us reflect and change those practices. You can start with a more inclusive database, but you will have to keep analyzing it once those systems are put at the service of people. Until it operates in a real environment, you are not aware of the algorithm’s true effectiveness.
We talk a lot about digital sovereignty, but we end up resorting to the systems of technological giants like Amazon, Google or Microsoft. Isn’t this fueling an asymmetry of power that makes us more vulnerable?
Developing your own infrastructure is too expensive. These companies offer to rent out their cloud services (computing that runs on external data centers, which lets you scale up your power), and I don’t see that as a bad way to start, but we still need to build alternatives. We did not think about how we work until we saw the importance of our data, and by then it was too late.
When they are caught in bad practices, those big platforms usually blame a “mistake” of the algorithm, what mathematician and data scientist Cathy O’Neil dubbed ‘math washing’: an abstract scapegoat that avoids taking responsibility. Are they making malicious use of AI?
Yes, it is easier to blame a system than yourself. Behind the algorithms there are humans; if a system fails, what that exposes is that it is being unfair. The human is the one responsible, even if one wants to blame the machine.
Predictive AI not only reads data, it creates patterns to infer our future behavior. Doesn’t this threaten human autonomy?
We must be aware of the dangers of these systems and talk more about algorithmic justice. There are algorithms with no direct impact on humans and others that can affect us day to day, such as those used to determine social benefits or medical treatment. But we must also highlight their positive impact. Collaboration between human and machine can make our analyses more powerful and, for example, detect diseases with much greater precision. We do not have this idea of collaboration in our heads; we think machines are going to replace us, when the key is that they complement us, seeing things we cannot see.
The European Union is preparing a law to ban high-risk AI systems.
AI needs to be regulated, and the EU can give us a model framework that focuses more on the human. It is interesting that AI systems are categorized by risk and that audits are created to examine those systems. But then we have to see whether they can actually be implemented.
As we have seen with data protection, Europe already has pioneering regulations, but the industry fails to comply because it is more profitable to simply pay the fines. What else can be done?
I don’t think it was done wrong. In the technological world everything is so recent that it is difficult to know what will happen. You have to start setting limits somewhere. I hope we will know how to react, but I have no answers.