
Anaïs Rességuier:
The critical approach to AI needs to be continuously restarted

9 June, 2022
Maciej Chojnowski

There is a tendency to consider AI as a highly sophisticated technology that is almost virtual, intangible. This is very problematic because it means that we cannot see all the impacts of AI. So if we want to fully assess this technology, looking at the whole AI lifecycle is very much needed, says Dr Anaïs Rességuier in conversation with Maciej Chojnowski.

Maciej Chojnowski: In an article co-written with Rowena Rodrigues, you argue that today’s AI ethics is mostly based on high-level principles and remains quite abstract. You also give two reasons why AI ethics as we know it is rather ineffective. Firstly, because it has limited means to ensure compliance. And secondly, because AI ethics should not be treated as a soft version of law. Quite the contrary, it should be seen as attention to context: a constantly renewed ability to see the new, which is „a powerful tool against cognitive and perceptive inertia”. What are the most striking examples of this inertia in the AI field today?

Anaïs Rességuier*: I would like to highlight two points here. The first one is inertia in acknowledging the disparate impacts of AI on different communities. We often fail to recognise that if an AI system doesn’t work for a particular community, it is a failure of that system. And there are many systems that work poorly for specific groups of people. This should be considered a really significant problem, not just a marginal issue, and be properly addressed.

There is a study called „Gender Shades” by Joy Buolamwini and Timnit Gebru about facial recognition systems which exemplifies this point very well. It shows that the standard evaluation of facial recognition systems, which looks at error rates over the whole population, fails to account for major disparities across different communities. As this study showed, error rates for black females are much higher than those for white males.

This illustrates the fact that we need more fine-tuned, more precise lenses to assess this technology, lenses that look more closely at the characteristics of particular communities. I think that despite the evidence that this is highly needed, there is still a lot of blindness towards it. This is the first example of the cognitive and perceptive inertia that I want to highlight.
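[Editorial note: to make the evaluation point above concrete, here is a minimal Python sketch of disaggregated evaluation, the kind of fine-tuned lens Dr Rességuier describes. The group labels and records are purely illustrative assumptions, not data from the Gender Shades study.]

```python
# Minimal sketch: instead of one aggregate error rate, report error
# rates per demographic group. The records below are made up for
# illustration only; they do not reproduce Gender Shades results.
from collections import defaultdict

# Each record: (group label, ground truth, model prediction)
records = [
    ("lighter_male", 1, 1), ("lighter_male", 0, 0), ("lighter_male", 1, 1),
    ("darker_female", 1, 0), ("darker_female", 0, 1), ("darker_female", 1, 1),
]

def error_rates(records):
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    overall = sum(errors.values()) / sum(totals.values())
    per_group = {g: errors[g] / totals[g] for g in totals}
    return overall, per_group

overall, per_group = error_rates(records)
print(f"overall error rate: {overall:.2f}")  # the aggregate hides the disparity
for group, rate in per_group.items():
    print(f"{group}: {rate:.2f}")            # the breakdown reveals it
```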

And the second one?

The second one is the failure to account for the full lifecycle of AI systems, including the raw materials they rely on, their environmental impact, the costs of producing and disposing of these systems’ components, and so on.

There is a tendency to consider AI as a highly sophisticated technology that is almost virtual, intangible. In another article, we call this the „AI immaterial narrative”. And this is very problematic because it means that we cannot see all the impacts of AI. For instance, a study on the environmental impacts of AI shows that training a single AI system produces as much carbon dioxide as five cars over their whole lifetimes! So that’s quite a major impact. This is why looking at the whole AI lifecycle, not only at the algorithms being developed in laboratories, is very much needed if we want to fully assess this technology.

In her book „Atlas of AI”, Kate Crawford does a great job of redefining what AI is in all its different dimensions: the rare earth minerals it relies on, the infrastructure, the logistics, the data collection, and the impact of this data collection. So that’s a really broad system that needs to be taken into consideration when we look at AI. But I’m afraid we are still a bit blind to that. And this should definitely change.

According to Luciano Floridi, digital ethics is one of the three components of the normative system concerning technology (the other two being digital regulation and digital governance). And whereas regulation tells us what is legal and illegal, it has nothing to say about what we can do to live in a better society. This sort of moral evaluation is only possible within the ethics domain. But it seems that this is exactly the sort of capability that AI ethics is lacking today because of this widespread legalistic approach. What sort of actions and tools do we need to make AI ethics effective?

To answer this question, I would make a distinction within the field of ethics that helps address a confusion I see quite often.

On the one hand, there is ethics as embedded within particular institutional frameworks. For instance, in bioethics there are well-established structures and frameworks that make it work and put it into practice in medical research. At Trilateral Research, we work a lot with this approach to ethics, in particular with research ethics applied to new and emerging technologies. We provide guidance to research projects developing technological solutions to make sure they follow ethical obligations. The European Commission has a very specific research ethics framework that requires compliance from grantees. While working on projects funded by the EU, we use our ethical expertise to implement this framework. And this form of ethics is indeed a bit like a softer version of law.

In our article, we were critical of this approach to ethics as soft law when it is not embedded within institutional frameworks. That is when it indeed becomes „toothless”. But it can be very valuable when embedded in institutions with mechanisms to ensure compliance. For instance, if an EU-funded project does not comply with research ethics obligations, funding might be withheld or the project suspended. So there are ways to ensure compliance with ethical requirements.

In 2020-2021, the European Commission developed a research ethics framework for EU-funded projects using and/or developing AI. The SIENNA project had the chance to contribute to this effort and suggested ways to identify and mitigate ethics issues raised by projects using and/or developing AI systems or techniques. This framework is now in place and has become a requirement for all projects funded under the new Horizon Europe Framework Programme, which will run for the next seven years. So it’s going to be interesting to see how it works, and we already look forward to suggesting ways to further enhance it.

This is the first approach to ethics: ethics as embedded in institutional frameworks. And there is assuredly a need to develop this approach further for the ethics of AI.

And this seems to be the dominant way of thinking about AI ethics today.

But there is also another aspect of ethics – very important although quite neglected. And it is actually the source of the first one that I have just discussed. This is what we developed in the Ethics as Attention to Context piece in the SIENNA project, where ethics is more fundamentally seen as an open and continuously renewed attention to the world and to the new as it emerges.

This is where we come back more specifically to ethics as a way to reduce cognitive and perceptive inertia. Here, unlike in the first approach, ethics is not about saying what we should or should not do. It is rather an attitude towards the world, characterised by attention and the ability to determine norms. Because only when we see the new that is happening are we in a position to determine how to deal with it.

This is more an attitude, a form of ethical agency, than a judgement saying you should do this or that. The notion of „normative capacity” developed by the French philosopher of medicine Georges Canguilhem is helpful to clarify what this attitude or capacity is about. It’s a capacity to determine the norm, not the norms themselves.

Could you elaborate on this distinction a little more?

We should distinguish these two approaches to ethics because confusing them opens the door to the manipulation of ethics. We’ve seen industry actors pushing strongly for ethics as the response to the regulation of AI. They favoured the open-ended nature of ethics to avoid hard lines being set on what they can and cannot do with this technology. But this is made possible by a confusion between ethics as soft law and ethics as an open-ended process of reflection and attention.

We have a technology that is bringing significant, deep transformation to the world, to who we are, to our relations and so on. That’s why we need to be acutely aware of these changes to make sure we can guide this transformation, maybe resist some of its harmful developments as well as promote the beneficial ones.

And this is where the second notion of ethics comes back to the first notion. Because once we are able to see what is happening we can then ask ourselves what we should do.

When one thinks about a possible taxonomy of AI ethics, three general approaches come to mind: the already mentioned legalistic one, the operational one, and the critical one. While the first one seems quite clear, as it’s focused on defining principles and norms, the other two may be a little less obvious. For me, an example of the operational approach is Ethics by Design, while the critical approach could be exemplified by what you call for, namely Ethics as Attention to Context. However, if we assume that attention to context means looking critically at the surrounding world, I suppose lots of people would say this is a negative approach, focused mainly on banning things. Is there a positive component to it, so that it can be turned into something constructive?

I think so. I am concerned about the fact that sometimes the critical approach is just negative and doesn’t contribute positively to what is being developed. But it’s also because sometimes you do wonder when you see some innovation in AI. For instance, OpenAI recently developed a system, DALL·E 2, which, although highly innovative and powerful, also generates very sexist and racist images from text. So I do understand there might be some resentment within the community toward this. Maybe that is what leads the critical approach to be negative.

But I do think that we should be simultaneously critical and constructive. That’s what we tried to do with the Ethics as Attention to Context paper: to orient research and the AI sector in a way that is more beneficial for society. That is a challenge, especially with very powerful actors like Google and Facebook, with whom there is a huge power asymmetry. But I think that we should remain critical as well as contribute positively to all this.

In the SIENNA deliverables, it is mentioned that in the case of both Ethics by Design and Ethics as Attention to Context, the AI practitioners whom you surveyed were calling for a more practical and usable AI ethics. So it seems that in both cases the rationale was quite the same: to give people something more than just a collection of high-level principles. The critical (contextual) approach seems to be an engine that helps one think in operational terms, doesn’t it?

I think the critical approach has genuinely practical intentions to contribute to the practices and decisions that are made. Our deliverable starts with quite a strong critique of the dominant approach, then makes a proposal, and then operationalizes this proposal into a number of practical recommendations addressed to different stakeholders in the field. So there is this operational element.

It is also worth mentioning that sometimes the operational approach alone makes it seem too easy to implement really complex values or notions (e.g. fairness) in technology development. For instance, we have to respect the principle of fairness, but on the other hand we know that it is a major challenge to implement it in a society which is fundamentally unequal and unfair. So I think it is important that the critical approach always highlights that it is not so easy. We need to pay attention to ensure we don’t simply pay lip service to this really important principle.

Ethics by Design and Ethics as Attention to Context offer different solutions or proposals to operationalize AI ethics, and they complement each other. We didn’t have a chance to do that in the SIENNA project, but we should bring them together. This should probably be the object of a future research project.

You say that Ethics as Attention to Context is also about enabling ethical agency, which is essential when we think about moral reasoning or moral evaluation. On the other hand, some people expect AI ethics to be (semi-)automated or translated into a collection of algorithms that could be easily followed without much reflection. Do you believe that those two expectations can go hand in hand, or do they exclude each other?

This question is being increasingly discussed. I am quite radical here: talking about machines engaging in moral deliberation makes no sense.

Obviously, we do need to acknowledge that norms, values and worldviews are embedded in AI systems. It’s very important to be aware of this. For instance, particular views on gender roles are at stake when voice assistants are systematically given a female voice. There is this famous quote from the historian of technology Melvin Kranzberg saying that technology is neither good nor bad, nor is it neutral. We should examine technology precisely because it’s not neutral.

But this is very different from saying that technology, a machine, or an algorithm can engage in moral deliberation. Humans remain the only moral agents. Especially if we define ethics as attention to context, it cannot be automated. And it is very dangerous to pretend it could be. We can certainly embed particular values within a system, but that doesn’t mean the system is engaging in moral deliberation. We should really avoid pretending that a machine can do this, because it’s a way of not taking responsibility for what we build. I think this is a dangerous slope.

Aside from artificial moral agents, an issue on which I absolutely agree with your point of view, there is another aspect to that question. It might be not so much about creating moral machines as about getting support when one does not have the required background to deal with ethical issues. Engineers might say: OK, we will start with ethics now, but then the process will take two or three years instead of one. So there is the problem of how to deal with those questions, especially for people whose expertise in ethics is limited. Is it possible to reach some balance between automation and reflection here?

Maybe the Ethics by Design framework is a way to seek this balance, by providing some resources for this moral deliberation and by breaking it down into more specific steps. But I think it will always need to be rethought and put into perspective.

And the more potentially harmful a system is, the more time and attention it will require to be properly assessed. That’s the risk-based approach put forward by the European Commission. And maybe we can consider that for high-risk systems we should make sure to have experts in ethical and social considerations in the development teams. There are probably systems where this is less needed. We can have different approaches depending on the level of risk that a system poses. But again, this raises the question of who would determine the potential risk of a system, and how.

But I do believe that we need to develop AI ethics tools at different levels, starting with the training of engineers. We should also have more social scientists and ethicists within teams that develop technology. Ethical issues shouldn’t all be delegated to engineers or developers who are not experts in these matters.

Ethics as Attention to Context is about technology’s materialities, political realities, as well as relations of power. These are all serious issues and very hard to tackle. In my view, to achieve the goals of the critical approach (here I also refer to recent books by Kate Crawford, Peter Dauvergne and Mark Coeckelbergh) we would probably need a sort of revolution in thinking about how the world is organized. So do you believe that certain solutions in this area can be put into practice step by step, or do we need to change the global attitude first?

Indeed, it’s very challenging. I think that the critical perspective needs to be continuously restarted. There can be no end to our efforts because we don’t live in a perfect world. And if we give up on the critical perspective the situation will become really dangerous in many different ways. For AI, it’s now clear that a number of key issues like bias or discrimination cannot be addressed simply from a technical perspective as the biases and discriminatory impacts keep coming back.

I’ve already mentioned the recent image generator from OpenAI called DALL·E 2. It creates really amazing images just from text. But at the same time, when it’s asked to represent a nurse or an assistant, it produces images of women. How in 2022 can you have technology that produces such reactionary outputs? That’s really disappointing. But AI systems are neither racist nor sexist in and of themselves. They just reflect society. And society so far has been structured this way.

But I am also optimistic. There is an increasing acknowledgment that you cannot neglect those deeply contextual elements anymore and so this more holistic approach is being recognised in the field as necessary. The challenge is to find ways to operationalize this understanding. Bringing in social scientists and ensuring diversity in teams developing these systems can really make a difference.

Where would you expect the shift towards this contextual approach to start? Among AI practitioners as a bottom-up trend, or maybe as a top-down initiative undertaken by governmental or industrial bodies?

We need efforts from everyone, from all the different kinds of positions within the AI field. The SIENNA deliverable 5.4 starts with a multistakeholder strategy, highlighting the need to have all the different parts of society play a role, including civil society, the media, policy makers, AI developers, ethicists and so on. This really broad approach is necessary because we are dealing with a technology that is very pervasive and has an impact on many different areas of society.


Disclaimer: This interview presents the position of Dr Anaïs Rességuier, based on results from the SIENNA project. It does not present the position of the SIENNA project as a whole, nor of Trilateral Research.


* Dr Anaïs Rességuier – Senior Research Analyst at Trilateral Research where she conducts research on the ethics of new and emerging technologies, especially Artificial Intelligence. She carries out this research as part of EU-funded projects, such as TechEthos and SIENNA. She is particularly keen to shift the focus of attention in AI ethics away from high-level abstract principles to concrete practices, contexts and social, political and environmental materialities.
Dr Rességuier is trained in philosophy, ethics and the social sciences. She enjoys working across different disciplines and engaging with a wide range of stakeholders to gain insights on current socio-technical developments and to contribute to shaping these.


Read the Polish version of this interview HERE
