
Bernd C. Stahl: Human flourishing is at the core of AI ethics

12 October, 2021
Maciej Chojnowski

If the EU regulation becomes law as planned and gets a stronger requirement for dealing with high-risk AI through things like ethics by design, then the adoption rate of this approach will go up, because companies will have to comply with it, says Prof. Bernd C. Stahl in conversation with Maciej Chojnowski.

Maciej Chojnowski: In your recent book, you delineate three main areas in today’s complex AI ethical landscape: specific issues related to machine learning, social and political questions concerning living in a digital world, and metaphysical questions about the nature of humanity and the status of machines. You also offer a classification of the governance mechanisms of AI ethics: policy, organisational, and individual mechanisms. Should these issues be seen as separate or complementary?

Bernd C. Stahl*: The discourse around the ethics of AI is huge, and through this categorisation I try to make it more manageable. But the classification of the ethical issues is not clear-cut. The specific issues of machine learning and the questions of living in a digital world are not really two separate things: the former have consequences for the latter. So I attempted to impose some order on the chaos, in full awareness that this order is temporary and somewhat idiosyncratic.

The reason for making this sort of classification is not just theoretical but also practical, isn’t it?

Yes, it makes sense to think in these terms because it allows us to distinguish between different types of issues and the ways of addressing them. Take a look at bias and unfair discrimination. This is a huge societal problem that permeates all societies. But where machine learning comes in, we already understand some of the mechanisms that can lead to unfair discrimination, and we can find ways of dealing with them.

But there are also societal issues that are not subject to technical interventions. Distributive justice or the changing nature of warfare are big issues where no individual agent, not even a powerful nation state, can solve the problem. These questions have to do with distribution within or between countries. Some of them will be subject to traditional policy interventions, from taxation to liability law. Others call for international agreements.

And then there are things I call metaphysical questions. These are perpetual questions. AI prompts us to think about what the nature of the human being in this world is. I’m not interested in a Terminator scenario, but these fundamental questions play an important role in the way we think about technology and the integrity of human beings. They are important even though they might never be resolved.

Is it possible to create a matrix combining these two taxonomies: the problematic areas and specific governance mechanisms?

To some extent, yes, certainly for what I’ve called the specific technical issues of machine learning. There are ways in which we can at least think about how responsibility should be attributed. The proposed regulation by the European Commission tries to do just that: to think about what the problems are, how they can be addressed, and who should be responsible, as in the ex ante conformity assessment.

But for other problems it’s much more difficult, because they become part of societal discourse or political negotiations. I don’t think there’s a lot of agreement, even within most European societies, on what exactly would constitute a just distribution of resources. Therefore we can’t hope to find a simple solution to the question of extracting surplus value from personal data. These are more open, long-running questions, and there are lots of players who have a role in them: governments, companies, but also civil society, which can be hugely important in stimulating the debate.

Lately, we’ve seen a lot of AI ethics guidelines released by companies and international expert groups. But according to Brent Mittelstadt, one of the reasons why the principled approach in AI ethics is bound to cause problems is the lack of common aims among AI practitioners. This is what distinguishes AI from medicine, on whose codes of ethics the AI principles were modelled. Is human flourishing – seen as the purpose of AI, an idea you’ve been writing about recently – an attempt to solve the problem of those divergent aims within the AI community?

The reason for the application of principlism to AI is that the analogous approach in biomedical ethics was simply the first to establish itself as a significant political player. One of the advantages of this approach is that it exists and works to a certain extent. The disadvantage, especially with regard to emerging technologies, is that it was based – in its original context – on the common understanding that medical research is generally a good thing.

That sort of consensus, of course, does not exist with regard to emerging technologies. It’s not at all obvious that AI is going to be the solution that society actually would like to put forward. Therefore there’s a fundamental question of whether we as a society want to support certain activities. And it’s a question that principlist approaches to ethics can’t really deal with. So positing human flourishing is an attempt to respond to that. We recognize that some technologies are more ethically beneficial, while others are possibly ethically problematic or at least ethically agnostic.

The concept of human flourishing resonates with the recognition that human beings are at the core of this game. We don’t build AI for the sake of building AI but because it’s supposed to make something better in the world. But making something better in the world is not just about maximizing profits or making sure that the machine runs efficiently. It goes beyond that. It is rooted in what human beings want to be, how they can live their lives, how they can achieve their aims. That’s why we put this concept of human flourishing into the SHERPA project.

Looking at this concept from the operational point of view, do you think it can be used as a counterbalance to other general approaches to ethics like deontology or utilitarianism? Is there any practical advantage here over the other theories?

It’s easy to see how you could try to implement a utilitarian approach in AI. That’s what a lot of the thought experiments around autonomous vehicles do, for example: do I kill this young child or these three old people? I think the concept of human flourishing could be a counterpoint to that. We chose it because it may help stimulate reflection at the points where it’s needed. But this is only one way of thinking.

A lot of ethics refers to the three big traditions: the utilitarian, deontological and virtue ethics positions. I don’t see them as contradictory but rather as different ways of looking at the same problem, which may lead to different outcomes. But none of them individually has the right to say: this is the only solution we can go with.

Do you perceive AI ethics as something to be used by engineers or deployers, or do you also find it applicable in the area of so-called artificial moral agents? Some researchers, like Aimee van Wynsberghe or Scott Robbins, say we don’t need artificial moral agents at all. What’s your opinion on that?

I agree, because I don’t think any technology that we currently have is capable of showing any type of meaningful moral agency. You can tell a machine to do something, but you can’t tell it to reflect on why it did that and to find justifications for it. And even if you could, it probably still wouldn’t correspond to what humans do when they do that.

The ethics of human beings has a lot to do with our nature: the fact that we are embodied beings, that we know we are living a finite life, that we will die, that we suffer and that we are among similar beings which share the same nature with us. So I think all of these aspects play into ethics from the human perspective.

So unless we are capable of building machines that have similar characteristics to us, ethics in the machine would be something fundamentally different. Therefore I think the idea of machine moral agency doesn’t really make a lot of sense. I do think it’s important to ask what a machine should do in particular contexts, as in the autonomous vehicle example. If a vehicle has a chance of making a situation safer, then these are worthwhile questions to think through. But this is very far away from ethics as we understand it from a human point of view.

Therefore I agree with all the people who say: machine ethics doesn’t really exist, and arguably it shouldn’t exist. We shouldn’t build anything that actually comes close to us, because it will then muddy the waters as to what an ethical human agent is. However, I don’t think that in my lifetime I need to worry about this, because machines are simply nowhere near that.

Speaking of autonomous vehicles, there’s an interesting issue of whether we should focus here on ethics alone or rather take the broader perspective of political philosophy. The former might concentrate too much on individuals, whereas the latter makes us think about society as a whole. Do you think the ethics of AI already covers political philosophy, or does it still need to be brought into the AI ethics discourse?

As with many things, it depends on the definition. If you say ethics revolves around the individual, it wouldn’t be an error. But one of the reasons why I like the concept of flourishing and have a certain affinity to virtue ethics is that for ancient philosophers like Aristotle, this distinction between the individual and the political discourse wasn’t really that relevant. And I think that’s certainly true in the AI discourse as well. Concentrating exclusively on individuals is not going to do the job.

So while in the SHERPA project we used the concept of AI ethics, mostly because there is an existing discourse people can relate to, in most of my work I prefer to use the concept of responsible research and innovation, where this idea of a broader societal context is baked in. It’s not just about what the individual does or doesn’t do; it’s also about the societal processes that facilitate, motivate and bring things into being. Ethics is then part of that broader discourse, which also includes politics and sociology and covers a lot of academic disciplines and aspects of society.

You’ve mentioned SHERPA, but before we dive into details I’d like to ask you about the relationship between SHERPA and its sister project SIENNA. In both cases we have deliverables on ethics by design co-authored by Philip Brey. What’s the relation between the two? If one wanted to implement one of these sets of guidelines, which should they choose?

Philip started this work in SHERPA and then finished it in SIENNA. So the deliverable in SIENNA is the key product. It will also form part of the future European Commission guidelines on AI ethics.

You call the guidelines operational, or ready to use. Who is their intended user, and what sort of capabilities would one need to put them into action? Is any background in ethics necessary to use them effectively?

The Twente group has put a lot of effort into shaping them according to existing development methodologies. The idea is that people who develop AI systems through agile processes, or whatever else, could use those guidelines to map their existing processes onto the ethics requirements and build those in throughout the design process.

On the one hand, it should be relatively straightforward, but I think there’s also a recognition that if you leave it purely to a checklist, it may become problematic. So it will rather need to be taken up by an interdisciplinary team that has some affinity with the language used in philosophy.

But we haven’t really tested that in practice, so whether it will work is something I can’t say for certain now. We’ll probably need another research project to test it, which I’d be happy to be involved in with you.

The SIENNA deliverable suggests that implementing those guidelines may require a radical transformation of processes within organizations. What do you think about the organizational conditions for a successful implementation of ethics by design?

I don’t think there’s a single answer to that question. It depends to a large extent on the actual companies, what they do and what sort of technologies they use. I suspect that for a lot of companies where AI is part of their business processes it would be fairly unproblematic. If you’re an insurance company and you want to use AI for fraud detection, I would assume that ethics by design has the potential to resolve problems like racial discrimination.

But there may be other companies where it becomes much more radical. If a company makes use of AI for surveillance purposes, following the principle of ethics by design and thinking through the ethical consequences of what they do may in the end result in a decision to just stop doing that business.

In the SHERPA report you mention a survey conducted during the project. Only half of the respondents confirmed that they had some sort of ethical mechanisms in place at work. So the real percentage of organisations incorporating ethics into their everyday work was probably even lower, as the people you surveyed were ethically aware. Do you expect that once the ethics by design approach reaches a broader audience, the rate of organisations adopting it will go up or down? With high-level recommendations you can easily claim to follow general principles, but with complex implementation mechanisms this might be more challenging. Which scenario do you expect?

I think it will depend to a significant extent on the environment and the sentiments the companies face. If the EU regulation becomes law as planned, and maybe even gets some more teeth and a stronger requirement for dealing with high-risk AI through things like ethics by design, then the adoption rate of this approach will go up, simply because companies will have to comply with it.

Similarly, I think there may be external societal effects that lead to higher adoption rates. Another big scandal, say, where some AI company goes out of business because they’ve done something really stupid and bad, and the others say: whoops, we should do better.

But on the other hand, I realize that the more specific the requirements, the more difficult it may become to implement them. It’s also a question of choice. We give the market players ethics guidelines, and the idea is that these are broadly applicable across the AI space. But companies may pick a different approach, like AI impact assessment, and say this is how they deal with ethics. Some companies may put this under corporate social responsibility, some under due diligence or human rights compliance. It may never appear under the heading of ethics but still be relevant. There are lots of ways of talking about similar issues.

On the other hand, Horizon Europe will support the ethics by design approach in scientific research on AI. You also mentioned that the European Commission would suggest this as the way to operationalize the High-Level Expert Group guidelines. That creates an opportunity for broader adoption, doesn’t it?

Within Horizon Europe, yes, the Commission wants to use this and make it compulsory. They will give it as part of the guidance to applicants who want to do something on AI. And given that around €10 billion in Horizon Europe is going to be AI-related, that’s a significant investment in research. If all of the projects drawing on that funding do ethics by design according to the SHERPA approach, then of course this will become relevant.

However, in comparison with the overall global AI research and development market, Horizon Europe is still relatively small. If you compare it with the Facebooks, Googles and Apples of this world, their research and development budgets are much bigger. So it will depend on whether the ethics by design process turns out to work. If companies come across it and say: we’ve seen that this is a good way, then it’s possible that it becomes much more prominent, first on the European level and then possibly also internationally.

So let’s talk about the global landscape. Critics of the European approach to trustworthy AI claim that it would be very difficult for the EU to catch up with the global AI superpowers: the US and China. What’s your opinion on that?

There was a publication by Elsevier a couple of years ago where they looked at the AI landscape and tried to figure out what’s happening in the AI research world. They did a citation analysis and looked at how many AI-related papers were written in which regions of the world. If you look at it by individual country, then America was by far the biggest and China came after. But when you aggregate all the European countries together, they exceed the publications and citations of both China and the US. So I’m not convinced that we are so far behind on the scientific level. But we are clearly behind in terms of the big players: Google, Apple, Facebook, Alibaba and so on are all American or Chinese.

The other question is: does our approach to the ethics of AI constitute a market impediment? There are people who say: I want to found an AI start-up, but I’m not going to do it in Europe because it’s too onerous. At this point it’s too early to speculate whether that will be the case, because we don’t know what the exact shape of the AI regulation will be.

However, it might be instructive to look at similar developments, most notably the GDPR. You have two narratives: one saying it disadvantages European companies because they have to comply with additional bureaucratic hurdles, and a competing one saying that it sets the standard and forces everybody who wants to be part of the European market to live up to the expectations formulated by Europe. And you can certainly find evidence for the latter.

We can expect similar competing narratives to emerge around AI. I don’t think there will be one dominant narrative in the end. From the European perspective the question is: what do we want to do? Do we want to do right by our citizens? I think that’s the present attitude. And if that’s the case, and if AI raises particular issues, then we need to address them, and the current approaches are the best we can come up with.

To conclude, I would like to refer to what Albena Kuyumdzhieva said during the final SHERPA event about incentives to implement AI ethics: that it’s crucial to translate ethical requirements into competitive advantages. Do you believe that’s going to happen?

Yes, I do, because people are worried about what AI does to them, and because there has been such a high-profile discussion of the possible consequences of the unreflective use of AI. There’s a high level of awareness of at least some of those questions, like discriminatory disadvantage or mass surveillance. These are things people don’t want. And whether it’s through regulation or through voluntary adherence to standards or whatever, if a company can demonstrate that it has considered these issues and is serious about addressing them, that could be a competitive advantage.

There may be different mechanisms for communicating the fact that a company takes those concerns seriously: a trust badge on your website, a certification, or the database that the Commission wants to set up for all high-risk AI projects. And if they do that, and if consumers recognize it, then this may well work out.

 

Bernd Carsten Stahl – Professor of Critical Research in Technology and Director of the Centre for Computing and Social Responsibility at De Montfort University, Leicester, UK. His interests cover philosophical issues arising from the intersections of business, technology, and information. This includes ethical questions of current and emerging ICTs, critical approaches to information systems, and issues related to responsible research and innovation. He serves as Ethics Director of the EU Flagship Human Brain Project, Coordinator of the EU project Shaping the ethical dimensions of information technologies – a European perspective (SHERPA), and is Co-PI (with Marina Jirotka, Oxford) and Director of the Observatory for Responsible Research and Innovation in ICT (ORBIT).

 

Read the Polish version of this interview HERE
