
Philip Brey: Ethics by Design is a feasible approach to AI

4 July 2022
Maciej Chojnowski

Engineering ethics into AI products and systems requires specialized knowledge. It’s not just checking boxes. So if you have a development team, add an ethicist, preferably one of a more applied persuasion. We also think that Ethics by Design should be part of computer science and engineering training, says Prof. Philip Brey in conversation with Maciej Chojnowski.

Maciej Chojnowski: About a year ago, at the first CET conference, there was a discussion about existing AI ethics tools and the possibility that they would form a basis for future solutions for other technologies. Now we can see that this has already happened. In the SIENNA project, not only have you developed approaches for specific domains like AI, but you have also created a generalized methodology that can be used with any technology, haven’t you? In which areas would you expect its further adoption?

Philip Brey*: Currently there is a lot of interest in the ethics of artificial intelligence. AI is seen as a powerful technology that is going to change all sectors of society and at the same time is raising significant ethical issues. And while this is true for AI, it’s also true for many other technologies like 5G, genomics, synthetic biology, neurotechnology, nanotechnology or the Internet of Things.

So time and again, we are going to need ways of dealing with the ethical issues and human rights challenges raised by these emerging technologies. It would be good if we could avoid reinventing the wheel and learn from previous technologies how to assess them ethically, identify ethical issues and ways to mitigate them at an early stage, and ensure that this happens in the best possible way.

In the SIENNA project, we have developed ethics guidelines and tools for artificial intelligence, human genomics and human enhancement technologies, but we have also developed a general approach that can be used for other technologies. For example, in the TechEthos project, which I am also involved with, we are now looking at neurotechnology, extended reality, and geoengineering. These are also very intrusive, impactful technologies that we need to assess ethically, so it helps to already have the experience and the methods for doing so.

One of the results of the SIENNA project, ETCOM, is a normative model for the coevolution of emerging technology and ethics. Is it an attempt to bring order to already existing procedures so that they are easier to manage, or does it introduce new solutions?

In the past ten years, there have been attempts to create comprehensive models for the ethical guidance of emerging technologies under the heading of responsible research and innovation. But those were broader models aimed at governance and at developing technology in a responsible and inclusive way, in which ethics was only one of the dimensions. Before that, there had also been a lot of ethical analysis of emerging technologies, but no really comprehensive models looking at ethical guidance and mitigation.

So we wanted to do something more specific for ethics that did not exist yet. We decided to build a normative model that looks at how ethical interventions can be staged at different points in the evolution of an emerging technology, so as to effectively take into account and mitigate its ethical issues.

Given that ETCOM is a new normative model, one should not expect its full implementation yet. However, you probably used some existing components to conceptualize it. Can we then say that AI is already being developed in an ethical manner, or do we need to catch up with the right ethical approach, i.e. the one that you prepared?

AI is certainly a technology where many of the developments recommended by ETCOM are already in place. But there is no emerging technology that really wins a prize for how ethics is dealt with. Ethics is still often an afterthought, or not implemented in a really comprehensive and systematic way in the development, use and regulation of emerging technologies. But for AI, at least, significant efforts have been made along different dimensions to address ethical issues at an early stage.

However, the ETCOM normative model certainly goes further than what has happened in AI. It identifies five stages in the development of an emerging technology at which different ethical interventions are recommended. It starts with early ethical analysis, the identification of potential ethical issues, and the implementation of research ethics protocols particular to that technology. Then there are later stages in which ethical guidance is developed for the technology for different actors, the Ethics by Design approach is used, and ethically inspired regulation and ethics standards are developed. And finally there are stages where ethics considerations are made part of ISO or CEN standards and where you have education and training.

So there’s a whole set of actions one should take to successfully deal with the ethical issues raised by an emerging technology. It’s also important to see what should happen at which stage, which actors should be engaged in it, what knowledge they should have, and what tools they should use. That’s why we created these comprehensive recommendations on how that can be done for a new technology.

One of the most important outcomes of the SIENNA project is Ethics by Design, an approach allowing practical operationalization of high-level ethical principles in AI design. It relates to a previous project you participated in – SHERPA. How did this EbD approach evolve?

These projects ran roughly in parallel, but in SHERPA we had an earlier opportunity to work on the Ethics by Design approach, whereas in SIENNA that was planned a little later. So we did early work on this approach in SHERPA and then developed a more advanced version of it in the SIENNA project. And in SIENNA we also worked together with the European Commission to implement Ethics by Design in their Horizon Europe funding programme.

The European Commission has officially endorsed Ethics by Design, hasn’t it?

They endorse it as a recommended approach. It is now part of the ethics review procedure for artificial intelligence, big data and robotics projects. We hope that in the future we may expand on this and that Ethics by Design will also become a recommended approach for other technology projects.

You’ve previously mentioned TechEthos, another project in which you take part. Are you going to develop the Ethics by Design approach further in it, or is the SIENNA version the final one?

In TechEthos, not much room was reserved for Ethics by Design specifically. The project was set up before our Ethics by Design proposal for AI was presented. But in the SIENNA project we did some work to generalize Ethics by Design. There is a report which makes some proposals on how the approach can be generalized to other technologies. It would be good if at some point one could do case studies of that for other technologies and develop it a bit more. I’m sure it will happen in other projects at some point.

Are you going to test Ethics by Design in practice?

It will be tested in practice through the different Horizon projects that will implement the approach. Of course, we did as much testing as we could during SIENNA, but with limited means, so it has not been thoroughly tested. But it’s based on proven principles from earlier incarnations like Design for Values and Value Sensitive Design, and on accepted design methodologies. So we are confident that it’s a feasible approach.

Some people in the AI community expect ethics tasks to be easily manageable. They would like them to be similar to a checklist or even (semi)automated. On the other hand, philosophers like Anaïs Rességuier, with whom you worked in the SIENNA project, put stress on ethical agency and claim that it is essential for moral reasoning. What’s your view on the possibility of automating ethics as part of designing or deploying AI?

Development processes are complex and require a lot of expertise to perform well. Anyone with programming skills can develop software, but developing sophisticated software, especially in the field of AI, is a very complex process. So incorporating ethical considerations into it is certainly not going to be easy. That has to be taken into account.

If you take safety engineering, which is basically about adding one feature to developed products, you have whole methodologies and training programmes for that, to ensure that products are safe. We’ve seen the same for ensuring that information technologies are protective of privacy. So it shouldn’t be surprising that engineering ethics into products and systems requires specialized knowledge. It’s not just checking boxes.

Certainly some engineers would rather check boxes and be done with it, but our view is that if you want to do ethics well, it should be part of the development process, which requires specialized knowledge. So if you have a development team, add an ethicist, preferably one of a more applied persuasion. But we also think that Ethics by Design is something that should be part of computer science and engineering training.

If one goes deeper into the theory of Design for Values, which is related to Ethics by Design, one quickly realizes that there’s a need for the ability to translate values into design requirements. You said that we would need ethicists on board in AI, but do you think that Ethics by Design as presented in SIENNA is ready to use, or will it require some further value translation?

We’ve made a lot of effort to do as much translation into operational procedures as possible. We also indicate how different steps in the development process of AI systems require particular actions or practices. But there will still be interpretation needed when we recommend, for example, screening a dataset for biases. It may not always be clear what that means in practice. And that is something we cannot fully specify, or else we would have had to write a 500-page handbook.

So it’s also incumbent upon the scientists and engineers to arrive at particular interpretations of the procedures proposed in Ethics by Design and to decide how to eliminate bias, how to deal with informed consent at the practical level when it’s about privacy protection, or what kinds of AI systems require a high level of transparency and explainability. That’s not something you can provide in full in an Ethics by Design methodology. It requires experience with technology development, which then needs to be combined with the Ethics by Design approach.
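To make concrete the kind of interpretation described above, here is a minimal sketch of what “screening a dataset for biases” could look like in practice. It is not part of the SIENNA or Ethics by Design deliverables: the toy data, the column names and the 0.8 disparity threshold (the so-called four-fifths rule) are hypothetical choices that a real development team would have to make and justify for its own context.

```python
# A minimal, illustrative bias screen -- not the SIENNA/Ethics by Design
# methodology itself. The toy data, column names ("gender", "hired") and
# the 0.8 threshold (four-fifths rule) are hypothetical assumptions.
import pandas as pd

def screen_outcome_disparity(df: pd.DataFrame,
                             protected_attr: str,
                             outcome: str,
                             threshold: float = 0.8) -> dict:
    """Compare positive-outcome rates across groups of a protected attribute.

    Flags the dataset if the lowest group's rate falls below `threshold`
    times the highest group's rate (a disparate-impact heuristic).
    """
    rates = df.groupby(protected_attr)[outcome].mean()
    ratio = rates.min() / rates.max()
    return {
        "rates_per_group": rates.to_dict(),
        "disparate_impact_ratio": float(ratio),
        "flagged": bool(ratio < threshold),
    }

# Toy example: hiring outcomes recorded by gender.
df = pd.DataFrame({
    "gender": ["f", "f", "f", "f", "m", "m", "m", "m"],
    "hired":  [1,   0,   0,   0,   1,   1,   1,   0],
})
print(screen_outcome_disparity(df, "gender", "hired"))
# {'rates_per_group': {'f': 0.25, 'm': 0.75}, 'disparate_impact_ratio': 0.33..., 'flagged': True}
```

A flagged result here is a prompt for human ethical judgement, not a verdict: deciding whether a disparity is unjustified, and what to do about it, is exactly the interpretive work that Ethics by Design leaves to the development team.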

In the SIENNA and SHERPA deliverables you also offer a methodology for the ethical deployment and use of AI.

Yes, it is a different approach from that of Ethics by Design because it pertains to different actors. Ethics by Design is for technology developers whereas the ethics of deployment and use is for organisational technology users.

We wanted to investigate what parameters are important here for determining whether the outcomes of deployment and use are ethical or not, and what organisations can do in order to ensure ethical results. That involves a lot of things, like having ethical guidelines in the organisation, carefully selecting the right kind of technology when you acquire it or develop it in-house, staff training, and having policies in place.

It’s also important to have a multi-actor perspective here and to look at the different tools and protocols that you need to support ethical outcomes, because it’s not sufficient for an organisation to have ethics guidelines. It needs a lot more: apart from policies and training, it may also need an ethics officer, certification or auditing of its approach, and monitoring. So if you want to really take ethics seriously in the deployment and use of new technology in your organisation, you need to look at these models of how to do that in a comprehensive way.

Are there any external tools that should be added to the SIENNA methodology?

The most important tools we identify are those specific to the development and to the deployment and use of the technology, which we’ve already discussed.

But at the government level, you need adequate regulations and policies. That’s what the European Union is actually doing: it has developed comprehensive regulations for AI which take into account some of the key ethical issues at play. Meanwhile, CEN, ISO and IEEE are working on standards for AI that also include ethics. Universities are developing research and education programmes in the ethics of AI, and that’s also a very important element. Media awareness of ethical issues in AI is another one.

So these are all significant ingredients. Some of them are fairly obvious, not essentially novel. But even for things that people already do, like ethics guidelines, we have developed a roadmap for how to do them well, because there are many possible ways.

One of your previous projects – SATORI – resulted in an ethical impact assessment methodology. Is it also a part of SIENNA?

It is, because to successfully deal with and mitigate ethical issues in a new technology, you need to be able to identify and analyse what the ethical issues are. And especially with emerging technologies, it’s not always clear what they are, because some of them lie in the future. They depend on the particular development path that is followed, the particular applications that will emerge, and how those will be used in society and generate impact. That means you need an ethics assessment methodology that looks not only at the current technology but also at its possible future incarnations and impacts.

We developed that initially in the SATORI project, and then in SIENNA we developed a more advanced version of it. It’s simply called the SIENNA approach to ethical analysis.

It’s always a good idea to do an ethical impact assessment early on, when you have a new technology or when you are at an early stage of the development process of a particular application, because you want to do an early screening of potential ethical issues. And it’s not just our recommendation; we’ve seen it in various European Commission publications too.

One of the articles you co-wrote with Bernd Stahl and other SHERPA researchers includes a taxonomy of AI ethical issues. You name three different types of issues there: machine learning issues, questions about living in a digital world, and metaphysical questions. To which of those issues would SIENNA solutions apply?

We advocate a broad approach to ethical issues. So for any emerging technology, we want to look at all the important problems that are at play. If they include environmental issues, those should be considered as well. And machine learning may nowadays be at the heart of AI, but it’s certainly not the only development in this field.

So if we want to do a broad analysis of a technology, we look at different kinds of products and uses of that technology and then identify what we see as the most important ethical issues. Those could relate to individual rights, fairness, individual or social well-being, or sustainability. Whatever seems pertinent for that particular technology should be taken into consideration.

 

*Philip Brey – professor of philosophy and ethics of technology at the Department of Philosophy, University of Twente, the Netherlands. In his research, he investigates social, political and ethical issues in the development, use and regulation of technology. His focus is on new and emerging technologies, with special attention to artificial intelligence and digital technologies. Brey is a former president of the International Society for Ethics and Information Technology (INSEIT) and of the Society for Philosophy and Technology (SPT). He currently coordinates the 10-year research programme Ethics of Socially Disruptive Technologies, which includes seven universities in the Netherlands and over sixty researchers.

 

Read the Polish version of this conversation HERE
