
Henrik Skaug Sætra: Framing AI impacts with SDGs is beneficial

24 November 2022
Maciej Chojnowski

We have to contextualize our analysis of AI and bring the human and the social back in. Techno-solutionism is all too often subject to a debilitating blindness to wide areas that are crucial to human existence. Many experiential aspects of human life are lost if we restrict ourselves to machine-like or AI-compatible approaches to science, says Dr Henrik Skaug Sætra in conversation with Maciej Chojnowski.

Maciej Chojnowski: You are an advocate of a contextual approach to artificial intelligence and remain critical of so-called isolationism, a siloed approach to AI impact assessment. Your approach to assessing AI takes quite a number of different factors into account. Recently, you have focused your research on the United Nations’ Sustainable Development Goals (SDGs). Are they the ultimate tool for assessing AI?

Henrik Skaug Sætra*: The short answer is no. They are not the ultimate tool, because they don’t cover privacy and a lot of human rights issues directly. So there are obvious oversights. But they bring some really good benefits with them as well.

They have legitimacy and reach both in the business world and in politics, which makes them useful for analyzing, and not least communicating, some of the major impacts of AI. And they are an effective tool because many people are already thinking about how to avoid negative impacts on the SDGs. So framing AI impacts with these goals is beneficial. They are also easy to use pedagogically: it makes sense to focus on these different issues and ask how AI relates to each of them.

So, although the SDGs are definitely not the ultimate tool here, they are still useful and effective because many people already work with them, particularly in the business world, where people everywhere communicate how they deal with sustainability or ESG (Environmental, Social, and Corporate Governance).

In the SDGs, there are three main dimensions (economic, social, and environmental), 17 top-level goals, and 169 targets (subgoals). You emphasize that these goals and targets should not be perceived as separate, because they cover many interrelated issues. To deal effectively with such a complex set of issues with regard to AI, you need an advanced solution. Can you describe your SDG-related AI assessment method?

My approach makes it easier to distinguish between different kinds of effects. There are two main ways in which I do that: distinguishing between direct and indirect effects, and distinguishing between three levels of impact.

Let’s start with direct and indirect effects. If, for example, you use AI to improve teaching, then it could have a direct impact on SDG 4 (education). That could be a good thing in itself, but a quality education for everyone would almost certainly also have indirect beneficial effects on reducing poverty, perhaps on nutritional and health aspects too, as well as on reducing inequality. So here we have these indirect effects of one targeted effort at one goal. I refer to these as ripple effects.

Some of these goals have very heavy ripple effects, and they can be both positive and negative. Take a look at SDG 8, which is economic growth. A lot of people in the tech sector are saying: “The data economy is built on us, we further economic growth”. But SDG 8 actually says that this growth should be inclusive and sustainable. So it should be beneficial also for those most in need. It’s redistributive in the sense that all the justice-related aspects of economic growth must be there. As such, a real contribution to SDG 8 can have positive impacts almost all across the board. And the other way around: if you create more unsustainable and non-inclusive economic growth that promotes the concentration of wealth among very few people or corporations, then it will have negative ripple effects on almost all the other goals as well.

We simply can’t see technology as an isolated issue aimed at one specific problem. These systems are part of larger social networks and economies. That’s the contextual approach I advocate for.

And then we have three levels of impact.

Yes, these are the micro, meso and macro levels. Let’s take AI and VR in education. For example, using VR in interventions with people with autism, or making AI assistants that aim to support learning processes through the use of generative AI and learning analytics. We can distinguish between positive and negative effects here that hit various individuals – specific persons benefitting from the educational use of either of these technologies. These impacts are at the micro level.

Then we have effects on groups. This is where discrimination and bias come in: is the AI assistant more effective for one gender, for example? But also the problem that AI developed in one part of the world may be inaccessible in another. You might have different impacts across groups and regions, both within a specific country and between countries. Take VR again. It is likely that not all schools can afford the latest and most effective VR applications, or have the required infrastructure and competency to use them. If so, technology can increase differences, and this would not be conducive to reaching SDG 4, and certainly not SDG 10. And this would be the meso level.

Finally, we get to the macro level, which is the largest. It’s about the economic and political systems, e.g. how AI relates to the distribution of wealth, or to transparent political institutions.

We need all these levels because they show us where the various impacts are. That prevents us from saying “AI has a beneficial effect here, so it must be good”. We get a broader picture, which is beneficial for the analysis.
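To make the framework easier to picture, here is a minimal, purely illustrative sketch of how an analyst might record impacts along the two axes described above (direct versus ripple effects, and the micro/meso/macro levels). All class and field names are hypothetical assumptions introduced for illustration; this is not Sætra’s own method or tooling.

```python
# Illustrative sketch only: a hypothetical data model for recording AI impact
# assessments along the two axes discussed in the interview. Names are
# assumptions, not part of any published framework.
from dataclasses import dataclass, field
from enum import Enum


class EffectType(Enum):
    DIRECT = "direct"
    RIPPLE = "ripple"   # indirect effect propagated via another goal


class Level(Enum):
    MICRO = "micro"     # specific individuals
    MESO = "meso"       # groups, regions, countries
    MACRO = "macro"     # economic and political systems


@dataclass
class Impact:
    sdg: int            # 1..17, the top-level goal affected
    effect: EffectType
    level: Level
    positive: bool
    note: str = ""


@dataclass
class Assessment:
    system: str
    impacts: list[Impact] = field(default_factory=list)

    def by_level(self, level: Level) -> list[Impact]:
        """Return all recorded impacts at a given level of analysis."""
        return [i for i in self.impacts if i.level == level]


# Example: an AI tutoring tool with a direct effect on SDG 4 and
# indirect (ripple) effects on SDG 1 and SDG 10.
assessment = Assessment(system="AI tutoring assistant")
assessment.impacts += [
    Impact(4, EffectType.DIRECT, Level.MICRO, True, "improved learning outcomes"),
    Impact(1, EffectType.RIPPLE, Level.MACRO, True, "education reduces poverty"),
    Impact(10, EffectType.RIPPLE, Level.MESO, False, "unequal access across schools"),
]
print(len(assessment.by_level(Level.MESO)))  # -> 1
```

Keeping the effect type and the level as separate fields is what allows an assessment to be filtered per goal, per level, or per effect type, mirroring the broader-picture analysis described above.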

When one thinks about the implementation of AI ethics, three major issues come to mind. The practical one – namely, the operationalization of AI ethics requirements (methods, tools, best practices, testing, etc.); the formal one – its institutionalization (ways of ensuring compliance with requirements, like regulation, self-regulation, ESG reporting); and the systemic one – integration (incorporating different AI impacts into one comprehensive analysis). Which of them do you find most challenging at the moment, and why?

I think it would be integration, or seeing the bigger picture, which is also about not treating AI ethics or regulation as something new, because AI is often described as a technology that just popped up and requires a brand new approach to deal with it. But we have a long history of talking about intelligent or autonomous technologies. You can still gain a lot of insight from Langdon Winner or Jacques Ellul, for example. Winner wrote about autonomous technology in a way that could be published today and people would think he was talking about AI.

Seeing the bigger picture by contextualizing this in the philosophy of technology, computer ethics, or any other of these well-established disciplines, or even in private ethics, is really crucial, because there are many old issues that still apply to AI today. I think one of the most pressing challenges is to avoid wasting time on reproducing things that are already there.

However, we still need a codification or translation of how these broader kinds of ethical issues apply to the problems faced by developers or companies. That requires more regulation, and this is what the EU is doing. I like the regulation approach as opposed to the drive for professional ethics, where the industries themselves are the main players. Politics and government are required, because I don’t think the industry actors really have society’s best interest in mind. At least not always.

The shift towards a more integral approach when dealing with technology is also present in your latest paper, co-written with John Danaher, where you deal with the proliferation of ethics in the digital technology domain. Nowadays different experts talk about tech ethics, AI ethics, computer ethics, digital ethics, roboethics, etc. This is partly understandable, as different technologies need practical solutions to particular problems, but you also notice risks stemming from this fragmentation. What are the biggest threats here, and who is the target audience of your appeal for ethics integrity?

First of all, there is a legitimate need for some of these different domains of ethics in terms of operationalization. For example, if you are educating public policy makers at the municipal level, you can’t assume that they will go and read old computer ethics literature. Instead, you should explain: this is a new system, it does this and this, and we need to be aware of these impacts.

So, there is definitely an operational need for different domains of ethics in technology. But I also think that ethicists must discuss things at the proper level, and whenever we go one or two levels down (for example from engineering ethics, to computer ethics, to AI ethics) we need to make sure that what’s covered in each new layer or domain is exclusive to that technology. If it isn’t, then we don’t need to go there, and we can keep discussing it as a more general implication of technology and facilitate debate across different fields.

For example, a lot of people talk about robot ethics these days but what they are actually talking about is AI or social AI. Why is that? Because the criteria for labelling something as robot ethics should include embodied AI, and this is not always the case.

I think there is a general confusion now because people are labelling the same things very differently and are losing sight of important insights from historical developments in technology ethics or privacy ethics. And that’s also a problem with developers because they don’t have the philosophical background in technology ethics. So we need to foster a debate where we clarify what already exists out there and how this applies to the new technologies.

The most important loss is that we get this duplication of efforts, where people produce arguably inferior research because they think they are discovering a new approach to a problem that has already been discussed for twenty years and further developed somewhere else, in a different discipline or a different domain of ethics.

I’d like to touch on the friction between the ethical and the political now. Some people accuse AI ethics of privatizing responsibility. They believe that we should rather use tools from political philosophy to deal with questions that concern the public as a whole. Do you think that this dichotomy between the ethical and the political makes any sense here? And if so, what’s at the heart of the problem?

I absolutely think there is a need to keep the differences between ethics and politics in mind. Together with Eduard Fosch Villaronga I wrote an article called Research in AI has Implications for Society: How do we Respond? Its subtitle is: The Balance of Power between Science, Ethics, and Politics. These are three very distinct domains which have very specific roles to fulfil.

Sometimes people believe that ethics can be the foundation of politics. I think that is a mistake. Ethics is about highlighting, exploring and trying to understand the morality and the normative implications of particular aspects of our activities and beings. When we develop something, for example, the role of the ethicist is to tease out what the implications of this technology are.

But I think it’s up to politics to say what we as a society want. Of course, ethicists can make guidelines and write papers about what they think about the moral implications, but we really need to respect democracy and political processes and allow the people to make the decisions. And I think that’s good, because otherwise we get either a moralized kind of politics or politicized ethics. So it’s really important to keep those areas a bit distinct. I know this is controversial, but the liberal in me still very much feels that politics should not be restricted by too heavy an ethical hand, as pluralism, making space for living different lives where very different values and ideals are upheld, is really important.

But it’s also really important to make sure that science is not completely bound by politics or ethics. That’s more controversial, I know. There must be some kind of restriction on science, of course. But it’s really important to distinguish between the domains. Science explains the world and engenders new technologies and technological change, ethics does the evaluation and tries to figure out what happens when we do this and that, and then politics determines whether we should apply the various new discoveries. The latter – the political – should, of course, be based on the advice of ethicists and on proper engagement with and involvement of citizens.

Speaking of science, another paper of yours is about the influence of the scientific discourse about robots on human self-perception. You call this effect robotomorphy, and together with attempts to anthropomorphize machines, they seem to be two sides of the same coin. We perceive robots as something similar to us, and then we start to look at ourselves as if we were robot-like. Can you explain the mechanics of this process?

The concept of robotomorphy is new, but the origin of the term can be found in Arthur Koestler’s book The Ghost in the Machine, where he lamented how the behaviorists’ approach to human science led to what he called ratomorphy. As we can’t do many experiments on human beings, we started to experiment on rats. But then we make the error of assuming that we are like rats. They are the subjects of our study and the foundations of our scientific findings. So we transpose what we work on back to humans, sometimes quite uncritically. This is what inspired the term robotomorphy.

The inherent logic of robotomorphy goes in two ways. Firstly, we are trying to figure ourselves out – as humans – by making machines that resemble us. So we try to make AI based on neural networks, for example, inspired by how we think humans work. And when we get these machines and they work, we make the error of seeing machines in ourselves. We start reducing ourselves to what we are able to make in the machines. And that limits humanity. Phenomenology and all kinds of experiences that are an important part of being human become discarded because these are the kinds of things we can’t really observe or replicate in the machines we make.

The other aspect of this is that we increasingly start to see machines as the benchmarks we should be measured against, what we should strive to be, what the ideal is. For example, we used to measure progress in chess and Go programs by how close to human performance they were. But now they are the best, and they’ve become our benchmarks.

In the tradition of Western science we see rationality, optimization, and efficiency as ideals, and machines are getting increasingly better at them. So we start adapting to what’s optimal for the broader system in which machines play increasingly important roles. That means molding humans into something more machine-like as well. Take nudging, for example. We try to correct human error because everything human is associated with unreliability and error – the antithesis of rationality and optimization.

Let me conclude with a question about big data as an ideology in science. According to another article of yours, big data is proclaimed as a master science of today – a sort of ultimate scientific paradigm. But it all seems like a revival of an old positivist dream. Is it something really new, or just a repetition of things we already witnessed over one hundred years ago?

This quest for a master science is really old and is very much related to positivism and behaviorism, but also to philosophers like Thomas Hobbes writing in the 17th century. It’s about trying to discover general laws and believing that we can see humans as part of the natural world subject to laws that we discover through observations.

My article on this topic was written in 2018, but today this problem has only gotten worse and is now even more pressing. For example, just the other day Yann LeCun and Meta announced a new discovery: with deep learning, we’ve discovered the dark matter of molecules. AI systems now make these new scientific discoveries, and humans are often left guessing how.

That goes back to an old idea that AI will have no theory, no ideology, and won’t be encumbered by human biases or faults. But I think today very few people honestly believe AI is really objective. Once they have gotten at least a taste of the tech ethics debates going on, they must have seen that how data is gathered, how it is contextualized, how algorithms are applied – all these things are obviously subjective, or related to human decisions that affect how AI performs. And this human aspect is deeply troubling to some, who see it as noise.

But we need to make sure that science is not seen only as a quest for objective mechanical laws of the universe, but also as a social and democratic undertaking with really huge social implications and consequences. Human decisions and human power are deeply entrenched in what sorts of issues are discussed and what kind of framing we use for these questions.

Returning to the first question, isn’t this approach to big data or AI as an ultimate science an example of an isolationist point of view?

It can be related to this, yes. I think it is, because it’s based on an approach where people, often without being aware of it, end up excluding everything not covered by, or not comprehensible within, the paradigm of AI.

We have to contextualize our analysis of AI and bring the human and the social back in. A lot of what can be referred to as techno-solutionism is all too often subject to a debilitating blindness to wide areas that are crucial to human existence. Many experiential aspects of human life are lost if we restrict ourselves to machine-like or AI-compatible approaches to science. We can’t optimize or rationalize everything. We shouldn’t.

*Dr Henrik Skaug Sætra is a sustainability consultant and political scientist with a broad and interdisciplinary background and approach, mainly focusing on the political, ethical, and social implications of technology. He focuses specifically on the sustainability-related impacts of AI, and he has published a book and several articles on AI, ESG, and the UN Sustainable Development Goals.

Read the Polish version of the interview HERE
