
David Wright: I am optimistic about AI standards integration

21 February 2023
Maciej Chojnowski

I hope the AI Act reaches a workable consensus between the European Council, the Parliament and the Commission, and that it is generally welcomed by stakeholders. I think that with its adoption there will be greater harmonization of AI risk assessments as well. The Commission gave greater guidance on data protection impact assessments with the GDPR, and it may well do the same with the AI Act, says David Wright in conversation with Maciej Chojnowski.

Maciej Chojnowski: In the report “A survey of artificial intelligence risk assessment methodologies”, co-produced by Trilateral Research and EY, you refer to risk assessment, impact assessment and risk management as similar but separate approaches. One could also add conformity assessment here. All these terms have something in common but do not cover exactly the same issues. Can you explain the relationships between them?

David Wright*: Impact assessment as a general term is the broadest, of course. And there are different types of impact assessment: privacy, ethical, surveillance, security, and environmental impact assessments. They usually follow a structured approach, but even so there are quite a few differences among those approaches.

Risk management is a somewhat different term, although it is certainly associated with risk assessment: in order to manage risks, you need to assess them and decide which risks to prioritize, where to focus your attention, and so on.

Conformity assessment, on the other hand, is something totally different. In the case of Europe, it has to do with meeting EU standards: in order to get a CE mark, you have to conform to certain regulatory requirements. The European Commission is quite advanced in its thinking about developing a cybersecurity seal that would be used for certifying software products and services.

Let’s take a look at today’s AI risk assessment landscape. What has already been done, and what is still to come in terms of effective tools or governance procedures?

Working on the survey for EY, we looked at different types of artificial intelligence risk assessments (or approaches very close to what we might term an AI risk assessment). And there are many organisations, including many companies, already thinking about these issues. Artificial intelligence has benefits, but it also has risks, and those risks have been well publicized in the media. So companies are certainly aware that they should take them into account, or they might suffer reputational damage.

So the context is very favourable to the introduction of artificial intelligence risk assessment. I think in 2023 we’re likely to see agreement between the European Council, the Parliament and the Commission in regard to the AI Act. And the adoption of the AI Act will be quite an important factor in the expanded use of AI risk assessments.

In the report, you name two general types of AI risk assessment: identification of specific risks and classification of an overall level of risk. Should the latter be the first step in evaluating AI systems (with the former being the subsequent step), or is it impossible to determine the overall level of risk without identifying the specific risks first? Is there one right order here, or does it all depend on the situation?

I would say the first step should certainly be the identification of specific risks. No question about it. Having identified those specific risks, one could refer back to the AI Act and see whether the system falls into one of its four risk categories, including prohibited activities and high-risk activities. Most likely it will.

So the first step is definitely the identification of the risks, before seeing how the system might be categorized and what steps need to be taken depending on the category it falls into.

Your report contains an overview of different AI risk assessment standards. Do you think that once the AI Act becomes law, a single definitive European standard will be developed (e.g. by CEN), or should we rather expect a number of them to coexist (e.g. from ISO and IEEE), including solutions developed in research projects like SIENNA?

That’s a very good question, but answering it would require a little crystal ball gazing, and I am not sure I am the best futurist. But certainly ISO, and in particular ISO/IEC JTC 1 Subcommittee 42 (SC 42) and its Working Group 3, have been looking at these kinds of issues and have identified principles that AI systems should comply with. So I think that will have quite a big impact. And CEN is following very closely what ISO is doing.

However, while the AI Act is an actual regulation, the ISO standards are voluntary. But I am pretty sure that, as a minimum, we are going to have the AI Act and some ISO standards, probably more than one.

In addition to those, it’s likely that individual companies or industry associations will adopt codes of practice or principles that would be part of that same “cluster” as the AI Act and the ISO standards. So if you take those kinds of activities into account, we will most likely see various types of standards anyway.

Let me continue this thread for a while. I would like to refer to a recent report by the European Commission’s Joint Research Centre on the AI standardisation landscape, which reads: “We see standardisation work at the European level as an instrument for integration and supplementation of valuable content from ISO/IEC and IEEE, preventing fragmentation when it comes to standards on AI risk management, hence facilitating the work of AI providers to achieve compliance.” Do you think that this sort of integration will be the result of the CEN work?

I definitely think so. All of those organisations talk to each other. Often it’s the same people going to CEN and ISO meetings and responding to the various consultations that the EC has held in regard to the AI Act. So I am very optimistic that this will happen.

But do you expect any original input from the work of CEN? One might think that if ISO and IEEE have already started working on the standards while CEN has not, then once CEN finally starts, it will have lots of material to choose from. I am therefore wondering whether they are going to add extra value to the existing knowledge, or just repeat what has already been said elsewhere.

I couldn’t answer that question. But I know that CEN follows ISO very closely, and it has established a working group to deal with some of the ethical issues, somewhat similar to the work of ISO SC 42 Working Group 3. I am not sure exactly what the status of that working group in CEN is right now; they were discussing its formation the last time I checked.

In your pioneering work on impact assessment, you have covered areas such as privacy and surveillance. Considering your experience in the field, what are your greatest hopes and fears as far as the upcoming AI risk assessment systems and AI legislation are concerned? What are the most crucial issues for lawmakers and standardization bodies to keep in mind?

As far as my hopes are concerned, I hope the AI Act reaches a workable consensus between the three institutions and that it is generally welcomed by stakeholders. Of course, some stakeholders will not like it. Non-Europeans may not like its extraterritoriality. But nevertheless, I hope it receives general acceptance.

I think that with the adoption of the AI Act there will be greater harmonization of AI risk assessments as well. The Commission gave greater guidance on data protection impact assessments with the GDPR, and it may well do the same with the AI Act. However, I think there are some uncertainties with regard to the institutional arrangements at the Member State level: will Member States form special AI regulators, or will the data protection authorities take AI under their wing? Given the overlap between those areas, it is not quite clear to me how that might develop.

And my fear would be that the introduction of a major new regulation will have a deterrent effect on innovation and business. I am not sure whether that will be the case, because one can also see a positive effect: European companies, and non-European companies offering services in Europe, will be able to tell their customers that they comply with the AI Act. And I think that will also help create a more level playing field.

So overall I am more positive: I think this year will be one of hope for AI enthusiasts, and the fears about the deterrent effect of the AI Act will prove somewhat overblown.

*David Wright is Director of Trilateral Research, a company he founded in 2004. Trilateral has partnered in more than 75 EU-funded projects and coordinated several of them, including the ongoing CC-DRIVER project on the human and technical drivers of cybercrime. He has published more than 70 articles in peer-reviewed journals, and co-edited and co-authored several books, including Privacy Impact Assessment (Springer, 2012) and Surveillance in Europe (Routledge, 2015). He coined the term and published the first article on ethical impact assessment. The ISO standard on privacy impact assessment (ISO 29134) is based on his PIA methodology. Similarly, the CEN Workshop Agreement (CWA) on ethical impact assessment is based on his EIA methodology.


Read the Polish version of this conversation HERE
